
olmo2

Allen Institute for AI

OLMo 2 is a new family of 7B and 13B models trained on up to 5T tokens. These models are on par with or better than equivalently sized fully open models, and competitive with open-weight models such as Llama 3.1 on English academic benchmarks.

3.5M pulls · Updated Feb 26, 2025 · 9 tags · 4K context

Quick start

ollama run olmo2
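Beyond the interactive CLI, a running Ollama server exposes an HTTP API (by default at http://localhost:11434), and the model can be queried through its /api/generate endpoint. A minimal sketch of building such a request is below; `build_generate_request` is a hypothetical helper, not part of the Ollama client, and actually sending the request assumes a local Ollama server with the olmo2 model pulled.

```python
import json

def build_generate_request(prompt, model="olmo2", stream=False):
    # Hypothetical helper: assembles the JSON body expected by Ollama's
    # /api/generate endpoint. stream=False asks for a single response
    # object instead of a stream of partial chunks.
    return {"model": model, "prompt": prompt, "stream": stream}

body = build_generate_request("Why is the sky blue?")
print(json.dumps(body))
# To send it (requires a local Ollama server):
#   POST http://localhost:11434/api/generate with this body.
```

The same body works for any tag from the table below, e.g. `model="olmo2:13b"`.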

Available sizes

Tag            Size     Quantization   Context   Min RAM
olmo2:latest   4.5 GB   q4_k_m         4K        5.6 GB
olmo2:13b      8.4 GB   q4_k_m         4K        10.5 GB

Strengths & Limitations

Strengths

  • Strong performance on English academic benchmarks
  • Competitive with open-weight models such as Llama 3.1
  • Trained on a large dataset (up to 5T tokens)

Limitations

  • 4K context window, small by current standards
  • Benchmark results reported for English academic tasks only

Related models