Ollama Explorer (Beta)
Tags: Language · Intermediate · Tools

mistral-nemo

Mistral AI

A state-of-the-art 12B model with 128k context length, built by Mistral AI in collaboration with NVIDIA.

3.4M pulls · Updated Jul 26, 2025 · 17 tags · 128K context

Quick start

ollama run mistral-nemo
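Once the server is running, the same model can also be queried programmatically. A minimal sketch using only Python's standard library against Ollama's default local endpoint (`http://localhost:11434/api/generate`); the `build_payload` and `generate` helpers are illustrative names, not part of the Ollama API itself.

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default local endpoint

def build_payload(prompt: str, model: str = "mistral-nemo") -> dict:
    # stream=False asks the server for a single JSON object
    # instead of a stream of token chunks.
    return {"model": model, "prompt": prompt, "stream": False}

def generate(prompt: str, model: str = "mistral-nemo") -> str:
    req = urllib.request.Request(
        OLLAMA_URL,
        data=json.dumps(build_payload(prompt, model)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]
```

Calling `generate("Hello")` returns the completion as a string, assuming the weights have already been fetched with `ollama run mistral-nemo` or `ollama pull mistral-nemo`.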

Available sizes

Tag                  Size    Quantization  Context  Min RAM
mistral-nemo:latest  7.1GB   Q4_K_M        128K     8.9 GB
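The 7.1GB download is consistent with 4-bit quantization of a 12B-parameter model. A back-of-the-envelope check, assuming Q4_K_M averages roughly 4.75 bits per weight (the exact rate varies per tensor; this figure is an assumption for illustration):

```python
PARAMS = 12e9           # Mistral NeMo parameter count
BITS_PER_PARAM = 4.75   # assumed average for Q4_K_M quantization

# bits -> bytes -> gigabytes
size_gb = PARAMS * BITS_PER_PARAM / 8 / 1e9
print(f"estimated file size: ~{size_gb:.1f} GB")  # close to the listed 7.1GB
```

The table's 8.9 GB minimum RAM then reads as roughly the weight file plus working memory for the KV cache and activations.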

Run with

Claude Code
ollama launch claude --model mistral-nemo
Codex
ollama launch codex --model mistral-nemo
OpenCode
ollama launch opencode --model mistral-nemo
OpenClaw
ollama launch openclaw --model mistral-nemo

Strengths & Limitations

Strengths

  • Long context handling
  • State-of-the-art reasoning and coding performance for its size class
  • Efficient architecture

Related models