Quick start

`ollama run llama3.1`

Available sizes
| Tag | Size | Quantization | Context | Min RAM |
|---|---|---|---|---|
| llama3.1:latest | 4.9 GB | q4_k_m | 128K | 6.1 GB |
| llama3.1:70b | 43 GB | q4_k_m | 128K | 53.8 GB |
| llama3.1:405b | 243 GB | q4_k_m | 128K | 303.8 GB |
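The sizes in the table track the quantization: q4_k_m averages roughly 4.5–5 bits per weight, so on-disk size is approximately parameters × bits ÷ 8, with the Min RAM column adding KV-cache and runtime overhead on top. A rough back-of-envelope sketch (the bits-per-weight figure is an approximation for illustration, not an Ollama-published number):

```python
def approx_gguf_size_gb(params_billion: float, bits_per_weight: float = 4.9) -> float:
    """Rough on-disk size of a quantized model: params * bits / 8."""
    return params_billion * bits_per_weight / 8

# 8B at ~4.9 bits/weight lands near the 4.9 GB listed for llama3.1:latest,
# and 70B lands near the 43 GB listed for llama3.1:70b
print(round(approx_gguf_size_gb(8), 1))
print(round(approx_gguf_size_gb(70), 1))
```

The same arithmetic explains why the 405B variant needs hundreds of gigabytes even at 4-bit quantization.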
Run with
Claude Code
`ollama launch claude --model llama3.1`

Codex

`ollama launch codex --model llama3.1`

OpenCode

`ollama launch opencode --model llama3.1`

OpenClaw

`ollama launch openclaw --model llama3.1`

Strengths & Limitations
Strengths
- State-of-the-art performance among openly available models at release
- Available in multiple parameter sizes (8B, 70B, 405B)
- Released by Meta
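Beyond the CLI commands above, a pulled model can also be queried through Ollama's local HTTP API, which listens on port 11434 by default. A minimal sketch using only the Python standard library; the prompt text is illustrative:

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default local endpoint

def build_request(model: str, prompt: str) -> dict:
    # stream=False returns a single JSON object instead of newline-delimited chunks
    return {"model": model, "prompt": prompt, "stream": False}

def generate(model: str, prompt: str) -> str:
    """Send one generation request and return the completion text."""
    body = json.dumps(build_request(model, prompt)).encode()
    req = urllib.request.Request(
        OLLAMA_URL, data=body, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]
```

With `ollama serve` running and the model pulled, `generate("llama3.1", "Why is the sky blue?")` returns the completion text as a string.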
Related models
llama3.2 (Language)
Meta's Llama 3.2 goes small with 1B and 3B models.
58.0M pulls

mistral (Language)
The 7B model released by Mistral AI, updated to version 0.3.
25.6M pulls

qwen2.5 (Language)
Qwen2.5 models are pretrained on Alibaba's latest large-scale dataset, encompassing up to 18 trillion tokens. The model supports up to 128K tokens and has multilingual support.
22.0M pulls

qwen3 (Language)
Qwen3 is the latest generation of large language models in the Qwen series, offering a comprehensive suite of dense and mixture-of-experts (MoE) models.
19.8M pulls