Quick start
ollama run wizardlm

Available sizes
| Tag | Size | Quantization | Context | Min RAM |
|---|---|---|---|---|
| wizardlm:7b-q2_K | 2.8GB | q2_k | 2K context | 3.5 GB |
| wizardlm:7b-q3_K_S | 2.9GB | q3_k_s | 2K context | 3.6 GB |
| wizardlm:7b-q3_K_M | 3.3GB | q3_k_m | 2K context | 4.1 GB |
| wizardlm:7b-q3_K_L | 3.6GB | q3_k_l | 2K context | 4.5 GB |
| wizardlm:7b-q4_0 | 3.8GB | q4_0 | 2K context | 4.8 GB |
| wizardlm:7b-q4_K_S | 3.9GB | q4_k_s | 2K context | 4.9 GB |
| wizardlm:7b-q4_K_M | 4.1GB | q4_k_m | 2K context | 5.1 GB |
| wizardlm:7b-q4_1 | 4.2GB | q4_1 | 2K context | 5.2 GB |
| wizardlm:7b-q5_0 | 4.7GB | q5_0 | 2K context | 5.9 GB |
| wizardlm:7b-q5_K_S | 4.7GB | q5_k_s | 2K context | 5.9 GB |
| wizardlm:7b-q5_K_M | 4.8GB | q5_k_m | 2K context | 6 GB |
| wizardlm:7b-q5_1 | 5.1GB | q5_1 | 2K context | 6.4 GB |
| wizardlm:13b-llama2-q2_K | 5.4GB | q2_k | 4K context | 6.8 GB |
| wizardlm:13b-q2_K | 5.4GB | q2_k | 2K context | 6.8 GB |
| wizardlm:7b-q6_K | 5.5GB | q6_k | 2K context | 6.9 GB |
| wizardlm:13b-llama2-q3_K_S | 5.7GB | q3_k_s | 4K context | 7.1 GB |
| wizardlm:13b-q3_K_S | 5.7GB | q3_k_s | 2K context | 7.1 GB |
| wizardlm:13b-llama2-q3_K_M | 6.3GB | q3_k_m | 4K context | 7.9 GB |
| wizardlm:13b-q3_K_M | 6.3GB | q3_k_m | 2K context | 7.9 GB |
| wizardlm:13b-llama2-q3_K_L | 6.9GB | q3_k_l | 4K context | 8.6 GB |
| wizardlm:13b-q3_K_L | 6.9GB | q3_k_l | 2K context | 8.6 GB |
| wizardlm:7b-q8_0 | 7.2GB | q8_0 | 2K context | 9 GB |
| wizardlm:13b-llama2-q4_0 | 7.4GB | q4_0 | 4K context | 9.2 GB |
| wizardlm:13b-llama2-q4_K_S | 7.4GB | q4_k_s | 4K context | 9.2 GB |
| wizardlm:13b-q4_0 | 7.4GB | q4_0 | 2K context | 9.2 GB |
| wizardlm:13b-q4_K_S | 7.4GB | q4_k_s | 2K context | 9.2 GB |
| wizardlm:13b-llama2-q4_K_M | 7.9GB | q4_k_m | 4K context | 9.9 GB |
| wizardlm:13b-q4_K_M | 7.9GB | q4_k_m | 2K context | 9.9 GB |
| wizardlm:13b-llama2-q4_1 | 8.2GB | q4_1 | 4K context | 10.2 GB |
| wizardlm:13b-q4_1 | 8.2GB | q4_1 | 2K context | 10.2 GB |
| wizardlm:13b-llama2-q5_0 | 9.0GB | q5_0 | 4K context | 11.2 GB |
| wizardlm:13b-llama2-q5_K_S | 9.0GB | q5_k_s | 4K context | 11.2 GB |
| wizardlm:13b-q5_0 | 9.0GB | q5_0 | 2K context | 11.2 GB |
| wizardlm:13b-q5_K_S | 9.0GB | q5_k_s | 2K context | 11.2 GB |
| wizardlm:13b-llama2-q5_K_M | 9.2GB | q5_k_m | 4K context | 11.5 GB |
| wizardlm:13b-q5_K_M | 9.2GB | q5_k_m | 2K context | 11.5 GB |
| wizardlm:13b-llama2-q5_1 | 9.8GB | q5_1 | 4K context | 12.2 GB |
| wizardlm:13b-q5_1 | 9.8GB | q5_1 | 2K context | 12.2 GB |
| wizardlm:13b-llama2-q6_K | 11GB | q6_k | 4K context | 13.8 GB |
| wizardlm:13b-q6_K | 11GB | q6_k | 2K context | 13.8 GB |
| wizardlm:7b-fp16 | 13GB | fp16 | 2K context | 16.2 GB |
| wizardlm:13b-llama2-q8_0 | 14GB | q8_0 | 4K context | 17.5 GB |
| wizardlm:13b-q8_0 | 14GB | q8_0 | 2K context | 17.5 GB |
| wizardlm:30b-q2_K | 14GB | q2_k | 2K context | 17.5 GB |
| wizardlm:30b-q3_K_M | 16GB | q3_k_m | 2K context | 20 GB |
| wizardlm:30b-q3_K_L | 17GB | q3_k_l | 2K context | 21.2 GB |
| wizardlm:30b-q4_0 | 18GB | q4_0 | 2K context | 22.5 GB |
| wizardlm:30b-q4_K_S | 18GB | q4_k_s | 2K context | 22.5 GB |
| wizardlm:30b-q4_1 | 20GB | q4_1 | 2K context | 25 GB |
| wizardlm:30b-q4_K_M | 20GB | q4_k_m | 2K context | 25 GB |
| wizardlm:30b-q5_0 | 22GB | q5_0 | 2K context | 27.5 GB |
| wizardlm:30b-q5_K_S | 22GB | q5_k_s | 2K context | 27.5 GB |
| wizardlm:30b-q5_K_M | 23GB | q5_k_m | 2K context | 28.8 GB |
| wizardlm:30b-q5_1 | 24GB | q5_1 | 2K context | 30 GB |
| wizardlm:13b-llama2-fp16 | 26GB | fp16 | 4K context | 32.5 GB |
| wizardlm:13b-fp16 | 26GB | fp16 | 2K context | 32.5 GB |
| wizardlm:30b-q6_K | 27GB | q6_k | 2K context | 33.8 GB |
| wizardlm:70b-llama2-q2_K | 29GB | q2_k | 4K context | 36.2 GB |
| wizardlm:70b-llama2-q3_K_S | 30GB | q3_k_s | 4K context | 37.5 GB |
| wizardlm:70b-llama2-q3_K_M | 33GB | q3_k_m | 4K context | 41.2 GB |
| wizardlm:30b-q8_0 | 35GB | q8_0 | 2K context | 43.8 GB |
| wizardlm:70b-llama2-q3_K_L | 36GB | q3_k_l | 4K context | 45 GB |
| wizardlm:70b-llama2-q4_0 | 39GB | q4_0 | 4K context | 48.8 GB |
| wizardlm:70b-llama2-q4_K_S | 39GB | q4_k_s | 4K context | 48.8 GB |
| wizardlm:70b-llama2-q4_K_M | 41GB | q4_k_m | 4K context | 51.2 GB |
| wizardlm:70b-llama2-q4_1 | 43GB | q4_1 | 4K context | 53.8 GB |
| wizardlm:70b-llama2-q5_0 | 47GB | q5_0 | 4K context | 58.8 GB |
| wizardlm:70b-llama2-q5_K_S | 47GB | q5_k_s | 4K context | 58.8 GB |
| wizardlm:70b-llama2-q5_K_M | 49GB | q5_k_m | 4K context | 61.2 GB |
| wizardlm:70b-llama2-q6_K | 57GB | q6_k | 4K context | 71.2 GB |
| wizardlm:30b-fp16 | 65GB | fp16 | 2K context | 81.2 GB |
| wizardlm:70b-llama2-q8_0 | 73GB | q8_0 | 4K context | 91.2 GB |
Strengths

- General knowledge
- Creative text formats
- Code generation
Related models

- gemma3 (General, 32.1M pulls): The current, most capable model that runs on a single GPU.
- llama3 (General, 16.1M pulls): Meta Llama 3: the most capable openly available LLM to date.
- gpt-oss (General, 7.1M pulls): OpenAI's open-weight models designed for powerful reasoning, agentic tasks, and versatile developer use cases.
- dolphin3 (General, 3.6M pulls): Dolphin 3.0 Llama 3.1 8B 🐬 is the next generation of the Dolphin series of instruct-tuned models, designed to be the ultimate general-purpose local model for coding, math, agentic, function-calling, and general use cases.