## Quick start

```shell
ollama run gemma3
```

## Available sizes
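Besides the CLI, a locally running Ollama server also exposes a REST API on port 11434. Below is a minimal sketch of calling the `/api/generate` endpoint from Python with only the standard library; it assumes Ollama is running locally on the default port, and the `build_generate_request` / `generate` helper names are this sketch's own, not part of any official client.

```python
import json
import urllib.request

# Default local Ollama endpoint (assumption: server running on the standard port).
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_generate_request(model: str, prompt: str, stream: bool = False) -> dict:
    """Build the JSON payload for Ollama's /api/generate endpoint."""
    return {"model": model, "prompt": prompt, "stream": stream}

def generate(model: str, prompt: str) -> str:
    """Send a non-streaming generate request and return the response text."""
    payload = json.dumps(build_generate_request(model, prompt)).encode("utf-8")
    req = urllib.request.Request(
        OLLAMA_URL,
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        # With stream=False, the server returns one JSON object whose
        # "response" field holds the full completion text.
        return json.loads(resp.read())["response"]
```

With a server running, `generate("gemma3", "Why is the sky blue?")` returns the model's answer as a single string.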
| Tag | Size | Quantization | Context | Min RAM |
|---|---|---|---|---|
| gemma3:latest | 3.3 GB | q4_k_m | 128K | 4.1 GB |
| gemma3:12b | 8.1 GB | q4_k_m | 128K | 10.1 GB |
| gemma3:27b | 17 GB | q4_k_m | 128K | 21.2 GB |
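The Min RAM column above maps directly to a simple selection rule: pick the largest tag whose minimum RAM fits the machine. The sketch below encodes that rule; the `pick_tag` helper and the `GEMMA3_MIN_RAM_GB` mapping are hypothetical names built from the table, not part of Ollama.

```python
from typing import Optional

# Minimum RAM per tag, taken from the "Min RAM" column of the table above.
GEMMA3_MIN_RAM_GB = {
    "gemma3:latest": 4.1,
    "gemma3:12b": 10.1,
    "gemma3:27b": 21.2,
}

def pick_tag(available_ram_gb: float) -> Optional[str]:
    """Return the most capable gemma3 tag that fits in RAM, or None if none fit."""
    fitting = [
        (min_ram, tag)
        for tag, min_ram in GEMMA3_MIN_RAM_GB.items()
        if min_ram <= available_ram_gb
    ]
    if not fitting:
        return None
    # Larger minimum RAM corresponds to the larger, more capable model.
    return max(fitting)[1]
```

For example, a 16 GB machine gets `gemma3:12b`, while anything under 4.1 GB gets `None`.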
## Strengths & Limitations

### Strengths
- Runs on a single GPU
- Most capable model in its class
- Efficient performance
## Related models

- **llama3** (General, 16.1M pulls): Meta Llama 3, the most capable openly available LLM to date.
- **gpt-oss** (General, 7.1M pulls): OpenAI's open-weight models designed for powerful reasoning, agentic tasks, and versatile developer use cases.
- **dolphin3** (General, 3.6M pulls): Dolphin 3.0 Llama 3.1 8B 🐬 is the next generation of the Dolphin series of instruct-tuned models, designed to be the ultimate general-purpose local model for coding, math, agentic, function-calling, and general use cases.
- **orca-mini** (General, 2.0M pulls): A general-purpose model ranging from 3 billion to 70 billion parameters, suitable for entry-level hardware.