Quick start

```shell
ollama run gemma3n
```

Available sizes
| Tag | Size | Quantization | Context | Min RAM |
|---|---|---|---|---|
| gemma3n:e2b | 5.6 GB | q4_k_m | 32K | 7 GB |
| gemma3n:latest | 7.5 GB | q4_k_m | 32K | 9.4 GB |
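Beyond the interactive CLI, a running Ollama instance also exposes a local REST API (default port 11434). A minimal Python sketch of a one-shot generation call, assuming the server is running locally and one of the tags above has already been pulled:

```python
import json
import urllib.request

# Default local Ollama endpoint for single-shot generation
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_generate_payload(model: str, prompt: str) -> dict:
    """Build the JSON body for Ollama's /api/generate endpoint.

    stream=False asks for one complete JSON response instead of
    a stream of partial chunks.
    """
    return {"model": model, "prompt": prompt, "stream": False}

def generate(model: str, prompt: str) -> str:
    """Send a generation request and return the model's response text."""
    payload = json.dumps(build_generate_payload(model, prompt)).encode()
    req = urllib.request.Request(
        OLLAMA_URL,
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

# Example usage (requires a running Ollama server):
# print(generate("gemma3n:e2b", "Explain on-device inference in one sentence."))
```

The same payload works with any tag from the table; swap `gemma3n:e2b` for `gemma3n:latest` to use the larger default variant.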
Strengths & Limitations

Strengths

- Efficient execution with a small memory footprint
- Designed for everyday devices such as laptops and phones
- Well suited to on-device tasks where data stays local
Benchmarks

Reported benchmarks for this model include HellaSwag, PIQA, ARC-c, ARC-e, WinoGrande, MMLU, and MMLU (ProX), all measured as accuracy.
Related models

- gemma3 (General, 32.1M pulls): The current, most capable model that runs on a single GPU.
- llama3 (General, 16.1M pulls): Meta Llama 3: the most capable openly available LLM to date.
- gpt-oss (General, 7.1M pulls): OpenAI's open-weight models designed for powerful reasoning, agentic tasks, and versatile developer use cases.
- dolphin3 (General, 3.6M pulls): Dolphin 3.0 Llama 3.1 8B 🐬 is the next generation of the Dolphin series of instruct-tuned models, designed to be the ultimate general-purpose local model, enabling coding, math, agentic, function-calling, and general use cases.