Quick start
```
ollama run granite3.2-vision
```

Available sizes
| Tag | Size | Quantization | Context | Min RAM |
|---|---|---|---|---|
| granite3.2-vision:latest | 2.4 GB | q4_k_m | 16K | 3 GB |
Run with

Claude Code

```
ollama launch claude --model granite3.2-vision
```

Codex

```
ollama launch codex --model granite3.2-vision
```

OpenCode

```
ollama launch opencode --model granite3.2-vision
```

OpenClaw

```
ollama launch openclaw --model granite3.2-vision
```

Strengths & Limitations
Strengths
- Visual document understanding
- Automated content extraction
- Efficient and compact design
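For document understanding and content extraction, the model is typically sent an image alongside a text prompt. A minimal sketch of how that request can be assembled for Ollama's REST API, whose `/api/generate` endpoint accepts base64-encoded images for multimodal models via the `images` field; the helper name and the stand-in image bytes below are illustrative, not part of the Ollama API:

```python
import base64
import json

def build_vision_request(prompt: str, image_bytes: bytes,
                         model: str = "granite3.2-vision") -> str:
    """Build a JSON body for Ollama's /api/generate endpoint.

    Multimodal models take images as a list of base64-encoded
    strings sent alongside the text prompt.
    """
    payload = {
        "model": model,
        "prompt": prompt,
        "images": [base64.b64encode(image_bytes).decode("ascii")],
        "stream": False,  # return one complete response instead of a stream
    }
    return json.dumps(payload)

# Stand-in bytes in place of a real scanned document (illustrative only).
fake_png = b"\x89PNG\r\n\x1a\n" + b"\x00" * 8
body = build_vision_request("Extract all text from this document.", fake_png)
```

The resulting body can be POSTed to a locally running server, e.g. `curl http://localhost:11434/api/generate -d @body.json`.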
Related models
llama3.1 (Language)
Llama 3.1 is a new state-of-the-art model from Meta available in 8B, 70B and 405B parameter sizes.
110.5M pulls

deepseek-r1 (Reasoning)
DeepSeek-R1 is a family of open reasoning models with performance approaching that of leading models, such as O3 and Gemini 2.5 Pro.
78.6M pulls

llama3.2 (Language)
Meta's Llama 3.2 goes small with 1B and 3B models.
58.0M pulls

gemma3 (General)
The current, most capable model that runs on a single GPU.
32.1M pulls