Quick start
```shell
ollama run granite3-guardian
```

Available sizes
| Tag | Size | Quantization | Context | Min RAM |
|---|---|---|---|---|
| granite3-guardian:latest | 2.7 GB | q4_k_m | 8K | 3.4 GB |
| granite3-guardian:8b | 5.8 GB | q4_k_m | 8K | 7.2 GB |
Strengths & Limitations
Strengths
- Risk detection in prompts and responses
- Prompt analysis
- Response filtering
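The strengths above can be exercised from code by sending text to a locally running Ollama server. Below is a minimal sketch using Ollama's `/api/generate` HTTP endpoint; the endpoint URL, port, and payload fields (`model`, `prompt`, `stream`) follow Ollama's standard REST API, while the assumption that the guardian model replies with a short Yes/No-style verdict is based on how Granite Guardian models are typically used and may differ per tag.

```python
import json
import urllib.request

# Default local Ollama endpoint; adjust if your server runs elsewhere.
OLLAMA_URL = "http://localhost:11434/api/generate"


def build_guard_request(prompt: str, model: str = "granite3-guardian") -> dict:
    """Build a non-streaming /api/generate payload asking the guardian
    model to classify the given text."""
    return {
        "model": model,
        "prompt": prompt,
        "stream": False,  # return one JSON object instead of a stream
    }


def check_prompt(prompt: str) -> str:
    """Send the prompt to a local Ollama server and return the model's
    raw verdict text (assumed to be a short risk label such as Yes/No).

    Requires a running `ollama serve` with the model already pulled.
    """
    payload = json.dumps(build_guard_request(prompt)).encode("utf-8")
    req = urllib.request.Request(
        OLLAMA_URL,
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["response"].strip()
```

Calling `check_prompt("some user input")` only works against a live local server with the model pulled; `build_guard_request` can be used on its own to inspect or log the exact payload being sent.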
Related models
- gemma3 (General, 32.1M pulls): The current, most capable model that runs on a single GPU.
- llama3 (General, 16.1M pulls): Meta Llama 3: the most capable openly available LLM to date.
- gpt-oss (General, 7.1M pulls): OpenAI's open-weight models designed for powerful reasoning, agentic tasks, and versatile developer use cases.
- dolphin3 (General, 3.6M pulls): Dolphin 3.0 Llama 3.1 8B 🐬 is the next generation of the Dolphin series of instruct-tuned models, designed to be the ultimate general-purpose local model for coding, math, agentic, function-calling, and general use cases.