Quick start

```shell
ollama run openhermes
```

Available sizes
| Tag | Size | Quantization | Context | Min RAM |
|---|---|---|---|---|
| openhermes:7b-mistral-v2-q2_K | 3.1 GB | q2_K | 32K | 3.9 GB |
| openhermes:7b-mistral-v2.5-q2_K | 3.1 GB | q2_K | 32K | 3.9 GB |
| openhermes:7b-mistral-v2-q3_K_S | 3.2 GB | q3_K_S | 32K | 4.0 GB |
| openhermes:7b-mistral-v2.5-q3_K_S | 3.2 GB | q3_K_S | 32K | 4.0 GB |
| openhermes:7b-mistral-v2-q3_K_M | 3.5 GB | q3_K_M | 32K | 4.4 GB |
| openhermes:7b-mistral-v2.5-q3_K_M | 3.5 GB | q3_K_M | 32K | 4.4 GB |
| openhermes:7b-mistral-v2-q3_K_L | 3.8 GB | q3_K_L | 32K | 4.8 GB |
| openhermes:7b-mistral-v2.5-q3_K_L | 3.8 GB | q3_K_L | 32K | 4.8 GB |
| openhermes:latest | 4.1 GB | q4_K_M | 32K | 5.1 GB |
| openhermes:v2 | 4.1 GB | q4_K_M | 32K | 5.1 GB |
| openhermes:7b-mistral-v2-q4_K_M | 4.4 GB | q4_K_M | 32K | 5.5 GB |
| openhermes:7b-mistral-v2.5-q4_K_M | 4.4 GB | q4_K_M | 32K | 5.5 GB |
| openhermes:7b-mistral-v2-q4_1 | 4.6 GB | q4_1 | 32K | 5.8 GB |
| openhermes:7b-mistral-v2.5-q4_1 | 4.6 GB | q4_1 | 32K | 5.8 GB |
| openhermes:7b-mistral-v2-q5_0 | 5.0 GB | q5_0 | 32K | 6.2 GB |
| openhermes:7b-mistral-v2-q5_K_S | 5.0 GB | q5_K_S | 32K | 6.2 GB |
| openhermes:7b-mistral-v2-q5_K_M | 5.1 GB | q5_K_M | 32K | 6.4 GB |
| openhermes:7b-mistral-v2.5-q5_K_M | 5.1 GB | q5_K_M | 32K | 6.4 GB |
| openhermes:7b-mistral-v2-q5_1 | 5.4 GB | q5_1 | 32K | 6.8 GB |
| openhermes:7b-mistral-v2.5-q5_1 | 5.4 GB | q5_1 | 32K | 6.8 GB |
| openhermes:7b-mistral-v2-q6_K | 5.9 GB | q6_K | 32K | 7.4 GB |
| openhermes:7b-mistral-v2.5-q6_K | 5.9 GB | q6_K | 32K | 7.4 GB |
| openhermes:7b-mistral-v2-q8_0 | 7.7 GB | q8_0 | 32K | 9.6 GB |
| openhermes:7b-mistral-v2.5-q8_0 | 7.7 GB | q8_0 | 32K | 9.6 GB |
| openhermes:7b-mistral-v2-fp16 | 14 GB | fp16 | 32K | 17.5 GB |
| openhermes:7b-mistral-v2.5-fp16 | 14 GB | fp16 | 32K | 17.5 GB |
Strengths & Limitations

Strengths

- Fine-tuned from Mistral 7B.
- Trained entirely on openly available datasets.
- Compact 7B parameter size with modest hardware requirements (see the table above).
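Beyond the `ollama run` CLI, Ollama serves a local REST API on port 11434; `POST /api/generate` takes a model tag and a prompt. A sketch that builds the request without sending it, so it runs even with no Ollama server up (the prompt is illustrative):

```python
import json
import urllib.request

# Request for Ollama's local REST API endpoint POST /api/generate.
# "stream": False asks for a single JSON reply instead of streamed chunks.
payload = {
    "model": "openhermes",
    "prompt": "Why is the sky blue?",
    "stream": False,
}
req = urllib.request.Request(
    "http://localhost:11434/api/generate",
    data=json.dumps(payload).encode(),
    headers={"Content-Type": "application/json"},
)
print(req.get_method())  # urllib infers POST when data is set
```

With a server running, `urllib.request.urlopen(req)` would return a JSON body whose `response` field holds the generated text.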
Related models

- llama3.1: Llama 3.1 is a state-of-the-art model from Meta, available in 8B, 70B and 405B parameter sizes. (110.5M pulls)
- llama3.2: Meta's Llama 3.2 goes small with 1B and 3B models. (58.0M pulls)
- mistral: The 7B model released by Mistral AI, updated to version 0.3. (25.6M pulls)
- qwen2.5: Qwen2.5 models are pretrained on Alibaba's latest large-scale dataset, encompassing up to 18 trillion tokens. The models support up to 128K tokens of context and are multilingual. (22.0M pulls)