
openhermes

Teknium / OpenHermes

OpenHermes 2.5 is a 7B model fine-tuned by Teknium on Mistral using fully open datasets.

462K pulls · Updated Feb 26, 2024 · 35 tags · 32K context

Quick start

ollama run openhermes
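Beyond the CLI, a locally running Ollama server exposes an HTTP API on port 11434 by default. The sketch below shows a minimal, non-streaming request to the `/api/generate` endpoint using only the Python standard library; it assumes `ollama serve` is running and the `openhermes` model has already been pulled.

```python
import json
import urllib.request

# Default local Ollama endpoint (assumes a server started with `ollama serve`).
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_generate_request(model: str, prompt: str) -> dict:
    """Build the JSON body for Ollama's /api/generate endpoint."""
    return {"model": model, "prompt": prompt, "stream": False}

def generate(model: str, prompt: str) -> str:
    """Send one non-streaming completion request to a local Ollama server."""
    body = json.dumps(build_generate_request(model, prompt)).encode("utf-8")
    req = urllib.request.Request(
        OLLAMA_URL, data=body, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

if __name__ == "__main__":
    print(generate("openhermes", "Why is the sky blue?"))
```

With `stream` set to `True` (the API default), the server instead returns one JSON object per generated chunk, which suits interactive use.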

Available sizes

| Tag | Size | Quantization | Context | Min RAM |
| --- | --- | --- | --- | --- |
| openhermes:7b-mistral-v2-q2_K | 3.1 GB | q2_K | 32K | 3.9 GB |
| openhermes:7b-mistral-v2.5-q2_K | 3.1 GB | q2_K | 32K | 3.9 GB |
| openhermes:7b-mistral-v2-q3_K_S | 3.2 GB | q3_K_S | 32K | 4 GB |
| openhermes:7b-mistral-v2.5-q3_K_S | 3.2 GB | q3_K_S | 32K | 4 GB |
| openhermes:7b-mistral-v2-q3_K_M | 3.5 GB | q3_K_M | 32K | 4.4 GB |
| openhermes:7b-mistral-v2.5-q3_K_M | 3.5 GB | q3_K_M | 32K | 4.4 GB |
| openhermes:7b-mistral-v2-q3_K_L | 3.8 GB | q3_K_L | 32K | 4.8 GB |
| openhermes:7b-mistral-v2.5-q3_K_L | 3.8 GB | q3_K_L | 32K | 4.8 GB |
| openhermes:latest | 4.1 GB | q4_K_M | 32K | 5.1 GB |
| openhermes:v2 | 4.1 GB | q4_K_M | 32K | 5.1 GB |
| openhermes:7b-mistral-v2-q4_K_M | 4.4 GB | q4_K_M | 32K | 5.5 GB |
| openhermes:7b-mistral-v2.5-q4_K_M | 4.4 GB | q4_K_M | 32K | 5.5 GB |
| openhermes:7b-mistral-v2-q4_1 | 4.6 GB | q4_1 | 32K | 5.8 GB |
| openhermes:7b-mistral-v2.5-q4_1 | 4.6 GB | q4_1 | 32K | 5.8 GB |
| openhermes:7b-mistral-v2-q5_0 | 5.0 GB | q5_0 | 32K | 6.2 GB |
| openhermes:7b-mistral-v2-q5_K_S | 5.0 GB | q5_K_S | 32K | 6.2 GB |
| openhermes:7b-mistral-v2-q5_K_M | 5.1 GB | q5_K_M | 32K | 6.4 GB |
| openhermes:7b-mistral-v2.5-q5_K_M | 5.1 GB | q5_K_M | 32K | 6.4 GB |
| openhermes:7b-mistral-v2-q5_1 | 5.4 GB | q5_1 | 32K | 6.8 GB |
| openhermes:7b-mistral-v2.5-q5_1 | 5.4 GB | q5_1 | 32K | 6.8 GB |
| openhermes:7b-mistral-v2-q6_K | 5.9 GB | q6_K | 32K | 7.4 GB |
| openhermes:7b-mistral-v2.5-q6_K | 5.9 GB | q6_K | 32K | 7.4 GB |
| openhermes:7b-mistral-v2-q8_0 | 7.7 GB | q8_0 | 32K | 9.6 GB |
| openhermes:7b-mistral-v2.5-q8_0 | 7.7 GB | q8_0 | 32K | 9.6 GB |
| openhermes:7b-mistral-v2-fp16 | 14 GB | fp16 | 32K | 17.5 GB |
| openhermes:7b-mistral-v2.5-fp16 | 14 GB | fp16 | 32K | 17.5 GB |

Strengths & Limitations

Strengths

  • Fine-tuned from the Mistral 7B base model.
  • Trained entirely on fully open datasets.
  • Compact 7B parameter size; the smaller quantizations run in under 4 GB of RAM.

Related models