Language: intermediate · Tools · Thinking

glm-4.7-flash

Alibaba Cloud · Other

As the strongest model in the 30B class, GLM-4.7-Flash offers a new option for lightweight deployment that balances performance and efficiency.

301K pulls · Updated Jan 26, 2026 · 4 tags · 198K context

Quick start

ollama run glm-4.7-flash
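Beyond the CLI, a locally running Ollama server also exposes a REST API on port 11434. A minimal sketch, assuming a default local install with this model pulled (the endpoint and request shape follow Ollama's `/api/generate` API; only the standard library is used):

```python
import json
import urllib.request

# Default endpoint of a locally running Ollama server.
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_generate_request(model: str, prompt: str) -> dict:
    """Build the JSON body for Ollama's /api/generate endpoint."""
    return {
        "model": model,
        "prompt": prompt,
        "stream": False,  # one JSON object instead of a token stream
    }

def generate(model: str, prompt: str) -> str:
    """POST a prompt to the local Ollama server and return the response text."""
    body = json.dumps(build_generate_request(model, prompt)).encode("utf-8")
    req = urllib.request.Request(
        OLLAMA_URL, data=body, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

# With the server running, you would call:
#   print(generate("glm-4.7-flash", "Summarize q4_K_M quantization."))
```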

Available sizes

| Tag                  | Size  | Quantization | Context | Min RAM |
|----------------------|-------|--------------|---------|---------|
| glm-4.7-flash:latest | 19 GB | q4_K_M       | 198K    | 23.8 GB |
| glm-4.7-flash:q4_K_M | 19 GB | q4_K_M       | 198K    | 23.8 GB |
| glm-4.7-flash:q8_0   | 32 GB | q8_0         | 198K    | 40 GB   |
| glm-4.7-flash:bf16   | 60 GB | bf16         | 198K    | 75 GB   |
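In this table, each Min RAM figure is roughly 1.25x the weight file size (19 GB → 23.8 GB, 32 GB → 40 GB, 60 GB → 75 GB), presumably headroom for the KV cache and runtime overhead. A sketch of that rule of thumb; the 1.25 factor is read off the table above, not an official Ollama formula:

```python
def estimate_min_ram_gb(file_size_gb: float, overhead_factor: float = 1.25) -> float:
    """Rough minimum-RAM estimate: weight file size plus ~25% headroom.

    The 1.25 factor is inferred from the sizes table (19 -> 23.8,
    32 -> 40, 60 -> 75); it is a heuristic and ignores how the KV
    cache grows with the context length actually in use.
    """
    return round(file_size_gb * overhead_factor, 1)

for tag, size_gb in [("q4_K_M", 19), ("q8_0", 32), ("bf16", 60)]:
    print(f"{tag}: ~{estimate_min_ram_gb(size_gb)} GB min RAM")
```

Note that filling more of the 198K context window will push real memory use above these minimums.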

Run with

  • Claude Code: ollama launch claude --model glm-4.7-flash
  • Codex: ollama launch codex --model glm-4.7-flash
  • OpenCode: ollama launch opencode --model glm-4.7-flash
  • OpenClaw: ollama launch openclaw --model glm-4.7-flash

Strengths & Limitations

Strengths

  • Strong performance in 30B class
  • Lightweight deployment option
  • Balances performance and efficiency

Benchmarks

| Benchmark          | Score | Unit |
|--------------------|-------|------|
| AIME25             |       |      |
| GPQA               | 75.2  |      |
| SWE-bench Verified | 59.2  |      |
