Ollama Explorer (Beta)

Multimodal · intermediate · Vision · Tools · Thinking · Cloud

kimi-k2.5

Other

Kimi K2.5 is an open-source, natively multimodal agentic model that integrates vision and language understanding with advanced agentic capabilities. It supports both instant and thinking modes, and can be used in conversational as well as agentic workflows.

91K pulls · Updated Jan 26, 2026 · 0 tags

Quick start

ollama run kimi-k2.5
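Beyond the CLI, the model can be called programmatically through Ollama's local REST API (`/api/chat` on the default port 11434). The sketch below only builds the request payload; the `think` flag for toggling thinking mode and the `stream` option are taken from Ollama's chat API, though whether `think` applies to this particular model is an assumption.

```python
import json

# Sketch of a request body for Ollama's local /api/chat endpoint.
# Assumes an Ollama server is running on localhost:11434 with kimi-k2.5
# pulled; "think" (thinking mode) is assumed to be supported by this model.
payload = {
    "model": "kimi-k2.5",
    "messages": [
        {"role": "user", "content": "Summarize the benefits of agentic models."}
    ],
    "think": True,   # request thinking mode instead of instant mode
    "stream": False, # return one complete response, not a token stream
}

body = json.dumps(payload)
print(body)

# To actually send it (requires the running server):
#   curl http://localhost:11434/api/chat -d @- <<< "$body"
```

The same payload shape works for multimodal prompts by adding a base64-encoded `images` list to a message.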

Available sizes

Tag | Size | Quantization | Context | Min RAM

Run with

Claude Code
ollama launch claude --model kimi-k2.5:cloud
Codex
ollama launch codex --model kimi-k2.5:cloud
OpenCode
ollama launch opencode --model kimi-k2.5:cloud
OpenClaw
ollama launch openclaw --model kimi-k2.5:cloud

Strengths & Limitations

Strengths

  • Multimodal understanding
  • Advanced agentic capabilities
  • Seamless vision and language integration

Related models