Multimodal · Intermediate · Vision

minicpm-v


A series of multimodal LLMs (MLLMs) designed for vision-language understanding.

4.6M pulls · Updated Feb 26, 2025 · 17 tags · 32K context

Quick start

ollama run minicpm-v

Available sizes

Tag               Size    Quantization  Context  Min RAM
minicpm-v:latest  5.5 GB  q4_k_m        32K      6.9 GB

Strengths

  • Vision-language understanding
  • Multimodal capabilities (image and text input)
  • 32K-token context window
