Quick start

```shell
ollama run qwen3.5
```

Available sizes
| Tag | Size | Quantization | Context | Min RAM |
|---|---|---|---|---|
| qwen3.5:35b | 24 GB | q4_k_m | 256K | 30 GB |
| qwen3.5:122b | 81 GB | q4_k_m | 256K | 101.2 GB |
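Beyond the interactive CLI, a running Ollama server also exposes an HTTP API on port 11434. A minimal sketch of a programmatic call, assuming the default `/api/chat` endpoint and that `qwen3.5:35b` has already been pulled:

```python
import json
import urllib.request

# Default address of a local Ollama server (started by `ollama serve`
# or implicitly by `ollama run`).
OLLAMA_URL = "http://localhost:11434/api/chat"


def build_chat_request(model: str, prompt: str) -> bytes:
    """Build the JSON body for a single-turn, non-streaming chat request."""
    body = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "stream": False,  # one JSON object instead of streamed NDJSON chunks
    }
    return json.dumps(body).encode("utf-8")


def chat(model: str, prompt: str) -> str:
    """Send the request and return the assistant's reply text."""
    req = urllib.request.Request(
        OLLAMA_URL,
        data=build_chat_request(model, prompt),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["message"]["content"]


# Requires a running server, so not executed here:
# print(chat("qwen3.5:35b", "Summarize the SWE-bench benchmark in one line."))
```

The same request body also accepts an `options` object (e.g. `num_ctx`) if you need more than the default context window.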
Run with

Claude Code

```shell
ollama launch claude --model qwen3.5:35b
```

Codex

```shell
ollama launch codex --model qwen3.5:35b
```

OpenCode

```shell
ollama launch opencode --model qwen3.5:35b
```

OpenClaw

```shell
ollama launch openclaw --model qwen3.5:35b
```

Strengths & Limitations
Strengths
- Strong agentic coding: 80% on SWE-bench Verified and 72% on SWE-bench Multilingual
- High scores on knowledge and reasoning benchmarks (87.4% MMLU-Pro, 92.4% GPQA)
- Open-source, available in two sizes (35B and 122B)
Benchmarks
| Benchmark | Score |
|---|---|
| MMLU-Pro | 87.4% |
| MMLU-Redux | 95% |
| GPQA | 92.4% |
| MMLU-ProX | 83.7% |
| PIQA | 90.9% |
| SWE-bench Verified | 80% |
| SWE-bench Multilingual | 72% |
Related models
llava (Multimodal, 12.9M pulls)
🌋 LLaVA is a novel end-to-end trained large multimodal model that combines a vision encoder and Vicuna for general-purpose visual and language understanding. Updated to version 1.6.

minicpm-v (Multimodal, 4.6M pulls)
A series of multimodal LLMs (MLLMs) designed for vision-language understanding.

llava-llama3 (Multimodal, 2.1M pulls)
A LLaVA model fine-tuned from Llama 3 Instruct with better scores in several benchmarks.

qwen3-vl (Multimodal, 1.6M pulls)
The most powerful vision-language model in the Qwen model family to date.