
Qwen3 32B

by Qwen

Specifications

Input
Output
Context window: 33K tokens
Released: Apr 2025

Performance

Speed: 23 t/s
TTFT: 241 ms
Latency
Intelligence

Pricing

Input: €0.09 per 1M tokens
Output: €0.25 per 1M tokens
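At the listed rates, per-request cost is straightforward arithmetic; a minimal sketch, assuming the card's EUR prices (the token counts in the example are hypothetical):

```python
# Per-token rates from the pricing table above (EUR per 1M tokens).
INPUT_PER_M = 0.09   # €0.09 per 1M input tokens
OUTPUT_PER_M = 0.25  # €0.25 per 1M output tokens

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the cost in EUR for a single request at the listed rates."""
    return (input_tokens * INPUT_PER_M + output_tokens * OUTPUT_PER_M) / 1_000_000

# Hypothetical request: 8,000 prompt tokens, 1,000 completion tokens.
cost = request_cost(8_000, 1_000)
print(f"€{cost:.6f}")  # → €0.000970
```

Output tokens dominate the bill at these rates, since they cost roughly 2.8x as much per token as input.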

About this model

Qwen3 32B is a base foundation model with 32 billion parameters and a 262K native context, designed for fine-tuning and custom adaptation. It was pre-trained on diverse multilingual data covering 77.5% of languages, and provides strong general capabilities across text understanding, code, mathematics, and reasoning. It serves as the foundation for specialized models and custom fine-tuning projects that need a powerful mid-sized base, making it an ideal starting point for domain-specific adaptations and research applications.

Technical specifications

Capabilities
Input modalities
Output modalities
Reasoning: Hybrid (default off)

Knowledge horizon

Knowledge cutoff: Mar 2024
Released: Apr 2025
Training to release: 13 mo
Since release: 13 mo
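The "training to release" figure is the month gap between the knowledge cutoff and the release date; a minimal sketch of that arithmetic (day-of-month is assumed, since the card only gives months):

```python
from datetime import date

cutoff = date(2024, 3, 1)    # knowledge cutoff: Mar 2024
released = date(2025, 4, 1)  # released: Apr 2025

def months_between(a: date, b: date) -> int:
    """Whole-month difference between two dates, ignoring the day."""
    return (b.year - a.year) * 12 + (b.month - a.month)

print(months_between(cutoff, released))  # → 13
```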
