Discover AI models for every task
by Qwen
Qwen3 VL 235B A22B Instruct is Alibaba's vision-language MoE model with 235B total / 22B active parameters. Combines state-of-the-art text and vision understanding with excellent performance on multimodal reasoning tasks.
Qwen3 235B A22B Instruct is a Mixture-of-Experts model with 235B total parameters and 22B activated, featuring 128 experts with 8 activated per token. Native 262K context, extendable to 1M tokens via Dual Chunk Attention. Achieves SOTA results: 83.0 on MMLU-Pro, 70.3 on AIME25, 41.8 on ARC-AGI, 79.2 on Arena-Hard v2, 51.8 on LiveCodeBench, and 70.9 on BFCL-v3. Non-thinking mode focused on direct task execution with enhanced instruction following, logical reasoning, and long-tail knowledge across multiple languages. Because only 22B parameters are active per token, inference is dramatically more efficient than for a dense model of the same total size.
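The sparse-activation scheme described above (128 experts, 8 active per token) can be sketched as a simple top-k router. This is an illustrative simplification, not Qwen's actual routing implementation:

```python
import math
import random

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def route_token(router_logits, k=8):
    """Pick the top-k experts for one token and renormalize their weights.

    router_logits: one score per expert (128 of them for this model).
    Returns (expert_index, weight) pairs whose weights sum to 1; only
    these k experts' FFNs run for the token, which is where the
    22B-active / 235B-total efficiency comes from.
    """
    topk = sorted(range(len(router_logits)),
                  key=lambda i: router_logits[i], reverse=True)[:k]
    weights = softmax([router_logits[i] for i in topk])
    return list(zip(topk, weights))

# One token's router scores over 128 experts (toy random values).
random.seed(0)
logits = [random.gauss(0, 1) for _ in range(128)]
chosen = route_token(logits, k=8)
print(len(chosen))                                   # 8
print(abs(sum(w for _, w in chosen) - 1.0) < 1e-9)   # True
```

In a real MoE layer the selected experts' outputs are combined with these weights; auxiliary load-balancing losses (omitted here) keep expert usage even.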
Qwen3 235B A22B Thinking is the reasoning-enhanced MoE variant with 235B total / 22B activated parameters and 128 experts. Features explicit thinking mode for complex problem-solving with native 262K context extending to 1M tokens. Excels at deep reasoning tasks requiring multi-step deliberation including advanced mathematics, logical inference, and complex coding challenges. Built on same architecture as Instruct version but optimized for reasoning-heavy workloads with tool integration and agentic capabilities.
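Thinking-mode models like this conventionally emit their deliberation inside `<think>...</think>` tags before the final answer. A minimal parser for separating the two, assuming that single-leading-block convention, might look like:

```python
import re

def split_thinking(text):
    """Separate <think>...</think> deliberation from the final answer.

    Assumes the model emits at most one think block at the start of its
    reply, which is the convention Qwen3 thinking variants follow.
    """
    match = re.match(r"\s*<think>(.*?)</think>\s*", text, flags=re.DOTALL)
    if match:
        return match.group(1).strip(), text[match.end():].strip()
    return "", text.strip()

reply = "<think>2+2: add the units digits.</think>The answer is 4."
thoughts, answer = split_thinking(reply)
print(answer)  # The answer is 4.
```

Separating the two lets an application log or hide the reasoning trace while showing users only the answer.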
High-end multimodal model delivering strong vision-language reasoning with long-context support.
Qwen3 30B A3B Instruct is a compact Mixture-of-Experts model with 30B total parameters and 3B activated per token, offering excellent efficiency for general-purpose tasks. Features 262K native context with extension to 1M tokens, strong multilingual capabilities, and enhanced instruction following. Balances performance and computational efficiency with support for tool calling, code generation, and logical reasoning. Ideal for deployment scenarios requiring lower resource usage while maintaining quality across diverse task types.
Qwen3 Coder 480B A35B Instruct is a specialized Mixture-of-Experts coding model with 480B total parameters and 35B activated. Optimized specifically for code generation, code understanding, debugging, and software engineering tasks. Features 262K native context for handling large codebases, strong performance on coding benchmarks including LiveCodeBench and HumanEval, and support for multiple programming languages. Excels at complex algorithmic problems, code refactoring, and technical documentation generation.
Qwen2.5 72B Instruct is Alibaba's instruction-tuned large language model with 72B parameters. Excels at following complex instructions, coding, mathematical reasoning, and multilingual tasks. Features 128K context window.
Qwen3 Coder 30B A3B Instruct is an efficient Mixture-of-Experts coding model with 30B total parameters and 3B activated per token. Specialized for code generation, debugging, and software engineering with excellent computational efficiency. Features 262K native context for processing large codebases, strong multi-language programming support, and optimized for practical coding tasks. Balances coding performance with lower resource requirements, ideal for development environments and real-time code assistance.
by Mistral
Devstral 2 123B is Mistral AI's flagship agentic coding model, featuring 123B parameters optimized for software engineering tasks. Achieves 72.2% on SWE-bench Verified and 61.3% on SWE-bench Multilingual. Excels at codebase exploration, multi-file editing, and agentic workflows with tool use. Supports 200K context window with enhanced function calling and structured output. Designed for IDE integration via Mistral Vibe CLI. Released under modified MIT license for unrestricted commercial use.
Mistral Small 3.2 24B Instruct is a multimodal instruction-tuned model supporting both vision and text with 24B parameters and 128K context. Major improvements over 3.1 include better instruction following (84.78%), 2x reduction in repetition errors, and robust function calling. Achieves 65.33% on Wildbench v2, 43.1% on Arena Hard v2, 92.90% on HumanEval Pass@5. Vision benchmarks: 87.4% ChartQA, 94.86% DocVQA, 62.50% MMMU. Supports up to 10 images per prompt with integrated vision-based function calling.
by Meta
Meta's flagship 405B parameter model representing the pinnacle of open-source AI. Exceptional reasoning and comprehensive knowledge for demanding applications.
Meta Llama 3.3 70B Instruct is a multilingual instruction-tuned model optimized for dialogue. Trained on ~15 trillion tokens with a knowledge cutoff of December 2023, it outperforms many open-source and closed models. Major improvements include 92.1% on IFEval (steerability), 88.4% on HumanEval (code), 77.0% on MATH, and 91.1% on MGSM (multilingual). Features 128K context and Grouped-Query Attention, and supports 8 languages: English, German, French, Spanish, Italian, Portuguese, Hindi, and Thai. Trained using ~7M GPU-hours with 100% renewable energy.
Meta Llama 3.1 8B Instruct is an efficient multilingual instruction-tuned model optimized for dialogue and assistant use cases. With 8 billion parameters and 128K context length, it provides strong performance across general tasks, code generation, and multilingual understanding. Supports function calling and tool use with Grouped-Query Attention architecture. Ideal for deployment scenarios requiring lower compute resources while maintaining quality across English and 7 additional languages including German, French, Spanish, and Hindi.
by intfloat
intfloat E5-Mistral-7B-Instruct is a state-of-the-art instruction-following embedding model built on Mistral 7B architecture. Combines strong language understanding from Mistral with specialized embedding training for retrieval tasks. Features instruction-based embedding generation allowing natural language queries to guide semantic search. Excels at complex retrieval scenarios, multi-hop reasoning in document search, and instruction-guided similarity tasks. Provides significantly improved zero-shot retrieval performance compared to traditional embedding models.
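E5-style instruct embedding models guide retrieval by prefixing the query with a natural-language task description, while documents are embedded without any prefix. A sketch of that query-side template (check the model card for the exact format, which is model-dependent):

```python
def build_e5_query(task: str, query: str) -> str:
    """Prefix a query with a natural-language task instruction.

    E5-style instruct models encode queries with an
    "Instruct: ...\\nQuery: ..." template; documents get no prefix, so
    the instruction shapes only the query embedding.
    """
    return f"Instruct: {task}\nQuery: {query}"

text = build_e5_query(
    "Given a web search query, retrieve relevant passages that answer the query",
    "how do MoE models route tokens",
)
print(text.splitlines()[0].startswith("Instruct:"))  # True
```

The formatted string is what gets tokenized and embedded; changing the task description (e.g. "retrieve duplicate questions" vs. "retrieve supporting evidence") steers the same query toward different neighborhoods in embedding space.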
Mistral Small 4 is a 119B-parameter Mixture-of-Experts model (128 experts, 4 active per token, 6.5B active parameters) that unifies instruct, reasoning, and coding capabilities into a single multimodal model. It accepts text and image inputs, supports function calling, structured outputs, and configurable reasoning effort (none for fast responses, high for deep step-by-step reasoning). With a 256K context window and Apache 2.0 license, it delivers 40% lower latency and 3x higher throughput compared to Mistral Small 3.
by Moonshot
Moonshot Kimi K2 Instruct is a 1 trillion parameter Mixture-of-Experts model with 32B activated parameters, featuring 384 experts and 128K context length. Pre-trained on 15.5T tokens with the Muon optimizer at unprecedented scale, with zero training instability reported. Achieves SOTA on LiveCodeBench (53.7%), SWE-bench Verified (71.6%), AIME 2024 (69.6%), and MATH-500 (97.4%). Specifically designed for agentic intelligence with exceptional tool calling, code generation, and mathematical reasoning capabilities.
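With tool calling, the model emits a structured function call (a name plus JSON-encoded arguments) that the client executes and feeds back. A minimal dispatch step in the common OpenAI-compatible shape; the tool names and signatures here are hypothetical:

```python
import json

# Registry of callable tools; names and signatures are illustrative only.
TOOLS = {
    "get_weather": lambda city: f"Sunny in {city}",
    "add": lambda a, b: a + b,
}

def dispatch(tool_call):
    """Execute one model-emitted tool call.

    tool_call follows the common OpenAI-style shape:
    {"name": "...", "arguments": "<JSON-encoded kwargs>"}.
    The return value would be sent back to the model as a tool message.
    """
    fn = TOOLS[tool_call["name"]]
    kwargs = json.loads(tool_call["arguments"])
    return fn(**kwargs)

result = dispatch({"name": "add", "arguments": '{"a": 2, "b": 3}'})
print(result)  # 5
```

An agentic loop repeats this until the model stops requesting tools and produces a final answer; production code would also validate arguments against each tool's JSON schema before executing.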
intfloat Multilingual E5 Large Instruct is an instruction-tuned multilingual embedding model combining strong cross-lingual capabilities with instruction-following for guided retrieval. Supports 100+ languages with natural language instructions to customize embedding behavior. Features enhanced zero-shot retrieval performance through instruction-based query understanding. Ideal for complex multilingual search scenarios, domain-specific retrieval tasks, and applications requiring adaptive semantic understanding across languages.
Specialized 13B coding model with advanced code infilling capabilities. Excels in code generation, completion, debugging across multiple programming languages.
Qwen3.5-122B-A10B is Alibaba Cloud's native multimodal agent model with 122B total parameters (10B activated). Features 240K context, vision capabilities, hybrid reasoning with extended thinking, function calling, and support for 201 languages. Apache 2.0 licensed.
Qwen3 32B is a base foundation model with 32 billion parameters and 262K native context, designed for fine-tuning and custom adaptations. Pre-trained on diverse multilingual data covering 77.5% of languages, providing strong general capabilities across text understanding, code, mathematics, and reasoning. Serves as the foundation for specialized models and custom fine-tuning projects requiring a powerful mid-sized base. Ideal starting point for domain-specific adaptations and research applications.
Qwen 3.5 397B A17B is a 397B-parameter mixture-of-experts vision-language foundation model with a gated delta network architecture and a vision encoder. It supports a native context window of 262,144 tokens (extendable to over 1 million) and operates in a default thinking mode that can be disabled. The model achieves strong results such as 87.8% on MMLU-Pro, 85.0% on MMMU, and 88.6% on MathVision benchmarks. It is released under the Apache 2.0 license.
Qwen3 30B A3B Thinking is the reasoning-focused MoE variant with 30B total / 3B activated parameters. Features explicit thinking mode for complex problem-solving with 262K native context extending to 1M tokens. Optimized for mathematical reasoning, logical inference, and multi-step problem decomposition while maintaining computational efficiency. Provides strong reasoning capabilities at a fraction of the compute cost of larger thinking models, ideal for resource-conscious deployments requiring deep reasoning.
Qwen's "thinking-optimized" 80B model designed for sustained multi-step reasoning, structured deliberation, and high-precision problem-solving across math, code, and complex planning tasks.
Qwen3 Embedding 8B is a dense retrieval embedding model with 8 billion parameters, optimized for semantic search, text similarity, and feature extraction. Trained on diverse multilingual data providing strong cross-lingual retrieval capabilities. Supports 262K context for embedding long documents and extensive text passages. Excels at document retrieval, semantic search, clustering, and recommendation systems. Compatible with standard embedding frameworks and optimized for production deployment with efficient inference.
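The retrieval tasks listed above all reduce to ranking documents by vector similarity to a query embedding. A minimal semantic-search sketch over toy vectors (a real model such as this one produces vectors with thousands of dimensions):

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def search(query_vec, doc_vecs, top_k=2):
    """Rank document embeddings by cosine similarity to the query.

    Returns (doc_index, score) pairs, best match first. Production
    systems swap this linear scan for an approximate nearest-neighbor
    index, but the scoring is the same.
    """
    scored = [(i, cosine(query_vec, v)) for i, v in enumerate(doc_vecs)]
    return sorted(scored, key=lambda x: x[1], reverse=True)[:top_k]

# Toy 3-dimensional "embeddings" for three documents.
docs = [[1.0, 0.0, 0.0], [0.9, 0.1, 0.0], [0.0, 1.0, 0.0]]
query = [1.0, 0.05, 0.0]
print(search(query, docs))
```

Clustering and recommendation use the same similarity scores, just grouped or thresholded rather than top-k ranked.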