Discover AI models for every task
by MiniMax
MiniMax M2.5 is a state-of-the-art reasoning MoE model with 229B total / 10B active parameters. Extensively trained with reinforcement learning across 200,000+ real-world environments, achieving SOTA performance in coding (80.2% SWE-Bench Verified), agentic tool use, search, and office productivity tasks. Features 197K context window, efficient MoE inference, and strong multilingual support.
MiniMax M2.7 is built on the MiniMax M2 architecture with an undisclosed parameter count, optimized for advanced reasoning and agentic workflows. It excels at complex tool use, self‑evolution, and professional software engineering tasks, achieving a 66.6% medal rate on MLE Bench Lite and a 56.2% score on SWE Bench Pro. The model also attains an ELO of 1495 on GDPval‑AA, surpassing other open‑weight models. It is available under an "Other" (non-standard) license.
MiniMax M2 is a powerful MoE model with 200B total / 10B active parameters. Optimized for reasoning and coding tasks with excellent performance in multilingual scenarios. Features 128K context window and efficient inference.
MiniMax M2.1 is a state-of-the-art MoE model with 230B total / 10B active parameters, optimized for agentic coding and complex multi-step workflows. Excels at multilingual programming, tool use, and long-horizon planning. Matches Claude Sonnet 4.5 on code benchmarks and exceeds it in multilingual scenarios. Features 196K context window with FP8 efficiency. Released under Modified-MIT license for commercial use.
by ZAI
ZAI GLM 5.1 is a 744B parameter Mixture-of-Experts language model built with the GLM‑MoE DSA architecture. It excels at agentic engineering, achieving state-of-the-art performance on benchmarks such as HLE with tools (52.3), SWE‑Bench Pro (58.4) and AIME 2026 (95.3). The model supports extensive tool use and long‑horizon reasoning, with a large context window of up to 128K tokens. It is released under the MIT license.
by Qwen
Qwen 3.5 9B is a 9B‑parameter multimodal large language model with a gated‑delta mixture‑of‑experts architecture and a vision encoder. It supports a native context window of 262,144 tokens and operates in a default thinking mode that can be disabled. The model achieves strong results such as 82.5% on MMLU‑Pro, 88.2% on C‑Eval, and 78.4% on MMMU benchmarks. It is released under the Apache 2.0 license.
Qwen 3.5 397B A17B is a 397B-parameter mixture-of-experts vision-language foundation model with a gated delta network architecture and a vision encoder. It supports a native context window of 262,144 tokens (extendable to over 1 million) and operates in a default thinking mode that can be disabled. The model achieves strong results such as 87.8% on MMLU‑Pro, 85.0% on MMMU, and 88.6% on MathVision benchmarks. It is released under the Apache 2.0 license.
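Both Qwen 3.5 entries above describe a default thinking mode that can be disabled. A minimal sketch of the text-only path with Hugging Face Transformers, assuming the Qwen3-style `enable_thinking` chat-template flag carries over to this release; the repository id is a placeholder, not a confirmed hub name:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder repository id; the actual hub name may differ.
model_id = "Qwen/Qwen3.5-9B"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

messages = [{"role": "user", "content": "Summarize the MoE architecture in two sentences."}]

# Qwen3-style chat templates accept an enable_thinking switch;
# assumed here to apply to Qwen 3.5 as well.
prompt = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
    enable_thinking=False,  # skip the default thinking mode for a direct answer
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(output[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```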
High-end multimodal model delivering strong vision-language reasoning with long-context support.
by Mistral
Mistral Small 3.2 24B Instruct is a multimodal instruction-tuned model supporting both vision and text with 24B parameters and 128K context. Major improvements over 3.1 include better instruction following (84.78%), 2x reduction in repetition errors, and robust function calling. Achieves 65.33% on Wildbench v2, 43.1% on Arena Hard v2, 92.90% on HumanEval Pass@5. Vision benchmarks: 87.4% ChartQA, 94.86% DocVQA, 62.50% MMMU. Supports up to 10 images per prompt with integrated vision-based function calling.
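A minimal sketch of sending an image alongside text, the way vision-capable models like this are usually served through an OpenAI-compatible chat endpoint; the base URL, API key, and model id are placeholders rather than values confirmed by this listing:

```python
from openai import OpenAI

# Placeholder endpoint and model id; substitute your provider's values.
client = OpenAI(base_url="https://example-provider/v1", api_key="YOUR_API_KEY")

response = client.chat.completions.create(
    model="mistral-small-3.2-24b-instruct",
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": "What trend does this chart show?"},
            {"type": "image_url", "image_url": {"url": "https://example.com/chart.png"}},
        ],
    }],
)
print(response.choices[0].message.content)
```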
by intfloat
intfloat E5-Mistral-7B-Instruct is a state-of-the-art instruction-following embedding model built on Mistral 7B architecture. Combines strong language understanding from Mistral with specialized embedding training for retrieval tasks. Features instruction-based embedding generation allowing natural language queries to guide semantic search. Excels at complex retrieval scenarios, multi-hop reasoning in document search, and instruction-guided similarity tasks. Provides significantly improved zero-shot retrieval performance compared to traditional embedding models.
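The instruction-following behaviour described above is typically exercised by prefixing the query with a task description. A minimal sketch using Sentence Transformers with the published `intfloat/e5-mistral-7b-instruct` id and the common "Instruct: ... Query: ..." prompt convention; the task wording and sample texts are illustrative only:

```python
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("intfloat/e5-mistral-7b-instruct")

# Queries carry an instruction prefix; documents are embedded as-is.
task = "Given a web search query, retrieve relevant passages that answer the query"
queries = [f"Instruct: {task}\nQuery: how do MoE models route tokens?"]
documents = [
    "Mixture-of-Experts layers route each token to a small subset of expert FFNs.",
    "The 2026 season opens with a doubleheader in March.",
]

q_emb = model.encode(queries, normalize_embeddings=True)
d_emb = model.encode(documents, normalize_embeddings=True)
print(q_emb @ d_emb.T)  # cosine similarities; the relevant passage should score higher
```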
GLM-4.5 Air is a compact 106B parameter Mixture-of-Experts model with 12B active parameters, optimized for efficiency while maintaining strong performance. Averages 59.8 across 12 industry benchmarks while using far fewer resources than the full GLM-4.5. Features hybrid reasoning mode with 128K context, and supports intelligent agent functions and tool calling. Released under MIT license with commercial use allowed. Ideal for deployment scenarios that need to balance capability against computational cost.
GLM-4.5 is a 355B parameter Mixture-of-Experts foundation model with 32B active parameters, designed for intelligent agents. Features hybrid reasoning mode with configurable thinking enabled by default. Ranks 3rd among all proprietary and open-source models with an average score of 63.2 across 12 industry benchmarks. Released under MIT license with 128K context; supports reasoning, coding, and intelligent agent functions including OpenAI-style tool calling. Incorporates MTP (Multi-Token Prediction) layers with speculative decoding for efficient inference.
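Both GLM-4.5 entries advertise OpenAI-style tool calling. A minimal sketch of such a request against an OpenAI-compatible endpoint; the base URL, key, model id, and the `get_weather` tool are placeholders for illustration, not values confirmed by this listing:

```python
from openai import OpenAI

client = OpenAI(base_url="https://example-provider/v1", api_key="YOUR_API_KEY")

tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",  # hypothetical tool for illustration
        "description": "Get the current weather for a city",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

response = client.chat.completions.create(
    model="glm-4.5",  # placeholder id; the Air variant would be called the same way
    messages=[{"role": "user", "content": "What's the weather in Berlin?"}],
    tools=tools,
)
# If the model decides to call the tool, the call arrives as structured JSON here.
print(response.choices[0].message.tool_calls)
```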
Qwen2.5 72B Instruct is Alibaba's instruction-tuned large language model with 72B parameters. Excels at following complex instructions, coding, mathematical reasoning, and multilingual tasks. Features 128K context window.
by Moonshot
Moonshot AI's most powerful native multimodal agentic model. Features 1T parameters (32B activated), 256K context, vision capabilities, and advanced reasoning with agent swarm support.
Mistral Small 4 is a 119B-parameter Mixture-of-Experts model (128 experts, 4 active per token, 6.5B active parameters) that unifies instruct, reasoning, and coding capabilities into a single multimodal model. It accepts text and image inputs, supports function calling, structured outputs, and configurable reasoning effort (none for fast responses, high for deep step-by-step reasoning). With a 256K context window and Apache 2.0 license, it delivers 40% lower latency and 3x higher throughput compared to Mistral Small 3.
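The configurable reasoning effort mentioned above is typically exposed as a request parameter. A minimal sketch against an OpenAI-compatible endpoint, where the `reasoning_effort` field name, base URL, and model id are assumptions rather than values confirmed by this listing:

```python
from openai import OpenAI

client = OpenAI(base_url="https://example-provider/v1", api_key="YOUR_API_KEY")

# "none" for fast responses, "high" for deep step-by-step reasoning, per the
# description above; the exact field name may vary by provider.
response = client.chat.completions.create(
    model="mistral-small-4",  # placeholder id
    messages=[{"role": "user", "content": "Plan a migration from REST to gRPC."}],
    extra_body={"reasoning_effort": "high"},
)
print(response.choices[0].message.content)
```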
Qwen3.5-122B-A10B is Alibaba Cloud's native multimodal agent model with 122B total parameters (10B activated). Features 240K context, vision capabilities, hybrid reasoning with extended thinking, function calling, and support for 201 languages. Apache 2.0 licensed.
by BAAI
BAAI BGE Large EN V1.5 is a state-of-the-art English dense retrieval embedding model with 1024-dimensional embeddings and 512 token sequence length. Achieves 64.23 average on MTEB leaderboard across 56 tasks with 54.29 on retrieval. Pre-trained with RetroMAE and fine-tuned on large-scale contrastive learning data. V1.5 improvements include better similarity distribution and flexible usage without query instructions. Ideal for semantic search, document retrieval, re-ranking pipelines, and sentence similarity tasks. Production-ready with 3.4M+ downloads/month.
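A minimal sketch of the semantic-search use case described above, using Sentence Transformers with the published `BAAI/bge-large-en-v1.5` id; per the V1.5 notes, queries work without a special instruction prefix, and the sample texts here are illustrative only:

```python
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("BAAI/bge-large-en-v1.5")

corpus = [
    "BGE embeddings map sentences into a 1024-dimensional space.",
    "The warehouse inventory is updated every night at 2 a.m.",
]
query = "How many dimensions do BGE large embeddings have?"

# Normalize so the dot product equals cosine similarity.
doc_emb = model.encode(corpus, normalize_embeddings=True)
q_emb = model.encode(query, normalize_embeddings=True)

scores = doc_emb @ q_emb
best = scores.argmax()
print(corpus[best], scores[best])
```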
ZAI's frontier 744B MoE model (40B activated) with 203K context. Excels at agentic engineering, coding (SWE-bench 77.8%), reasoning, and tool use. Built with asynchronous RL and MIT licensed.