Discover AI models for every task
by Mistral
Mistral's 12B-parameter vision-language model, capable of understanding and reasoning about images alongside text.
Devstral 2 123B is Mistral AI's flagship agentic coding model, featuring 123B parameters optimized for software engineering tasks. Achieves 72.2% on SWE-bench Verified and 61.3% on SWE-bench Multilingual. Excels at codebase exploration, multi-file editing, and agentic workflows with tool use. Supports a 200K context window with enhanced function calling and structured output. Designed for IDE integration via the Mistral Vibe CLI. Released under a modified MIT license for unrestricted commercial use.
Mistral Voxtral Small 24B is a 24B-parameter multimodal model supporting both text and audio inputs. Enables natural voice conversations and audio understanding alongside text processing. Features audio transcription, audio-based reasoning, and voice-to-text capabilities. Built on the Mistral architecture with dedicated training for audio modalities. Ideal for voice assistants, audio analysis applications, and multimodal AI systems requiring combined text and speech processing.
by NVIDIA
Nemotron Nano 12B V2 is a unified reasoning and chat model with controllable inference via /think and /no_think directives. Its hybrid architecture combines Mamba-2 and MLP layers with 6 attention layers and supports 128K context. Achieves 76.25% AIME25, 97.75% MATH500, 70.79% LiveCodeBench, and 66.98% BFCL v3. Supports runtime thinking-budget control for accuracy-latency tradeoffs. Pre-trained on ~20T tokens with a knowledge cutoff of September 2024. Optimized for NVIDIA GPUs (A10G, H100, Jetson AGX Thor), using the efficient Mamba-2 SSM for long-context handling. Includes native function calling and tool integration.
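The /think and /no_think directives above can be sketched as a system-prompt toggle in an OpenAI-compatible chat payload. This is a minimal illustration, not official client code; the model identifier and payload shape are assumptions.

```python
# Sketch: toggling Nemotron Nano's reasoning mode by placing the /think or
# /no_think directive in the system message, per the model description.
# The model name below is an assumed identifier, not a documented one.

def build_request(user_msg: str, think: bool = True) -> dict:
    """Build a chat-completion payload carrying the reasoning directive."""
    directive = "/think" if think else "/no_think"
    return {
        "model": "nvidia/nemotron-nano-12b-v2",  # illustrative assumption
        "messages": [
            {"role": "system", "content": directive},
            {"role": "user", "content": user_msg},
        ],
    }

# Low-latency path: skip the thinking trace entirely.
fast = build_request("Summarize this log line.", think=False)
```

The same payload could carry a thinking-budget parameter where a serving stack exposes one; that knob is deployment-specific, so it is omitted here.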
Mistral Small 3.2 24B Instruct is a multimodal instruction-tuned model supporting both vision and text, with 24B parameters and 128K context. Major improvements over 3.1 include better instruction following (84.78%), a 2x reduction in repetition errors, and robust function calling. Achieves 65.33% on Wildbench v2, 43.1% on Arena Hard v2, and 92.90% on HumanEval Pass@5. Vision benchmarks: 87.4% ChartQA, 94.86% DocVQA, 62.50% MMMU. Supports up to 10 images per prompt with integrated vision-based function calling.
by intfloat
intfloat E5-Mistral-7B-Instruct is a state-of-the-art instruction-following embedding model built on Mistral 7B architecture. Combines strong language understanding from Mistral with specialized embedding training for retrieval tasks. Features instruction-based embedding generation allowing natural language queries to guide semantic search. Excels at complex retrieval scenarios, multi-hop reasoning in document search, and instruction-guided similarity tasks. Provides significantly improved zero-shot retrieval performance compared to traditional embedding models.
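The instruction-guided queries described above follow E5-Mistral's published usage pattern of prefixing each query with a task instruction while embedding documents as-is. A minimal sketch of that prompt formatting (the task wording is illustrative):

```python
# Sketch of E5-Mistral-7B-Instruct's query formatting: queries carry an
# "Instruct: ... / Query: ..." prefix so the instruction can steer retrieval;
# candidate documents are embedded without any prefix.

def format_query(task: str, query: str) -> str:
    """Prefix a search query with its task instruction for embedding."""
    return f"Instruct: {task}\nQuery: {query}"

# Illustrative task description; tailor it to the retrieval scenario.
task = "Given a web search query, retrieve relevant passages that answer the query"
prompt = format_query(task, "how do transformers handle long context?")
```

Documents on the corpus side are passed to the embedding model unchanged, so one index can serve many differently-instructed query types.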
NVIDIA Nemotron 3 Super 120B A12B FP8 is a 120B-parameter (12B active) latent Mixture-of-Experts hybrid model with Mamba-2, MoE, and Multi-Token Prediction layers, supporting up to 1M tokens of context. It achieves 94.73% on HMMT Feb25 (with tools), 83.73% on MMLU-Pro, and 73.88% on Arena-Hard-V2 (Hard Prompt). The model supports configurable reasoning via an enable_thinking flag, tool use, and structured output. It is available under the NVIDIA Nemotron Open Model License.
Mistral Small 4 is a 119B-parameter Mixture-of-Experts model (128 experts, 4 active per token, 6.5B active parameters) that unifies instruct, reasoning, and coding capabilities into a single multimodal model. It accepts text and image inputs, supports function calling, structured outputs, and configurable reasoning effort (none for fast responses, high for deep step-by-step reasoning). With a 256K context window and Apache 2.0 license, it delivers 40% lower latency and 3x higher throughput compared to Mistral Small 3.
by Qwen
Qwen3.5-122B-A10B is Alibaba Cloud's native multimodal agent model with 122B total parameters (10B activated). Features 240K context, vision capabilities, hybrid reasoning with extended thinking, function calling, and support for 201 languages. Apache 2.0 licensed.
by OpenAI
GPT-OSS 120B is a powerful 117B parameter Mixture-of-Experts reasoning model with 5.1B active parameters, released under Apache 2.0. Features configurable reasoning effort (low/medium/high), full chain-of-thought visibility, and runs on a single 80GB GPU thanks to MXFP4 quantization. Native support for function calling, web browsing, Python code execution, and structured outputs. Designed for agentic tasks and complex reasoning with production-grade performance. Fully customizable for specialized use cases on single H100/MI300X.
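The low/medium/high reasoning-effort setting mentioned above is commonly exposed as a request field on OpenAI-compatible servers. A hedged sketch under that assumption; the model identifier and field name are illustrative, not a definitive client:

```python
# Sketch: selecting GPT-OSS reasoning effort. Many OpenAI-compatible serving
# stacks accept a `reasoning_effort` field; treat the field name and the
# model identifier below as assumptions about the deployment.

VALID_EFFORTS = {"low", "medium", "high"}

def build_chat_request(prompt: str, effort: str = "medium") -> dict:
    """Build a chat payload with a validated reasoning-effort level."""
    if effort not in VALID_EFFORTS:
        raise ValueError(f"effort must be one of {sorted(VALID_EFFORTS)}")
    return {
        "model": "openai/gpt-oss-120b",  # illustrative assumption
        "reasoning_effort": effort,
        "messages": [{"role": "user", "content": prompt}],
    }

# Higher effort trades latency for a longer visible chain of thought.
deep = build_chat_request("Prove the sum of two evens is even.", "high")
```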