Discover AI models for every task
by NVIDIA
Nemotron Nano 12B V2 is a unified reasoning and chat model with controllable inference via /think and /no_think directives. Features a hybrid architecture of Mamba-2 and MLP layers plus six attention layers, with 128K context. Achieves 76.25% on AIME25, 97.75% on MATH500, 70.79% on LiveCodeBench, and 66.98% on BFCL v3. Supports runtime thinking-budget control for accuracy-latency tradeoffs. Pre-trained on ~20T tokens with a knowledge cutoff of September 2024. Optimized for NVIDIA GPUs (A10G, H100, Jetson AGX Thor), with efficient Mamba-2 SSM layers for long-context handling. Includes native function calling and tool integration.
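The /think and /no_think directives are placed in the system prompt. A minimal sketch of toggling them through an OpenAI-compatible endpoint; the base URL and model id below are placeholders, not confirmed values:

```python
# Minimal sketch: toggling Nemotron Nano's reasoning with the /think
# and /no_think system-prompt directives. The base_url and model id
# are placeholders for whatever endpoint serves the model.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="not-needed")

def ask(question: str, think: bool) -> str:
    directive = "/think" if think else "/no_think"
    response = client.chat.completions.create(
        model="nvidia/nemotron-nano-12b-v2",  # placeholder id
        messages=[
            {"role": "system", "content": directive},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

print(ask("What is 17 * 24?", think=True))   # reasoning trace enabled
print(ask("What is 17 * 24?", think=False))  # direct answer only
```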
NVIDIA Nemotron 3 Nano is a highly efficient hybrid Mamba-Transformer MoE model with 30B total / 3.5B active parameters. Features a 128K context window, extensible to 1M tokens. Excels at agentic AI, reasoning, and tool-calling tasks. Trained on 25T tokens with state-of-the-art efficiency. Supports English, German, French, Spanish, Italian, and Japanese. Open weights with a commercial license.
NVIDIA Nemotron 3 Super 120B A12B FP8 is a 120B-parameter (12B active) latent Mixture-of-Experts hybrid model with Mamba-2, MoE, and Multi-Token Prediction layers, supporting up to 1M tokens of context. It achieves 94.73% on HMMT Feb25 (with tools), 83.73% on MMLU-Pro, and 73.88% on Arena-Hard-V2 (Hard Prompt). The model supports configurable reasoning via an enable_thinking flag, tool use, and structured output. It is available under the NVIDIA Nemotron Open Model License.
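How enable_thinking is exposed depends on the serving stack. A sketch assuming a vLLM-style OpenAI-compatible server that forwards chat-template kwargs via extra_body; the endpoint and model id are placeholders:

```python
# Sketch: configurable reasoning via the enable_thinking flag, assuming
# a vLLM-style server that accepts chat_template_kwargs in extra_body.
# Endpoint and model id are placeholders, not confirmed values.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="not-needed")

response = client.chat.completions.create(
    model="nvidia/nemotron-3-super-120b",  # placeholder id
    messages=[{"role": "user", "content": "Prove that sqrt(2) is irrational."}],
    extra_body={"chat_template_kwargs": {"enable_thinking": True}},
)
print(response.choices[0].message.content)
```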
by Mistral
Mistral's 12B-parameter vision-language model, capable of understanding and reasoning about images alongside text.
by Qwen
Qwen3.5-122B-A10B is Alibaba Cloud's native multimodal agent model with 122B total parameters (10B activated). Features a 240K context window, vision capabilities, hybrid reasoning with extended thinking, function calling, and support for 201 languages. Apache 2.0 licensed.
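Function calling follows the usual OpenAI-style tools schema. A sketch with a hypothetical get_weather tool; the tool, endpoint, and model id are all illustrative:

```python
# Sketch: OpenAI-style function calling with a hypothetical get_weather
# tool. Endpoint and model id are placeholders, not confirmed values.
import json
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="not-needed")

tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Return current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

response = client.chat.completions.create(
    model="qwen/qwen3.5-122b-a10b",  # placeholder id
    messages=[{"role": "user", "content": "What's the weather in Osaka?"}],
    tools=tools,
)

# If the model chooses to call the tool, the arguments arrive as a JSON string.
for call in response.choices[0].message.tool_calls or []:
    print(call.function.name, json.loads(call.function.arguments))
```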
by OpenAI
GPT-OSS 120B is a powerful 117B-parameter Mixture-of-Experts reasoning model with 5.1B active parameters, released under Apache 2.0. Features configurable reasoning effort (low/medium/high), full chain-of-thought visibility, and runs on a single 80GB GPU thanks to MXFP4 quantization. Native support for function calling, web browsing, Python code execution, and structured outputs. Designed for agentic tasks and complex reasoning with production-grade performance. Fully customizable for specialized use cases on a single H100/MI300X.
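Reasoning effort is typically selected via a "Reasoning: low|medium|high" line in the system prompt; some servers also accept a reasoning_effort parameter instead. A sketch against a local OpenAI-compatible server, with placeholder endpoint and model id:

```python
# Sketch: selecting gpt-oss reasoning effort via the system prompt.
# Endpoint and model id are placeholders, not confirmed values.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="not-needed")

response = client.chat.completions.create(
    model="openai/gpt-oss-120b",  # placeholder id
    messages=[
        {"role": "system", "content": "Reasoning: high"},
        {"role": "user", "content": "Plan a three-step test strategy for a parser."},
    ],
)
print(response.choices[0].message.content)
```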
by Qwen
Qwen3-Coder-Next is an open-weight coding model optimized for agents and local development, delivering high performance, long-context reasoning, and efficient IDE integration with minimal active parameters.
by Mistral
Devstral 2 123B is Mistral AI's flagship agentic coding model, featuring 123B parameters optimized for software engineering tasks. Achieves 72.2% on SWE-bench Verified and 61.3% on SWE-bench Multilingual. Excels at codebase exploration, multi-file editing, and agentic workflows with tool use. Supports a 200K context window with enhanced function calling and structured output. Designed for IDE integration via the Mistral Vibe CLI. Released under a modified MIT license for unrestricted commercial use.
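Structured output is usually requested with a JSON-schema response format on OpenAI-compatible servers. A sketch; the schema, endpoint, and model id below are illustrative only:

```python
# Sketch: requesting structured output via a JSON-schema response format.
# The schema, endpoint, and model id are placeholders, not confirmed values.
import json
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="not-needed")

schema = {
    "type": "object",
    "properties": {
        "file": {"type": "string"},
        "summary": {"type": "string"},
    },
    "required": ["file", "summary"],
}

response = client.chat.completions.create(
    model="mistralai/devstral-2-123b",  # placeholder id
    messages=[{"role": "user", "content": "Summarize the change in utils.py."}],
    response_format={
        "type": "json_schema",
        "json_schema": {"name": "patch_summary", "schema": schema},
    },
)
print(json.loads(response.choices[0].message.content))
```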
by Qwen
Qwen's "thinking-optimized" 80B model, designed for sustained multi-step reasoning, structured deliberation, and high-precision problem-solving across math, code, and complex planning tasks.