Discover AI models for every task
by OpenAI
GPT-OSS 20B is a compact 21B-parameter Mixture-of-Experts model with 3.6B active parameters, designed for lower latency and local deployment. It runs within 16 GB of memory with configurable reasoning effort, full chain-of-thought access, and native agentic capabilities including function calling and structured outputs. Released under the Apache 2.0 license, it is well suited to specialized fine-tuning on consumer hardware. A companion to GPT-OSS 120B, it is optimized for speed while maintaining strong reasoning capabilities.
GPT-OSS 120B is a powerful 117B-parameter Mixture-of-Experts reasoning model with 5.1B active parameters, released under Apache 2.0. It features configurable reasoning effort (low/medium/high) and full chain-of-thought visibility, and runs on a single 80 GB GPU thanks to MXFP4 quantization. Native support for function calling, web browsing, Python code execution, and structured outputs. Designed for agentic tasks and complex reasoning with production-grade performance, and fully customizable for specialized use cases on a single H100 or MI300X.
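Reasoning effort is typically chosen per request rather than baked into the model. A minimal sketch of building such a request, assuming an OpenAI-compatible serving stack; the `reasoning_effort` field name and the `gpt-oss-120b` deployment name are assumptions to check against your server's documentation:

```python
import json


def build_chat_request(prompt: str, effort: str = "medium") -> str:
    """Build a JSON chat payload for a GPT-OSS deployment.

    "reasoning_effort" and the model name below are assumptions; your
    serving stack (vLLM, Ollama, etc.) may expose the knob differently.
    """
    if effort not in {"low", "medium", "high"}:
        raise ValueError("effort must be 'low', 'medium', or 'high'")
    payload = {
        "model": "gpt-oss-120b",        # hypothetical deployment name
        "reasoning_effort": effort,     # assumed server-side parameter
        "messages": [{"role": "user", "content": prompt}],
    }
    return json.dumps(payload)
```

Validating the effort level client-side keeps a typo from silently falling back to a server default.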
by NVIDIA
NVIDIA Nemotron 3 Super 120B A12B FP8 is a 120B-parameter (12B active) Latent Mixture-of-Experts hybrid model with Mamba-2, MoE, and Multi-Token Prediction layers, supporting context lengths up to 1M tokens. It achieves 94.73% on HMMT Feb25 (with tools), 83.73% on MMLU-Pro, and 73.88% on Arena-Hard-V2 (Hard Prompt). The model supports configurable reasoning via an enable_thinking flag, tool use, and structured output, and is available under the NVIDIA Nemotron Open Model License.
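The enable_thinking flag is usually toggled through the chat template rather than a dedicated endpoint. A minimal sketch, assuming an OpenAI-style server that forwards `chat_template_kwargs` to the tokenizer (vLLM supports this pattern); the exact field placement on your stack is an assumption:

```python
def build_nemotron_request(user_msg: str, enable_thinking: bool = True) -> dict:
    """Sketch a chat request for a Nemotron-style deployment.

    The model card advertises an enable_thinking flag; whether it is a
    chat-template kwarg or a top-level request field depends on the
    serving stack, so the placement below is an assumption.
    """
    return {
        "messages": [
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": user_msg},
        ],
        # Assumed pass-through to the tokenizer's chat template:
        "chat_template_kwargs": {"enable_thinking": enable_thinking},
    }
```

Disabling thinking trades chain-of-thought traces for shorter, cheaper completions, so it is worth exposing as a per-request switch.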
by Google
Larger Gemma model delivering high-quality chat and coding with efficient inference.
Gemma 3 27B IT is a cutting-edge multimodal vision-language model with 27 billion parameters, built on Gemini technology. Trained on 14 trillion tokens, it handles both text and image inputs with a 128K context window and supports 140+ languages. Excels at visual understanding, code generation, mathematical reasoning, and multilingual tasks. Achieves 78.6 on MMLU, 82.6 on GSM8K, 85.6 on DocVQA, and 76.3 on ChartQA. Lightweight enough for laptop deployment with strong safety improvements over previous Gemma versions.
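For the vision-language side, image inputs are commonly sent alongside text as base64 data URLs. A minimal sketch assuming an OpenAI-style multimodal message format with content parts; the `gemma-3-27b-it` model name is a hypothetical deployment label:

```python
import base64


def build_vision_request(image_bytes: bytes, question: str) -> dict:
    """Sketch an image+text chat request for a Gemma 3 27B IT deployment.

    Assumes OpenAI-style content parts with an inline base64 data URL;
    the model name is a hypothetical deployment label.
    """
    b64 = base64.b64encode(image_bytes).decode("ascii")
    return {
        "model": "gemma-3-27b-it",  # hypothetical deployment name
        "messages": [{
            "role": "user",
            "content": [
                {"type": "text", "text": question},
                {"type": "image_url",
                 "image_url": {"url": f"data:image/png;base64,{b64}"}},
            ],
        }],
    }
```

Inlining the image keeps the request self-contained, which suits the document-understanding tasks (DocVQA, ChartQA) the model is benchmarked on.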