Discover AI models for every task
by Black Forest Labs
Black Forest Labs FLUX.1 [schnell] is the fastest variant of the FLUX.1 family, optimized for rapid text-to-image generation with fewer inference steps. Built on the same 12B parameter rectified flow transformer architecture as FLUX.1 [dev] but distilled for maximum speed. Generates high-quality 1024x1024 images in 1-4 steps compared to 20-50 steps for standard models. Ideal for real-time applications, interactive tools, and high-throughput image generation scenarios. Apache 2.0 licensed for unrestricted use including commercial applications.
Black Forest Labs FLUX.1 [dev] is a cutting-edge 12 billion parameter rectified flow transformer for text-to-image generation. Second in quality only to FLUX.1 [pro], with prompt following that matches closed-source alternatives. Features guidance distillation for efficient inference, high-resolution generation (1024x1024), accurate text rendering, and detailed composition. Supports both text-to-image and image-to-image generation. Open weights enable scientific research and innovative workflows.
Black Forest Labs FLUX.1 [dev] with LoRA adapter support. This variant enables fine-tuned generation with custom trained LoRA weights for specialized styles, characters, or concepts. Based on the full 12B parameter FLUX.1 [dev] model with all its capabilities including high-resolution generation, accurate text rendering, and detailed composition. Perfect for custom workflows and specialized image generation tasks.
Black Forest Labs FLUX.2 [klein] 4B is a lightweight, fast image generation model optimized for speed and efficiency. With 4 billion parameters, it delivers quick image generation while maintaining good quality. Perfect for rapid prototyping, bulk generation, and applications requiring low latency. Supports both text-to-image and image-to-image generation with excellent cost-efficiency.
Black Forest Labs FLUX.2 [klein] 9B is a balanced image generation model offering excellent quality-to-speed ratio. With 9 billion parameters, it provides better detail and composition than the 4B variant while remaining faster than full-size models. Ideal for production workloads requiring a balance between quality, speed, and cost. Supports both text-to-image and image-to-image generation.
Black Forest Labs FLUX.2 [dev] is the latest generation text-to-image model with significant improvements over FLUX.1. Features enhanced prompt following, superior image quality, and faster generation. Built on the proven rectified flow transformer architecture with optimizations for better detail, composition, and text rendering. Excellent for creative workflows, concept art, and high-quality image generation with both text-to-image and image-to-image capabilities.
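The speed difference across the FLUX variants above comes down almost entirely to inference step count: [schnell] is distilled to 1-4 steps while standard models use 20-50. A minimal sketch of that tradeoff, noting that the specific per-variant defaults below are assumptions chosen within those stated ranges:

```python
# Illustrative sketch of the speed/quality tradeoff across FLUX variants.
# The 4-step and 28-step defaults are assumptions picked from the ranges
# given in the model descriptions (1-4 distilled, 20-50 standard).
RECOMMENDED_STEPS = {
    "flux.1-schnell": 4,   # distilled: 1-4 steps
    "flux.1-dev": 28,      # standard: 20-50 steps
    "flux.2-dev": 28,      # assumption: similar to FLUX.1 [dev]
}

def estimated_relative_latency(model: str) -> float:
    """Rough per-image cost relative to FLUX.1 [schnell].

    Per-step cost is assumed constant, so latency scales with step count.
    """
    return RECOMMENDED_STEPS[model] / RECOMMENDED_STEPS["flux.1-schnell"]

print(estimated_relative_latency("flux.1-dev"))  # 7.0
```

Under these assumptions a [dev]-class generation costs roughly 7x a [schnell] generation, which is why [schnell] is the pick for real-time and high-throughput scenarios.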
by Mistral
Mistral Small 4 is a 119B-parameter Mixture-of-Experts model (128 experts, 4 active per token, 6.5B active parameters) that unifies instruct, reasoning, and coding capabilities into a single multimodal model. It accepts text and image inputs, supports function calling, structured outputs, and configurable reasoning effort (none for fast responses, high for deep step-by-step reasoning). With a 256K context window and Apache 2.0 license, it delivers 40% lower latency and 3x higher throughput compared to Mistral Small 3.
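The configurable reasoning effort described above could be exercised through an OpenAI-compatible chat request along these lines; note that the model id, endpoint conventions, and the exact `reasoning_effort` field name are assumptions for illustration, not a documented API:

```python
# Hedged sketch: building a chat-completions payload that toggles
# reasoning effort. Field and model names are assumptions.
def build_request(prompt: str, effort: str = "none") -> dict:
    # Per the description: "none" for fast responses,
    # "high" for deep step-by-step reasoning.
    if effort not in {"none", "high"}:
        raise ValueError("unsupported reasoning effort")
    return {
        "model": "mistral-small-4",  # assumed model id
        "messages": [{"role": "user", "content": prompt}],
        "reasoning_effort": effort,  # assumed parameter name
    }

req = build_request("Plan a migration from REST to gRPC.", effort="high")
```

The same payload could also carry `tools` or a structured-output schema, since the model supports both.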
by ZAI
ZAI GLM 5.1 is a 744B parameter Mixture-of-Experts language model built with the GLM‑MoE DSA architecture. It excels at agentic engineering, achieving state-of-the-art performance on benchmarks such as HLE with tools (52.3), SWE‑Bench Pro (58.4) and AIME 2026 (95.3). The model supports extensive tool use and long‑horizon reasoning, with a large context window of up to 128K tokens. It is released under the MIT license.
by Meta
Meta Llama 3.1 8B Instruct is an efficient multilingual instruction-tuned model optimized for dialogue and assistant use cases. With 8 billion parameters and 128K context length, it provides strong performance across general tasks, code generation, and multilingual understanding. Supports function calling and tool use with Grouped-Query Attention architecture. Ideal for deployment scenarios requiring lower compute resources while maintaining quality across English and 7 additional languages including German, French, Spanish, and Hindi.
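Function calling with this model typically uses the OpenAI-style tool schema shown below; the `get_weather` function itself is hypothetical, defined only to illustrate the shape of a tool declaration:

```python
# Hedged sketch: an OpenAI-style tool definition as accepted by most
# OpenAI-compatible serving stacks. "get_weather" is a hypothetical tool.
tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Look up the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {
                "city": {"type": "string", "description": "City name"},
            },
            "required": ["city"],
        },
    },
}]
```

The model returns a tool call naming the function and its JSON arguments; the application executes the tool and feeds the result back as a follow-up message.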
by MiniMax
MiniMax M2.1 is a state-of-the-art MoE model with 230B total / 10B active parameters, optimized for agentic coding and complex multi-step workflows. Excels at multilingual programming, tool use, and long-horizon planning. Matches Claude Sonnet 4.5 on code benchmarks and exceeds it in multilingual scenarios. Features 196K context window with FP8 efficiency. Released under Modified-MIT license for commercial use.
by DeepSeek
DeepSeek V3.1 is an optimized variant of DeepSeek V3 with enhanced chat capabilities. Offers excellent cost-efficiency with its 685B-parameter MoE architecture and improved response quality for conversational tasks.
Meta Llama 3.1 405B is Meta's flagship 405 billion parameter model and the largest openly available Llama, with exceptional reasoning and comprehensive knowledge for demanding applications.
Mistral Small 3.2 24B Instruct is a multimodal instruction-tuned model supporting both vision and text with 24B parameters and 128K context. Major improvements over 3.1 include better instruction following (84.78%), 2x reduction in repetition errors, and robust function calling. Achieves 65.33% on Wildbench v2, 43.1% on Arena Hard v2, 92.90% on HumanEval Pass@5. Vision benchmarks: 87.4% ChartQA, 94.86% DocVQA, 62.50% MMMU. Supports up to 10 images per prompt with integrated vision-based function calling.
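The 10-image-per-prompt limit above can be enforced when assembling a multimodal message; this sketch uses the common OpenAI-style content-parts format, which is an assumption about the serving layer rather than a documented Mistral-specific schema:

```python
# Hedged sketch: a multimodal chat message mixing text and image URLs,
# capped at the 10-image limit stated in the model description.
def build_vision_message(text: str, image_urls: list[str]) -> dict:
    if len(image_urls) > 10:
        raise ValueError("Mistral Small 3.2 supports at most 10 images per prompt")
    content = [{"type": "text", "text": text}]
    content += [{"type": "image_url", "image_url": {"url": u}} for u in image_urls]
    return {"role": "user", "content": content}

msg = build_vision_message("Compare these charts.", [
    "https://example.com/q1.png",
    "https://example.com/q2.png",
])
```

Because the model also supports vision-based function calling, the same message can be sent alongside a `tools` array.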
by NVIDIA
NVIDIA Nemotron 3 Super 120B A12B FP8 is a 120B parameter (12B active) Latent Mixture-of-Experts hybrid model combining Mamba-2, MoE, and Multi-Token Prediction layers, supporting up to 1M tokens of context. It achieves 94.73% on HMMT Feb25 (with tools) and 83.73% on MMLU‑Pro, and scores 73.88% on Arena‑Hard‑V2 (Hard Prompt). The model supports configurable reasoning via an enable_thinking flag, tool use, and structured output. It is available under the NVIDIA Nemotron Open Model License.
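The enable_thinking flag mentioned above is commonly passed through chat-template kwargs on OpenAI-compatible servers; the model id and the exact placement of the flag here are assumptions for illustration:

```python
# Hedged sketch: toggling the enable_thinking flag via chat_template_kwargs,
# a convention used by several OpenAI-compatible serving stacks. The model
# id and flag placement are assumptions, not a documented API.
def build_request(messages: list[dict], thinking: bool) -> dict:
    return {
        "model": "nvidia/nemotron-3-super-120b",        # assumed model id
        "messages": messages,
        "chat_template_kwargs": {"enable_thinking": thinking},
    }

fast = build_request([{"role": "user", "content": "Summarize this."}], thinking=False)
deep = build_request([{"role": "user", "content": "Prove this bound."}], thinking=True)
```

Disabling thinking trades reasoning depth for latency, mirroring the configurable-effort pattern in other reasoning models.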
Mistral Voxtral Small 24B is a multimodal model supporting both text and audio inputs with 24B parameters. Enables natural voice conversations and audio understanding alongside text processing. Features audio transcription, audio-based reasoning, and voice-to-text capabilities. Built on Mistral architecture with specific training for audio modalities. Ideal for voice assistants, audio analysis applications, and multimodal AI systems requiring combined text and speech processing.
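Sending audio alongside text could look like the sketch below; the `input_audio` content-part shape mirrors OpenAI-style multimodal chat APIs and is an assumption here, since Voxtral's actual serving schema may differ:

```python
import base64

# Hedged sketch: a chat message combining a text instruction with an
# inline base64-encoded WAV clip. Content-part field names are assumed.
def build_audio_message(text: str, wav_bytes: bytes) -> dict:
    return {
        "role": "user",
        "content": [
            {"type": "text", "text": text},
            {
                "type": "input_audio",
                "input_audio": {
                    "data": base64.b64encode(wav_bytes).decode("ascii"),
                    "format": "wav",
                },
            },
        ],
    }

msg = build_audio_message("Transcribe and summarize this call.", b"RIFF...")
```

The same message shape serves transcription, audio-based reasoning, and voice-to-text use cases by varying only the text instruction.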