Discover AI models for every task
by Google
Larger Gemma model delivering high-quality chat and coding with efficient inference.
Gemma 3 27B IT is a cutting-edge multimodal vision-language model with 27 billion parameters, built on Gemini technology. Trained on 14 trillion tokens, it handles both text and image inputs with a 128K context window and supports 140+ languages. Excels at visual understanding, code generation, mathematical reasoning, and multilingual tasks. Achieves 78.6 on MMLU, 82.6 on GSM8K, 85.6 on DocVQA, and 76.3 on ChartQA. Lightweight enough for laptop deployment with strong safety improvements over previous Gemma versions.
Google Gemma 4 31B is a 31B-parameter dense multimodal language model with a 256K context window. It processes text, images, and video inputs and generates text output, featuring a configurable thinking mode for step‑by‑step reasoning. The model achieves 85.2% on MMLU Pro, 80.0% on LiveCodeBench v6, and 88.4% on MMMLU, demonstrating strong performance across reasoning and multimodal benchmarks. Available under the Apache 2.0 license.
by BAAI
BAAI BGE Multilingual Gemma2 is a multilingual dense retrieval embedding model built on Gemma 2 architecture, supporting 100+ languages for cross-lingual semantic search and retrieval. Delivers strong performance across diverse language families including English, Chinese, Spanish, Arabic, Hindi, and many more. Ideal for multilingual search systems, cross-lingual document retrieval, international content recommendation, and global knowledge bases. Trained on large-scale multilingual data with balanced language representation.
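Dense retrieval with an embedding model like this reduces to nearest-neighbor search over vectors: embed the query and documents, then rank by cosine similarity. A minimal sketch of the scoring step, using toy 4-dimensional vectors in place of real BGE embeddings (the model itself is not loaded here):

```python
import math

def cosine_similarity(a, b):
    # Cosine similarity: dot product divided by the product of L2 norms.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def rank(query_vec, doc_vecs):
    # Return document indices sorted by descending similarity to the query.
    scores = [cosine_similarity(query_vec, d) for d in doc_vecs]
    return sorted(range(len(doc_vecs)), key=lambda i: scores[i], reverse=True)

# Toy vectors standing in for embeddings of one query and three documents.
query = [0.9, 0.1, 0.0, 0.2]
docs = [
    [0.1, 0.9, 0.0, 0.0],   # off-topic
    [0.8, 0.2, 0.1, 0.1],   # close match
    [0.0, 0.0, 1.0, 0.0],   # unrelated
]
order = rank(query, docs)
```

In a real pipeline the vectors would come from encoding text with the model (e.g. via the sentence-transformers library); the ranking step itself is unchanged.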
by Black Forest Labs
Black Forest Labs FLUX.2 [klein] 9B is a balanced image generation model offering excellent quality-to-speed ratio. With 9 billion parameters, it provides better detail and composition than the 4B variant while remaining faster than full-size models. Ideal for production workloads requiring a balance between quality, speed, and cost. Supports both text-to-image and image-to-image generation.
by Qwen
Qwen 3.5 9B is a 9B‑parameter multimodal large language model with a gated‑delta mixture‑of‑experts architecture and a vision encoder. It supports a native context window of 262,144 tokens and operates in a default thinking mode that can be disabled. The model achieves strong results such as 82.5% on MMLU‑Pro, 88.2% on C‑Eval, and 78.4% on MMMU benchmarks. It is released under the Apache 2.0 license.
by Mistral
Mistral Small 3.2 24B Instruct is a multimodal instruction-tuned model supporting both vision and text with 24B parameters and 128K context. Major improvements over 3.1 include better instruction following (84.78%), 2x reduction in repetition errors, and robust function calling. Achieves 65.33% on Wildbench v2, 43.1% on Arena Hard v2, 92.90% on HumanEval Pass@5. Vision benchmarks: 87.4% ChartQA, 94.86% DocVQA, 62.50% MMMU. Supports up to 10 images per prompt with integrated vision-based function calling.
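Multi-image prompts of this kind are usually expressed as a single user message whose content mixes text and image parts. A hypothetical sketch of assembling such a message, enforcing the card's 10-image cap; the content-part field names follow the common OpenAI-style convention and are an assumption, not a confirmed Mistral API detail:

```python
def build_vision_message(text, image_urls, max_images=10):
    # Build one user message mixing a text part with up to `max_images`
    # image parts (the 10-image cap comes from the model card).
    if len(image_urls) > max_images:
        raise ValueError(f"at most {max_images} images per prompt")
    parts = [{"type": "text", "text": text}]
    parts += [{"type": "image_url", "image_url": {"url": u}} for u in image_urls]
    return {"role": "user", "content": parts}

msg = build_vision_message(
    "Compare these charts.",
    ["https://example.com/a.png", "https://example.com/b.png"],
)
```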
Devstral 2 123B is Mistral AI's flagship agentic coding model, featuring 123B parameters optimized for software engineering tasks. Achieves 72.2% on SWE-bench Verified and 61.3% on SWE-bench Multilingual. Excels at codebase exploration, multi-file editing, and agentic workflows with tool use. Supports 200K context window with enhanced function calling and structured output. Designed for IDE integration via Mistral Vibe CLI. Released under modified MIT license for unrestricted commercial use.
by Meta
Moderation model providing robust safety classification and policy enforcement.
Black Forest Labs FLUX.2 [klein] 4B is a lightweight, fast image generation model optimized for speed and efficiency. With 4 billion parameters, it delivers quick image generation while maintaining good quality. Perfect for rapid prototyping, bulk generation, and applications requiring low latency. Supports both text-to-image and image-to-image generation with excellent cost-efficiency.
Black Forest Labs FLUX.2 [dev] is the latest generation text-to-image model with significant improvements over FLUX.1. Features enhanced prompt following, superior image quality, and faster generation. Built on the proven rectified flow transformer architecture with optimizations for better detail, composition, and text rendering. Excellent for creative workflows, concept art, and high-quality image generation with both text-to-image and image-to-image capabilities.
Mistral Small 4 is a 119B-parameter Mixture-of-Experts model (128 experts, 4 active per token, 6.5B active parameters) that unifies instruct, reasoning, and coding capabilities into a single multimodal model. It accepts text and image inputs, supports function calling, structured outputs, and configurable reasoning effort (none for fast responses, high for deep step-by-step reasoning). With a 256K context window and Apache 2.0 license, it delivers 40% lower latency and 3x higher throughput compared to Mistral Small 3.
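The expert counts above describe standard top-k routing: a router scores every expert per token, and only the k highest-scoring experts run for that token. A minimal sketch of that selection step with toy scores over 8 experts (not the actual Mistral router, which routes 4 of 128):

```python
import math

def top_k_route(router_logits, k=4):
    # Pick the k highest-scoring experts, then renormalize their scores
    # with a softmax so the selected gate weights sum to 1.
    top = sorted(range(len(router_logits)),
                 key=lambda i: router_logits[i], reverse=True)[:k]
    exps = [math.exp(router_logits[i]) for i in top]
    total = sum(exps)
    return {i: e / total for i, e in zip(top, exps)}

# Toy router scores for 8 experts; only 4 experts' weights are active.
gates = top_k_route([0.1, 2.0, -1.0, 0.5, 1.5, 0.0, 3.0, -0.5], k=4)
```

This is why only ~6.5B of the 119B parameters are active per token: the unselected experts contribute nothing to that token's forward pass.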
by DeepSeek
DeepSeek V3.2 is the latest iteration of the DeepSeek V3 series, delivering improved reasoning, stronger coding, and better instruction following across diverse tasks.
Specialized 13B coding model with advanced code-infilling capabilities. Excels at code generation, completion, and debugging across multiple programming languages.
Meta Llama 3.3 70B Instruct is a multilingual instruction-tuned model optimized for dialogue. Trained on ~15 trillion tokens with a knowledge cutoff of December 2023, it outperforms many open-source and closed models. Major improvements include 92.1% on IFEval (steerability), 88.4% on HumanEval (code), 77.0% on MATH, and 91.1% on MGSM (multilingual). Features 128K context and Grouped-Query Attention, and supports eight languages: English, German, French, Spanish, Italian, Portuguese, Hindi, and Thai. Training used ~7M GPU hours with 100% renewable energy.
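Grouped-Query Attention shares one key/value head across several consecutive query heads, which shrinks the KV cache relative to full multi-head attention. A minimal sketch of the head-grouping bookkeeping and the resulting cache size, using illustrative head counts (Llama 3.3's exact configuration is not assumed here):

```python
def kv_head_for(query_head, n_query_heads, n_kv_heads):
    # In GQA, consecutive query heads share one KV head:
    # group size = n_query_heads // n_kv_heads.
    assert n_query_heads % n_kv_heads == 0
    group = n_query_heads // n_kv_heads
    return query_head // group

def kv_cache_bytes(n_kv_heads, head_dim, seq_len, n_layers, bytes_per=2):
    # K and V caches: 2 tensors x layers x KV heads x sequence x head dim.
    return 2 * n_layers * n_kv_heads * seq_len * head_dim * bytes_per

# With 32 query heads and 8 KV heads, query heads 0-3 all read KV head 0.
mapping = [kv_head_for(q, 32, 8) for q in range(8)]
cache = kv_cache_bytes(n_kv_heads=8, head_dim=128, seq_len=1024,
                       n_layers=32, bytes_per=2)
```

With these toy numbers the cache is 4x smaller than the multi-head case (32 KV heads), which is the practical payoff of GQA at long context lengths.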