Discover AI models for every task
by Mistral
Mistral Voxtral Small 24B is a 24B-parameter multimodal model that accepts both text and audio inputs. It enables natural voice conversations and audio understanding alongside text processing, with support for audio transcription, audio-based reasoning, and voice-to-text. Built on the Mistral architecture with dedicated training for audio modalities, it is well suited to voice assistants, audio analysis applications, and multimodal AI systems that combine text and speech processing.
Mistral Small 3.2 24B Instruct is a multimodal, instruction-tuned model supporting both vision and text, with 24B parameters and a 128K context window. Major improvements over 3.1 include better instruction following (84.78%), a 2x reduction in repetition errors, and more robust function calling. It achieves 65.33% on WildBench v2, 43.1% on Arena Hard v2, and 92.90% Pass@5 on HumanEval Plus. Vision benchmarks: 87.4% on ChartQA, 94.86% on DocVQA, and 62.50% on MMMU. Supports up to 10 images per prompt with integrated vision-based function calling.
Mistral Small 4 is a 119B-parameter Mixture-of-Experts model (128 experts, 4 active per token, 6.5B active parameters) that unifies instruct, reasoning, and coding capabilities into a single multimodal model. It accepts text and image inputs, supports function calling, structured outputs, and configurable reasoning effort (none for fast responses, high for deep step-by-step reasoning). With a 256K context window and Apache 2.0 license, it delivers 40% lower latency and 3x higher throughput compared to Mistral Small 3.
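As a rough illustration of the function calling these models support, the sketch below builds a tool definition in the JSON-schema style used by Mistral-compatible chat APIs and parses a simulated tool call. The `get_weather` function, its parameters, and the simulated response are hypothetical examples, not part of any model or API.

```python
import json

# Hypothetical tool definition in the JSON-schema style used by
# Mistral-compatible chat APIs: the schema is sent alongside the
# conversation, and the model may answer with a structured call
# instead of free text.
get_weather_tool = {
    "type": "function",
    "function": {
        "name": "get_weather",  # hypothetical example function
        "description": "Return the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {
                "city": {"type": "string", "description": "City name"},
                "unit": {"type": "string", "enum": ["celsius", "fahrenheit"]},
            },
            "required": ["city"],
        },
    },
}

# A model that supports function calling emits a tool call whose
# arguments are a JSON string matching the schema (simulated here).
simulated_tool_call = {
    "name": "get_weather",
    "arguments": json.dumps({"city": "Paris", "unit": "celsius"}),
}

# The application parses the arguments and runs its own implementation.
args = json.loads(simulated_tool_call["arguments"])
print(args["city"])
```

The same schema shape also underlies structured outputs: constraining generation to a JSON schema rather than describing a callable function.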
Mistral's 12B-parameter vision-language model, capable of understanding and reasoning about images alongside text.
Devstral 2 123B is Mistral AI's flagship agentic coding model, with 123B parameters optimized for software-engineering tasks. It achieves 72.2% on SWE-bench Verified and 61.3% on SWE-bench Multilingual, and excels at codebase exploration, multi-file editing, and agentic workflows with tool use. It supports a 200K context window with enhanced function calling and structured output, and is designed for IDE integration via the Mistral Vibe CLI. Released under a modified MIT license for unrestricted commercial use.
by intfloat
intfloat E5-Mistral-7B-Instruct is a state-of-the-art instruction-following embedding model built on the Mistral 7B architecture. It combines Mistral's strong language understanding with specialized embedding training for retrieval tasks, and supports instruction-based embedding generation, allowing natural-language task descriptions to guide semantic search. It excels at complex retrieval scenarios, multi-hop reasoning in document search, and instruction-guided similarity tasks, and delivers significantly improved zero-shot retrieval performance over traditional embedding models.
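The instruction-guided querying described above can be sketched as follows. The prompt convention (an `Instruct:`/`Query:` prefix on queries, plain text for documents) follows the published usage of e5-mistral-7b-instruct; the helper name and example strings are illustrative.

```python
def format_query(task_description: str, query: str) -> str:
    """Prefix a search query with its task instruction, as expected by
    e5-mistral-7b-instruct. Documents are embedded without any prefix."""
    return f"Instruct: {task_description}\nQuery: {query}"

# The same query string can target different retrieval tasks simply by
# changing the natural-language instruction that guides the embedding.
task = "Given a web search query, retrieve relevant passages that answer the query"
prompt = format_query(task, "how do transformers use attention?")
print(prompt)
```

The resulting prompt (rather than the raw query) is what gets passed to the embedding model, so the instruction steers which notion of similarity the embedding encodes.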