Discover AI models for every task
by OpenAI
OpenAI Whisper Large V3 Turbo is an optimized variant of Whisper Large V3 with significantly faster inference while maintaining high accuracy across 99 languages. Features architectural optimizations for reduced latency, including faster encoder-decoder inference and efficient attention mechanisms. Delivers near-V3 accuracy with a 2-3x speed improvement, ideal for real-time transcription, live subtitling, and high-throughput ASR workloads. Supports full multilingual capabilities, timestamps, and speech translation to English. Perfect for production deployments requiring both quality and speed.
OpenAI Whisper Large V3 is a state-of-the-art automatic speech recognition model with 1550M parameters supporting 99 languages. Achieves a 10-20% WER reduction compared to V2, trained on 1M hours of weakly labeled and 4M hours of pseudo-labeled audio. Features 128 Mel frequency bins (increased from 80), improved robustness to accents and background noise, and new Cantonese language support. Supports speech transcription and speech-to-English translation with sentence- and word-level timestamps. Optimized with torch.compile for a 4.5x speedup. Ideal for accessibility tools, multilingual transcription, and enterprise ASR applications.
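The accuracy figures above are stated in terms of word error rate (WER), the standard ASR metric: the word-level edit distance between a reference transcript and the model's output, divided by the reference length. A minimal sketch of the computation (a generic illustration, not OpenAI's evaluation code):

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level edit distance / reference word count."""
    ref = reference.split()
    hyp = hypothesis.split()
    # Levenshtein distance over words via dynamic programming.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i  # deleting all remaining reference words
    for j in range(len(hyp) + 1):
        d[0][j] = j  # inserting all remaining hypothesis words
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + cost) # substitution / match
    return d[len(ref)][len(hyp)] / len(ref)

print(wer("the cat sat on the mat", "the cat sat on a mat"))  # 1 substitution / 6 words
```

A "10-20% WER reduction" means the error rate itself drops by that relative fraction, e.g. from 0.10 to roughly 0.08-0.09.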
by BAAI
BAAI BGE Large EN V1.5 is a state-of-the-art English dense retrieval embedding model with 1024-dimensional embeddings and a 512-token sequence length. Achieves a 64.23 average on the MTEB leaderboard across 56 tasks, with 54.29 on retrieval. Pre-trained with RetroMAE and fine-tuned on large-scale contrastive learning data. V1.5 improvements include a better similarity distribution and flexible usage without query instructions. Ideal for semantic search, document retrieval, re-ranking pipelines, and sentence similarity tasks. Production-ready with 3.4M+ downloads/month.
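In a dense retrieval pipeline, documents are ranked by the cosine similarity between their embedding and the query's embedding. A minimal sketch of that ranking step, using toy 3-dimensional vectors in place of the model's 1024-dimensional outputs:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy vectors stand in for real 1024-d embeddings from the model.
query = [0.1, 0.9, 0.2]
docs = {"doc_a": [0.1, 0.8, 0.3], "doc_b": [0.9, 0.1, 0.0]}

# Rank documents by similarity to the query, highest first.
ranked = sorted(docs, key=lambda d: cosine_similarity(query, docs[d]), reverse=True)
print(ranked)  # doc_a ranks first: its vector points the same way as the query
```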
by intfloat
intfloat Multilingual E5 Large Instruct is an instruction-tuned multilingual embedding model combining strong cross-lingual capabilities with instruction-following for guided retrieval. Supports 100+ languages with natural language instructions to customize embedding behavior. Features enhanced zero-shot retrieval performance through instruction-based query understanding. Ideal for complex multilingual search scenarios, domain-specific retrieval tasks, and applications requiring adaptive semantic understanding across languages.
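The instruction-following behavior above works by prepending a one-line task description to each query before embedding; documents are embedded without an instruction. A sketch of that formatting step, assuming the "Instruct: ... / Query: ..." template described for E5 instruct models (verify the exact template against the model card before use):

```python
def build_instructed_query(task_description: str, query: str) -> str:
    """Prepend a task instruction to a search query in the one-line
    template assumed for E5 instruct models (check the model card
    for the authoritative format)."""
    return f"Instruct: {task_description}\nQuery: {query}"

prompt = build_instructed_query(
    "Given a web search query, retrieve relevant passages that answer the query",
    "how do dense retrieval models work?",
)
print(prompt)
```

The formatted string, not the raw query, is what gets passed to the embedding model, which is how the instruction steers retrieval behavior.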
intfloat Multilingual E5 Large is a powerful multilingual dense retrieval embedding model supporting 100+ languages with strong cross-lingual capabilities. Features 1024-dimensional embeddings optimized for semantic search, document retrieval, and text similarity across diverse language families. Pre-trained on large-scale multilingual data with contrastive learning for robust cross-lingual transfer. Ideal for international search systems, multilingual document retrieval, and global content recommendation platforms requiring high-quality semantic understanding.
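The non-instruct E5 models distinguish search queries from documents with short role prefixes on the input text ("query: " and "passage: ", per the E5 model card; verify against the current card, since embedding quality degrades if the prefixes are omitted). A minimal sketch of that preprocessing:

```python
def prepare_e5_inputs(queries, passages):
    """Add the role prefixes E5 models expect: 'query: ' for search
    queries, 'passage: ' for documents (assumed from the E5 model card)."""
    return (["query: " + q for q in queries],
            ["passage: " + p for p in passages])

qs, ps = prepare_e5_inputs(
    ["best hiking trails"],
    ["The trail climbs 800 m through pine forest."],
)
print(qs[0])  # "query: best hiking trails"
```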
by DeepSeek
DeepSeek V3.2 is the latest iteration of the DeepSeek V3 series with significant performance improvements. Features enhanced reasoning, coding capabilities, and better instruction following across diverse tasks.
A highly efficient DeepSeek flagship model engineered for fast, capable reasoning and low-cost inference.
DeepSeek V3.1 is an optimized variant of DeepSeek V3 with enhanced chat capabilities. Offers excellent cost-efficiency with a 685B-parameter Mixture-of-Experts (MoE) architecture and improved response quality for conversational tasks.