Discover AI models for every task
by BAAI
BAAI BGE EN ICL is an in-context-learning-enabled English embedding model supporting dynamic query understanding through examples. Features an innovative ICL approach that lets users provide examples to guide retrieval behavior without retraining. Excels at domain-specific retrieval tasks where query intent can be demonstrated through few-shot examples. Ideal for specialized search applications, adaptive retrieval systems, and scenarios requiring customizable semantic understanding. Released July 2024 with state-of-the-art ICL embedding capabilities.
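The few-shot idea above can be sketched as a query-assembly step: demonstration query/passage pairs are concatenated with the live query into a single string that the ICL model embeds. The template below is an assumption for illustration only, not the documented bge-en-icl prompt format.

```python
def build_icl_query(task: str, examples: list[tuple[str, str]], query: str) -> str:
    """Assemble a few-shot query string for an ICL embedding model.

    NOTE: this layout (task description, example query/passage pairs,
    then the live query) is a hypothetical template for illustration;
    consult the bge-en-icl model card for the exact format.
    """
    parts = [f"Task: {task}"]
    for ex_query, ex_passage in examples:
        parts.append(f"Example query: {ex_query}")
        parts.append(f"Example passage: {ex_passage}")
    parts.append(f"Query: {query}")
    return "\n".join(parts)

prompt = build_icl_query(
    "Retrieve contract clauses relevant to the question",
    [("Who owns the IP?", "All intellectual property created under this agreement belongs to...")],
    "What is the termination notice period?",
)
```

The assembled string would then be passed to the embedding model in place of the bare query, so the demonstrations steer retrieval without any fine-tuning.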
BAAI BGE Large EN V1.5 is a state-of-the-art English dense-retrieval embedding model with 1024-dimensional embeddings and a 512-token sequence length. Achieves a 64.23 average on the MTEB leaderboard across 56 tasks, with 54.29 on retrieval. Pre-trained with RetroMAE and fine-tuned on large-scale contrastive learning data. V1.5 improvements include a better similarity distribution and flexible usage without query instructions. Ideal for semantic search, document retrieval, re-ranking pipelines, and sentence similarity tasks. Production-ready with 3.4M+ downloads/month.
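Dense retrieval with a model like this reduces to ranking documents by cosine similarity between embedding vectors. A minimal sketch with toy 4-dimensional vectors standing in for the model's 1024-dimensional outputs (in practice the vectors would come from encoding text with the model itself):

```python
import numpy as np

def cosine_sim(a: np.ndarray, b: np.ndarray) -> float:
    # Cosine similarity between two embedding vectors.
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Toy stand-ins for real 1024-dim embeddings.
query = np.array([1.0, 0.2, 0.0, 0.1])
docs = {
    "doc_a": np.array([0.9, 0.3, 0.1, 0.0]),  # points in a similar direction
    "doc_b": np.array([0.0, 0.1, 1.0, 0.8]),  # mostly orthogonal to the query
}

# Rank documents by similarity to the query, best first.
ranked = sorted(docs, key=lambda d: cosine_sim(query, docs[d]), reverse=True)
```

In a re-ranking pipeline, this scoring step typically sits after a cheap first-stage retriever and before (or instead of) a heavier cross-encoder.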
BAAI BGE Multilingual Gemma2 is a multilingual dense retrieval embedding model built on Gemma 2 architecture, supporting 100+ languages for cross-lingual semantic search and retrieval. Delivers strong performance across diverse language families including English, Chinese, Spanish, Arabic, Hindi, and many more. Ideal for multilingual search systems, cross-lingual document retrieval, international content recommendation, and global knowledge bases. Trained on large-scale multilingual data with balanced language representation.
BAAI BGE-M3 is a versatile multilingual embedding model supporting dense, sparse, and multi-vector retrieval in a unified architecture. Handles 100+ languages with strong cross-lingual capabilities and flexible retrieval modes for different use cases. Features hybrid retrieval combining dense embeddings for semantic similarity, sparse representations for lexical matching, and multi-vector approaches for fine-grained relevance. Ideal for multilingual search engines, hybrid retrieval systems, and complex information retrieval scenarios requiring multiple matching strategies.
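The hybrid retrieval described above fuses a dense (semantic) score with a sparse (lexical) score per document. A minimal sketch of that fusion, with toy vectors and token weights; the fusion weight `alpha` is an assumption for illustration, not a value prescribed by BGE-M3:

```python
import numpy as np

def dense_score(q: np.ndarray, d: np.ndarray) -> float:
    # Semantic similarity between dense embeddings (cosine).
    return float(q @ d / (np.linalg.norm(q) * np.linalg.norm(d)))

def sparse_score(q_weights: dict, d_weights: dict) -> float:
    # Lexical match: sum of weight products over shared tokens.
    return sum(w * d_weights[t] for t, w in q_weights.items() if t in d_weights)

def hybrid_score(q_dense, d_dense, q_sparse, d_sparse, alpha: float = 0.7) -> float:
    # Weighted fusion of the two signals; alpha=0.7 is a tunable assumption.
    return alpha * dense_score(q_dense, d_dense) + (1 - alpha) * sparse_score(q_sparse, d_sparse)

score = hybrid_score(
    np.array([1.0, 0.0]), np.array([1.0, 0.0]),      # identical dense directions
    {"termination": 1.2}, {"termination": 0.8},       # one shared lexical token
)
```

Multi-vector (late-interaction) scoring would add a third term, typically a max-similarity sum over token-level vectors, fused in the same way.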
by Qwen
An image generation model from the Qwen series with advanced text rendering and precise image-editing capabilities.
by intfloat
intfloat Multilingual E5 Large Instruct is an instruction-tuned multilingual embedding model combining strong cross-lingual capabilities with instruction-following for guided retrieval. Supports 100+ languages with natural language instructions to customize embedding behavior. Features enhanced zero-shot retrieval performance through instruction-based query understanding. Ideal for complex multilingual search scenarios, domain-specific retrieval tasks, and applications requiring adaptive semantic understanding across languages.
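Instruction-guided embedding works by prepending a natural-language task description to the query side only; documents are embedded without any instruction. A minimal sketch of the query-side template in the style of the E5 instruct model card (treat the exact wording as an assumption and check the model card before relying on it):

```python
def format_instruct_query(task: str, query: str) -> str:
    # Query-side template in the style used by E5 instruct models;
    # passages are embedded as-is, with no instruction prepended.
    return f"Instruct: {task}\nQuery: {query}"

q = format_instruct_query(
    "Given a web search query, retrieve relevant passages that answer the query",
    "how do solar panels work",
)
```

Changing the task description changes what "relevant" means to the model, which is how the same checkpoint adapts to different retrieval domains without retraining.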
intfloat Multilingual E5 Large is a powerful multilingual dense retrieval embedding model supporting 100+ languages with strong cross-lingual capabilities. Features 1024-dimensional embeddings optimized for semantic search, document retrieval, and text similarity across diverse language families. Pre-trained on large-scale multilingual data with contrastive learning for robust cross-lingual transfer. Ideal for international search systems, multilingual document retrieval, and global content recommendation platforms requiring high-quality semantic understanding.
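Non-instruct E5 models use a simpler convention: every input carries a role prefix, `query: ` or `passage: `, per the intfloat model cards, and omitting the prefixes degrades retrieval quality. A minimal preprocessing sketch:

```python
def prepare_e5_inputs(queries: list[str], passages: list[str]) -> tuple[list[str], list[str]]:
    # multilingual-e5 models expect a role prefix on every input
    # (per the model card); the encoder itself is unchanged.
    return (
        [f"query: {q}" for q in queries],
        [f"passage: {p}" for p in passages],
    )

qs, ps = prepare_e5_inputs(
    ["como funcionan los paneles solares"],
    ["Solar panels convert sunlight into electricity via the photovoltaic effect."],
)
```

Because queries and passages may be in different languages, the prefixed strings can then be embedded and compared directly for cross-lingual retrieval.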