MongoDB Atlas Vector Search

Build intelligent applications powered by semantic search and generative AI over any type of data.

Vendor

MongoDB

Company Website

Product details

What is vector search?

Generative AI uses vectors to enable intelligent semantic search over unstructured data (text, images, and audio). Vectors are critical in building recommendation engines, anomaly detection, and conversational AI. This wide range of use cases, made possible with native capabilities in MongoDB, delivers transformative user experiences.
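At its core, semantic search represents each document and query as an embedding vector and ranks documents by vector similarity rather than keyword overlap. A minimal sketch, using toy three-dimensional vectors and cosine similarity (real embedding models produce hundreds or thousands of dimensions):

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy "embeddings" for three documents; values are made up for illustration.
documents = {
    "article about dogs": [0.9, 0.1, 0.0],
    "article about cats": [0.8, 0.3, 0.1],
    "article about stocks": [0.0, 0.2, 0.9],
}
query = [0.85, 0.2, 0.05]  # imagined embedding of the query "pets"

# Rank documents by similarity to the query vector.
ranked = sorted(documents, key=lambda d: cosine_similarity(query, documents[d]),
                reverse=True)
print(ranked[0])  # the dog article is semantically closest
```

A vector search engine performs this same ranking at scale, typically with approximate nearest-neighbor indexes instead of the exhaustive comparison shown here.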

**Unparalleled simplicity**

Avoid the synchronization tax. With Atlas Vector Search built into the core database, there’s no need to sync data between your operational and vector databases, saving time, reducing complexity, and preventing errors. Your operational and vector data stay in one place.

**Powerful query capabilities**

Easily combine vector queries with filters on metadata, graph lookups, aggregation pipelines, geospatial search, and lexical search for powerful hybrid search use cases within a single database.
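In practice, this combination is expressed as an aggregation pipeline whose first stage is `$vectorSearch`. A sketch of a filtered vector query, where the index name (`vector_index`) and field names (`plot_embedding`, `genre`, `year`, `title`) are assumptions for illustration and should be adapted to your own schema:

```python
# Imagined query vector; in practice this comes from your embedding model.
query_vector = [0.12, -0.45, 0.33]

pipeline = [
    {
        "$vectorSearch": {
            "index": "vector_index",        # assumed Atlas Vector Search index name
            "path": "plot_embedding",       # assumed field holding the embeddings
            "queryVector": query_vector,
            "numCandidates": 150,           # candidates considered by the ANN search
            "limit": 10,                    # results returned
            # Metadata pre-filter applied alongside the vector query:
            "filter": {"genre": "sci-fi", "year": {"$gte": 2000}},
        }
    },
    # Later stages can reshape results, e.g. surface the relevance score:
    {"$project": {"title": 1, "score": {"$meta": "vectorSearchScore"}}},
]

# With pymongo you would then run, e.g.: db.movies.aggregate(pipeline)
print(pipeline[0]["$vectorSearch"]["limit"])  # → 10
```

Because `$vectorSearch` is just another pipeline stage, its output can flow into `$match`, `$lookup`, `$group`, or any other aggregation operator in the same query.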

**Superior scaling for vector search apps**

Unlike other solutions, MongoDB’s distributed architecture scales vector search independently from the core database. This enables true workload isolation and optimization for vector queries, resulting in superior performance at scale.

**Enterprise-ready vector database**

Security and high availability are built in. Because vector data is stored directly in Atlas alongside your operational data, your workloads run with the same trusted enterprise-grade security and availability MongoDB is known for.

Features

  • **Fully managed developer data platform:** Unlike standalone vector databases, Atlas lets you store and work with operational data, metadata, and vectors, all in a unified, secure, scalable data platform.
  • **Flexibility and agility with the document model:** Use rich, nested data structures for effortless organization and querying. Model multiple fields with embedding models and jointly consider them at query time for optimal performance.
  • **Independent scaling with Search Nodes:** Ensure higher availability and performance with independent scalability through workload isolation and memory-optimized, multi-cloud dedicated infrastructure.
  • **Cost efficiency with vector quantization:** Increase scale and lower costs by compressing vectors for more efficient storage, processing, and retrieval while preserving search accuracy.
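To make the quantization idea above concrete, here is a simplified sketch of scalar quantization: each float32 component (4 bytes) is linearly mapped to an int8 code (1 byte), cutting storage roughly 4x at the cost of a small, bounded rounding error. This is an illustration of the general technique, not Atlas's internal implementation:

```python
import struct

def scalar_quantize(vector):
    """Linearly map float components onto the int8 range [-128, 127]."""
    lo, hi = min(vector), max(vector)
    scale = (hi - lo) / 255 if hi != lo else 1.0
    codes = [round((x - lo) / scale) - 128 for x in vector]
    return codes, lo, scale

def dequantize(codes, lo, scale):
    """Approximately recover the original floats from int8 codes."""
    return [(c + 128) * scale + lo for c in codes]

vector = [0.12, -0.45, 0.33, 0.9]          # made-up 4-dimensional example
codes, lo, scale = scalar_quantize(vector)

# 4 bytes per float32 component vs 1 byte per int8 code: 4x smaller.
float_bytes = len(struct.pack(f"{len(vector)}f", *vector))
int8_bytes = len(struct.pack(f"{len(codes)}b", *codes))
print(float_bytes, int8_bytes)  # → 16 4

restored = dequantize(codes, lo, scale)     # close to the original vector
```

Since similarity rankings depend on relative rather than exact distances, this kind of compression can shrink the index footprint substantially while leaving search accuracy largely intact.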