25 min
Oct 14, 2025
10:05 am

Vector Store: Uber’s Embedding Platform

Learn how Uber’s Vector Store powers embeddings at scale for GenAI, RAG, semantic search, and predictive models.

About this session

With the rise of Generative AI and large-scale semantic search, embeddings have become a foundational building block for enabling high-value ML and AI use cases across Uber. From personalized recommendations on Uber Eats to conversational assistants, embeddings now power Retrieval-Augmented Generation (RAG), our GenAI platform, semantic search, and predictive models at global scale.

Over the past year, the Michelangelo team has elevated embeddings to first-class citizens within Uber's ML ecosystem, building a unified platform to simplify their generation, ingestion, versioning, and use across diverse applications. Today, this platform powers GenAI use cases and semantic search within the Uber App as well as internal systems.

This talk introduces Vector Store, Uber's scalable platform for managing the full lifecycle of embeddings: offline/streaming generation, batch/real-time ingestion, standardized retrieval APIs, and automated model switching, all backed by centralized metadata and governance.
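To make the lifecycle concrete, here is a minimal, purely illustrative sketch of the two core operations such a platform standardizes: ingesting embeddings and retrieving nearest neighbors. All names (`MiniVectorStore`, `ingest`, `retrieve`) are hypothetical and are not Uber's actual API; a production system would add versioning, metadata, and an ANN index.

```python
import numpy as np

class MiniVectorStore:
    """Toy in-memory vector store: ingest embeddings, retrieve by
    cosine similarity. Illustrative only -- not Uber's Vector Store API."""

    def __init__(self, dim: int):
        self.dim = dim
        self.ids: list[str] = []
        self.vectors = np.empty((0, dim), dtype=np.float32)

    def ingest(self, item_id: str, embedding: list[float]) -> None:
        # Batch and real-time ingestion both reduce to an append here.
        vec = np.asarray(embedding, dtype=np.float32).reshape(1, self.dim)
        self.ids.append(item_id)
        self.vectors = np.vstack([self.vectors, vec])

    def retrieve(self, query: list[float], k: int = 3) -> list[str]:
        # A standardized retrieval call: rank all stored vectors by
        # cosine similarity to the query and return the top-k ids.
        q = np.asarray(query, dtype=np.float32)
        sims = self.vectors @ q / (
            np.linalg.norm(self.vectors, axis=1) * np.linalg.norm(q) + 1e-9
        )
        top = np.argsort(-sims)[:k]
        return [self.ids[i] for i in top]

store = MiniVectorStore(dim=3)
store.ingest("pizza", [1.0, 0.0, 0.0])
store.ingest("sushi", [0.0, 1.0, 0.0])
store.ingest("burger", [0.9, 0.1, 0.0])
print(store.retrieve([1.0, 0.05, 0.0], k=2))  # -> ['pizza', 'burger']
```

The brute-force scan keeps the sketch short; at Uber's scale, retrieval would instead go through an approximate-nearest-neighbor index, and "automated model switching" would re-embed and re-ingest the corpus under a new model version behind the same retrieval API.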

Join our Slack channel to stay up to date on all the latest feature store news, including early notifications for conference updates.