25 min
Oct 11, 2023
1:10 pm

Delivering Personalized & Realtime Context for LLM: Databricks Feature & Function Serving

Discover how to harness Databricks Feature and Function Serving with data in your Lakehouse for real-time context ingestion and retrieval.

About this session

Large Language Model (LLM) performance can be greatly improved by providing relevant context for the problem the model is employed to solve. In this talk, we show how you can harness Databricks Feature and Function Serving, powered by data in your Lakehouse, to provide real-time context ingestion and retrieval for your LLMs, improving accuracy and relevance in Natural Language Processing applications. Furthermore, the Lakehouse architecture offers robust security and governance features, helping prevent your data from leaking into external systems.
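To make the pattern concrete, here is a minimal sketch of retrieving per-user features and injecting them as context into an LLM prompt. In a real deployment the features would come from a Databricks Feature Serving endpoint (a POST to `/serving-endpoints/<endpoint-name>/invocations`); the endpoint name, feature fields, and user IDs below are illustrative assumptions, and the lookup is stubbed so the example is self-contained.

```python
import json

def fetch_user_features(user_id: str) -> dict:
    """Stand-in for a Feature Serving endpoint call.

    A real client would POST {"dataframe_records": [{"user_id": user_id}]}
    to the serving endpoint and parse the JSON response; here we stub the
    lookup with a hypothetical in-memory store so the sketch runs as-is.
    """
    fake_feature_store = {
        "u123": {"plan": "enterprise", "recent_ticket_topic": "billing"},
    }
    return fake_feature_store.get(user_id, {})

def build_prompt(question: str, features: dict) -> str:
    """Prepend the retrieved features as structured context for the LLM."""
    context = json.dumps(features, sort_keys=True)
    return f"Context: {context}\n\nQuestion: {question}"

# Enrich the user's question with their real-time features before
# sending it to the model.
prompt = build_prompt("Why was I charged twice?", fetch_user_features("u123"))
print(prompt)
```

The same structure applies whether the context comes from precomputed features or from on-demand Python functions served alongside them; only the retrieval call changes.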

Join our Slack channel to stay up to date on all the latest feature store news, including early notifications for conference updates.