Going from unstructured data to deployed AI assistants.
Speakers


Recorded
Overview
Watch this recording of a hands-on webinar that guides you through the end-to-end process of building a Retrieval Augmented Generation (RAG) application—from raw, unstructured data to a production-ready chatbot. In this session, you’ll learn how to turn your enterprise data into a powerful foundation for a context-aware AI assistant using Databricks and Unstructured.
We’ll show you how to clean, structure, and intelligently chunk documents with Unstructured, then walk through how to store the results and their embeddings in Delta Tables—optimizing for both performance and manageability. You’ll see how Databricks makes it easy to set up and maintain Vector Search endpoints, enabling fast and accurate retrieval during chatbot interactions.
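To make the chunking step concrete: the idea is to group document elements into retrieval-sized pieces, starting a new chunk at each section title and capping chunk length. The webinar uses Unstructured's partitioning and chunking for this; the sketch below is a plain-Python illustration of the concept only, and the element tuples and size cap are illustrative assumptions, not Unstructured's actual API.

```python
# Minimal sketch of title-aware chunking: start a new chunk at each
# heading, and cap chunk length so embeddings stay within model limits.
# Illustration only — the real pipeline uses Unstructured's partitioning
# and chunking, then stores chunks and embeddings in Delta Tables.

MAX_CHARS = 500  # illustrative size cap, not a library default

def chunk_elements(elements):
    """elements: list of (kind, text) tuples, where kind is 'title' or 'text'."""
    chunks, current = [], ""
    for kind, text in elements:
        if kind == "title" or len(current) + len(text) + 1 > MAX_CHARS:
            if current:
                chunks.append(current)
            current = text
        else:
            current = f"{current} {text}" if current else text
    if current:
        chunks.append(current)
    return chunks

elements = [
    ("title", "Quarterly Report"),
    ("text", "Revenue grew 12% year over year."),
    ("text", "Margins improved across all segments."),
    ("title", "Outlook"),
    ("text", "We expect continued growth next quarter."),
]
chunks = chunk_elements(elements)
print(chunks)  # two chunks, one per section
```

Each resulting chunk (and its embedding) would then become a row in a Delta Table, ready to be indexed by Vector Search.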
Throughout the recording, we’ll cover the critical stages of a RAG pipeline, including data preparation, embedding generation, vector search implementation, and LangChain-based chatbot deployment. We’ll also share practical tips to help you deploy your RAG application.
Technical Overview
In this recording, you will:
- Understand the process of preparing raw data for RAG, including document partitioning, chunking, embedding, and storing the results in Delta Tables.
- Learn how to create and manage Vector Search endpoints for efficient search requests.
- See how to use the Vector Search index to find contextually relevant documents for the chatbot.
- Discover how to deploy and configure a chatbot using LangChain and Databricks.
- See how Databricks and Unstructured simplify the development lifecycle from data to production-ready AI applications.
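To illustrate the retrieval step in the list above: conceptually, a Vector Search index returns the stored chunks whose embeddings are closest to the query embedding, and those chunks become the chatbot's context. The self-contained sketch below uses cosine similarity over a tiny in-memory list; the vectors are hand-made stand-ins rather than real model embeddings, and in the actual pipeline this lookup is served by a Databricks Vector Search endpoint.

```python
import math

# Toy in-memory stand-in for a vector index. In the webinar's pipeline,
# this lookup is a Databricks Vector Search endpoint over embeddings
# stored in Delta Tables; the vectors below are hand-made illustrations.

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

index = [
    ("Revenue grew 12% year over year.", [0.9, 0.1, 0.0]),
    ("We expect continued growth next quarter.", [0.2, 0.8, 0.1]),
    ("Margins improved across all segments.", [0.7, 0.2, 0.3]),
]

def retrieve(query_vec, k=2):
    """Return the k chunks most similar to the query embedding."""
    ranked = sorted(index, key=lambda item: cosine(query_vec, item[1]), reverse=True)
    return [text for text, _ in ranked[:k]]

# A query embedding pointing toward the "revenue" direction:
context = retrieve([1.0, 0.0, 0.1])
print(context)
```

The retrieved chunks would then be passed to a LangChain chain as context for the chatbot's answer.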
Visit Unstructured.io/databricks to get started.
BTS
Brian Godsey, Datastax, brian.godsey@datastax.com
Sara Hardy, Unstructured, sara.hardy@unstructured.io
Avie Magner, DMP, avie@digitalmarketingpartners.biz
Marc Lapides, DMP, marc@digitalmarketingpartners.biz