Semantic Chunking for RAG


Semantic chunking for RAG lets us build more coherent, semantically focused chunks for our RAG pipelines, chatbots, and AI agents. We can pair it with LLMs and embedding models from OpenAI, Cohere, Anthropic, etc., and libraries like LangChain or CrewAI to build potentially improved Retrieval Augmented Generation (RAG) pipelines.
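The core idea can be sketched in a few lines: embed consecutive sentences and start a new chunk wherever similarity drops, so each chunk stays on one topic. This is a minimal illustration only — the video's notebook uses real embedding models (e.g. OpenAI or Cohere); here a toy bag-of-words embedding and an arbitrary threshold stand in so the sketch runs without API keys.

```python
import re
import math
from collections import Counter

def embed(text):
    # Toy normalized bag-of-words vector, standing in for a real
    # embedding model (OpenAI, Cohere, etc.) used in practice.
    counts = Counter(re.findall(r"\w+", text.lower()))
    norm = math.sqrt(sum(v * v for v in counts.values()))
    return {w: v / norm for w, v in counts.items()}

def cosine(a, b):
    return sum(a[w] * b.get(w, 0.0) for w in a)

def semantic_chunks(sentences, threshold=0.3):
    # Start a new chunk when similarity between consecutive
    # sentences falls below the threshold (a semantic breakpoint).
    # The threshold value is an illustrative assumption.
    chunks, current = [], [sentences[0]]
    prev = embed(sentences[0])
    for sent in sentences[1:]:
        vec = embed(sent)
        if cosine(prev, vec) < threshold:
            chunks.append(" ".join(current))
            current = []
        current.append(sent)
        prev = vec
    chunks.append(" ".join(current))
    return chunks

sentences = [
    "Retrieval augmented generation grounds LLM answers in documents.",
    "Retrieval quality in RAG depends on how documents are chunked.",
    "Cats are popular pets that sleep most of the day.",
]
print(semantic_chunks(sentences))
```

The two RAG sentences share enough vocabulary to stay in one chunk, while the off-topic sentence triggers a breakpoint and lands in its own chunk.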

📌 Code:
https://github.com/pinecone-io/examples/blob/master/learn/generation/better-rag/02b-semantic-chunking.ipynb

🌲 Subscribe for Latest Articles and Videos:
https://www.pinecone.io/newsletter-signup/

👋🏼 AI Consulting:
https://aurelio.ai

👾 Discord:
https://discord.gg/c5QtDB9RAP

Twitter: https://twitter.com/jamescalam
LinkedIn: https://www.linkedin.com/in/jamescalam/

00:00 Semantic Chunking for RAG
00:45 What is Semantic Chunking
03:31 Semantic Chunking in Python
12:17 Adding Context to Chunks
13:41 Providing LLMs with More Context
18:11 Indexing our Chunks
20:27 Creating Chunks for the LLM
27:18 Querying for Chunks

#artificialintelligence #ai #nlp #chatbot #openai
