Learn how to optionally enhance your RAG pipeline with language model integration
This guide explains how to optionally integrate and configure Large Language Models (LLMs) in GraphorLM to generate natural language responses based on retrieved information.
While GraphorLM focuses primarily on high-quality information retrieval, you can optionally add an LLM component to your pipeline to transform retrieved information into natural language responses. The LLM node is entirely optional: many users achieve their goals with the Retrieval component as the final output.
In GraphorLM’s Flow Builder, the LLM component takes retrieved document chunks as input and uses them to generate natural language responses. This component is where you configure which language model to use and how it should process the retrieved information to generate responses.
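Conceptually, an LLM node combines the retrieved chunks with the system prompt and the user's question before calling the model. The sketch below illustrates that assembly step in Python; `build_llm_input` and its message format are illustrative assumptions, not the GraphorLM API.

```python
# Conceptual sketch (not the GraphorLM API): an LLM node receives retrieved
# chunks and a user question, then assembles the input sent to the model.
def build_llm_input(system_prompt: str, chunks: list[str], question: str) -> list[dict]:
    """Assemble chat messages: retrieved chunks become a numbered context block."""
    context = "\n\n".join(f"[{i + 1}] {chunk}" for i, chunk in enumerate(chunks))
    return [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": f"Context:\n{context}\n\nQuestion: {question}"},
    ]

messages = build_llm_input(
    "You are a helpful assistant that answers questions based on the provided context.",
    ["GraphorLM retrieval returns ranked chunks.", "The LLM node is optional."],
    "Is the LLM node required?",
)
print(messages[1]["content"])
```

Numbering the chunks (`[1]`, `[2]`, …) makes it easy for the model to cite which piece of context supports each claim.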
The system prompt defines the LLM’s behavior and instructions for generating responses. You can customize this prompt to:
Define the assistant’s personality and tone
Specify formatting requirements
Provide domain-specific context
Set boundaries for what the assistant should and shouldn’t do
Example system prompt:
```
You are a helpful assistant that answers questions based on the provided context.
Always cite your sources from the context.
If you don't know the answer, say "I don't have enough information" rather than making up an answer.
```
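The final instruction in the prompt above relies on the model to admit uncertainty. A complementary pattern is to enforce the fallback in the pipeline itself: skip the model call entirely when retrieval returns nothing. The guard below is a hypothetical sketch of that idea, not a built-in GraphorLM feature; `answer` and `call_model` are illustrative names.

```python
# Hypothetical guard (not a GraphorLM feature): return the fallback answer
# directly when no chunks were retrieved, mirroring the prompt's instruction.
FALLBACK = "I don't have enough information"

def answer(chunks: list[str], call_model) -> str:
    """Call the model only when there is retrieved context to ground it."""
    if not chunks:
        return FALLBACK
    return call_model(chunks)

# Stubbed model call for illustration.
print(answer([], lambda chunks: "unused"))
# prints: I don't have enough information
print(answer(["some context"], lambda chunks: f"Answer grounded in {len(chunks)} chunk(s)."))
# prints: Answer grounded in 1 chunk(s).
```

Handling the empty-retrieval case in code avoids spending tokens on a call that can only hallucinate.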