What is LLM Grounding?
Connecting AI outputs to verified sources to reduce hallucinations.
What is grounding?
Grounding connects LLM outputs to verified external sources such as documents, databases, or APIs. This reduces hallucinations by ensuring the model's responses are based on factual, retrievable information rather than parametric knowledge alone (what the model absorbed during training).
Grounding Techniques
- RAG: Retrieve relevant documents and include them in the prompt before generation (see the sketch after this list)
- Tool use: Call APIs or functions for real-time data
- Citations: Require the model to cite a source for each claim
- Knowledge graphs: Verify facts against structured relationships
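As a concrete illustration of the RAG pattern, here is a minimal sketch: a toy keyword-overlap retriever over an in-memory corpus, plus a prompt builder that inlines the retrieved sources. The corpus, scoring, and prompt wording are illustrative assumptions, not any specific framework's API; production systems typically use embedding-based retrieval over a vector store.

```python
# Minimal RAG sketch. The corpus, the keyword-overlap scoring, and the
# prompt wording are illustrative assumptions, not a specific library's API.

CORPUS = {
    "pricing.md": "The Pro plan costs $20 per seat per month.",
    "limits.md": "API rate limits are 60 requests per minute.",
    "sla.md": "Uptime SLA is 99.9 percent for all paid plans.",
}

def retrieve(query: str, k: int = 2) -> list[tuple[str, str]]:
    """Rank documents by naive keyword overlap with the query."""
    terms = set(query.lower().split())
    ranked = sorted(
        CORPUS.items(),
        key=lambda item: len(terms & set(item[1].lower().split())),
        reverse=True,
    )
    return ranked[:k]

def build_grounded_prompt(query: str) -> str:
    """Inline the retrieved sources so the model answers from them."""
    context = "\n".join(f"[{name}] {text}" for name, text in retrieve(query))
    return (
        "Answer using ONLY the sources below and cite the source name "
        "for each claim. If the sources don't contain the answer, say so.\n\n"
        f"Sources:\n{context}\n\nQuestion: {query}"
    )

print(build_grounded_prompt("What does the Pro plan cost per month?"))
```

The key design choice is instructing the model to answer only from the supplied sources and to cite them, which is what makes the output checkable afterward.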
Grounding Limitations
Grounding helps but doesn't solve everything:
- Models can misinterpret retrieved sources
- Models can hallucinate connections between unrelated facts
- Models may generate claims not supported by the sources
- Grounded outputs still require monitoring for accuracy (a simple check is sketched after this list)
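To make the monitoring point concrete, here is a naive groundedness check that flags answer sentences with little word overlap against the retrieved sources. The function name, threshold, and lexical-overlap heuristic are assumptions for illustration; real pipelines usually use an NLI model or an LLM judge instead.

```python
# Naive groundedness check: flag answer sentences whose words are mostly
# absent from the retrieved sources. The 0.5 threshold and lexical-overlap
# heuristic are illustrative assumptions, not a production method.

def unsupported_sentences(answer: str, sources: list[str],
                          threshold: float = 0.5) -> list[str]:
    """Return answer sentences with low word overlap against the sources."""
    source_words = set(" ".join(sources).lower().split())
    flagged = []
    for sentence in answer.split("."):
        words = set(sentence.lower().split())
        if not words:
            continue  # skip empty fragments from trailing periods
        support = len(words & source_words) / len(words)
        if support < threshold:
            flagged.append(sentence.strip())
    return flagged

sources = ["The Pro plan costs $20 per seat per month."]
answer = ("The Pro plan costs $20 per seat per month. "
          "It also includes free onboarding.")
print(unsupported_sentences(answer, sources))
# -> ['It also includes free onboarding']
```

A check like this catches the "unsupported claims" failure mode above: the second sentence is flagged because nothing in the sources backs it.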
Does grounding eliminate hallucinations?
No. Grounding significantly reduces hallucinations but doesn't eliminate them. Models can still misinterpret sources, hallucinate connections, or generate unsupported claims. Monitor grounded outputs for factual accuracy.