FaaF: Facts As A Function For Evaluating RAG
There have been instances where another Language Model is used to vet the RAG output, an approach which often fails to detect incorrect or incomplete generated data.
The Problem
Practical factual recall evaluation in RAG systems is problematic for the following reasons:
Most prior work focuses on accuracy in language model generated text; comparatively little attention has been given to automatically verifying truthful, independent statements in poorly generated text, or to simulating low-quality Retrieval Augmented Generation (RAG) scenarios.
Given that a single generated text may contain multiple facts requiring verification, the current method of verifying each fact independently can be overly time-consuming and resource-intensive.
RAG systems involve numerous components, such as the knowledge base, retriever, prompt formulation and language model, all of which demand substantial tuning. Efficiency is therefore crucial for practical implementation.
Exact matching of ground truth text in the generated text is susceptible to false negatives, because the ground truth information might exist in the generated text but be expressed differently.
And when the ground truth information is longer than a few words, the chance of an exact match becomes very slim, as the short example below illustrates.
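As a rough illustration of this failure mode (my own example, not code from the paper), a paraphrased fact slips past exact substring matching even though the generated answer clearly contains it:

```python
# Illustrative only: why exact matching under-counts factual recall.
ground_truth_fact = "The Eiffel Tower was completed in 1889."
generated_answer = "Construction of the Eiffel Tower finished in 1889."

# The fact is present, but worded differently, so an exact match fails.
exact_match = ground_truth_fact.lower() in generated_answer.lower()
print(exact_match)  # False -> a false negative for factual recall
```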
The study states that the factual recall evaluation framework with FaaF has been open-sourced as a Python package (pip install faaf). However, I was not able to install it.
The Solution (FaaF)
FaaF is part of a complete factual recall evaluation framework tailored to RAG systems, which can be used to create a test dataset and perform automated factual recall evaluation.
The evaluation data is augmented with ground truth facts and human annotation. WikiEval features question and answer pairs with answers of variable factual quality, which enables simulating deficient RAG responses.
Facts as a Function (FaaF) is a new fact verification formulation which outperforms fact verification via prompting in all examined conditions and reduces the required number of LM calls and completion tokens by more than 5 times.
Considering the image below, a constructor dynamically creates a function object from a set of facts.
Function calling then allows the evaluator LM (LMeval) to verify all facts within a single call when provided with an input text.
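Below is a minimal sketch of how such a constructor could be put together, assuming Pydantic v2 and an OpenAI-style function-calling interface. The field names, verdict labels and overall schema here are my own illustration, not the faaf package's actual API.

```python
# Sketch only (not the official faaf package): turn a list of facts into a
# single function object whose JSON schema an evaluator LM can fill in.
from typing import Literal
from pydantic import Field, create_model

Verdict = Literal["true", "false", "unclear"]

def facts_to_function(facts: list[str]):
    # One required parameter per fact; the fact text becomes the parameter
    # description, so a single function call verifies every fact at once.
    fields = {
        f"fact_{i}": (Verdict, Field(..., description=fact))
        for i, fact in enumerate(facts, start=1)
    }
    return create_model("FactVerification", **fields)

facts = [
    "The Eiffel Tower was completed in 1889.",
    "The Eiffel Tower is 330 metres tall.",
]
FactVerification = facts_to_function(facts)

# The resulting JSON schema can be supplied as one function/tool definition
# to a function-calling LM, together with the RAG answer under evaluation.
print(FactVerification.model_json_schema())
```

The point of the design is that the evaluator LM returns a single structured object covering every fact, rather than being prompted separately for each one.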
FaaF reduces the error rate in identifying unsupported facts by up to 40 percentage points compared to prompting whilst reducing the number of LMeval calls and output tokens by more than 5 times.
And considering the image below, given a set of ground truth Answers, facts are first extracted via LMf. The hypothesised responses of the RAG system (in this instance the Ungrounded Answer and the Poor Answer) are then tested for recall against the extracted facts.
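As a sketch of the scoring step (again my own illustration, with hypothetical names), factual recall for a given answer can simply be the fraction of extracted facts that the evaluator LM marks as supported:

```python
# Sketch only: score factual recall once the evaluator LM has returned a
# verdict for every extracted fact (verdict labels are illustrative).
def factual_recall(verdicts: dict[str, str]) -> float:
    """Fraction of ground-truth facts that the RAG answer supports."""
    supported = sum(1 for v in verdicts.values() if v == "true")
    return supported / len(verdicts)

# A deficient "Poor Answer" that only supports the first of two facts:
print(factual_recall({"fact_1": "true", "fact_2": "false"}))  # 0.5
```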
In Conclusion
The study found that relying on prompting for fact verification often overestimates the truthfulness of statements, especially when the text lacks important information.
This method can have error rates as high as 50% when dealing with incomplete texts.
However, presenting facts as a function to the language model (LM) greatly improves the accuracy and efficiency of verification.
FaaF shows that texts with somewhat relevant or inaccurate information are more likely to produce false positives than texts with missing or incomplete details.
The study also discovered that including a "not clear" option alongside the True/False choices improves overall accuracy. Additionally, asking for citations before verifying facts can be helpful in some cases, but it may lead to false negatives if the text supports the fact only indirectly, without providing a direct citation.
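A hypothetical sketch of what such a citation-first schema could look like (the field names are mine, not the paper's): placing the citation field before the verdict nudges the LM to quote supporting text before it commits to a decision.

```python
# Sketch only: request a citation before the verdict for each fact.
from typing import Literal
from pydantic import BaseModel

class FactWithCitation(BaseModel):
    # Passage from the answer that supports or refutes the fact; generated
    # first, so the LM "shows its evidence" before choosing a verdict.
    citation: str
    verdict: Literal["true", "false", "unclear"]
```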
Finally, using FaaF significantly reduces both the number of LM calls and tokens required for verification, making the process more efficient in terms of cost and time.