Comparing Evaluation Results
In this section, you'll learn how to compare evaluation results for the same set of test cases, enabling you to identify improvements and regressions in your LLM application's performance.
Detecting regressions is crucial as they reveal areas where your LLM's performance has unexpectedly declined.
Hyperparameters Iteration Recap
In the previous section, we updated our medical chatbot's model, temperature, and prompt template settings, and re-evaluated the chatbot on the same test cases and metrics, which produced the following report:
We found that while all previously failing test cases now pass, one test case has regressed: the first test case, which previously achieved near-perfect scores, now fails Faithfulness and Professionalism.
While addressing failing test cases is critical, it's equally important to evaluate improvements. Examining the specific test cases where scores have increased, and understanding the reasons behind those changes, ensures that these improvements align with our desired outcomes.
Confident AI provides a simple way to compare evaluation results for the same test cases. In the next step, we'll explore how to use this feature.
Comparing Evaluations
To compare two evaluations, navigate to the Comparing Test Runs page, which is the third tab in the left navigation bar. Then select the test run ID of the evaluation results you want to compare with your new results.
A test run on Confident AI represents a single evaluation of a collection of test cases using a defined set of metrics.
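For context, a test run is what gets created each time you call DeepEval's evaluate() on a collection of test cases and metrics. The sketch below is a minimal, illustrative example: the input, output, retrieval context, and metric threshold are placeholders rather than the exact values used in this tutorial.

```python
from deepeval import evaluate
from deepeval.test_case import LLMTestCase
from deepeval.metrics import FaithfulnessMetric

# A hypothetical, minimal collection of test cases for the medical chatbot.
# In practice these would be your real inputs, generated outputs, and
# retrieved contexts.
test_cases = [
    LLMTestCase(
        input="I have a persistent cough and mild fever. What could it be?",
        actual_output="Based on your symptoms, this could be a common cold or the flu...",
        retrieval_context=[
            "Common colds and influenza often present with cough and low-grade fever."
        ],
    ),
    # ... more test cases
]

# A defined set of metrics (Faithfulness shown here as an example).
metrics = [FaithfulnessMetric(threshold=0.5)]

# Calling evaluate() on these test cases and metrics produces a single
# test run, which is what appears on Confident AI and can later be
# compared against other test runs.
evaluate(test_cases=test_cases, metrics=metrics)
```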
Once you select the test run to compare with, Confident AI will automatically align the test cases and visually highlight the differences—improvements are marked with green rows, while regressions are shown in red.
Confident AI matches test cases based on the input of each LLMTestCase. If no matching test cases are found, no comparisons will be displayed.
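To illustrate, the snippet below sketches the same test case as it might appear in two different test runs. Because the input string is identical, Confident AI can align the two and compare their metric scores; the outputs shown are hypothetical.

```python
from deepeval.test_case import LLMTestCase

# Test case from the previous test run (older model / prompt template).
previous = LLMTestCase(
    input="I have a persistent cough and mild fever. What could it be?",
    actual_output="It could be a common cold. Drink fluids and rest.",
)

# Test case from the new test run: the input is identical, so the two runs
# can be aligned; only the actual_output (and other fields) differ.
current = LLMTestCase(
    input="I have a persistent cough and mild fever. What could it be?",
    actual_output="A persistent cough with mild fever is often viral, but...",
)
```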
You can analyze each test case further by clicking on it to inspect individual regressing and improving metric scores. For instance, test cases 2, 4, and 5 show significant improvements in previously failing metrics, with their updated outputs aligning with our expectations.
Let’s take a closer look at the regressing test case:
Here, we observe that introducing additional flexibility into the prompt template may have inadvertently caused some confusion during the generation process. As a result, the chatbot appears uncertain about whether to proceed with diagnosing the patient or to request further details, and ultimately fails to meet the standards for Professionalism and Faithfulness.
Increasing the complexity of your prompt template can make it harder for an LLM to process queries effectively. Upgrading the underlying LLM is one way to address this challenge.
Running One Final Evaluation
Let's iterate on our hyperparameters one last time by upgrading the underlying LLM to GPT-4o, re-computing the outputs, and re-running the evaluation.
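As a rough sketch, the re-run might look something like the following. It assumes outputs are regenerated with OpenAI's GPT-4o and scored with DeepEval's evaluate(); the prompt template, temperature, retrieval context, and the GEval criteria for Professionalism are illustrative placeholders, not the exact configuration used in this tutorial.

```python
from openai import OpenAI
from deepeval import evaluate
from deepeval.test_case import LLMTestCase, LLMTestCaseParams
from deepeval.metrics import FaithfulnessMetric, GEval

client = OpenAI()

# Hypothetical prompt template and inputs; substitute your actual ones.
prompt_template = "You are a careful, professional medical assistant. ..."
inputs = ["I have a persistent cough and mild fever. What could it be?"]

# Re-compute the outputs with the upgraded model.
test_cases = []
for user_input in inputs:
    response = client.chat.completions.create(
        model="gpt-4o",      # upgraded model
        temperature=0.7,     # hypothetical temperature setting
        messages=[
            {"role": "system", "content": prompt_template},
            {"role": "user", "content": user_input},
        ],
    )
    test_cases.append(
        LLMTestCase(
            input=user_input,
            actual_output=response.choices[0].message.content,
            retrieval_context=["..."],  # your retrieved medical context
        )
    )

# Illustrative metrics: Faithfulness plus a custom "Professionalism" metric
# defined with GEval (the criteria wording here is a placeholder).
metrics = [
    FaithfulnessMetric(threshold=0.5),
    GEval(
        name="Professionalism",
        criteria="Determine whether the response maintains a professional, clinical tone.",
        evaluation_params=[LLMTestCaseParams.INPUT, LLMTestCaseParams.ACTUAL_OUTPUT],
    ),
]

# Re-running the evaluation creates a new test run to compare on Confident AI.
evaluate(test_cases=test_cases, metrics=metrics)
```

Here are the final results: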
After multiple iterations, our medical chatbot finally passes all the test cases, despite initially failing the majority of them.
However, we've only evaluated 5 test cases so far. To truly evaluate your LLM application at scale, you'll need a larger and more diverse evaluation dataset. Such a dataset should include challenging scenarios and edge cases to rigorously test your model's capabilities. While you could manually curate this dataset, doing so can be both time-intensive and expensive.
In the next section, we'll dive into how you can generate synthetic data using DeepEval to efficiently scale the evaluation of your LLM application.