RAG Evaluation
Note: Visit our Dashboard for a comprehensive, interactive RAG Evaluation experience.
Evaluating Embedding Models
Test Query Generation
Before starting an evaluation, please first generate the test queries that will be used in the RAG evaluation for a data source:
client.generate_test_queries(source_name='City')
Using custom queries in the evaluation will be supported soon.
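If you are curious what test query generation typically looks like, the sketch below is a conceptual illustration only, not this platform's implementation: an LLM is prompted to write questions that can be answered from a single indexed chunk. The OpenAI client, model name, and prompt wording are all assumptions for illustration.

from openai import OpenAI

llm = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def generate_queries_for_chunk(chunk_text, n_queries=3):
    # Ask an LLM to write questions answerable from this chunk alone.
    prompt = (f"Write {n_queries} short questions that can be answered using only "
              f"the following passage. Return one question per line.\n\n{chunk_text}")
    response = llm.chat.completions.create(
        model="gpt-4o-mini",  # assumed model; any chat-capable model works
        messages=[{"role": "user", "content": prompt}],
    )
    lines = response.choices[0].message.content.splitlines()
    return [q.strip() for q in lines if q.strip()]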
Evaluation
To evaluate the retrieval performance of a list of selected embedding models:
results = client.evaluate(source_name='City',
                          embedding_models=['bge-base-en-v1.5', 'text-embedding-3-large'])
Note: Please grab a coffee and allow a few minutes for the evaluation to complete.
Please see here for a list of embedding models that are currently supported on our platform.
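As a rough idea of what the retrieval evaluation measures, the following is a minimal sketch (not the platform's implementation) of two common retrieval metrics computed over the generated test queries: hit rate and mean reciprocal rank (MRR).

def hit_rate_and_mrr(retrieved_ids, relevant_ids, k=5):
    # retrieved_ids[i]: ranked chunk ids returned for query i
    # relevant_ids[i]: the ground-truth chunk id for query i
    hits, reciprocal_ranks = 0, 0.0
    for ranked, relevant in zip(retrieved_ids, relevant_ids):
        top_k = ranked[:k]
        if relevant in top_k:
            hits += 1
            reciprocal_ranks += 1.0 / (top_k.index(relevant) + 1)
    n = len(relevant_ids)
    return hits / n, reciprocal_ranks / n

# Two test queries: the first hit at rank 1, the second missed entirely.
print(hit_rate_and_mrr([['c1', 'c7'], ['c9', 'c2']], ['c1', 'c4']))  # (0.5, 0.5)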
After evaluation, to switch the default embedding model for a data source (e.g., to text-embedding-3-large):
client.set_source_embedding_model(source_name='City', embedding_model='text-embedding-3-large')
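If you prefer to pick the winner programmatically rather than by eye, something like the sketch below could work. It assumes that results maps each embedding model name to a dict of metrics containing a 'hit_rate' key; the actual return schema may differ.

# Assumed schema: results = {model_name: {'hit_rate': ..., ...}, ...}
best_model = max(results, key=lambda m: results[m]['hit_rate'])
client.set_source_embedding_model(source_name='City', embedding_model=best_model)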