API reference
Consistency evaluation
Streamlit labelling app
Small tool for labelling data.
pip install streamlit
streamlit run labellingapp.py
Expects the metadata CSV and the topic CSV in the data directory.
update_this_relevancy(var_, topic_)
Helper function to bind the variables to scope.
Source code in tools/labellingapp.py, lines 113–115.
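The function body is collapsed in this reference. As a hedged illustration only, a Streamlit helper that binds loop variables into a widget callback usually looks like the sketch below; the session-state keys and the "relevancy" store are assumptions, not the app's actual names.

```python
import streamlit as st

def update_this_relevancy(var_, topic_):
    """Return a callback with var_ and topic_ bound at definition time,
    so each checkbox updates its own (variable, topic) pair."""
    def _update():
        # "relevancy" and the checkbox key below are hypothetical names.
        key = f"cb_{var_}_{topic_}"
        st.session_state["relevancy"][(var_, topic_)] = st.session_state.get(key, False)
    return _update
```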
Merging labels
merge_labels()
Description: Merge labels from multiple JSON label files into a single dictionary.
Source code in tools/merge_labels.py, lines 20–44.
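As a rough sketch only (the file layout and merge policy are assumptions, not the project's actual logic), merging several JSON label files into one dictionary could look like this:

```python
import json
from collections import defaultdict
from pathlib import Path

def merge_labels_sketch(label_dir: str = "data/labels") -> dict:
    """Merge every *.json label file in label_dir into a single dictionary."""
    merged: dict[str, list] = defaultdict(list)
    for path in Path(label_dir).glob("*.json"):
        with open(path) as f:
            labels = json.load(f)  # assumed shape: {dataset_id: [labels, ...]}
        for dataset_id, values in labels.items():
            # keep each label once per dataset id
            merged[dataset_id].extend(v for v in values if v not in merged[dataset_id])
    return dict(merged)
```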
Run Batch Training
ExperimentRunner
Description: This class runs all the experiments. To modify any behavior, change the methods of this class as needed. You may also want to check out ResponseParser.
Source code in evaluation/training_utils.py, lines 122–307.
aggregate_multiple_queries(qa_dataset, data_metadata, types_of_llm_apply)
Description: Aggregate the results of multiple queries into a single dataframe and count the number of times a dataset appears in the results. This is done here rather than in evaluate to keep things easier to manage, as each query requires a different chroma_db and config.
Source code in evaluation/training_utils.py, lines 246–275.
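As a hedged illustration of the counting step described above (not the actual implementation; the shape of the per-query results is assumed), the aggregation could be sketched with pandas as:

```python
import pandas as pd

def count_dataset_hits(per_query_results: dict[str, list[int]]) -> pd.DataFrame:
    """per_query_results maps a query string to the list of dataset ids it returned."""
    rows = [
        {"query": query, "dataset_id": dataset_id}
        for query, ids in per_query_results.items()
        for dataset_id in ids
    ]
    df = pd.DataFrame(rows)
    # Count in how many distinct queries each dataset shows up.
    counts = (
        df.groupby("dataset_id")["query"]
        .nunique()
        .rename("n_queries")
        .reset_index()
    )
    return df.merge(counts, on="dataset_id")
```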
ResponseParser
Bases: ResponseParser
Source code in evaluation/training_utils.py, lines 77–119.
load_paths()
Description: Load paths from paths.json.
Source code in evaluation/training_utils.py, lines 78–83.
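A minimal sketch of loading a paths.json file, assuming it is a flat JSON object of named paths (the actual keys are not documented on this page):

```python
import json

def load_paths_sketch(path: str = "paths.json") -> dict:
    """Read paths.json and return it as a dictionary of named paths."""
    with open(path) as f:
        return json.load(f)
```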
parse_and_update_response(metadata)
Description: Parse the response from the RAG and LLM services and update the metadata based on the response.
Source code in evaluation/training_utils.py, lines 85–119.
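The response format is not documented on this page. As a heavily hedged sketch of the general idea only (keep the metadata rows whose dataset ids the RAG/LLM services returned), the update step might resemble:

```python
import pandas as pd

def filter_metadata_by_response(metadata: pd.DataFrame, response_ids: list[int]) -> pd.DataFrame:
    """Keep the metadata rows whose 'did' column appears in the parsed response.
    The column name 'did' and the list-of-ids shape are assumptions."""
    return metadata[metadata["did"].isin(response_ids)].reset_index(drop=True)
```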
exp_0(process_query_elastic_search, eval_path, query_key_dict)
EXPERIMENT 0: Get results from Elasticsearch.
Source code in evaluation/experiments.py, lines 7–36.
exp_1(eval_path, config, list_of_embedding_models, list_of_llm_models, subset_ids, query_key_dict)
EXPERIMENT 1: Main evaluation loop used to run the base experiments with different models and embeddings. Takes into account the following:
- Original data ingestion pipeline: combine all metadata fields and the dataset description into a single string and embed it with no pre-processing.
- list_of_embedding_models = ["BAAI/bge-large-en-v1.5", "BAAI/bge-base-en-v1.5", "Snowflake/snowflake-arctic-embed-l"]
- list_of_llm_models = ["llama3", "phi3"]
- types_of_llm_apply: LLM applied as a filter before the RAG pipeline, LLM applied as a reranker after the RAG pipeline, LLM not used at all.
Source code in evaluation/experiments.py, lines 39–70.
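For orientation, this experiment iterates over every combination of embedding model, LLM, and way of applying the LLM. The sketch below is illustrative only; run_experiment and the types_of_llm_apply labels are placeholders, not the project's real API.

```python
from itertools import product

list_of_embedding_models = [
    "BAAI/bge-large-en-v1.5",
    "BAAI/bge-base-en-v1.5",
    "Snowflake/snowflake-arctic-embed-l",
]
list_of_llm_models = ["llama3", "phi3"]
types_of_llm_apply = ["llm_before_rag", "llm_after_rag", "no_llm"]  # placeholder labels

def run_experiment(embedding_model: str, llm_model: str, llm_apply: str) -> None:
    """Placeholder for a single pipeline run; the real logic lives in ExperimentRunner."""
    print(f"Running {embedding_model} + {llm_model} ({llm_apply})")

for embedding_model, llm_model, llm_apply in product(
    list_of_embedding_models, list_of_llm_models, types_of_llm_apply
):
    run_experiment(embedding_model, llm_model, llm_apply)
```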
exp_2(eval_path, config, subset_ids, query_key_dict)
EXPERIMENT 2: Evaluating temperature = 1 (default was 0.95). Takes into account the following:
- Original data ingestion pipeline: combine all metadata fields and the dataset description into a single string and embed it with no pre-processing.
- list_of_embedding_models = ["BAAI/bge-large-en-v1.5"]
- list_of_llm_models = ["llama3"]
- types_of_llm_apply: LLM applied as a filter before the RAG pipeline, LLM applied as a reranker after the RAG pipeline, LLM not used at all.
Source code in evaluation/experiments.py, lines 73–105.
exp_3(eval_path, config, subset_ids, query_key_dict)
EXPERIMENT 3: Evaluating search type [mmr, similarity_score_threshold] (default was similarity). Takes into account the following:
- Original data ingestion pipeline: combine all metadata fields and the dataset description into a single string and embed it with no pre-processing.
- list_of_embedding_models = ["BAAI/bge-large-en-v1.5"]
- list_of_llm_models = ["llama3"]
- types_of_llm_apply: LLM applied as a reranker after the RAG pipeline.
Source code in evaluation/experiments.py, lines 108–144.
exp_4(eval_path, config, subset_ids, query_key_dict)
EXPERIMENT 4: Evaluating chunk size. The default is 1000; trying out 512 and 128. Takes into account the following:
- Original data ingestion pipeline: combine all metadata fields and the dataset description into a single string and embed it with no pre-processing.
- list_of_embedding_models = ["BAAI/bge-large-en-v1.5"]
- list_of_llm_models = ["llama3"]
- types_of_llm_apply: LLM applied as a reranker after the RAG pipeline.
Source code in evaluation/experiments.py, lines 147–182.
get_queries(query_templates, load_eval_queries)
Get queries from the dataset templates and format them.
Source code in evaluation/training_utils.py, lines 318–328.
ollama_setup(list_of_llm_models)
Description: Set up the Ollama server and pull the LLM models that are being used.
Source code in evaluation/training_utils.py, lines 63–73.
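A hedged sketch of what starting the Ollama server and pulling the models could look like using the standard ollama CLI via subprocess; the real helper may use a different mechanism:

```python
import subprocess
import time

def ollama_setup_sketch(list_of_llm_models: list[str]) -> None:
    """Start `ollama serve` in the background and pull each requested model."""
    subprocess.Popen(["ollama", "serve"])  # assumes the ollama CLI is installed
    time.sleep(5)                          # crude wait for the server to come up
    for llm_model in list_of_llm_models:
        subprocess.run(["ollama", "pull", llm_model], check=True)
```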
process_embedding_model_name_hf(name)
Description: Process the name of an embedding model from Hugging Face for use as an experiment name.
Input: name (str) - name of the embedding model from Hugging Face.
Returns: name (str) - processed name of the embedding model.
Source code in evaluation/training_utils.py, lines 31–39.
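Hugging Face model ids contain characters such as "/" that are awkward in file and experiment names. A plausible sketch (the exact replacement rules are an assumption) is:

```python
def process_embedding_model_name_hf_sketch(name: str) -> str:
    """Turn e.g. 'BAAI/bge-large-en-v1.5' into 'BAAI_bge-large-en-v1.5'."""
    return name.replace("/", "_")
```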
process_llm_model_name_ollama(name)
Description: Process the name of an LLM from Ollama for use as an experiment name.
Input: name (str) - name of the LLM from Ollama.
Returns: name (str) - processed name of the LLM.
Source code in evaluation/training_utils.py, lines 42–50.
process_query_elastic_search(query, dataset_id)
Get the results from the OpenML Elasticsearch server.
Source code in evaluation/training_utils.py, lines 331–337.
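A heavily hedged sketch of querying an OpenML Elasticsearch endpoint for datasets matching a query; the URL, index, and payload below are placeholders and may not match the project's actual request:

```python
import requests

def query_openml_es_sketch(query: str, size: int = 100) -> list[dict]:
    """Send a simple match query to a (placeholder) OpenML Elasticsearch endpoint."""
    url = "https://es.openml.org/data/_search"  # placeholder endpoint; verify before use
    payload = {"query": {"match": {"description": query}}, "size": size}
    response = requests.post(url, json=payload, timeout=30)
    response.raise_for_status()
    return response.json()["hits"]["hits"]
```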
Evaluation Utils
EvaluationProcessor
Description: Process all the evaluated results, add the required metrics, and save the results as a CSV / generate plots.
Source code in evaluation/evaluation_utils.py, lines 8–196.
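The constructor signature is not documented on this page; a hypothetical usage sketch (import path and argument name assumed) of the class's run flow:

```python
# Hypothetical usage only; the real constructor arguments and return value may differ.
from evaluation.evaluation_utils import EvaluationProcessor

processor = EvaluationProcessor(eval_path="../data/evaluation/")  # assumed argument name
processor.run()  # load result files, compute metrics, and display the results
```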
add_map(grouped_df)
staticmethod
Description: Compute the mean average precision metric for each group in the dataframe.
Source code in evaluation/evaluation_utils.py, lines 173–188.
add_precision(grouped_df)
staticmethod
Description: Compute the precision metric for each group in the dataframe.
Source code in evaluation/evaluation_utils.py, lines 150–159.
add_recall(grouped_df)
staticmethod
Description: Compute the recall metric for each group in the dataframe.
Source code in evaluation/evaluation_utils.py, lines 161–171.
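For reference, hedged per-query sketches of the metrics the three methods above add; the real implementations operate on grouped dataframes and may normalize differently:

```python
def precision(predicted_ids: set[int], true_ids: set[int]) -> float:
    """Fraction of retrieved datasets that are relevant."""
    return len(predicted_ids & true_ids) / max(len(predicted_ids), 1)

def recall(predicted_ids: set[int], true_ids: set[int]) -> float:
    """Fraction of relevant datasets that were retrieved."""
    return len(predicted_ids & true_ids) / max(len(true_ids), 1)

def average_precision(ranked_ids: list[int], true_ids: set[int]) -> float:
    """Average precision over a ranked result list; MAP is the mean of this over queries."""
    hits, score = 0, 0.0
    for k, dataset_id in enumerate(ranked_ids, start=1):
        if dataset_id in true_ids:
            hits += 1
            score += hits / k
    return score / max(len(true_ids), 1)
```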
create_query_key_dict()
Description: Use the manual evaluation to create a dictionary of queries and their corresponding ground-truth dataset ids, e.g. Math,"45617,43383,2,45748".
Source code in evaluation/evaluation_utils.py, lines 126–136.
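Given the example row Math,"45617,43383,2,45748", a hedged sketch of turning such a CSV into a query-to-ground-truth-ids dictionary (the file name and two-column layout are assumptions):

```python
import csv

def create_query_key_dict_sketch(path: str = "manual_evaluation.csv") -> dict[str, list[int]]:
    """Map each query (e.g. 'Math') to its ground-truth dataset ids."""
    query_key_dict: dict[str, list[int]] = {}
    with open(path, newline="") as f:
        for query, ids in csv.reader(f):  # assumes exactly two columns per row
            query_key_dict[query] = [int(i) for i in ids.split(",")]
    return query_key_dict
```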
generate_results(csv_files)
Description: Load the results from the CSV files, group them, and compute metrics for each group. Then merge the results and sort them by the specified metric.
Source code in evaluation/evaluation_utils.py, lines 47–96.
load_queries_from_csv()
Description: Load the queries from the CSV file.
Source code in evaluation/evaluation_utils.py, lines 108–115.
load_query_templates()
Description: Load the query templates from the txt file. These are used to generate the queries for the evaluation process, e.g. query_template = "find me a dataset about" combined with query = "cancer".
Source code in evaluation/evaluation_utils.py, lines 117–124.
load_result_files()
Description: Find all the CSV files in the evaluation directory.
Source code in evaluation/evaluation_utils.py, lines 40–45.
preprocess_results(results_df)
Description: Preprocess the results dataframe by filling missing values and converting the columns to the correct data types.
Source code in evaluation/evaluation_utils.py, lines 138–148.
run()
Description: Load the files, run the evaluation process, and display the results.
Source code in evaluation/evaluation_utils.py, lines 30–38.