Structured Query¶
get_structured_query(query) async ¶
Description: Take the query, replace `%20` (URL-encoded spaces) with spaces, and invoke the chain to get answers based on the prompt.
Source code in `structured_query/llm_service_structured_query.py` (lines 35–56)
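The code block itself was not captured above; the following is a minimal sketch of the described behavior, assuming the chain is a LangChain runnable with an async `ainvoke` method (the `EchoChain` stand-in below is hypothetical, not from the repository):

```python
import asyncio
from urllib.parse import unquote

async def get_structured_query(query: str, chain):
    """Decode %20-encoded spaces, then invoke the chain to get answers."""
    cleaned = unquote(query)  # "what%20is%20ai" -> "what is ai"
    return await chain.ainvoke({"query": cleaned})

# Hypothetical stand-in for the real LangChain chain:
class EchoChain:
    async def ainvoke(self, inputs):
        return inputs["query"]

result = asyncio.run(get_structured_query("what%20is%20ai", EchoChain()))
# result is the chain's answer for the decoded query "what is ai"
```

In the real service the chain would be built from a prompt and an LLM rather than the echo stub shown here.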
Deprecated¶
- This section contains the API reference for code that does not use structured query processing from LangChain. It is unused but left in for future reference.
get_llm_query(query) async ¶
Description: Take the query, replace `%20` (URL spacing) with spaces, and invoke the chain to get answers based on the prompt.
Source code in `llm_service/llm_service.py` (lines 35–48)
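Since the code block was lost in extraction, here is a hedged sketch of this deprecated variant; per the description it mirrors `get_structured_query`, and the `StubChain` below is a hypothetical stand-in for the real LangChain chain:

```python
import asyncio

async def get_llm_query(query: str, chain):
    # Deprecated variant: same %20 handling, answers come from the chain.
    cleaned = query.replace("%20", " ")
    return await chain.ainvoke({"query": cleaned})

class StubChain:  # hypothetical stand-in for the LangChain chain
    async def ainvoke(self, inputs):
        return f"answer to: {inputs['query']}"

reply = asyncio.run(get_llm_query("who%20are%20you", StubChain()))
```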
create_chain(prompt, model='llama3', temperature=0, base_url='http://localhost:11434') ¶
Description: Create a LangChain chain with the given prompt, model, and temperature. The lower the temperature, the less "creative" the model's output will be.
Source code in `llm_service/llm_service_utils.py` (lines 7–20)
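The body was not captured in the extract; given the signature's defaults, a plausible sketch is a prompt template piped into an Ollama chat model. The exact imports and composition in the repository are assumptions here:

```python
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser
from langchain_community.chat_models import ChatOllama

def create_chain(prompt: str,
                 model: str = "llama3",
                 temperature: float = 0,
                 base_url: str = "http://localhost:11434"):
    # Lower temperature -> less "creative", more deterministic output.
    llm = ChatOllama(model=model, temperature=temperature, base_url=base_url)
    # Compose: prompt template | chat model | plain-string output parser.
    return ChatPromptTemplate.from_template(prompt) | llm | StrOutputParser()
```

Invoking the returned chain requires a running Ollama server at `base_url`.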
parse_answers_initial(response, patterns, prompt_dict) ¶
Description: Parse the answers from the initial response. If the response contains a `?` followed by a newline, join the next line onto it (the LLM sometimes inserts a newline after the `?` instead of continuing on the same line).
Source code in `llm_service/llm_service_utils.py` (lines 23–53)
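With the original code missing, the join-on-`?` behavior described above can be sketched as follows; the helper name and the omission of the `patterns`/`prompt_dict` handling are assumptions, since only this part of the logic is documented:

```python
def join_split_answers(response: str) -> list[str]:
    """If a line ends with '?', merge the following line onto it.

    The LLM sometimes puts the answer on a new line after the '?'
    instead of printing it on the same line.
    """
    lines = response.splitlines()
    merged = []
    i = 0
    while i < len(lines):
        line = lines[i].strip()
        if line.endswith("?") and i + 1 < len(lines):
            merged.append(line + " " + lines[i + 1].strip())
            i += 2  # consume both the question line and its continuation
        else:
            merged.append(line)
            i += 1
    return merged

parsed = join_split_answers("Is the sky blue?\nYes.\nSecond answer")
```

The real `parse_answers_initial` additionally matches the supplied `patterns` against the prompt dictionary, which is not reproduced here.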