Natural Language Responder

The `NLResponder` class transforms database output into a natural language response to the user's query.

```mermaid
flowchart LR
    Q[Do we have any Data Scientists?]
    Result["`[{'experience': 5, 'position': 'Data Scientist'},
        {'experience': 2, 'position': 'Data Scientist'}]`"]
    Responder[Natural Language Responder]

    Q --> Responder
    Result --> Responder
    Responder --> Answer["Yes, we have 2 Data Scientists in our company."]
```

The response is generated by the `generate_response` method, which uses an LLM to produce a natural language answer to the user's question. Here's a breakdown of the steps:

  1. The results of the query execution are converted to a Markdown table (see the sketch after this list).
  2. The number of tokens in the NL response prompt, supplemented with the table from step 1, is counted.
  3. If the token count exceeds the predefined maximum (`self._max_tokens_count`), the response is generated with the explainer prompt template (`QUERY_EXPLANATION_TEMPLATE` by default), which describes the results based on the query and skips the table analysis step. Otherwise, the response is generated with the NL response prompt template (`NL_RESPONSE_TEMPLATE` by default).
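
For illustration, here is a self-contained sketch of what step 1 produces. db-ally performs this conversion internally; `rows_to_markdown` is a hypothetical helper written for this example, not part of the public API.

```python
# Illustrative only: converting result rows into a Markdown table (step 1).
from typing import Any, Dict, List


def rows_to_markdown(rows: List[Dict[str, Any]]) -> str:
    if not rows:
        return ""
    headers = list(rows[0].keys())
    lines = [
        "| " + " | ".join(headers) + " |",
        "| " + " | ".join("---" for _ in headers) + " |",
    ]
    for row in rows:
        lines.append("| " + " | ".join(str(row[h]) for h in headers) + " |")
    return "\n".join(lines)


print(rows_to_markdown([
    {"experience": 5, "position": "Data Scientist"},
    {"experience": 2, "position": "Data Scientist"},
]))
```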

Tip

To better understand the general idea, visit the NL Responder concept page.

dbally.nl_responder.nl_responder.NLResponder

NLResponder(llm: LLM, prompt_template: Optional[PromptTemplate[NLResponsePromptFormat]] = None, explainer_prompt_template: Optional[PromptTemplate[QueryExplanationPromptFormat]] = None, max_tokens_count: int = 4096)

Class used to generate a natural language response from the database output.

Constructs a new NLResponder instance.

PARAMETER DESCRIPTION
llm

LLM used to generate the natural language response.

TYPE: LLM

prompt_template

Template for the prompt used to generate the NL response. If not set, defaults to NL_RESPONSE_TEMPLATE.

TYPE: Optional[PromptTemplate[NLResponsePromptFormat]] DEFAULT: None

explainer_prompt_template

Template for the prompt used to generate the IQL explanation. If not set, defaults to QUERY_EXPLANATION_TEMPLATE.

TYPE: Optional[PromptTemplate[QueryExplanationPromptFormat]] DEFAULT: None

max_tokens_count

Maximum number of tokens that can be used in the prompt.

TYPE: int DEFAULT: 4096
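
The `max_tokens_count` budget is what selects between the two prompts: a tight budget makes the fallback to the explainer prompt more likely, because the Markdown table counts against it. A sketch (the `LiteLLM` import is an assumption):

```python
from dbally.llms.litellm import LiteLLM  # assumed LLM implementation
from dbally.nl_responder.nl_responder import NLResponder

# With a tight budget, prompts containing large result tables will exceed
# max_tokens_count, so generate_response falls back to the explainer prompt,
# which describes the results without analyzing the full table.
responder = NLResponder(llm=LiteLLM(model_name="gpt-4o"), max_tokens_count=512)
```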

Source code in src/dbally/nl_responder/nl_responder.py

```python
def __init__(
    self,
    llm: LLM,
    prompt_template: Optional[PromptTemplate[NLResponsePromptFormat]] = None,
    explainer_prompt_template: Optional[PromptTemplate[QueryExplanationPromptFormat]] = None,
    max_tokens_count: int = 4096,
) -> None:
    """
    Constructs a new NLResponder instance.

    Args:
        llm: LLM used to generate natural language response.
        prompt_template: Template for the prompt used to generate the NL response;
            if not set, defaults to `NL_RESPONSE_TEMPLATE`.
        explainer_prompt_template: Template for the prompt used to generate the IQL explanation;
            if not set, defaults to `QUERY_EXPLANATION_TEMPLATE`.
        max_tokens_count: Maximum number of tokens that can be used in the prompt.
    """
    self._llm = llm
    self._prompt_template = prompt_template or NL_RESPONSE_TEMPLATE
    self._explainer_prompt_template = explainer_prompt_template or QUERY_EXPLANATION_TEMPLATE
    self._max_tokens_count = max_tokens_count
```

generate_response async

generate_response(result: ViewExecutionResult, question: str, event_tracker: EventTracker, llm_options: Optional[LLMOptions] = None) -> str

Uses an LLM to generate a response in natural language form.

PARAMETER DESCRIPTION
result

Object representing the result of the query execution.

TYPE: ViewExecutionResult

question

User question.

TYPE: str

event_tracker

Event store used to audit the generation process.

TYPE: EventTracker

llm_options

Options to use for the LLM client.

TYPE: Optional[LLMOptions] DEFAULT: None

RETURNS DESCRIPTION
str

Natural language response to the user question.

RAISES DESCRIPTION
LLMError

If LLM text generation fails.
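
Because the method raises `LLMError` when text generation fails, callers may want to guard the call. A sketch, run inside an async context and reusing the objects from the first example; the `LLMError` import path is an assumption:

```python
from dbally.llms.clients.exceptions import LLMError  # assumed path

try:
    answer = await responder.generate_response(
        result=result,
        question="Do we have any Data Scientists?",
        event_tracker=event_tracker,
    )
except LLMError as exc:
    # Surface a friendly message instead of the raw provider error.
    answer = f"Sorry, I couldn't generate an answer: {exc}"
```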

Source code in src/dbally/nl_responder/nl_responder.py

```python
async def generate_response(
    self,
    result: ViewExecutionResult,
    question: str,
    event_tracker: EventTracker,
    llm_options: Optional[LLMOptions] = None,
) -> str:
    """
    Uses LLM to generate a response in natural language form.

    Args:
        result: Object representing the result of the query execution.
        question: User question.
        event_tracker: Event store used to audit the generation process.
        llm_options: Options to use for the LLM client.

    Returns:
        Natural language response to the user question.

    Raises:
        LLMError: If LLM text generation fails.
    """
    prompt_format = NLResponsePromptFormat(
        question=question,
        results=result.results,
    )
    formatted_prompt = self._prompt_template.format_prompt(prompt_format)
    tokens_count = self._llm.count_tokens(formatted_prompt)

    if tokens_count > self._max_tokens_count:
        prompt_format = QueryExplanationPromptFormat(
            question=question,
            context=result.context,
            results=result.results,
        )
        formatted_prompt = self._explainer_prompt_template.format_prompt(prompt_format)
        llm_response = await self._llm.generate_text(
            prompt=formatted_prompt,
            event_tracker=event_tracker,
            options=llm_options,
        )
        return llm_response

    llm_response = await self._llm.generate_text(
        prompt=formatted_prompt,
        event_tracker=event_tracker,
        options=llm_options,
    )
    return llm_response
```