LiteLLM

dbally.llms.litellm.LiteLLM
LiteLLM(model_name: str = 'gpt-3.5-turbo', default_options: Optional[LiteLLMOptions] = None, *, base_url: Optional[str] = None, api_key: Optional[str] = None, api_version: Optional[str] = None)
Bases: LLM[LiteLLMOptions]
Class for interacting with any LLM supported by the LiteLLM API.
Constructs a new LiteLLM instance.
| PARAMETER | DESCRIPTION |
|---|---|
| `model_name` | Name of the LiteLLM-supported model to be used. Default is `"gpt-3.5-turbo"`. TYPE: `str` |
| `default_options` | Default options to be used. TYPE: `Optional[LiteLLMOptions]` |
| `base_url` | Base URL of the LLM API. TYPE: `Optional[str]` |
| `api_key` | API key to be used. If not specified, an environment variable will be used; for more information, follow the instructions for your specific vendor in the LiteLLM documentation. TYPE: `Optional[str]` |
| `api_version` | API version to be used. If not specified, the default version will be used. TYPE: `Optional[str]` |
| RAISES | DESCRIPTION |
|---|---|
| `ImportError` | If the litellm package is not installed. |
Source code in src/dbally/llms/litellm.py
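A minimal construction sketch, assuming the dbally and litellm packages are installed; the option values, proxy URL, and key below are illustrative, not prescribed by the API:

```python
from dbally.llms.clients.litellm import LiteLLMOptions
from dbally.llms.litellm import LiteLLM

# Option A: rely on the vendor's environment variable (e.g. OPENAI_API_KEY
# for OpenAI models), as described in the LiteLLM documentation.
llm = LiteLLM(
    model_name="gpt-3.5-turbo",
    default_options=LiteLLMOptions(temperature=0.0, max_tokens=256),
)

# Option B: point at a self-hosted proxy and pass the key explicitly.
llm_proxied = LiteLLM(
    model_name="gpt-3.5-turbo",
    base_url="http://localhost:4000",  # illustrative URL, not a real endpoint
    api_key="sk-...",                  # placeholder key
)
```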
generate_text (async)

generate_text(prompt: PromptTemplate, *, event_tracker: Optional[EventTracker] = None, options: Optional[LLMOptions] = None) -> str
Prepares and sends a prompt to the LLM and returns the response.
| PARAMETER | DESCRIPTION |
|---|---|
| `prompt` | Formatted prompt template with conversation and response parsing configuration. TYPE: `PromptTemplate` |
| `event_tracker` | Event store used to audit the generation process. TYPE: `Optional[EventTracker]` |
| `options` | Options to use for the LLM client. TYPE: `Optional[LLMOptions]` |
| RETURNS | DESCRIPTION |
|---|---|
| `str` | Text response from LLM. |

| RAISES | DESCRIPTION |
|---|---|
| `LLMError` | If LLM text generation fails. |
Source code in src/dbally/llms/base.py
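A hedged sketch of driving generate_text; the PromptTemplate import path and constructor call are assumptions (check dbally's prompts module for the real API):

```python
import asyncio

from dbally.llms.litellm import LiteLLM
from dbally.prompts import PromptTemplate  # assumed import path

llm = LiteLLM(model_name="gpt-3.5-turbo")

# Assumption: PromptTemplate can be built from a chat-format tuple of
# {"role": ..., "content": ...} dicts.
prompt = PromptTemplate((
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Summarize LiteLLM in one sentence."},
))

async def main() -> None:
    # Raises LLMError if text generation fails.
    response = await llm.generate_text(prompt)
    print(response)

asyncio.run(main())
```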
count_tokens

count_tokens(prompt: PromptTemplate) -> int
Counts tokens in the prompt.
| PARAMETER | DESCRIPTION |
|---|---|
| `prompt` | Formatted prompt template with conversation and response parsing configuration. TYPE: `PromptTemplate` |

| RETURNS | DESCRIPTION |
|---|---|
| `int` | Number of tokens in the prompt. |
Source code in src/dbally/llms/litellm.py
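Unlike generate_text, count_tokens is synchronous per its signature, so it can gate a call cheaply; the same PromptTemplate caveat as above applies:

```python
from dbally.llms.litellm import LiteLLM
from dbally.prompts import PromptTemplate  # assumed import path

llm = LiteLLM(model_name="gpt-3.5-turbo")
prompt = PromptTemplate(({"role": "user", "content": "Hello!"},))

# Useful for a token-budget check before an (async) generate_text call.
n_tokens = llm.count_tokens(prompt)
print(f"Prompt uses {n_tokens} tokens")
```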
dbally.llms.clients.litellm.LiteLLMClient

LiteLLMClient(model_name: str, *, base_url: Optional[str] = None, api_key: Optional[str] = None, api_version: Optional[str] = None)

Bases: LLMClient[LiteLLMOptions]

Client for LiteLLM, which supports calls to 100+ LLM APIs, including OpenAI, Anthropic, VertexAI, Hugging Face, and others.

Constructs a new LiteLLMClient instance.
| PARAMETER | DESCRIPTION |
|---|---|
| `model_name` | Name of the model to use. TYPE: `str` |
| `base_url` | Base URL of the LLM API. TYPE: `Optional[str]` |
| `api_key` | API key used to authenticate with the LLM API. TYPE: `Optional[str]` |
| `api_version` | API version of the LLM API. TYPE: `Optional[str]` |

| RAISES | DESCRIPTION |
|---|---|
| `ImportError` | If the litellm package is not installed. |
Source code in src/dbally/llms/clients/litellm.py
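A construction sketch for the low-level client; in typical use the LiteLLM class above presumably creates this for you, and every model name, URL, key, and version string below is illustrative:

```python
from dbally.llms.clients.litellm import LiteLLMClient

client = LiteLLMClient(
    model_name="claude-3-haiku-20240307",  # any LiteLLM-routable model name
    api_key="sk-ant-...",                  # placeholder key
)

# Azure-style deployments can pin an API version explicitly:
azure_client = LiteLLMClient(
    model_name="azure/my-deployment",             # illustrative deployment name
    base_url="https://example.openai.azure.com",  # illustrative URL
    api_version="2024-02-01",                     # illustrative version string
)
```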
call (async)

call(conversation: ChatFormat, options: LiteLLMOptions, event: LLMEvent, json_mode: bool = False) -> str
Calls the appropriate LLM endpoint with the given prompt and options.
| PARAMETER | DESCRIPTION |
|---|---|
| `conversation` | List of dicts with "role" and "content" keys, representing the chat history so far. TYPE: `ChatFormat` |
| `options` | Additional settings used by the LLM. TYPE: `LiteLLMOptions` |
| `event` | Container with the prompt, LLM response and call metrics. TYPE: `LLMEvent` |
| `json_mode` | Force the response to be in JSON format. TYPE: `bool` |
| RETURNS | DESCRIPTION |
|---|---|
| `str` | Response string from LLM. |

| RAISES | DESCRIPTION |
|---|---|
| `LLMConnectionError` | If there is a connection error with the LLM API. |
| `LLMStatusError` | If the LLM API returns an error status code. |
| `LLMResponseError` | If the LLM API response is invalid. |
Source code in src/dbally/llms/clients/litellm.py
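A hedged sketch of invoking call directly with the documented error handling; the LLMEvent and exception import paths, and the LLMEvent constructor fields, are assumptions (in normal use generate_text drives this method for you):

```python
import asyncio

from dbally.audit.events import LLMEvent  # assumed import path
from dbally.llms.clients.exceptions import (  # assumed import path
    LLMConnectionError,
    LLMResponseError,
    LLMStatusError,
)
from dbally.llms.clients.litellm import LiteLLMClient, LiteLLMOptions

async def main() -> None:
    client = LiteLLMClient(model_name="gpt-3.5-turbo")
    event = LLMEvent(agent_name="example", type="llm")  # hypothetical fields

    try:
        response = await client.call(
            conversation=[{"role": "user", "content": "Reply with a JSON greeting."}],
            options=LiteLLMOptions(temperature=0.0),
            event=event,
            json_mode=True,  # force a JSON-formatted response
        )
        print(response)
    except LLMConnectionError:
        print("Could not reach the LLM API")
    except LLMStatusError as exc:
        print(f"LLM API returned an error status: {exc}")
    except LLMResponseError:
        print("LLM API response was invalid")

asyncio.run(main())
```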
dbally.llms.clients.litellm.LiteLLMOptions (dataclass)

LiteLLMOptions(frequency_penalty: Union[Optional[float], NotGiven] = NOT_GIVEN, max_tokens: Union[Optional[int], NotGiven] = NOT_GIVEN, n: Union[Optional[int], NotGiven] = NOT_GIVEN, presence_penalty: Union[Optional[float], NotGiven] = NOT_GIVEN, seed: Union[Optional[int], NotGiven] = NOT_GIVEN, stop: Union[Optional[Union[str, List[str]]], NotGiven] = NOT_GIVEN, temperature: Union[Optional[float], NotGiven] = NOT_GIVEN, top_p: Union[Optional[float], NotGiven] = NOT_GIVEN)

Bases: LLMOptions

Dataclass that represents all available LLM call options for the LiteLLM client. Each of them is described in the LiteLLM documentation.
Class attributes (each a class-attribute / instance-attribute, with types taken from the signature above):

- `frequency_penalty: Union[Optional[float], NotGiven] = NOT_GIVEN`
- `max_tokens: Union[Optional[int], NotGiven] = NOT_GIVEN`
- `presence_penalty: Union[Optional[float], NotGiven] = NOT_GIVEN`
- `stop: Union[Optional[Union[str, List[str]]], NotGiven] = NOT_GIVEN`
- `temperature: Union[Optional[float], NotGiven] = NOT_GIVEN`
dict

Creates a dictionary representation of the LLMOptions instance. If a value is None, it will be replaced with a provider-specific not-given sentinel.

| RETURNS | DESCRIPTION |
|---|---|
| `Dict[str, Any]` | A dictionary representation of the LLMOptions instance. |
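A sketch of building options and serializing them; the field names come from the dataclass signature above, while the specific values are illustrative:

```python
from dbally.llms.clients.litellm import LiteLLMOptions

options = LiteLLMOptions(
    temperature=0.2,
    max_tokens=512,
    stop=["\n\n"],
)

# Per the contract above, any value that is None is replaced with a
# provider-specific not-given sentinel in the returned dictionary.
payload = options.dict()
print(payload["temperature"])  # 0.2
```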