Track usage

track_usage(response)

Parameters:

| Name     | Type             | Description                       | Default  |
|----------|------------------|-----------------------------------|----------|
| response | `ChatCompletion` | The response from the OpenAI API  | required |

OpenAI Model Price

| Model Name    | Input Cost (per 1M tokens) | Output Cost (per 1M tokens) |
|---------------|----------------------------|-----------------------------|
| gpt-3.5-turbo | $0.5                       | $1.5                        |
| gpt-4o        | $5                         | $15                         |
| gpt-4-turbo   | $10                        | $30                         |
| gpt-4         | $30                        | $60                         |

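The function looks these prices up in a module-level `OPENAI_MODEL_PRICE` mapping, keyed by model name with `input_cost` and `output_cost` fields (see the source below). A minimal sketch of the structure it assumes, using the values from the table; the actual definition lives elsewhere in Docs2KG and may differ:

```python
# Sketch only: keys and field names follow the source code below,
# values follow the price table above.
OPENAI_MODEL_PRICE = {
    "gpt-3.5-turbo": {"input_cost": 0.5, "output_cost": 1.5},
    "gpt-4o": {"input_cost": 5, "output_cost": 15},
    "gpt-4-turbo": {"input_cost": 10, "output_cost": 30},
    "gpt-4": {"input_cost": 30, "output_cost": 60},
}
```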
Returns:

| Name       | Type    | Description                     |
|------------|---------|---------------------------------|
| total_cost | `float` | The total cost of the response  |
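
For example, applying the prices above, a gpt-4o response with 1,200 prompt tokens and 300 completion tokens costs 1,200/1e6 × $5 + 300/1e6 × $15 = $0.006 + $0.0045 = $0.0105, which is the value `track_usage` returns.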

Source code in Docs2KG/utils/llm/track_usage.py
def track_usage(response: ChatCompletion) -> float:
    """
    Args:
        response: The response from the OpenAI API

    OpenAI Model Price

    | Model Name    | Input Cost (per 1M tokens) | Output Cost (per 1M tokens) |
    |---------------|----------------------------|-----------------------------|
    | gpt-3.5-turbo | $0.5                       | $1.5                        |
    | gpt-4o        | $5                         | $15                         |
    | gpt-4-turbo   | $10                        | $30                         |
    | gpt-4         | $30                        | $60                         |

    Returns:
        total_cost (float): The total cost of the response
    """
    llm_model = response.model
    prompt_tokens = response.usage.prompt_tokens
    completion_tokens = response.usage.completion_tokens
    # Prompt tokens are billed at the input rate; completion tokens at the output rate.
    input_cost = OPENAI_MODEL_PRICE[llm_model]["input_cost"] * prompt_tokens / 1e6
    output_cost = OPENAI_MODEL_PRICE[llm_model]["output_cost"] * completion_tokens / 1e6
    logger.debug(f"Input Cost: ${input_cost}")
    logger.debug(f"Output Cost: ${output_cost}")
    total_cost = input_cost + output_cost
    logger.debug(f"Total Cost: ${total_cost}")
    return total_cost
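
A minimal usage sketch, assuming the function is importable from the module path shown above and that `OPENAI_API_KEY` is set in the environment; the model, prompt, and resulting token counts are illustrative only:

```python
from openai import OpenAI

from Docs2KG.utils.llm.track_usage import track_usage  # assumed import path

client = OpenAI()  # reads OPENAI_API_KEY from the environment
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Summarise this document in one line."}],
)

# track_usage reads response.model and response.usage to price the call.
cost = track_usage(response)
print(f"This call cost ${cost:.6f}")
```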