The chat completion object

Represents the chat completion response returned by the model based on the provided input.

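For orientation, here is a minimal sketch that parses a response body with the shape documented on this page and reads the first choice. The field names follow this reference; the id, model name, timestamp, and content values are placeholders, not real output.

```python
import json

# Hypothetical raw JSON body of a chat completion response; values are
# illustrative only, but the field names match this reference.
raw = """
{
  "id": "chatcmpl-123",
  "object": "chat.completion",
  "created": 1700000000,
  "model": "example-model",
  "choices": [
    {
      "index": 0,
      "message": {"role": "assistant", "content": "Hello!"},
      "finish_reason": "stop"
    }
  ],
  "usage": {"prompt_tokens": 9, "completion_tokens": 3, "total_tokens": 12}
}
"""

completion = json.loads(raw)
first_choice = completion["choices"][0]
print(first_choice["message"]["content"])  # -> "Hello!"
```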

choices array
A list of chat completion choices.

finish_reason string
The reason the model stopped generating tokens. stop: the model hit a natural stop point or a provided stop sequence. length: the maximum token limit specified in the request was reached. content_filter: content was omitted due to content filtering. tool_calls: the model called a tool. A handling sketch follows the choice properties below.


index integer
The index of this choice in the list.


message object
A chat message generated by the model.

role string
The role of the author of this message.


content string or null
The content of the message.


reasoning_content string or null
For reasoning models only. The reasoning content of the assistant message, produced before the final answer.


refusal string or null
The refusal message generated by the model.


tool_calls array
The tool calls generated by the model, such as function calls.

function object

arguments string
The arguments to call the function with, as generated by the model in JSON format. Note: the model may hallucinate fields or produce invalid JSON, so always validate the arguments before use (see the sketch after the tool call properties).


name string
The name of the function to call.


id string
The ID of the tool call.


type string
The type of the tool. Currently, only function is supported.

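The sketch below shows one way to consume a single entry from choices: branch on finish_reason, check for a refusal, and parse tool call arguments defensively before acting on them. It assumes the choice has already been decoded from JSON into a Python dict; the print statements stand in for application logic.

```python
import json

def handle_choice(choice: dict) -> None:
    """Illustrative handling of one entry from `choices` (a sketch, not an
    official client; field names follow this reference)."""
    message = choice["message"]

    if choice["finish_reason"] == "length":
        print("Warning: output was truncated at the token limit.")

    # If the model refused, there is no usable content to act on.
    if message.get("refusal"):
        print("Model refused:", message["refusal"])
        return

    for tool_call in message.get("tool_calls") or []:
        if tool_call["type"] != "function":
            continue  # only `function` tool calls are currently defined
        name = tool_call["function"]["name"]
        try:
            # `arguments` is a JSON string and may be malformed or contain
            # hallucinated fields, so parse and validate before acting on it.
            args = json.loads(tool_call["function"]["arguments"])
        except json.JSONDecodeError:
            print(f"Invalid JSON arguments for {name}; ignoring this call.")
            continue
        print(f"Would call {name} with", args)

    if message.get("content"):
        print(message["content"])
```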

created integer
The Unix timestamp (in seconds) of when the chat completion was created.

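Because created is expressed in Unix seconds, converting it to a timezone-aware datetime is a one-liner; the value below is a placeholder.

```python
from datetime import datetime, timezone

created = 1700000000  # placeholder Unix timestamp in seconds
created_at = datetime.fromtimestamp(created, tz=timezone.utc)
print(created_at.isoformat())  # ISO 8601 string, UTC
```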

id string
A unique identifier for the chat completion.


model string
The model used for the chat completion.


object string
The object type, which is always chat.completion.


system_fingerprint string
This fingerprint represents the backend configuration that the model runs with.
Can be used in conjunction with the seed request parameter to understand when backend changes have been made that might impact determinism.

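A common use of system_fingerprint is to detect backend changes between two requests sent with the same seed. A sketch of that comparison follows, assuming first and second are two already-parsed completion responses.

```python
def same_backend_config(first: dict, second: dict) -> bool:
    """Return True when two completions report the same backend fingerprint.

    If the fingerprints differ, the two requests (even with an identical
    `seed`) ran against different backend configurations, so determinism
    between them should not be expected.
    """
    return first.get("system_fingerprint") == second.get("system_fingerprint")
```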

usage object
Usage statistics for the completion request.

Show properties

completion_tokens integer
Number of tokens in the generated completion.


prompt_tokens integer
Number of tokens in the prompt.


total_tokens integer
Total number of tokens used in the request (prompt + completion).
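
To track cost or quota, the usage block can be read directly. The sketch below assumes completion is the parsed response dict and simply reports the documented counts.

```python
def report_usage(completion: dict) -> None:
    """Print the token accounting reported in `usage` (a sketch; field
    names follow this reference)."""
    usage = completion["usage"]
    prompt = usage["prompt_tokens"]
    generated = usage["completion_tokens"]
    total = usage["total_tokens"]
    # total_tokens is documented as prompt + completion.
    assert total == prompt + generated
    print(f"{prompt} prompt + {generated} completion = {total} total tokens")
```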