Create chat completion
POST /v1/chat/completions
Follow the quickstart guide to get your own API key. Replace $YOUR_API_KEY with the actual API key you generated in the previous step, and replace $MODEL_ID with the ID of the model to use.
Parameter support can differ depending on the model used to generate the response, particularly for newer reasoning models.
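As a minimal sketch, a request might look like the following, using Python's requests library. The base URL is an assumption for illustration; substitute your own endpoint, key, and model ID.

```python
import requests

BASE_URL = "https://api.example.com"  # placeholder: your API base URL
API_KEY = "$YOUR_API_KEY"             # placeholder: your real API key
MODEL_ID = "$MODEL_ID"                # placeholder: the model to use

resp = requests.post(
    f"{BASE_URL}/v1/chat/completions",
    headers={"Authorization": f"Bearer {API_KEY}"},
    json={
        "model": MODEL_ID,
        "messages": [
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": "Hello!"},
        ],
    },
    timeout=30,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])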
Request body
messages array Required
A list of messages comprising the conversation so far.
System message object
Developer-provided instructions that the model should follow, regardless of messages sent by the user.
content string or array Required
The contents of the system message.
role string Required
The role of the message's author, in this case system.
name string Optional
An optional name for the participant.
User message object
Messages sent by an end user.
content string or array Required
The contents of the user message.
role string Required
The role of the message's author, in this case user.
name string Optional
Optional name for the participant.
Assistant message object
Messages sent by the model in response to user messages.
role string Required
The role of the message's author, in this case assistant.
content string or null Optional
The contents of the assistant message. Required unless tool_calls is specified.
name string Optional
Optional name for the participant.
refusal string or null Optional
The refusal message by the assistant.
tool_calls array Optional
The tool calls generated by the model, such as function calls.
Tool message object
content string or array Required
The contents of the tool message.
role string Required
The role of the message's author, in this case tool.
tool_call_id string Required
Tool call that this message is responding to.
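Putting these message types together, a conversation that includes a tool round trip might be assembled as in the sketch below. The get_weather function, its arguments, and the call ID are hypothetical, invented purely for illustration.

```python
messages = [
    {"role": "system", "content": "You are a weather assistant."},
    {"role": "user", "content": "What's the weather in Paris?", "name": "alice"},
    {
        # Assistant turn that calls a tool; content may be null when
        # tool_calls is specified.
        "role": "assistant",
        "content": None,
        "tool_calls": [{
            "id": "call_abc123",  # hypothetical call ID
            "type": "function",
            "function": {"name": "get_weather", "arguments": '{"city": "Paris"}'},
        }],
    },
    {
        # Tool result; tool_call_id links it back to the call above.
        "role": "tool",
        "tool_call_id": "call_abc123",
        "content": '{"temp_c": 18, "conditions": "partly cloudy"}',
    },
]
```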
model string Required
Model ID used to generate the response. OpenAI offers a wide range of models with different capabilities, performance characteristics, and price points. Refer to the model guide to browse and compare available models.
frequency_penalty number or null Optional Defaults to 0
Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim.
max_tokens integer or null Optional
The maximum number of tokens that can be generated in the chat completion. This value can be used to control costs for text generated via API.
parallel_tool_calls boolean Optional Defaults to true
Whether to enable parallel function calling during tool use.
presence_penalty number or null Optional Defaults to 0
Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far, increasing the model's likelihood to talk about new topics.
seed integer or null Optional
This feature is in Beta. If specified, our system will make a best effort to sample deterministically, such that repeated requests with the same seed and parameters should return the same result. Determinism is not guaranteed, and you should refer to the system_fingerprint response parameter to monitor changes in the backend.
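As a sketch of how one might use seed while watching system_fingerprint for backend changes (Python's requests library again; base URL and credentials are placeholders):

```python
import requests

BASE_URL = "https://api.example.com"  # placeholder
API_KEY = "$YOUR_API_KEY"             # placeholder
MODEL_ID = "$MODEL_ID"                # placeholder

def seeded_request(seed: int) -> dict:
    # Same prompt + same seed -> best-effort deterministic sampling.
    resp = requests.post(
        f"{BASE_URL}/v1/chat/completions",
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={
            "model": MODEL_ID,
            "messages": [{"role": "user", "content": "Pick a random color."}],
            "seed": seed,
        },
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()

a = seeded_request(42)
b = seeded_request(42)
# If system_fingerprint differs, the backend changed; matching
# outputs are no longer expected even with the same seed.
if a.get("system_fingerprint") != b.get("system_fingerprint"):
    print("Backend changed between requests; determinism not expected.")
```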
stop string / array / null Optional Defaults to null
Up to 4 sequences where the API will stop generating further tokens. The returned text will not contain the stop sequence.
stream boolean or null Optional Defaults to false
Specifies whether to use streaming output. Valid values:
・false: The model delivers the complete response all at once.
・true: The model returns output in chunks as content is generated. You need to read each chunk in real time and concatenate them to obtain the full response.
stream_options object or null Optional Defaults to null
Options for streaming responses. Only set this when you set stream: true.
include_usage boolean Optional
If set, an additional chunk will be streamed before the data: [DONE] message. The usage field on this chunk shows the token usage statistics for the entire request, and the choices field will always be an empty array.
All other chunks will also include a usage field, but with a null value. NOTE: If the stream is interrupted, you may not receive the final usage chunk, which contains the total token usage for the request.
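A streaming sketch follows, assuming the endpoint emits server-sent events as data:-prefixed JSON chunks ending with data: [DONE], and that each chunk carries incremental text under choices[].delta.content (the standard chat completion chunk shape); base URL and credentials are placeholders:

```python
import json
import requests

BASE_URL = "https://api.example.com"  # placeholder
API_KEY = "$YOUR_API_KEY"             # placeholder

body = {
    "model": "$MODEL_ID",  # placeholder
    "messages": [{"role": "user", "content": "Tell me a short story."}],
    "stream": True,
    "stream_options": {"include_usage": True},
}
with requests.post(
    f"{BASE_URL}/v1/chat/completions",
    headers={"Authorization": f"Bearer {API_KEY}"},
    json=body,
    stream=True,
    timeout=60,
) as resp:
    resp.raise_for_status()
    for line in resp.iter_lines():
        if not line.startswith(b"data: "):
            continue  # skip blank keep-alive lines
        payload = line[len(b"data: "):]
        if payload == b"[DONE]":
            break
        chunk = json.loads(payload)
        if chunk.get("usage"):  # final extra chunk: usage set, choices empty
            print("\ntokens used:", chunk["usage"])
        for choice in chunk.get("choices", []):
            print(choice["delta"].get("content") or "", end="", flush=True)
```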
temperature number or null Optional Defaults to 1
What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or top_p but not both.
tool_choice string or object Optional
Controls which (if any) tool is called by the model. none means the model will not call any tool and instead generates a message. auto means the model can pick between generating a message or calling one or more tools. required means the model must call one or more tools. Specifying a particular tool via {"type": "function", "function": {"name": "my_function"}} forces the model to call that tool.
none is the default when no tools are present. auto is the default if tools are present.
string
none means the model will not call any tool and instead generates a message. auto means the model can pick between generating a message or calling one or more tools. required means the model must call one or more tools.
object
Specifies a tool the model should use. Use this to force the model to call a specific function.
function object Required
name string Required
The name of the function to call.
type string Required
The type of the tool. Currently, only function is supported.
tools array Optional
A list of tools the model may call. Currently, only functions are supported as a tool. Use this to provide a list of functions the model may generate JSON inputs for. A maximum of 128 functions is supported.
function object Required
name string Required
The name of the function to be called. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64.
strict boolean or null Optional Defaults to false
Whether to enable strict schema adherence when generating the function call. If set to true, the model will follow the exact schema defined in the parameters field. Only a subset of JSON Schema is supported when strict is true. Learn more about Structured Outputs in the function calling guide.
description string Optional
A description of what the function does, used by the model to choose when and how to call the function.
parameters object Optional
The parameters the function accepts, described as a JSON Schema object. See the guide for examples, and the JSON Schema reference for documentation about the format. Omitting parameters defines a function with an empty parameter list.
type string Required
The type of the tool. Currently, only function is supported.
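Tying tools and tool_choice together, a hypothetical request body that defines a single strict-schema function and forces the model to call it (the get_weather function and its schema are assumptions for illustration):

```python
request_body = {
    "model": "$MODEL_ID",  # placeholder
    "messages": [{"role": "user", "content": "Weather in Paris?"}],
    "tools": [{
        "type": "function",
        "function": {
            "name": "get_weather",
            "description": "Get the current weather for a city.",
            "strict": True,  # follow the parameters schema exactly
            "parameters": {
                "type": "object",
                "properties": {"city": {"type": "string"}},
                "required": ["city"],
                # Some strict-mode implementations also require this:
                "additionalProperties": False,
            },
        },
    }],
    # Force this specific tool rather than letting the model decide ("auto"):
    "tool_choice": {"type": "function", "function": {"name": "get_weather"}},
}
```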
top_p number or null Optional Defaults to 1
An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.
We generally recommend altering this or temperature, but not both.
top_k number or null Optional Defaults to 50
top_k changes how the model selects tokens for output. A top_k of 1 means the next selected token is the most probable among all tokens in the model's vocabulary (also called greedy decoding), while a top_k of 3 means that the next token is selected from among the three most probable tokens by using temperature.
For each token selection step, the top-k tokens with the highest probabilities are sampled. Then tokens are further filtered based on top-p, with the final token selected using temperature sampling.
Specify a lower value for less random responses and a higher value for more random responses.
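As a quick illustration of the guidance above, here are two alternative request bodies, one tuned via temperature and one via top_p and top_k; the specific values are illustrative only, not recommendations.

```python
focused = {
    "model": "$MODEL_ID",
    "messages": [{"role": "user", "content": "Summarize this report."}],
    "temperature": 0.2,  # low temperature: more focused and deterministic
}
diverse = {
    "model": "$MODEL_ID",
    "messages": [{"role": "user", "content": "Brainstorm product names."}],
    "top_p": 0.9,   # nucleus sampling; leave temperature at its default
    "top_k": 40,    # narrower candidate pool than the default of 50
}
```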
Returns
Returns a chat completion object, or a streamed sequence of chat completion chunk objects if the request is streamed.
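For reference, a non-streamed chat completion object typically has a shape like the abridged example below; all field values here are made up for illustration.

```python
example_response = {
    "id": "chatcmpl-abc123",
    "object": "chat.completion",
    "created": 1728000000,
    "model": "$MODEL_ID",
    "system_fingerprint": "fp_example",
    "choices": [{
        "index": 0,
        "message": {"role": "assistant", "content": "Hello! How can I help?"},
        "finish_reason": "stop",
    }],
    "usage": {"prompt_tokens": 9, "completion_tokens": 12, "total_tokens": 21},
}
```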