Completions
Completion Request
prompt
The prompt(s) to generate completions for, encoded as a string, array of strings, array of tokens, or array of token arrays.
Default: ""
model
The model used for completion. Available options: llama3.1-8b, llama-3.3-70b.
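For illustration, a minimal request sketch. It assumes the Cerebras Python SDK and its OpenAI-style client.completions.create method; adapt the client setup if yours differs.

```python
# A minimal sketch, assuming the Cerebras Python SDK exposes an
# OpenAI-compatible completions.create method.
from cerebras.cloud.sdk import Cerebras

client = Cerebras()  # reads CEREBRAS_API_KEY from the environment

completion = client.completions.create(
    model="llama3.1-8b",
    prompt="Once upon a time,",
)
print(completion.choices[0].text)
```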
stream
If set, partial message deltas will be sent. Tokens will be sent as data-only server-sent events as they become available, with the stream terminated by a data: [DONE] message.
Default: false
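A streaming sketch over raw HTTP, assuming the OpenAI-compatible endpoint https://api.cerebras.ai/v1/completions. It consumes the data-only server-sent events described above and stops at the data: [DONE] terminator.

```python
# Streaming sketch: parse data-only SSE lines until the [DONE] marker.
import json
import os

import requests

resp = requests.post(
    "https://api.cerebras.ai/v1/completions",
    headers={"Authorization": f"Bearer {os.environ['CEREBRAS_API_KEY']}"},
    json={"model": "llama3.1-8b", "prompt": "Once upon a time,", "stream": True},
    stream=True,
)
for line in resp.iter_lines():
    if not line:
        continue  # blank SSE separator lines
    data = line.decode().removeprefix("data: ")
    if data == "[DONE]":  # stream terminator per the description above
        break
    chunk = json.loads(data)
    print(chunk["choices"][0]["text"], end="", flush=True)
```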
return_raw_tokens
Return raw tokens instead of text.
Default: false
max_tokens
The maximum number of tokens that can be generated in the completion. The total length of input tokens and generated tokens is limited by the model's context length.
Default: null
min_tokens
The minimum number of tokens to generate for a completion. If not specified or set to 0, the model will generate as many tokens as it deems necessary. Setting it to -1 sets the minimum to the model's maximum sequence length.
Default: null
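A sketch bounding output length with both parameters, assuming the names max_tokens and min_tokens used in this reference:

```python
# Sketch: cap and floor the number of generated tokens.
from cerebras.cloud.sdk import Cerebras

client = Cerebras()
completion = client.completions.create(
    model="llama3.1-8b",
    prompt="List three colors:",
    max_tokens=64,  # never generate more than 64 tokens
    min_tokens=8,   # force at least 8 tokens; -1 would pin the floor to max sequence length
)
print(completion.choices[0].text)
```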
grammar_root
The grammar root used for structured output generation. Supported values: root, fcall, nofcall, insidevalue, value, object, array, string, number, funcarray, func, ws.
Default: null
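A sketch of structured output, assuming the parameter is named grammar_root as in this reference; the object root is taken here to constrain generation to a JSON object:

```python
# Sketch: constrain generation with the structured-output grammar.
from cerebras.cloud.sdk import Cerebras

client = Cerebras()
completion = client.completions.create(
    model="llama3.1-8b",
    prompt="Return a JSON object describing a user:",
    grammar_root="object",  # one of the supported values listed above
)
print(completion.choices[0].text)  # e.g. a JSON object
```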
seed
If specified, our system will make a best effort to sample deterministically, such that repeated requests with the same seed and parameters should return the same result. Determinism is not guaranteed.
Default: null
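A sketch of best-effort determinism with a fixed seed:

```python
# Sketch: repeat a request with the same seed and parameters.
# The results should usually match, but determinism is not guaranteed.
from cerebras.cloud.sdk import Cerebras

client = Cerebras()
kwargs = dict(model="llama3.1-8b", prompt="A haiku about the sea:", seed=42)
a = client.completions.create(**kwargs)
b = client.completions.create(**kwargs)
print(a.choices[0].text == b.choices[0].text)  # usually True
```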
stop
Up to 4 sequences where the API will stop generating further tokens. The returned text will not contain the stop sequence.
Default: null
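A sketch using a stop sequence, here a hypothetical Q/A delimiter:

```python
# Sketch: generation halts at the first stop match, and the returned
# text excludes the stop sequence itself.
from cerebras.cloud.sdk import Cerebras

client = Cerebras()
completion = client.completions.create(
    model="llama3.1-8b",
    prompt="Q: What is 2+2?\nA:",
    stop=["\nQ:"],  # up to 4 sequences are allowed
)
print(completion.choices[0].text)  # the answer only, without a trailing "\nQ:"
```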
temperature
What sampling temperature to use, between 0 and 1.5. Higher values (e.g., 0.8) will make the output more random, while lower values (e.g., 0.2) will make it more focused and deterministic. We generally recommend altering this or top_p, but not both.
Default: 1.0
top_p
An alternative to sampling with temperature, called nucleus sampling, where the model considers only the tokens comprising the top_p probability mass. For example, 0.1 means only the tokens comprising the top 10% probability mass are considered. We generally recommend altering this or temperature, but not both.
Default: 1.0
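A sketch adjusting one sampling knob at a time, per the recommendation above:

```python
# Sketch: tune either temperature or top_p, not both.
from cerebras.cloud.sdk import Cerebras

client = Cerebras()
focused = client.completions.create(
    model="llama3.1-8b",
    prompt="Summarize the plot of Hamlet in one sentence:",
    temperature=0.2,  # more deterministic; leave top_p at its default of 1.0
)
creative = client.completions.create(
    model="llama3.1-8b",
    prompt="Invent a gadget:",
    top_p=0.9,  # nucleus sampling; leave temperature at its default of 1.0
)
```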
echo
Echo back the prompt in addition to the completion. Incompatible with return_raw_tokens=True.
Default: false
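A sketch of echoing the prompt; note it cannot be combined with return_raw_tokens:

```python
# Sketch: with echo=True the returned text starts with the prompt itself.
from cerebras.cloud.sdk import Cerebras

client = Cerebras()
completion = client.completions.create(
    model="llama3.1-8b",
    prompt="The capital of France is",
    echo=True,      # prompt is prepended to the generated text
    max_tokens=5,
)
print(completion.choices[0].text)  # begins with "The capital of France is"
```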
user
A unique identifier representing your end-user, which can help Cerebras monitor and detect abuse.
Default: null
logprobs
Whether to return log probabilities of the output tokens or not.
Default: false
top_logprobs
An integer between 0 and 20 specifying the number of most likely tokens to return at each token position, each with an associated log probability. logprobs must be set to true if this parameter is used.
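A sketch requesting log probabilities; the exact shape of the returned logprobs object is an assumption here:

```python
# Sketch: top_logprobs requires logprobs=True and returns up to 20
# alternatives per token position.
from cerebras.cloud.sdk import Cerebras

client = Cerebras()
completion = client.completions.create(
    model="llama3.1-8b",
    prompt="The sky is",
    max_tokens=1,
    logprobs=True,
    top_logprobs=5,  # five most likely tokens at each position
)
print(completion.choices[0].logprobs)  # per-token log probabilities
```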
Completion Response
choices
The list of completion choices the model generated for the input prompt.
created
The Unix timestamp (in seconds) of when the completion was created.
id
A unique identifier for the completion.
model
The model used for completion.
object
The object type, which is always "text_completion".
system_fingerprint
This fingerprint represents the backend configuration that the model runs with. Can be used in conjunction with the seed request parameter to understand when backend changes have been made that might impact determinism.
usage
Usage statistics for the completion request.
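A sketch reading the response fields documented above, assuming the Python SDK's attribute access; with raw HTTP these are the corresponding JSON keys:

```python
# Sketch: inspect the fields of a completion response.
from cerebras.cloud.sdk import Cerebras

client = Cerebras()
c = client.completions.create(model="llama3.1-8b", prompt="Hello,")
print(c.id)                  # unique identifier for the completion
print(c.object)              # "text_completion"
print(c.created)             # Unix timestamp in seconds
print(c.model)               # model used for the completion
print(c.system_fingerprint)  # backend configuration fingerprint
print(c.usage)               # token usage statistics
print(c.choices[0].text)     # generated text
```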