Completions
import os

from cerebras.cloud.sdk import Cerebras

client = Cerebras(
    # This is the default and can be omitted
    api_key=os.environ.get("CEREBRAS_API_KEY"),
)

completion = client.completions.create(
    prompt="It was a dark and stormy night",
    max_tokens=100,
    model="llama3.1-8b",
)

print(completion)
{ "id": "chatcmpl-b8718798-d389-4421-9242-13b07e84983b", "choices": [ { "finish_reason": "length", "index": 0, "text": " when I stumbled upon a small, quirky shop tucked away in a quiet alley. The sign above the door read \"Curios and Wonders,\" and the windows were filled with a dazzling array of strange and exotic items. I pushed open the door and stepped inside, my eyes adjusting to the dim light within.\n\nThe shop was a treasure trove of oddities, with shelves upon shelves of peculiar objects that seemed to defy explanation. There were vintage taxidermy animals, antique medical equipment, and" } ], "created": 1731597024, "model": "llama3.1-8b", "system_fingerprint": "fp_e8eacef18a", "object": "text_completion", "usage": { "prompt_tokens": 10, "completion_tokens": 100, "total_tokens": 110 }, "time_info": { "queue_time": 4.673e-05, "prompt_time": 0.0004940576161616161, "completion_time": 0.045957338383838385, "total_time": 0.058876991271972656, "created": 1731597024 }}
stream (boolean)
Default: false
If set, partial message deltas will be sent. Tokens will be sent as data-only server-sent events as they become available, with the stream terminated by a data: [DONE] message.
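A minimal streaming sketch, assuming the SDK follows the OpenAI-style iterator interface in which each streamed chunk exposes the new text in choices[0].text:

import os

from cerebras.cloud.sdk import Cerebras

client = Cerebras(api_key=os.environ.get("CEREBRAS_API_KEY"))

# With stream=True the call returns an iterator of partial completions
# instead of a single response object.
stream = client.completions.create(
    prompt="It was a dark and stormy night",
    max_tokens=100,
    model="llama3.1-8b",
    stream=True,
)

for chunk in stream:
    # Assumption: each chunk carries the newly generated text in
    # choices[0].text, mirroring the non-streaming response shape.
    print(chunk.choices[0].text or "", end="", flush=True)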
max_tokens (integer)
Default: null
The maximum number of tokens that can be generated in the completion. The combined length of the input tokens and generated tokens is limited by the model's context length.
min_tokens (integer)
Default: null
The minimum number of tokens to generate for a completion. If not specified or set to 0, the model will generate as many tokens as it deems necessary. Setting this to -1 uses the model's maximum sequence length. The two limits can be combined in one request, as sketched below.
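A brief sketch of bounding the generation length from both sides, using the two parameters documented above:

# Require at least 50 generated tokens and cap the output at 200.
completion = client.completions.create(
    prompt="It was a dark and stormy night",
    min_tokens=50,
    max_tokens=200,
    model="llama3.1-8b",
)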
seed (integer)
Default: null
If specified, our system will make a best effort to sample deterministically, such that repeated requests with the same seed and parameters should return the same result. Determinism is not guaranteed.
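A sketch of the best-effort determinism this enables: two requests with the same seed and parameters should usually produce the same text.

# Repeated requests with identical parameters and seed should
# (best effort) return the same completion.
for _ in range(2):
    completion = client.completions.create(
        prompt="It was a dark and stormy night",
        max_tokens=50,
        model="llama3.1-8b",
        seed=1234,
    )
    print(completion.choices[0].text)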
temperature (number)
Default: 1.0
What sampling temperature to use, between 0 and 1.5. Higher values (e.g., 0.8) make the output more random, while lower values (e.g., 0.2) make it more focused and deterministic. We generally recommend altering this or top_p, but not both.
top_p (number)
Default: 1.0
An alternative to sampling with temperature, called nucleus sampling, where the model considers only the tokens comprising the top_p probability mass. For example, 0.1 means only the tokens comprising the top 10% probability mass are considered. We generally recommend altering this or temperature, but not both.
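A brief sketch of the two sampling controls; per the recommendation above, each request sets one or the other, not both:

# Lower temperature -> more focused, repeatable output.
focused = client.completions.create(
    prompt="It was a dark and stormy night",
    max_tokens=100,
    model="llama3.1-8b",
    temperature=0.2,
)

# Nucleus sampling: only the top 10% probability mass is considered.
nucleus = client.completions.create(
    prompt="It was a dark and stormy night",
    max_tokens=100,
    model="llama3.1-8b",
    top_p=0.1,
)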
finish_reason (string)
The reason the model stopped generating tokens. This will be stop if the model hit a natural stop point or a provided stop sequence, length if the maximum number of tokens specified in the request was reached, or content_filter if content was omitted due to a flag from our content filters.
system_fingerprint (string)
This fingerprint represents the backend configuration that the model runs with. It can be used in conjunction with the seed request parameter to understand when backend changes have been made that might impact determinism.
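A sketch of inspecting these two response fields, for example to detect truncated output (field names as in the sample response above):

choice = completion.choices[0]

if choice.finish_reason == "length":
    # Hit max_tokens: the text likely stops mid-thought.
    print("Output truncated; consider raising max_tokens.")
elif choice.finish_reason == "stop":
    print("Model reached a natural stop point.")

# Log the fingerprint alongside the seed to notice backend changes
# that might affect determinism.
print(completion.system_fingerprint)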