ChatRequest - Go SDK
ChatRequest type definition
The Go SDK and docs are currently in beta. Report issues on GitHub.
Chat completion request parameters
Fields
| Field | Type | Required | Description | Example |
|---|---|---|---|---|
| Provider | optionalnullable.OptionalNullable[components.ProviderPreferences] | ➖ | When multiple model providers are available, optionally indicate your routing preference. | {"allow_fallbacks": true} |
| Plugins | []components.ChatRequestPlugin | ➖ | Plugins you want to enable for this request, including their settings. | |
| User | *string | ➖ | Unique user identifier | user-123 |
| SessionID | *string | ➖ | A unique identifier for grouping related requests (e.g., a conversation or agent workflow) for observability. If provided in both the request body and the x-session-id header, the body value takes precedence. Maximum of 256 characters. | |
| Trace | *components.TraceConfig | ➖ | Metadata for observability and tracing. Known keys (trace_id, trace_name, span_name, generation_name, parent_span_id) have special handling. Additional keys are passed through as custom metadata to configured broadcast destinations. | {"trace_id": "trace-abc123", "trace_name": "my-app-trace"} |
| Messages | []components.ChatMessages | ✔️ | List of messages for the conversation | [{"role": "user", "content": "Hello!"}] |
| Model | *string | ➖ | Model to use for completion | openai/gpt-4 |
| Models | []string | ➖ | Models to use for completion | ["openai/gpt-4", "openai/gpt-4o"] |
| FrequencyPenalty | *float64 | ➖ | Frequency penalty (-2.0 to 2.0) | 0 |
| LogitBias | optionalnullable.OptionalNullable[map[string]float64] | ➖ | Token logit bias adjustments | {"50256": -100} |
| Logprobs | optionalnullable.OptionalNullable[bool] | ➖ | Return log probabilities | false |
| TopLogprobs | *int64 | ➖ | Number of top log probabilities to return (0-20) | 5 |
| MaxCompletionTokens | *int64 | ➖ | Maximum tokens in completion | 100 |
| MaxTokens | *int64 | ➖ | Maximum tokens (deprecated, use max_completion_tokens). Note: some providers enforce a minimum of 16. | 100 |
| Metadata | map[string]string | ➖ | Key-value pairs for additional object information (max 16 pairs, 64 char keys, 512 char values) | {"user_id": "user-123", "session_id": "session-456"} |
| PresencePenalty | *float64 | ➖ | Presence penalty (-2.0 to 2.0) | 0 |
| Reasoning | *components.Reasoning | ➖ | Configuration options for reasoning models | {"effort": "medium", "summary": "concise"} |
| ResponseFormat | *components.ResponseFormat | ➖ | Response format configuration | {"type": "json_object"} |
| Seed | *int64 | ➖ | Random seed for deterministic outputs | 42 |
| Stop | optionalnullable.OptionalNullable[components.Stop] | ➖ | Stop sequences (up to 4) | [ "" ] |
| Stream | *bool | ➖ | Enable streaming response | false |
| StreamOptions | optionalnullable.OptionalNullable[components.ChatStreamOptions] | ➖ | Streaming configuration options | {"include_usage": true} |
| Temperature | *float64 | ➖ | Sampling temperature (0-2) | 0.7 |
| ParallelToolCalls | optionalnullable.OptionalNullable[bool] | ➖ | Whether to enable parallel function calling during tool use. When true, the model may generate multiple tool calls in a single response. | true |
| ToolChoice | *components.ChatToolChoice | ➖ | Tool choice configuration | auto |
| Tools | []components.ChatFunctionTool | ➖ | Available tools for function calling | [{"type": "function", "function": {"name": "get_weather", "description": "Get weather"}}] |
| TopP | *float64 | ➖ | Nucleus sampling parameter (0-1) | 1 |
| Debug | *components.ChatDebugOptions | ➖ | Debug options for inspecting request transformations (streaming only) | {"echo_upstream_body": true} |
| ImageConfig | map[string]components.ChatRequestImageConfig | ➖ | Provider-specific image configuration options. Keys and values vary by model/provider. See https://openrouter.ai/docs/guides/overview/multimodal/image-generation for more details. | {"aspect_ratio": "16:9"} |
| Modalities | []components.Modality | ➖ | Output modalities for the response. Supported values are "text", "image", and "audio". | ["text", "image"] |
| CacheControl | *components.AnthropicCacheControlDirective | ➖ | N/A | {"type": "ephemeral"} |
| ServiceTier | optionalnullable.OptionalNullable[components.ChatRequestServiceTier] | ➖ | The service tier to use for processing this request. | auto |