# Adapters

## Built-in adapters
apiexec ships four adapters, all registered at process startup via static initialisers. Pass
the adapter name as the first argument to stream_create (or the equivalent in your binding).
No explicit registration call is needed.
| Adapter | Description |
|---|---|
| generic_rest | Cursor-paginated REST APIs |
| datadog_metrics | Datadog Metrics API v1 (time-window) |
| openai | OpenAI Chat Completions (token cost tracking) |
| anthropic | Anthropic Messages API (token cost tracking) |
### generic_rest
Works with any JSON REST endpoint that returns a next-page cursor. The adapter extracts a
batch of records on each page, advances the cursor, and signals exhaustion when the cursor
field is null.
#### Config keys

| Key | Required | Default | Description |
|---|---|---|---|
| base_url | Yes | - | Full base URL of the API endpoint |
| auth_header | No | "" | Value for the Authorization header (e.g. "Bearer sk-...") |
| data_field | No | "data" | JSON key holding the records array |
| next_token_field | No | "next" | JSON key holding the next-page cursor |
| page_param | No | "cursor" | Query parameter name for the cursor |
| page_size | No | 100 | Records requested per page (page_size query param) |
#### Example

```json
{
  "base_url": "https://api.example.com/v1/events",
  "auth_header": "Bearer sk-abc123",
  "data_field": "events",
  "next_token_field": "next_cursor",
  "page_size": 200
}
```

```c
StreamHandle* s = stream_create("generic_rest", config_json, policy_json);
```

### datadog_metrics
Queries the Datadog Metrics API v1. Splits the time range into windows and streams one
window per batch. The window size adapts automatically: it shrinks on 429 responses and
grows when fetches succeed, converging toward max_window_ms.
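The shrink-on-429, grow-on-success behaviour can be sketched as follows. This is a minimal illustration of the adaptation rule, not the engine's actual code; it uses the documented policy defaults (grow factor 2.0, shrink factor 0.5, window clamped to [min_window_ms, max_window_ms]).

```c
#include <stdint.h>

/* Illustrative sketch of the adaptive window rule: shrink on a 429,
 * grow on success, clamped to the policy bounds. */
static int64_t next_window_ms(int64_t window_ms, int got_429) {
    const double grow = 2.0, shrink = 0.5;  /* policy defaults */
    const int64_t min_ms = 60000;           /* min_window_ms: 1 minute  */
    const int64_t max_ms = 86400000;        /* max_window_ms: 24 hours  */
    double next = got_429 ? window_ms * shrink : window_ms * grow;
    if (next < min_ms) return min_ms;
    if (next > max_ms) return max_ms;
    return (int64_t)next;
}
```

Starting from the default 1-hour window, repeated successful fetches double the window until it saturates at 24 hours, while each 429 halves it (never below 1 minute).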
#### Config keys

| Key | Required | Default | Description |
|---|---|---|---|
| base_url | No | "https://api.datadoghq.com" | Datadog API base URL (useful for the EU site) |
| api_key | Yes | - | Datadog API key (DD-API-KEY header) |
| app_key | Yes | - | Datadog application key (DD-APPLICATION-KEY header) |
| query | Yes | - | Metrics query string (e.g. "avg:system.cpu.user{*}") |
| window_ms | No | 3600000 | Initial time window size in milliseconds (default 1 hour) |
#### Example

```json
{
  "api_key": "YOUR_DD_API_KEY",
  "app_key": "YOUR_DD_APP_KEY",
  "query": "avg:system.cpu.user{host:web-01}",
  "window_ms": 1800000
}
```

```c
StreamHandle* s = stream_create("datadog_metrics", config_json, NULL);
```

### openai
Sends one prompt per page through the OpenAI Chat Completions API. Token usage from each
response is reported via response_cost() so that CostAwarePolicy can enforce a budget.
#### Config keys

| Key | Required | Default | Description |
|---|---|---|---|
| base_url | No | "https://api.openai.com" | OpenAI API base URL |
| api_key | Yes | - | OpenAI API key (Authorization: Bearer header) |
| model | Yes | - | Model ID (e.g. "gpt-4o", "gpt-4o-mini") |
| prompts | Yes | - | JSON array of prompt strings to stream through |
| max_tokens | No | 1024 | Maximum tokens per response |
| temperature | No | 1.0 | Sampling temperature (0.0–2.0) |
#### Example

```json
{
  "api_key": "sk-...",
  "model": "gpt-4o-mini",
  "prompts": ["Summarise the following text:", "Classify the sentiment:"],
  "max_tokens": 512,
  "temperature": 0.7
}
```

#### Cost budgets
Pair the openai adapter with a budget_tokens policy to halt the stream automatically
when token spend reaches the limit. The engine returns ErrBudgetExhausted to the caller.
See Examples - Cost budget for a full example.
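One plausible shape for such a policy object is shown below. This sketch assumes budget_tokens is a top-level policy field; the canonical form is in Examples - Cost budget.

```json
{
  "max_retries": 3,
  "budget_tokens": 200000
}
```

Pass this JSON as the third argument to stream_create; once cumulative token spend reported by response_cost() reaches the limit, the stream halts with ErrBudgetExhausted.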
### anthropic
Sends one prompt per page through the Anthropic Messages API. Token usage is tracked the
same way as the openai adapter.
#### Config keys

| Key | Required | Default | Description |
|---|---|---|---|
| base_url | No | "https://api.anthropic.com" | Anthropic API base URL |
| api_key | Yes | - | Anthropic API key (x-api-key header) |
| model | Yes | - | Model ID (e.g. "claude-opus-4-6", "claude-haiku-4-5-20251001") |
| prompts | Yes | - | JSON array of prompt strings to stream through |
| max_tokens | No | 1024 | Maximum tokens per response |
#### Example

```json
{
  "api_key": "sk-ant-...",
  "model": "claude-haiku-4-5-20251001",
  "prompts": ["Summarise: ...", "Extract entities from: ..."],
  "max_tokens": 256
}
```

## Custom adapters
To add your own adapter, implement the VendorAdapter<T> interface and register it with
StreamRegistry. See Architecture - Adapter Interface
for the full interface definition.
## Policy configuration
The third argument to stream_create is an optional JSON policy object that overrides
execution defaults. Pass NULL to use all defaults.
```json
{
  "max_retries": 5,
  "base_backoff_ms": 100,
  "window_grow_factor": 1.5,
  "window_shrink_factor": 0.5,
  "min_window_ms": 60000,
  "max_window_ms": 86400000,
  "prefetch_depth": 1
}
```

| Field | Default | Description |
|---|---|---|
| max_retries | 3 | Maximum retry attempts per request (429s, 5xx responses, and network errors) |
| base_backoff_ms | 1000 | Initial backoff interval before the first retry |
| window_grow_factor | 2.0 | Multiplier applied to the time window after a successful fetch |
| window_shrink_factor | 0.5 | Multiplier applied to the time window after a 429 |
| min_window_ms | 60000 | Minimum time window size (1 minute) |
| max_window_ms | 86400000 | Maximum time window size (24 hours) |
| prefetch_depth | 1 | Batches fetched ahead of the caller (0 = no prefetch) |
window_grow_factor and window_shrink_factor only affect the datadog_metrics adapter;
they are ignored by generic_rest, openai, and anthropic.
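As a rough illustration of how max_retries and base_backoff_ms interact, the sketch below assumes exponential doubling between attempts, a common scheme; the engine's actual backoff curve is not specified in this section, and retry_delay_ms is a hypothetical helper, not part of the apiexec API.

```c
#include <stdint.h>

/* Hypothetical helper for illustration only: returns the delay before
 * retry `attempt` (0-based), or -1 once max_retries is exhausted.
 * Doubling per attempt is an assumption of this sketch. */
static int64_t retry_delay_ms(int attempt, int max_retries, int64_t base_ms) {
    if (attempt >= max_retries) return -1;  /* retries exhausted */
    return base_ms << attempt;              /* 1000, 2000, 4000, ... */
}
```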
