# Quick Start
## Prerequisites
- CMake 3.22 or later
- A C++17 compiler (GCC 10+, Clang 12+)
- libcurl development headers
- nlohmann-json (usually available via package manager)
- A Datadog account with API key and Application key
On Debian/Ubuntu:

```bash
sudo apt-get install -y cmake libcurl4-openssl-dev nlohmann-json3-dev
```

On macOS with Homebrew:

```bash
brew install cmake curl nlohmann-json
```
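With the packages installed, you can confirm the toolchain meets the version requirements above (only one of `g++`/`clang++` is needed):

```shell
# Print the version of each required tool, or flag anything missing.
for tool in cmake g++ clang++; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "$tool: $("$tool" --version | head -n 1)"
  else
    echo "$tool: not found"
  fi
done
```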
## Build
Build the core library and your language binding of choice:
```bash
# Build core library
cmake -B build -DCMAKE_BUILD_TYPE=Release
cmake --build build -j$(nproc)

# Python uses ctypes - no compilation needed for the binding
export LD_LIBRARY_PATH="$PWD/build/source/core:$LD_LIBRARY_PATH"
```
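Since the Python binding loads the core library through ctypes, a quick load check can confirm the build is visible. The filename `libddstream_core.so` is an assumption; adjust it to match what your build actually produces under `build/source/core`:

```python
import ctypes
import ctypes.util
import os

# Candidate locations for the core shared library. The name
# "ddstream_core" is an assumption about the build output;
# check build/source/core for the actual filename.
candidates = [
    os.path.join("build", "source", "core", "libddstream_core.so"),
    ctypes.util.find_library("ddstream_core"),
]

loaded_path = None
for path in candidates:
    if path and os.path.exists(path):
        ctypes.CDLL(path)  # raises OSError if the library cannot be loaded
        loaded_path = path
        break

if loaded_path:
    print(f"loaded core library from {loaded_path}")
else:
    print("core library not found - check LD_LIBRARY_PATH and the build output")
```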
## First Stream
Fetch the last hour of CPU usage. The stream automatically chunks the request,
retries on errors with adaptive backoff, and prefetches the next batch in the
background. All examples use the generic Stream API with a JSON config; no
Datadog-specific fields appear in the binding layer.
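The adaptive backoff mentioned above can be sketched generically: an exponential delay with jitter, capped at a maximum. This illustrates the standard shape of the technique, not ddstream's exact algorithm:

```python
import random

def backoff_delays(base=0.5, factor=2.0, cap=30.0, retries=5):
    """Yield a delay (in seconds) to sleep before each retry attempt."""
    delay = base
    for _ in range(retries):
        # Full jitter: sleep a random amount up to the current ceiling,
        # which spreads concurrent retries out over time.
        yield random.uniform(0, delay)
        delay = min(delay * factor, cap)

delays = list(backoff_delays())
print(delays)  # five delays; successive ceilings are 0.5, 1.0, 2.0, 4.0, 8.0
```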
**basic_stream.py**

```python
from ddstream import Stream
import time

now = int(time.time())

with Stream("datadog_v1", {
    "api_key": "YOUR_DD_API_KEY",
    "app_key": "YOUR_DD_APP_KEY",
    "query": "avg:system.cpu.user{*}",
    "range_start": now - 3600,
    "range_end": now,
}) as stream:
    for batch in stream:
        print(f"Batch: {batch['count']} points, progress={stream.progress():.0%}")
```

## Next Steps
- Read the Architecture page to understand adaptive scaling and prefetch.
- See the Datadog Adapter page for v1/v2 details and adaptive chunk sizing.
- Browse REST Adapters for page/token-based API pagination.
- Learn how to Build Your Own Adapter for Prometheus, Grafana, or any API.
- See Examples for CSV export, pandas integration, V2 timeseries, and more.
