Canary SDK Documentation
Lightweight AI agent tracing and observability. Get full visibility into your production agents in under 60 seconds.
Installation
Install the Canary SDK via npm:
npm install @heycanary/sdk
Quick Setup (3 Lines of Code)
Initialize Canary and wrap your agent logic to start tracing immediately:
import { init, wrap } from '@heycanary/sdk';
// 1. Initialize with your API key
init({
  apiKey: 'ck_your_api_key_here',
  projectId: 'your_project_id'
});
// 2. Wrap your agent function
const myAgent = wrap(async (input) => {
  // Your agent logic here
  const result = await callLLM(input);
  return result;
}, { name: 'my-agent' });
// 3. Run your agent — traces appear automatically
await myAgent({ query: 'Hello!' });
First Trace in 60 Seconds
Here's a complete example with OpenAI that creates your first trace:
import { init, wrap } from '@heycanary/sdk';
import OpenAI from 'openai';
// Initialize Canary
init({
  apiKey: 'ck_your_api_key_here',
  projectId: 'project_123'
});
// Wrap your OpenAI call
const generateResponse = wrap(async (prompt: string) => {
  const openai = new OpenAI();
  const response = await openai.chat.completions.create({
    model: 'gpt-4o',
    messages: [{ role: 'user', content: prompt }]
  });
  return response.choices[0].message.content;
}, { name: 'openai-chat' });
// Use it
const result = await generateResponse('What is the capital of France?');
console.log(result);
That's it! Check your Canary dashboard to see the trace with input, output, duration, token usage, and cost.
API Reference
init(config)
Initialize the Canary SDK. Call this once at application startup.
import { init } from '@heycanary/sdk';
init({
  apiKey: string;         // Required: Your Canary API key (starts with 'ck_')
  projectId: string;      // Required: Your project ID
  endpoint?: string;      // Optional: Custom API endpoint (default: 'https://heycanary.ai/api/ingest')
  flushInterval?: number; // Optional: How often to send traces in ms (default: 5000)
  maxQueueSize?: number;  // Optional: Max traces to queue before dropping oldest (default: 1000)
});
Defaults:
- endpoint: 'https://heycanary.ai/api/ingest'
- flushInterval: 5000 (5 seconds)
- maxQueueSize: 1000
wrap(fn, options)
Wrap an async function to automatically trace its execution. Returns a function with the same signature that captures input, output, duration, and errors.
import { wrap } from '@heycanary/sdk';
const tracedFunction = wrap<T extends (...args: any[]) => Promise<any>>(
  fn: T,                            // The async function to trace
  options: {
    name: string;                   // Required: Name for this trace
    metadata?: Record<string, any>; // Optional: Custom metadata
  }
): T; // Returns same function type with tracing
How it works:
- Captures function input (single arg or array of args)
- Records start time
- Executes your function
- Captures output or error
- Extracts token usage and model info (if present in response)
- Sends trace asynchronously (non-blocking)
- Returns original result or throws original error
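The steps above can be sketched with a simplified, dependency-free version of a wrap-style tracer. This is illustrative only: `wrapSketch`, `Span`, and the in-memory `queue` are assumed names for this sketch, not the SDK's actual internals, and the real transport is stubbed out.

```typescript
// Minimal sketch of a wrap-style tracer (illustrative, not the real SDK).
type Span = {
  name: string;
  input: unknown;
  output?: unknown;
  error?: string;
  durationMs: number;
};

const queue: Span[] = []; // stand-in for the SDK's async send queue

function wrapSketch<T extends (...args: any[]) => Promise<any>>(
  fn: T,
  options: { name: string }
): T {
  return (async (...args: any[]) => {
    const start = Date.now();
    try {
      const output = await fn(...args); // execute the wrapped function
      queue.push({
        name: options.name,
        input: args,
        output,
        durationMs: Date.now() - start
      });
      return output; // original result returned unchanged
    } catch (err) {
      queue.push({
        name: options.name,
        input: args,
        error: String(err),
        durationMs: Date.now() - start
      });
      throw err; // original error rethrown unchanged
    }
  }) as T;
}
```

The key property is that tracing is purely observational: the caller sees the same return value or exception as the unwrapped function.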
Generics & Return Type:
// TypeScript preserves your function signature
async function fetchUser(id: string): Promise<User> { ... }
const traced = wrap(fetchUser, { name: 'fetch-user' });
// traced: (id: string) => Promise<User> ✅ Type-safe!
const user = await traced('user_123'); // User type preserved
trace(name, metadata)
Manually create a trace span for custom events or synchronous operations.
import { trace } from '@heycanary/sdk';
trace(
  name: string,                  // Name of the event
  metadata?: Record<string, any> // Optional metadata
): TraceSpan;
// Example: Track a custom event
trace('user-login', {
  userId: 'user_123',
  method: 'oauth',
  provider: 'google'
});
// Returns a TraceSpan object with:
// { id, name, startTime, endTime, duration, metadata, projectId }
shutdown()
Flush all pending traces and stop the SDK. Essential for serverless environments to ensure traces are sent before the function exits.
import { shutdown } from '@heycanary/sdk';
await shutdown(); // Flushes queue and clears flush timer
// Example: Serverless handler
export const handler = async (event) => {
  init({ apiKey: process.env.CANARY_API_KEY, projectId: 'proj_123' });
  const result = await myAgent(event);
  await shutdown(); // ← Important! Ensures traces are sent
  return result;
};
Integrations
OpenAI
import { init, wrap } from '@heycanary/sdk';
import OpenAI from 'openai';
init({ apiKey: 'ck_...', projectId: 'proj_123' });
const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });
const chatCompletion = wrap(async (messages: any[]) => {
  return await openai.chat.completions.create({
    model: 'gpt-4o',
    messages
  });
}, { name: 'openai-chat' });
// Usage
const response = await chatCompletion([
  { role: 'user', content: 'Explain quantum computing in simple terms' }
]);
console.log(response.choices[0].message.content);
// ✅ Trace captured: input, output, tokens, model, duration, cost
Anthropic
import { init, wrap } from '@heycanary/sdk';
import Anthropic from '@anthropic-ai/sdk';
init({ apiKey: 'ck_...', projectId: 'proj_123' });
const client = new Anthropic({ apiKey: process.env.ANTHROPIC_API_KEY });
const createMessage = wrap(async (prompt: string) => {
  return await client.messages.create({
    model: 'claude-sonnet-4-20250514',
    max_tokens: 1024,
    messages: [{ role: 'user', content: prompt }]
  });
}, { name: 'anthropic-message' });
// Usage
const message = await createMessage('Write a haiku about AI observability');
console.log(message.content[0].text);
// ✅ Trace captured with Claude-specific metadata
LangChain
import { init, wrap } from '@heycanary/sdk';
import { ChatOpenAI } from '@langchain/openai';
import { AgentExecutor, createOpenAIFunctionsAgent } from 'langchain/agents';
import { pull } from 'langchain/hub';
init({ apiKey: 'ck_...', projectId: 'proj_123' });
const llm = new ChatOpenAI({ model: 'gpt-4o', temperature: 0 });
const tools: any[] = []; // your tool definitions
const prompt = await pull('hwchase17/openai-functions-agent');
const agent = await createOpenAIFunctionsAgent({ llm, tools, prompt });
const executor = new AgentExecutor({ agent, tools });
// Wrap the agent executor
const runAgent = wrap(async (input: string) => {
  return await executor.invoke({ input });
}, { name: 'langchain-agent' });
// Usage
const result = await runAgent('What is the weather in SF?');
console.log(result.output);
// ✅ Entire agent execution traced, including tool calls
Custom Agent
import { init, wrap, trace } from '@heycanary/sdk';
init({ apiKey: 'ck_...', projectId: 'proj_123' });
// Multi-step agent with granular tracing
const myAgent = wrap(async (query: string) => {
  // Step 1: Analyze query
  trace('query-analysis', { query, intent: 'search' });
  const intent = await analyzeIntent(query);
  // Step 2: Retrieve context
  const retrieval = wrap(async () => {
    return await vectorDB.search(query, { limit: 5 });
  }, { name: 'retrieval' });
  const context = await retrieval();
  // Step 3: Generate response
  const generation = wrap(async () => {
    return await llm.generate({ context, query });
  }, { name: 'generation' });
  const response = await generation();
  trace('agent-complete', {
    success: true,
    contextItems: context.length,
    responseLength: response.length
  });
  return response;
}, { name: 'custom-agent' });
const answer = await myAgent('Tell me about Canary observability');
// ✅ Full trace tree: agent → analysis → retrieval → generation → complete
Serverless / Edge Functions
import { init, wrap, shutdown } from '@heycanary/sdk';
// Vercel Edge Function
export const config = { runtime: 'edge' };
export default async function handler(req: Request) {
  init({ apiKey: process.env.CANARY_API_KEY!, projectId: 'proj_123' });
  const processRequest = wrap(async (body: any) => {
    // Your edge logic here
    return { success: true, result: 'processed' };
  }, { name: 'edge-handler' });
  const body = await req.json();
  const result = await processRequest(body);
  await shutdown(); // ← Critical for serverless!
  return new Response(JSON.stringify(result), {
    headers: { 'Content-Type': 'application/json' }
  });
}
// AWS Lambda
export const handler = async (event: any) => {
  init({ apiKey: process.env.CANARY_API_KEY!, projectId: 'proj_123' });
  const processEvent = wrap(async (evt: any) => {
    // Lambda logic
    return { statusCode: 200, body: 'ok' };
  }, { name: 'lambda-handler' });
  const response = await processEvent(event);
  await shutdown();
  return response;
};
Configuration
Environment Variables
Store your API credentials securely:
# .env.local
CANARY_API_KEY=ck_your_api_key_here
CANARY_PROJECT_ID=proj_123
# Optional overrides
CANARY_ENDPOINT=https://heycanary.ai/api/ingest
CANARY_FLUSH_INTERVAL=5000
CANARY_MAX_QUEUE_SIZE=1000
// app.ts
import { init } from '@heycanary/sdk';
init({
  apiKey: process.env.CANARY_API_KEY!,
  projectId: process.env.CANARY_PROJECT_ID!,
  // Optional: override defaults
  endpoint: process.env.CANARY_ENDPOINT,
  flushInterval: parseInt(process.env.CANARY_FLUSH_INTERVAL || '5000'),
  maxQueueSize: parseInt(process.env.CANARY_MAX_QUEUE_SIZE || '1000')
});
Batch Size and Flush Interval
Control how often traces are sent to optimize performance vs. real-time visibility:
init({
  apiKey: 'ck_...',
  projectId: 'proj_123',
  flushInterval: 10000, // Send every 10 seconds (default: 5000)
  maxQueueSize: 500     // Max 500 traces in queue (default: 1000)
});
// Traces are also auto-flushed when:
// 1. Queue reaches 100 items (immediate flush)
// 2. shutdown() is called
// 3. maxQueueSize is reached (oldest traces dropped)
Performance Tips:
- Lower flushInterval for near real-time visibility (increases network calls)
- Higher flushInterval for batch efficiency (delays visibility)
- Increase maxQueueSize for high-volume agents
- Always call shutdown() in serverless to prevent dropped traces
Custom Endpoint URL
Point to a self-hosted Canary instance or proxy:
init({
  apiKey: 'ck_...',
  projectId: 'proj_123',
  endpoint: 'https://canary.yourcompany.com/api/ingest' // Custom endpoint
});
Error Handling
Canary is designed to never break your application:
- Non-blocking: All trace sending happens asynchronously
- Graceful degradation: Network errors are caught silently and sends are retried
- Auto-retry: Failed traces are re-queued (up to 500 most recent)
- No exceptions: SDK errors never propagate to your code
// Your code never fails due to Canary
const myFunction = wrap(async () => {
  // Even if the Canary API is down, this executes normally
  return await doImportantWork();
}, { name: 'critical-function' });
try {
  await myFunction(); // ✅ Always works, traces sent when API recovers
} catch (error) {
  // This only catches YOUR errors, never Canary SDK errors
  console.error(error);
}
Dashboard
View all your traces, metrics, and insights at your Canary dashboard.
What Metrics Are Tracked
Every trace captures:
- Timing: Start time, end time, duration (ms)
- I/O: Full input and output (sanitized for circular refs)
- Tokens: Usage breakdown (prompt, completion, total) when available
- Model: LLM model name (e.g., gpt-4o, claude-sonnet-4)
- Cost: Calculated from token usage × model pricing
- Errors: Error message and stack trace on failures
- Metadata: Custom tags and context you provide
- Project: Isolated by project ID for multi-tenant setups
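The cost metric above is a straightforward calculation from token counts and a per-model price table. A minimal sketch, assuming a hypothetical price table keyed by model name: the model name and per-million-token rates below are made-up placeholders, not real provider pricing.

```typescript
// Illustrative cost calculation: tokens × per-model pricing.
// Rates are placeholders, not real provider prices.
const pricePerMillionTokens: Record<string, { prompt: number; completion: number }> = {
  'example-model': { prompt: 2.5, completion: 10.0 }
};

function traceCost(
  model: string,
  usage: { promptTokens: number; completionTokens: number }
): number | undefined {
  const price = pricePerMillionTokens[model];
  if (!price) return undefined; // unknown model: no cost can be derived
  return (
    (usage.promptTokens / 1_000_000) * price.prompt +
    (usage.completionTokens / 1_000_000) * price.completion
  );
}
```

Returning `undefined` for unknown models (rather than guessing) is why cost only appears on traces where the model name was recognized.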
Filtering and Searching Traces
The dashboard provides powerful search and filtering:
- Time range: Last hour, 24h, 7d, 30d, or custom range
- Trace name: Filter by function name (e.g., openai-chat)
- Status: Success, error, or all
- Model: Group by or filter by LLM model
- Cost range: Find expensive traces (> $0.10, etc.)
- Duration: Slow traces (> 5s, p95, p99)
- Metadata search: Query custom metadata fields
- Full-text search: Search input/output content
Pro Tip: Bookmark filtered views for recurring investigations (e.g., "All errors in the last 24h" or "GPT-4o traces > $0.50").
Live Updates
The dashboard updates in real-time as traces arrive (typically within 5-10 seconds). No need to refresh — new traces appear automatically.
FAQ
Does it slow down my agents?
No. Canary adds < 1ms overhead per trace. All network operations are asynchronous and non-blocking — your function returns immediately while traces are queued and sent in the background.
// Your function runs at full speed
const result = await wrap(myFunction, { name: 'fast' })();
// ↑ Returns immediately, trace sent asynchronously in background
What happens if the Canary API is down?
Your app keeps running. Failed traces are automatically queued for retry. When the API recovers, they're sent. If the queue fills up (maxQueueSize), oldest traces are dropped to prevent memory issues.
- Network errors are caught silently
- Up to 500 most recent failed traces are re-queued
- No exceptions thrown to your code
- Traces resume automatically when API is back
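One way the re-queue behavior described above can work is a capped retry buffer: failed batches are appended, and only the most recent traces survive when the cap is exceeded. This is a sketch, not the SDK's actual implementation; `requeueFailed` and `RETRY_CAP` are illustrative names (the cap value matches the 500-trace limit stated above).

```typescript
// Illustrative retry buffer: failed traces are re-queued, keeping at most
// the 500 most recent; older ones are discarded to bound memory.
const RETRY_CAP = 500;

function requeueFailed<T>(retryBuffer: T[], failedBatch: T[]): T[] {
  const merged = [...retryBuffer, ...failedBatch];
  // keep only the most recent RETRY_CAP traces
  return merged.slice(Math.max(0, merged.length - RETRY_CAP));
}
```

Dropping from the front (oldest first) favors recent traces, which are usually the ones you want when investigating an incident after the API recovers.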
What data is captured?
Canary captures:
- Input: Function arguments (sanitized for circular references)
- Output: Return value (sanitized)
- Duration: Wall-clock time from start to end
- Tokens: Usage stats from LLM responses (when present)
- Model: LLM model name (when present)
- Errors: Error message and stack trace on failures
- Metadata: Any custom metadata you attach
Data you control: You choose what to trace by wrapping specific functions. Sensitive operations can be excluded by simply not wrapping them.
Is it framework-specific?
No. Canary works with anything that runs async JavaScript/TypeScript:
- OpenAI, Anthropic, Gemini, Cohere, etc.
- LangChain, LlamaIndex, Vercel AI SDK
- Custom agents built from scratch
- Express, Next.js, Fastify, Hono
- Serverless (Lambda, Vercel, Cloudflare Workers)
- Edge functions, background jobs, cron tasks
If it's an async function, you can wrap it.
How do I get an API key?
Join the waitlist to get early access. We'll send you an API key and project ID when we launch.
Can I use Canary in production today?
Canary is in private beta. Join the waitlist to get early access and help shape the product before the public launch.
What's the pricing?
See our pricing page. We offer a generous free tier and transparent usage-based pricing that scales with your agent volume.
Ready to get started?
Join the waitlist and be first to get full observability when we launch.
Join Waitlist