Observability for AI Agents
Track every session, tool call, and LLM request. See what your agents cost, when they break, and why.
import { AgentOps } from '@agentops/sdk';
const ops = new AgentOps({ apiKey: 'ak_...', endpoint: 'https://...' });
2 lines to integrate
< 1ms overhead
OpenAI + Anthropic + LangChain
Free tier included
From first LLM call to daily cost reports — full observability in minutes.
Every agent run captured with full event timeline, outcome tracking, and metadata.
Real-time spend tracking per agent, model, and session. Catch cost anomalies before they hit your bill.
Success rates, latency percentiles, and failure patterns across all your agent tools.
Token usage, model performance, and cost per call. Compare models side-by-side.
Automatic error flagging with full context. Know when agents fail and why.
Morning health report: cost anomalies, error spikes, model comparisons, and newly flagged issues.
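The spend tracking described above comes down to token counts multiplied by per-token rates. A minimal sketch of that arithmetic — the prices below are illustrative placeholders, not live provider rates or AgentOps data:

```typescript
// Illustrative per-1M-token prices in USD — assumed placeholder numbers,
// not AgentOps data; a real dashboard would pull live provider rates.
const PRICES: Record<string, { input: number; output: number }> = {
  'gpt-4o': { input: 2.5, output: 10.0 },
  'claude-sonnet': { input: 3.0, output: 15.0 },
};

// Dollar cost of one LLM call, from its prompt and completion token counts.
function costOfCall(
  model: string,
  inputTokens: number,
  outputTokens: number,
): number {
  const p = PRICES[model];
  if (!p) throw new Error(`no pricing for model: ${model}`);
  return (inputTokens * p.input + outputTokens * p.output) / 1_000_000;
}
```

Summing `costOfCall` per agent, per model, or per session yields the breakdowns shown in the dashboard.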
Works with every major LLM provider and framework.
import { AgentOps } from '@agentops/sdk';
import OpenAI from 'openai';

// Initialize the client and open a session for this agent run.
const ops = new AgentOps({ apiKey: 'ak_...' });
const session = ops.startSession({ name: 'support-agent' });

// Make LLM calls as usual while the session is open.
const openai = new OpenAI();
const res = await openai.chat.completions.create({
  model: 'gpt-4o',
  messages: [{ role: 'user', content: 'Hello!' }],
});

// Close the session and record the run's outcome.
session.end({ outcome: 'success' });

Start free. Scale when you're ready.