title: AI SDK Integration
description: Capture token usage, tool calls, model info, and streaming metrics from the Vercel AI SDK into wide events. Wrap your model and get full AI observability.
navigation:
  icon: i-simple-icons-vercel
links:
  - label: Wide Events
    icon: i-lucide-layers

## How It Works

`createAILogger(log, options?)` returns an `AILogger` with two methods:

| Method | Description |
|--------|-------------|
| `wrap(model)` | Wraps a language model with middleware. Accepts a model string (e.g. `'anthropic/claude-sonnet-4.6'`) or a `LanguageModelV3` object. Works with `generateText`, `streamText`, `generateObject`, `streamObject`, and `ToolLoopAgent`. Also works with pre-wrapped models (e.g. from supermemory). |
| `captureEmbed(result)` | Manually captures token usage from `embed()` or `embedMany()` results (embedding models use a different type). |

The middleware intercepts calls at the provider level. It does not touch your callbacks, prompts, or responses. Captured data flows through the normal evlog pipeline (sampling, enrichers, drains) and ends up in Axiom, Better Stack, or wherever you drain to.

### Options

| Option | Type | Default | Description |
|--------|------|---------|-------------|
| `toolInputs` | `boolean \| ToolInputsOptions` | `false` | When enabled, `toolCalls` contains `{ name, input }` objects instead of plain strings. Opt-in because inputs can be large and may contain sensitive data. |

Pass `true` to capture all inputs as-is, or an options object for fine-grained control:

| Sub-option | Type | Description |
|------------|------|-------------|
| `maxLength` | `number` | Truncate stringified inputs exceeding this character length (appends `…`) |
| `transform` | `(input, toolName) => unknown` | Custom transform applied before `maxLength`. Use it to redact fields or reshape data. |

```typescript
// Capture everything
const ai = createAILogger(log, { toolInputs: true })

// Truncate long inputs (e.g. SQL queries)
const ai = createAILogger(log, { toolInputs: { maxLength: 200 } })

// Redact sensitive tool inputs
const ai = createAILogger(log, {
  toolInputs: {
    maxLength: 500,
    transform: (input, toolName) => {
      if (toolName === 'queryDB') return { sql: '***' }
      return input
    },
  },
})
```
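
The `maxLength` rule can be sketched as a standalone function. This is a hypothetical illustration of the behavior described in the table above, not evlog's actual implementation:

```typescript
// Hypothetical sketch of the maxLength rule, not evlog's actual code:
// stringify the tool input, then cut and append '…' past the limit.
function truncateInput(input: unknown, maxLength: number): string {
  const text = typeof input === 'string' ? input : JSON.stringify(input)
  return text.length > maxLength ? `${text.slice(0, maxLength)}…` : text
}

truncateInput('SELECT * FROM docs WHERE topic = \'typescript\'', 20)
// → 'SELECT * FROM docs W…'
```

Since `transform` runs before `maxLength`, a transform that returns a small redacted object will rarely hit the length limit at all.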

## Usage Patterns

### streamText
```typescript
import { createAILogger } from 'evlog/ai'

export default defineEventHandler(async (event) => {
  const log = useLogger(event)
  const ai = createAILogger(log, {
    toolInputs: { maxLength: 500 },
  })

  const agent = new ToolLoopAgent({
    model: ai.wrap('anthropic/claude-sonnet-4.6'),
    // …
  })
  // …
})
```
Wide event after a 3-step agent run:

```json
{
  "outputTokens": 1200,
  "totalTokens": 5700,
  "finishReason": "stop",
  "toolCalls": [
    { "name": "searchWeb", "input": { "query": "TypeScript 6.0 features" } },
    { "name": "queryDatabase", "input": { "sql": "SELECT * FROM docs WHERE topic = 'typescript'" } },
    { "name": "searchWeb", "input": { "query": "TypeScript 6.0 release date" } }
  ],
  "responseId": "msg_01XFDUDYJgAACzvnptvVoYEL",
  "stepsUsage": [
    { "model": "claude-sonnet-4.6", "inputTokens": 1200, "outputTokens": 300, "toolCalls": ["searchWeb"] },
    { "model": "claude-sonnet-4.6", "inputTokens": 1500, "outputTokens": 400, "toolCalls": ["queryDatabase", "searchWeb"] },
    { "model": "claude-sonnet-4.6", "inputTokens": 1800, "outputTokens": 500 }
  ],
  "msToFirstChunk": 312,
  "msToFinish": 8200,
  "tokensPerSecond": 146
}
```
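
The per-step entries are consistent with the run totals: summing `stepsUsage` reproduces them. As a sanity check, here is a hypothetical sketch of that aggregation (not evlog API):

```typescript
// Hypothetical sketch: sum per-step usage back into run totals.
interface StepUsage {
  model: string
  inputTokens: number
  outputTokens: number
  toolCalls?: string[]
}

function sumSteps(steps: StepUsage[]) {
  const inputTokens = steps.reduce((n, s) => n + s.inputTokens, 0)
  const outputTokens = steps.reduce((n, s) => n + s.outputTokens, 0)
  return { inputTokens, outputTokens, totalTokens: inputTokens + outputTokens }
}

sumSteps([
  { model: 'claude-sonnet-4.6', inputTokens: 1200, outputTokens: 300, toolCalls: ['searchWeb'] },
  { model: 'claude-sonnet-4.6', inputTokens: 1500, outputTokens: 400, toolCalls: ['queryDatabase', 'searchWeb'] },
  { model: 'claude-sonnet-4.6', inputTokens: 1800, outputTokens: 500 },
])
// → { inputTokens: 4500, outputTokens: 1200, totalTokens: 5700 }
```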

| Attribute | Source | Description |
|-----------|--------|-------------|
| `ai.cacheWriteTokens` | `usage.inputTokens.cacheWrite` | Tokens written to prompt cache |
| `ai.reasoningTokens` | `usage.outputTokens.reasoning` | Reasoning tokens (extended thinking) |
| `ai.finishReason` | `finishReason.unified` | Why generation ended (`stop`, `tool-calls`, etc.) |
| `ai.toolCalls` | Content / stream chunks | `string[]` of tool names by default, or `Array<{ name, input }>` when `toolInputs` is enabled |
| `ai.responseId` | `response.id` | Provider-assigned response ID (e.g. Anthropic's `msg_...`) |
| `ai.steps` | Step count | Number of LLM calls (only when > 1) |
| `ai.stepsUsage` | Per-step accumulation | Per-step token and tool call breakdown (only when > 1 step) |
| `ai.msToFirstChunk` | Stream timing | Time to first text chunk (streaming only) |
| `ai.msToFinish` | Stream timing | Total stream duration (streaming only) |
| `ai.tokensPerSecond` | Computed | Output tokens per second (streaming only) |
| `ai.error` | Error capture | Error message if a model call fails |
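The figures in the example event are consistent with `ai.tokensPerSecond` being output tokens divided by stream duration in seconds. A sketch of that computation follows; the exact rounding evlog applies is an assumption:

```typescript
// Assumed formula for the computed metric: output tokens per second of streaming.
function tokensPerSecond(outputTokens: number, msToFinish: number): number {
  return Math.round(outputTokens / (msToFinish / 1000))
}

tokensPerSecond(1200, 8200) // → 146
```
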
## Composability

`ai.wrap()` works with models that are already wrapped by other tools. If you use supermemory, guardrails middleware, or any other model wrapper, pass the wrapped model to `ai.wrap()`:

```typescript
import { createAILogger } from 'evlog/ai'
import { withSupermemory } from '@supermemory/tools/ai-sdk'

const ai = createAILogger(log)
const base = gateway('anthropic/claude-sonnet-4.6')
const model = ai.wrap(withSupermemory(base, orgId, { mode: 'full' }))
```

For explicit middleware composition, use `createAIMiddleware` to get the raw middleware and compose it yourself via `wrapLanguageModel`:

```typescript
import { createAIMiddleware } from 'evlog/ai'
import { wrapLanguageModel } from 'ai'

const model = wrapLanguageModel({
  model: base,
  middleware: [createAIMiddleware(log, { toolInputs: true }), otherMiddleware],
})
```

`createAIMiddleware` returns the same middleware that `createAILogger` uses internally. The difference: `createAIMiddleware` does not include `captureEmbed` (embedding models don't use middleware). Use `createAILogger` for the full API, and `createAIMiddleware` when you need explicit middleware ordering.
## Error Handling

If a model call fails, the middleware captures the error into the wide event before re-throwing:
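
The capture-then-rethrow pattern can be sketched as follows. This is a hypothetical illustration, not the middleware's actual code, and `wideEvent` stands in for the evlog wide event:

```typescript
// Hypothetical sketch of capture-then-rethrow, not evlog's middleware:
// record the failure on the wide event, then let the error propagate.
async function withErrorCapture<T>(
  wideEvent: Record<string, unknown>,
  call: () => Promise<T>,
): Promise<T> {
  try {
    return await call()
  }
  catch (error) {
    wideEvent['ai.error'] = error instanceof Error ? error.message : String(error)
    throw error
  }
}
```

The caller still receives the original error unchanged; the wide event simply gains the error message under `ai.error`.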