Commit a32d2fb (parent b861d43)

docs: add cover image to TanStack AI testing blog post
File tree: 2 files changed (+2 −0 lines)

2 files changed

+2
-0
lines changed
[Binary image added: 2.98 MB]

src/blog/how-we-test-tanstack-ai-across-7-providers.md

Lines changed: 2 additions & 0 deletions
```diff
@@ -7,6 +7,8 @@ authors:
 - Alem Tuzlak
 ---
 
+![E2E Testing - All Tests Passed](/blog-assets/how-we-test-tanstack-ai-across-7-providers/cover.png)
+
 LLM responses are non-deterministic. API calls cost money. And the thing that works perfectly with OpenAI might silently break with Anthropic.
 
 If you've ever built on an AI SDK, you know the feeling: you trust the library works because the README says it supports your provider. But does it? Has anyone actually verified that tool calling works the same way across OpenAI, Gemini, and Ollama? That streaming structured output doesn't break when you switch from Groq to Anthropic?
```
