
TanStack AI: Provider Switching Without the Vendor Lock-In

Dishant Sharma
Dec 12th, 2025
6 min read

TanStack dropped a new library on December 3rd. An AI SDK. Reddit lit up within hours.

One comment got 68 upvotes: "I can't help feeling a bit annoyed by the increasing reliance on AI tools". Another developer replied to Tanner directly saying they weren't interested in anything AI related. But people kept reading anyway.

Here's why. Most AI SDKs lock you in. You pick Vercel's AI SDK, you're basically married to their ecosystem. You pick OpenAI's library, good luck switching to Anthropic later. TanStack AI does something different. Define your tools once, run them anywhere. Switch from OpenAI to Gemini with zero code changes.
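Here's roughly what that swap looks like. Everything below is a sketch: the package names, createChat, and the adapter functions are my assumptions about the alpha API, not confirmed names.

```ts
// A sketch of the one-line provider swap. Package names, createChat, and
// the adapter functions are assumed; the real alpha API may differ.
import { createChat } from "@tanstack/ai"; // assumed package
import { openaiAdapter } from "@tanstack/ai-openai"; // assumed package

const chat = createChat({
  adapter: openaiAdapter({ model: "gpt-4o" }),
  // Switching providers would mean changing only this line, e.g.:
  // adapter: geminiAdapter({ model: "gemini-2.0-flash" }),
  tools: [], // the same tool definitions work with either adapter
});
```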

You've probably felt this pain before. I know I have. Built a chat feature with one provider. Client wants to try a different model. Now you're rewriting everything. TanStack AI says that's stupid. And honestly, they're right.

The library is in alpha. Bugs exist. Rough edges everywhere. But the architecture is cleaner than anything else out there. And when the team behind TanStack Query and TanStack Router builds something, people pay attention.

The isomorphic tool trick

Here's where it gets interesting. Most AI libraries make you define tools twice. Once for the server. Once for the client. Different schemas. Different validation. Different everything.

TanStack AI has this thing called isomorphic tools. You write toolDefinition() once. Then attach .server() or .client() implementations. Same name. Same schema. Different execution contexts.

I spent an hour trying to figure out why this mattered. Then it clicked. Your AI can call a tool, and your UI can call the same tool directly. Same validation. Same types. No duplicate logic.

The SDK handles which version to run. Server tools execute on your backend. Client tools run in the browser. The LLM doesn't care. It just calls the tool.
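Here's a minimal sketch of what that looks like. toolDefinition(), .server(), and .client() are the real names; the import path, the schema option, and the handler signatures are my guesses at an alpha API.

```ts
// Minimal sketch of an isomorphic tool. toolDefinition(), .server(), and
// .client() come from the docs; the import path, schema option name, and
// handler signatures below are assumptions about the alpha API.
import { toolDefinition } from "@tanstack/ai"; // assumed import path
import { z } from "zod";

// Defined once: one name, one schema, one set of types.
const getWeather = toolDefinition({
  name: "getWeather",
  description: "Current weather for a city",
  inputSchema: z.object({ city: z.string() }),
});

// Server implementation: runs on your backend when the LLM calls the tool.
export const getWeatherServer = getWeather.server(async ({ city }) => {
  const res = await fetch(
    `https://api.example.com/weather?city=${encodeURIComponent(city)}`,
  );
  return (await res.json()) as { tempC: number };
});

// Client implementation: same name, same schema, runs in the browser so
// the UI can call the identical tool without duplicating validation.
export const getWeatherClient = getWeather.client(async ({ city }) => {
  return { city, tempC: 21 }; // e.g. serve from a local cache instead
});
```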

And the SDK supports approval workflows out of the box. The system emits approval-requested events. Execution pauses. User confirms. Tool runs. All built into the state machine.
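Something like this, where the approval-requested event name is real but the chat client shape is my stand-in:

```ts
// Hedged sketch of the approval flow. The "approval-requested" event name
// comes from the SDK; the client object and method names are assumed.
type ApprovalRequest = {
  toolName: string;
  input: unknown;
  approve: () => void; // resume execution and run the tool
  reject: () => void;  // record the denial in the state machine
};

// Stand-in for whatever chat client the SDK actually exposes.
declare const chat: {
  on(event: "approval-requested", cb: (req: ApprovalRequest) => void): void;
};

chat.on("approval-requested", (req) => {
  const ok = window.confirm(`Allow tool "${req.toolName}" to run?`);
  if (ok) req.approve();
  else req.reject();
});
```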

Type safety that actually works

Every AI model is different. GPT-4o supports images. Some models don't. Claude has different options than Gemini. You know this. Your IDE doesn't.

Vercel's AI SDK uses flexible typing. You can pass options that don't apply to your model. TypeScript won't stop you. You find out at runtime.

TanStack went the opposite direction. Per-model type safety. Select gpt-4o and TypeScript knows exactly what that model accepts. Pass an invalid option and you get a compile error.

The BaseAdapter class uses seven type parameters. Sounds insane. But it means your editor knows which modalities each model supports. Text, image, audio. All typed.
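You don't need seven type parameters to see the idea. Here's a simplified sketch of per-model option typing, not TanStack's actual types:

```ts
// Not TanStack's real types: a simplified sketch of the technique. A mapped
// type ties each model name to the options it accepts, so an invalid option
// is a compile error instead of a runtime surprise.
type ModelOptions = {
  "gpt-4o": { prompt: string; images?: string[] }; // multimodal
  "text-only-model": { prompt: string };           // hypothetical text-only model
};

declare function generate<M extends keyof ModelOptions>(
  model: M,
  options: ModelOptions[M],
): Promise<string>;

generate("gpt-4o", { prompt: "hi", images: ["cat.png"] }); // OK
// @ts-expect-error: this model's options type has no `images` field
generate("text-only-model", { prompt: "hi", images: ["cat.png"] });
```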

Most tutorials tell you type safety doesn't matter for AI apps. That's wrong. When you're switching between providers, types save you hours of debugging.

The language thing nobody talks about

TanStack AI ships with TypeScript, PHP, and Python support. Same streaming protocol across all three.

Your frontend uses @tanstack/ai-client. Your backend can be Python FastAPI. Or PHP Laravel. Or Node. Same chunks. Same tool execution. Different language.
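A sketch of that split. The @tanstack/ai-client package name is real; the ChatClient export and its options are my assumptions:

```ts
// Hedged sketch: a TypeScript client, any-language backend. Only the
// package name comes from the article; ChatClient and its options are
// assumed for illustration.
import { ChatClient } from "@tanstack/ai-client"; // assumed export

const chat = new ChatClient({
  // Any route that speaks the documented protocol: FastAPI, Laravel, Node.
  endpoint: "/api/chat",
});

for await (const chunk of chat.send("Summarize this repo")) {
  console.log(chunk); // same chunk shape regardless of backend language
}
```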

This matters more than it sounds. Most teams don't have uniform stacks. Your ML engineers write Python. Your web team writes TypeScript. With Vercel's SDK, you're stuck with TypeScript everywhere.

But here's what actually happens. Your Python team builds the AI logic. Your TypeScript team builds the UI. TanStack AI just works.

The protocol is documented and open. You could write a Rust backend if you wanted. As long as it speaks the protocol, the client doesn't care.

Streaming that doesn't feel broken

Nobody talks about this. Streaming AI responses often feels janky. Words break mid-character. Sentences split at weird spots. Users notice.

TanStack AI has configurable chunking strategies. ImmediateStrategy sends every token instantly. PunctuationStrategy waits for sentence boundaries. WordBoundaryStrategy never splits mid-word.
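Here's roughly what a word-boundary strategy does under the hood. A sketch of the concept, not TanStack's implementation:

```ts
// Concept sketch, not TanStack's code: buffer tokens and only emit at
// word boundaries, so the UI never renders half a word.
async function* wordBoundary(
  tokens: AsyncIterable<string>,
): AsyncGenerator<string> {
  let buffer = "";
  for await (const token of tokens) {
    buffer += token;
    // Emit everything up to the last space; keep the partial word buffered.
    const cut = buffer.lastIndexOf(" ");
    if (cut >= 0) {
      yield buffer.slice(0, cut + 1);
      buffer = buffer.slice(cut + 1);
    }
  }
  if (buffer) yield buffer; // flush whatever remains at stream end
}
```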

You pick what makes sense for your app. Vercel's SDK handles this at the transport level. You get what you get. TanStack gives you explicit control.

Why naming things is impossible

I once worked on a project where we named every service after birds. Started with Sparrow. Then Hawk. Then we ran out of cool birds and ended up with Pigeon.

The database was called Nest. The cache was Feather. Someone suggested naming the logging service Poop. We almost did it. That's how bad naming gets when you're three months into a project.

TanStack avoids this by just calling everything TanStack. TanStack Query. TanStack Router. TanStack AI. Boring but smart. You always know what ecosystem you're in.

And when you google "TanStack", you find their stuff. Not some random bird facts. Unlike that time I googled "Sparrow service" and got pet store ads for an hour.

Who actually needs this

Let's be honest. Most people don't need to switch AI providers. You pick OpenAI or Anthropic. You stick with it. The flexibility TanStack AI offers is overkill for simple chat interfaces.

If you're building a basic chatbot, Vercel's AI SDK is more mature. It has 30+ providers. Image generation. Speech. MCP integration. TanStack AI launched last week. It has four providers.

This is for teams that hate lock-in. Teams that got burned by vendor pricing changes. Teams with polyglot backends. Teams building on TanStack Start who want tight ecosystem integration.

And teams that care about architecture more than features. The Vercel SDK is production-ready now. TanStack AI has cleaner patterns but needs time to mature.

If you're on Next.js deployed to Vercel, just use their SDK. It's optimized for that. TanStack AI is for everyone else.

The real competition

This isn't really about beating Vercel. It's about having a choice. For years, AI tooling meant picking a platform and living with their decisions. TanStack AI says you can build AI features without committing to someone's ecosystem.

The team is small. All volunteers. They're asking for feedback and contributions. Python and PHP packages aren't even on package managers yet. This is alpha in the truest sense.

But the architecture is right. Type-safe. Framework-agnostic. Language-flexible. These decisions don't change. Features get added over time. The foundation matters more.

One developer on Reddit said TanStack AI reminds them why they trust this team. They've been shipping framework-agnostic tools for years. Query, Router, Table, Form. All of them work everywhere. All of them age well.

Ending thoughts

I still think most AI chat features are unnecessary. Users don't want to talk to every app. But when you need AI, the tools should get out of your way.

TanStack AI does that. No platform to migrate to. No service fees. Just libraries that work with your stack. In alpha, sure. But built by people who've earned trust by shipping good tools for a decade.

The Reddit comments were mixed. Some developers are tired of AI hype. Others are rebuilding their apps with it. Both reactions make sense. But at least now there's a choice that doesn't lock you in.
