The AI-first SaaS stack
A production-leaning TypeScript stack for SaaS products where AI is the core feature, not a sprinkle. Built for teams that expect to hire a second engineer in the first six months.
- Next.js (Web Frameworks): Deepest React ecosystem with App Router and server components.
- Claude Opus 4.7 (AI Models, primary): Sharpest model for long-context and agentic reliability.
- GPT-5 (AI Models, secondary): Cost-effective fallback and routing target for cheap calls.
- Mastra (Agent Frameworks): TypeScript-native workflows, evals, and RAG.
- Vercel (Hosting): Native Next.js deployment with preview environments.
- Supabase (Databases, auth): Postgres plus auth and storage; a self-host path keeps the door open.
Why this combo
This is the stack for SaaS products where AI is the product. Next.js handles the app surface, because rich dashboards earn the React Server Components abstractions. Two-model routing (Opus for reliability-critical work, GPT-5 for cheap calls) keeps costs in check without a fallback library. Mastra is the orchestration layer where the actual AI work happens, and Supabase keeps the data plane simple.
This stack is built for teams shipping a SaaS product where AI does the work the customer is paying for — not a chat widget bolted onto a CRUD app. It assumes the team will hire a second engineer in the first six months and that the app surface will grow toward dashboard complexity.
Why this combination works
Next.js because the app surface earns it. When you’re building a real dashboard with nested layouts, server components, and seamless auth-gated routes, Next is the lowest-friction choice in the React ecosystem. Astro would fight you on this shape.
Two-model routing. Opus 4.7 for the reliability-critical work where the cost is justified. GPT-5 (or 5-mini) for high-volume cheap calls. No fallback library — just a tiny router in code that picks based on task category. The cost difference between Opus and GPT-5-mini at scale makes this routing one of the highest-leverage decisions in the stack.
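A tiny router really can be a single function. Here's a minimal sketch of the idea, assuming a handful of task categories; the model identifiers and category names below are illustrative placeholders, not the providers' actual model IDs.

```typescript
// Sketch of the two-model router described above.
// Task categories and model IDs are assumptions, not confirmed identifiers.

type TaskCategory = "agentic" | "extraction" | "summarize" | "classify";

interface ModelChoice {
  provider: "anthropic" | "openai";
  model: string;
}

// Categories where a failure is expensive route to the stronger model;
// everything else goes to the cheap high-volume model.
const RELIABILITY_CRITICAL: ReadonlySet<TaskCategory> = new Set([
  "agentic",
  "extraction",
]);

function pickModel(category: TaskCategory): ModelChoice {
  if (RELIABILITY_CRITICAL.has(category)) {
    return { provider: "anthropic", model: "claude-opus" }; // placeholder ID
  }
  return { provider: "openai", model: "gpt-5-mini" }; // placeholder ID
}

console.log(pickModel("agentic").provider); // prints "anthropic"
console.log(pickModel("classify").provider); // prints "openai"
```

The point of the `ReadonlySet` is that adding a new reliability-critical category is a one-line change, with no new dependency and nothing to configure.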
Mastra as the orchestration layer. Workflows, evals, RAG, and integrations — the things you’d otherwise build from scratch. Stays in TypeScript so the boundary between your app code and your AI code is zero friction.
Supabase keeps options open. Apache 2.0 means you can self-host if data residency requirements force it. Postgres at the core means you can move to Neon or a dedicated Postgres provider without rewriting your schema.
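One way to make that portability real in the app code is to keep data access behind a thin SQL boundary. This is a sketch under assumptions: the `SqlClient` interface, the `workspaces` table, and `getWorkspace` are hypothetical names, not part of Supabase's API.

```typescript
// Hypothetical thin query boundary: queries are plain Postgres SQL, so
// moving from Supabase to Neon means swapping the SqlClient implementation,
// not rewriting queries or schema.

interface SqlClient {
  query<T>(sql: string, params?: unknown[]): Promise<T[]>;
}

interface Workspace {
  id: string;
  name: string;
}

async function getWorkspace(
  db: SqlClient,
  id: string,
): Promise<Workspace | undefined> {
  const rows = await db.query<Workspace>(
    "select id, name from workspaces where id = $1",
    [id],
  );
  return rows[0];
}

// In-memory stand-in so the sketch runs without a database.
const fakeDb: SqlClient = {
  async query<T>(_sql: string, params?: unknown[]): Promise<T[]> {
    const all = [{ id: "w1", name: "Acme" }];
    return all.filter((w) => w.id === params?.[0]) as unknown as T[];
  },
};
```

In practice the real `SqlClient` would wrap whichever Postgres driver the provider ships; the queries themselves never change.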
What this stack pushes off to v2
Multi-tenant data isolation patterns. Background job queues (Inngest or self-hosted BullMQ when Mastra’s built-in scheduling doesn’t cut it). Observability beyond Vercel + Mastra’s built-ins. These are real but they’re not v1 problems.