🚀 Beta Release: APIs may change

When Serverless Can't Keep Up with AI

Long responses. Real-time streaming.
What AI apps need, without a separate backend.

You Started Building AI

Next.js + LLM APIs. The fastest way to ship.
Until you hit the wall.

😊
Start

Next.js + OpenAI API

Perfect combo. Ship your AI app fast.

🤔
Then

Timeout errors start appearing

LLM responses take 30+ seconds. Serverless cuts you off at 10.

😟
Next

Streaming? WebSocket?

Real-time responses need persistent connections. Serverless says no.

😩
Finally

Do I need a separate backend?

A Python server? Go? Double the codebase, double the work, and AI-assisted coding gets harder...

The Serverless Limit

AI apps need long response times and real-time streaming.
Serverless wasn't built for this.

| What AI Apps Need | Serverless | Superfunction |
| --- | --- | --- |
| Long Response (30s+) | ✗ 10s timeout limit | ✓ No timeout limits |
| Real-time Streaming | ~ Limited SSE support | ✓ Full SSE & WebSocket |
| WebSocket | ✗ Not supported | ✓ Full bidirectional support |
| Connection Pooling | ✗ New connection per request | ✓ Persistent connection pool |
| Background Jobs | ✗ Not supported | ✓ Queue system with scheduling |
| Type Safety | ~ Manual sync needed | ✓ E2E auto-generated client |

Superfunction runs alongside Next.js in the same codebase: one project, two servers.
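The connection-pooling row is worth a concrete picture. A module-level pool only pays off when the process stays alive: serverless rebuilds it on every cold start, while a persistent Superfunction process creates it once and reuses it across requests. A minimal sketch with the `pg` driver (the driver choice, pool size, and env var name are illustrative assumptions, not Superfunction APIs):

db.ts
// This pool is created once per process and shared by every request.
// On serverless, each cold start re-runs this module and opens fresh
// connections; a long-lived server keeps them warm.
// (Illustrative sketch: `pg`, max, and DATABASE_URL are our assumptions.)
import { Pool } from 'pg';

export const pool = new Pool({
  connectionString: process.env.DATABASE_URL,
  max: 10, // cap concurrent Postgres connections per process
});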

Ship Your AI App

5 minutes to your first LLM-powered endpoint.

Terminal
# Create your AI app
$ npx spfn@alpha create my-ai-app
$ cd my-ai-app

# Start the backend
$ npm run spfn:dev

✅ Backend ready - no timeout limits
✅ WebSocket & SSE enabled
✅ Ready for LLM integration

Define → Router → Client

Type-safe from backend to frontend

1
Define
routes/*.ts
// 1. Define AI route
import { route } from '@spfn/core/route';
import { Type } from '@sinclair/typebox';
import OpenAI from 'openai';

const openai = new OpenAI();

export const chat = route
  .post('/chat')
  .input({
    body: Type.Object({
      message: Type.String()
    })
  })
  .handler(async (c) => {
    const { body } = await c.data();
    // No timeout - takes as long as needed
    return await openai.chat.completions.create({
      model: 'gpt-4',
      messages: [{ role: 'user', content: body.message }]
    });
  });
2
Router
router.ts
// 2. Add to router
import { defineRouter } from '@spfn/core/route';
import { chat } from './routes/chat';

export const appRouter = defineRouter({
  chat,
});

export type AppRouter = typeof appRouter;
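Step 3 below imports `api` from '@/lib/api-client', which the walkthrough doesn't show. A plausible sketch of that file, assuming a `createClient` helper parameterized by `AppRouter` (the helper name, import path, and options are our guesses, not confirmed @spfn/core API):

lib/api-client.ts
// Hypothetical client wiring: `createClient`, its import path, and the
// baseUrl option are illustrative assumptions, not documented API.
import { createClient } from '@spfn/core/client';
import type { AppRouter } from '../router';

// The AppRouter type parameter is what makes api.chat.call() and
// api.chat.stream() fully typed end to end.
export const api = createClient<AppRouter>({
  baseUrl: process.env.NEXT_PUBLIC_SPFN_URL ?? 'http://localhost:3001',
});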
3
Client
Next.js
// 3. Call from Next.js
import { api } from '@/lib/api-client';

// Fully typed, no timeout issues
const response = await api.chat.call({
  body: { message: 'Explain quantum computing' }
});

// Or stream the response
const stream = await api.chat.stream({
  body: { message: 'Write a story' }
});
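The step-1 handler buffers the whole completion before responding. To exercise the SSE column from the table above, the handler has to forward tokens as they arrive. A hedged sketch of a streaming variant: the `stream: true` call is the OpenAI SDK's real API, but the assumption that a Superfunction handler can return a web ReadableStream that gets forwarded as SSE is ours.

routes/chat-stream.ts
// Streaming variant of the step-1 route. The route builder calls mirror
// step 1; returning a ReadableStream as the response body is an assumption
// about Superfunction, not documented behavior.
import { route } from '@spfn/core/route';
import { Type } from '@sinclair/typebox';
import OpenAI from 'openai';

const openai = new OpenAI();

export const chatStream = route
  .post('/chat/stream')
  .input({
    body: Type.Object({
      message: Type.String()
    })
  })
  .handler(async (c) => {
    const { body } = await c.data();
    // stream: true makes the SDK yield token deltas as they arrive
    const completion = await openai.chat.completions.create({
      model: 'gpt-4',
      messages: [{ role: 'user', content: body.message }],
      stream: true
    });
    const encoder = new TextEncoder();
    return new ReadableStream({
      async start(controller) {
        for await (const chunk of completion) {
          const delta = chunk.choices[0]?.delta?.content ?? '';
          if (delta) controller.enqueue(encoder.encode(delta));
        }
        controller.close();
      }
    });
  });

On the client, the `api.chat.stream` call from step 3 would consume this; if it returns an async iterable, a plain `for await (const chunk of stream)` loop renders tokens as they land (the exact return shape is, again, an assumption).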

What You Can Build

AI applications that need long response times and real-time streaming

💬

AI Chatbots

Conversational AI with streaming responses. No timeout interruptions.

  • Customer support bots
  • AI assistants
  • Interactive agents
📝

AI Writing Tools

Content generation that takes time. Let the LLM think.

  • Blog generators
  • Copywriting tools
  • Translation apps
🔍

RAG Applications

Vector search + LLM combination. Complex queries, no rush.

  • Document Q&A
  • Knowledge bases
  • Semantic search
🎨

AI Creative Tools

Image generation APIs, video processing, and more.

  • DALL-E integration
  • Midjourney wrappers
  • Video AI tools