S2 integrates with the Vercel AI SDK and can be used as a durable transport in multiple ways:

- `@s2-dev/resumable-stream/aisdk` — `useChat` streams can be made resumable, allowing clients to reconnect mid-generation. Completed messages can be kept on a per-session or per-conversation stream for replay and history.
- TypeScript SDK examples — agent sessions, chat persistence, and multi-agent patterns using the S2 TypeScript SDK directly.

For more control over resumability, the underlying `@s2-dev/resumable-stream` package exposes a generic `ReadableStream<string>` resumer that works with any text stream.
## AI SDK resumable streams

The `./aisdk` subpath makes AI SDK `useChat` streams resumable on S2. `makeResumable` tees the `UIMessageChunk` stream: one branch streams directly to the client as SSE, while the other persists to S2 for later replay.
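The tee split itself can be illustrated with the web-standard `ReadableStream.tee()`. This is a simplified sketch of the idea, not the library's internals — the real `makeResumable` also handles batching, leases, and trims, and `splitForPersistence` is a hypothetical name:

```typescript
// Sketch of the tee pattern: one branch for the client, one for storage.
function splitForPersistence<T>(
  source: ReadableStream<T>,
  persist: (chunk: T) => Promise<void>, // stand-in for an S2 append
): { toClient: ReadableStream<T>; persisted: Promise<void> } {
  const [toClient, toStore] = source.tee();
  // Drain the second branch into durable storage in the background.
  const persisted = (async () => {
    const reader = toStore.getReader();
    for (;;) {
      const { done, value } = await reader.read();
      if (done) break;
      await persist(value!);
    }
  })();
  return { toClient, persisted };
}
```

The `persisted` promise is the kind of background work you would hand to a `waitUntil`-style hook so persistence outlives the HTTP response.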
### Prerequisites

```shell
npm install @s2-dev/resumable-stream ai
```

Requires `ai` >= 5.0. Create an S2 access token and basin first:

- Sign up here, generate an access token, and set it as `S2_ACCESS_TOKEN` in your env.
- Create a basin from the Basins tab with **Create Stream on Append** enabled, and set it as `S2_BASIN` in your env.
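For local development, the two variables might live in a `.env` file (names from the steps above; the values are placeholders):

```shell
# .env — placeholder values
S2_ACCESS_TOKEN=<your-access-token>
S2_BASIN=<your-basin-name>
```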
### Setup

Create a resumable chat instance once with `createResumableChat` and share it across routes:

```typescript
// lib/s2.ts (assumed location, matching the `@/lib/s2` imports below)
import { createResumableChat } from '@s2-dev/resumable-stream/aisdk';

export const chat = createResumableChat({
  accessToken: process.env.S2_ACCESS_TOKEN!,
  basin: process.env.S2_BASIN!,
});
```
### Server: POST route

```typescript
// app/api/chat/route.ts
import { after } from 'next/server';
import { convertToModelMessages, streamText, type UIMessage } from 'ai';
import { openai } from '@ai-sdk/openai';
import { chat } from '@/lib/s2';

export async function POST(req: Request) {
  const { id, messages } = (await req.json()) as {
    id: string;
    messages: UIMessage[];
  };

  const result = streamText({
    model: openai('gpt-4o-mini'),
    messages: convertToModelMessages(messages),
  });

  return chat.makeResumable(`chat-${id}`, result.toUIMessageStream(), {
    // Keep persisting to S2 after the response has been returned.
    waitUntil: (p) => after(async () => { await p; }),
  });
}
```
### Server: GET route (reconnect)

```typescript
// app/api/chat/[id]/stream/route.ts
import { chat } from '@/lib/s2';

export async function GET(
  _req: Request,
  { params }: { params: Promise<{ id: string }> },
) {
  const { id } = await params;
  return chat.replay(`chat-${id}`);
}
```
The URL shape `${api}/${chatId}/stream` is the default that `DefaultChatTransport` reconnects to, so no transport customization is needed.
### Client

```tsx
'use client';
import { useChat } from '@ai-sdk/react';
import { DefaultChatTransport } from 'ai';

const transport = new DefaultChatTransport({ api: '/api/chat' });

export default function Chat() {
  const { messages, sendMessage } = useChat({ transport, resume: true });
  // ...
}
```
`resume: true` triggers `useChat`'s `reconnectToStream` on mount, which hits the GET route. If there's an in-flight generation, it tails it from S2; otherwise it no-ops.
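To make the reconnect concrete, here is a hypothetical hand-rolled equivalent of what tailing the replay endpoint amounts to. `useChat` does this for you, and `tailChat` is not a real API — it is only an illustration of consuming the GET route as a text stream:

```typescript
// Hypothetical manual reconnect: tail the replay endpoint as decoded text.
async function tailChat(
  chatId: string,
  onChunk: (text: string) => void,
): Promise<void> {
  const res = await fetch(`/api/chat/${chatId}/stream`);
  if (!res.body) return; // nothing in flight to resume
  const reader = res.body.pipeThrough(new TextDecoderStream()).getReader();
  for (;;) {
    const { done, value } = await reader.read();
    if (done) break;
    onChunk(value);
  }
}
```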
### Configuration

Relevant options on `createResumableChat`:

| option | default | what it controls |
|---|---|---|
| `streamReuse` | `"single-use"` | `"single-use"`: one S2 stream per generation, self-cleans via a final trim. `"shared"`: one S2 stream reused across generations, trimmed on each new claim. |
| `leaseDurationMs` | `5000` | Only for `"shared"` mode. Max pause within an active generation before a new claim can take it over. |
| `onError` | generic message | Maps upstream errors to the `errorText` shown to the client. Default emits `"An error occurred."`; provide your own to sanitize or forward details. |
| `batchSize` / `lingerDuration` | `10` / `50ms` | S2 append batching knobs. |
Pair `streamReuse: "single-use"` with a delete-on-empty policy on your basin or streams. Each completed generation ends with a trim that empties the stream, so with delete-on-empty configured the spent streams get garbage-collected automatically instead of accumulating under your retention policy.
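For comparison, a shared-mode setup might look like the sketch below. The option names come from the table above, but the values are illustrative, and whether `lingerDuration` takes a plain millisecond number is an assumption:

```typescript
import { createResumableChat } from '@s2-dev/resumable-stream/aisdk';

// Shared mode: one S2 stream per conversation, reused across generations.
export const chat = createResumableChat({
  accessToken: process.env.S2_ACCESS_TOKEN!,
  basin: process.env.S2_BASIN!,
  streamReuse: 'shared',
  // Allow up to 10s of pause in an active generation before takeover.
  leaseDurationMs: 10_000,
  // Append batching: up to 20 records, flushed at least every 100ms.
  batchSize: 20,
  lingerDuration: 100,
});
```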
### End-to-end demo

A runnable Bun server + vanilla-JS client demonstrating the full flow, including transcript persistence and mid-generation refresh recovery, lives in the SDK repo: `examples/ai-sdk-resumable-chat`.
## Examples
The S2 TypeScript SDK includes examples that pair S2 with the Vercel AI SDK for common agent and chat patterns.
### Agent Session
A stream-per-run pattern that creates an ordered audit trail of everything an agent does — tool calls, LLM responses, and state changes — all on a single S2 stream.
Source
### Chat Persistence
Multi-turn conversation persistence backed by S2. Each chat gets its own stream, giving you a durable, replayable message history.
Source
### Dinner Party (Multi-Agent)
Multi-agent coordination where agents communicate over a shared S2 stream (the “bus”) while maintaining per-agent memory streams. Demonstrates how S2’s ordered log naturally solves message ordering in multi-agent systems.
Source