The pattern here is one stream per task, job, or session. A producer appends events as they happen, and consumers read them in real time. The same read API serves both historical and live data: when a reader catches up to the tail, the connection stays open and new records arrive as they are appended.
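The catch-up-then-tail behavior can be sketched with a minimal in-memory model (this simulates the semantics described above, not the S2 API): appends and reads share one log, and a reader that reaches the tail blocks until new records arrive.

```python
import threading

class Stream:
    """In-memory stand-in for one per-job stream."""

    def __init__(self):
        self._records = []
        self._cond = threading.Condition()

    def append(self, record: str) -> None:
        with self._cond:
            self._records.append(record)
            self._cond.notify_all()  # wake any tailing readers

    def read(self, start: int = 0):
        """Yield records from `start`; at the tail, block for new appends."""
        offset = start
        while True:
            with self._cond:
                while offset >= len(self._records):
                    self._cond.wait()
                record = self._records[offset]
            offset += 1
            yield record

# One stream per job: history and live tailing use the same read call.
stream = Stream()
stream.append("Step 1: Installing dependencies...")
reader = stream.read()
print(next(reader))  # catches up on the historical record
stream.append("Step 2: Running tests...")
print(next(reader))  # receives the live append
```

A real stream store persists the log and multiplexes many readers, but the contract a consumer sees is the same: one call covers both replay and tail.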

Architecture

Serverless function logs

If you run sandboxed execution environments — CI/CD runners, browsers-as-a-service, coding sandboxes — your customers likely want to see what is happening in real time. Store each job's output in its own stream. When a user kicks off a job, create a read-only access token scoped to that stream and return it alongside the job ID. The executor appends logs directly; the customer reads and tails them live. See the Browser Infra demo for a live example of this pattern: click into tasks to watch their progress in real time.
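The job-creation flow above can be sketched as follows. `issue_read_token` is a hypothetical stand-in for whatever scoped-token mechanism your stream store provides (it is stubbed here with an HMAC so the example runs); the `user-123/job-...` stream naming mirrors the CLI example below.

```python
import hashlib
import hmac
import uuid

SIGNING_KEY = b"server-side-secret"  # assumption: key held only by your backend

def issue_read_token(stream: str) -> str:
    # Hypothetical: a read-only token scoped to exactly one stream.
    sig = hmac.new(SIGNING_KEY, f"read:{stream}".encode(), hashlib.sha256).hexdigest()
    return f"read:{stream}:{sig}"

def start_job(user_id: str) -> dict:
    job_id = f"job-{uuid.uuid4().hex[:8]}"
    stream = f"{user_id}/{job_id}"
    # The executor appends to `stream`; the customer tails it with the token,
    # which grants read access to this one stream and nothing else.
    return {"job_id": job_id, "stream": stream, "read_token": issue_read_token(stream)}

job = start_job("user-123")
print(job["job_id"], job["stream"])
```

The key design point is scoping: because each job gets its own stream, a per-stream read token is all the authorization a customer needs, and it cannot leak another job's output.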

Build log streaming

Same pattern applied to build pipelines. Stream build output to users as it happens. Delete-on-empty can clean up streams automatically once they’ve been fully consumed and are no longer needed.

Getting started

# Create a basin with auto-stream creation
s2 create-basin my-jobs --create-stream-on-append

# Executor appends logs as the job runs
echo "Step 1: Installing dependencies..." \
    | s2 append s2://my-jobs/user-123/job-456

echo "Step 2: Running tests..." \
    | s2 append s2://my-jobs/user-123/job-456

# User tails the stream live
s2 read s2://my-jobs/user-123/job-456
In production, you’d issue a scoped access token for the user and have them read via SSE or an SDK.
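On the client side, consuming an SSE feed means splitting the `text/event-stream` body into events and extracting each `data:` payload. The sketch below parses a generic SSE payload; it illustrates the transport, not S2's exact wire format, and the sample body is made up.

```python
def parse_sse(body: str):
    """Yield the `data:` payload of each event in a text/event-stream body."""
    data_lines = []
    for line in body.splitlines():
        if line.startswith("data:"):
            data_lines.append(line[5:].lstrip())
        elif line == "" and data_lines:
            # A blank line terminates an event; multi-line data joins with \n.
            yield "\n".join(data_lines)
            data_lines = []

# Example payload a tailing client might receive (illustrative only).
body = (
    "data: Step 1: Installing dependencies...\n\n"
    "data: Step 2: Running tests...\n\n"
)
for record in parse_sse(body):
    print(record)
```

In a live tail the body never ends: the client holds the connection open and processes events as they arrive, which is exactly the catch-up-then-tail read semantics described at the top of this page.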