Overview

The broadcast pattern uses a single writer and many readers over an S2 stream: one process appends to the stream, and many read from it. Readers can follow updates in real time, or consume historical prefixes of the stream – by sequence number, offset from the latest message, or timestamp.
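
The example later on this page reads by timestamp; starting by sequence number or by offset from the latest message looks similar. A sketch only – the flag names below are assumptions, so verify them against s2 read --help:
# start from an absolute sequence number (flag name assumed)
s2 read s2://my-basin/prod/logs --seq-num 1000
# start 100 messages before the latest (flag name assumed)
s2 read s2://my-basin/prod/logs --tail-offset 100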

Architecture

Common use cases include system monitoring, live event streams, and market data distribution.

Why S2 for broadcast?

S2’s granular access controls let you issue read-only tokens per user for fine-grained control, or have all consumers share a single token. Streams are accessible directly from the web via REST, so you don’t need to run any servers to maintain connections with consumers. Moreover, S2 supports high write throughput as well as massive read fanout, so you can offload much of the scaling burden. Messages are only transmitted after they’ve been sequenced and made durable on object storage, giving you deterministic ordering: you never need to worry about messages getting shuffled, or about consumers seeing something slightly different. Finally, the same read API seamlessly integrates historical data with real-time updates. You can read from any point to consume historical data, and once the read catches up to the current moment, it transparently switches to following live updates.
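
For instance, a consumer holding a read-only token could fetch records over plain HTTPS with curl. This is a sketch only: the basin endpoint and query parameter below are assumptions about the REST API’s shape, so check the REST reference for the exact URL format:
# endpoint shape and query parameter are assumed, not verbatim from the docs;
# stream names containing "/" may need URL-encoding
curl -s "https://my-basin.b.aws.s2.dev/v1/streams/prod%2Flogs/records?timestamp=1761333455729" \
  -H "Authorization: Bearer $S2_ACCESS_TOKEN"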

Simple example

Here’s a simple demonstration of the core concept using the CLI. You’d typically use one of our SDKs to build real applications, but this shows the basic idea. Append a live log feed to a stream:
tail -f /var/log/app.log | s2 append s2://my-basin/prod/logs
Now this feed is available for any number of readers to consume. Create a read-only token scoped to this stream:
s2 issue-access-token \
  --id public-ro-logs \
  --op-groups="stream=r" \
  --basins="my-basin" \
  --streams="prod/logs"
This should produce a token, like I7oAAAAAAABo+9HGXPSurWahOECn5Q21Nf698JSaUh1nmYWG, which can then be handed out to your readers.

Read historical records with timestamp bounds (milliseconds since the Unix epoch):
# use the read-only token
export S2_ACCESS_TOKEN="I7oAAAAAAABo+9HGXPSurWahOECn5Q21Nf698JSaUh1nmYWG"
s2 read s2://my-basin/prod/logs \
  --timestamp 1761333455729 \
  --until 1761333456740
Tail the stream in real time:
# use the read-only token
export S2_ACCESS_TOKEN="I7oAAAAAAABo+9HGXPSurWahOECn5Q21Nf698JSaUh1nmYWG"
s2 read s2://my-basin/prod/logs
Multiple readers can consume the same stream simultaneously, each at their own pace and from their own starting position.
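
For example, two consumers can run side by side without coordinating – one replaying history while another follows the live tail, reusing only the commands shown above:
# reader A, in one shell: start from a historical timestamp
s2 read s2://my-basin/prod/logs --timestamp 1761333455729
# reader B, in another shell: follow live updates
s2 read s2://my-basin/prod/logs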