Overview
The broadcast pattern uses a single writer and many readers over an S2 stream. One process appends to the stream, and many read from it. Readers can follow updates in real time, or consume from historical prefixes of the stream – by sequence number, by offset from the latest message, or by timestamp.

Architecture
Common use cases include system monitoring, live event streams, and market data distribution.

Why S2 for broadcast?
S2’s granular access controls let you issue read-only tokens per user for fine-grained control, or have all consumers share a single token. Streams are accessible directly from the web via REST, so you don’t need to run any servers to maintain connections with consumers. Moreover, S2 supports high write throughput as well as massive read fanout, so you can offload a lot of scaling challenges. Messages are only transmitted after they’ve been sequenced and made durable on object storage, giving you deterministic ordering: you never need to worry about messages getting shuffled, or about consumers seeing something slightly different. The same read API seamlessly integrates historical data with real-time updates. You can read from any point to consume historical data, and once caught up to the current moment, the read transparently switches to following live updates.
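The ordering and catch-up guarantees described above can be sketched in miniature. This is a conceptual, in-process model only – the `Stream` class, `Record` type, and their methods are invented for illustration and are not the S2 API: a single writer appends records that the stream sequences, and any number of readers can replay from an arbitrary sequence number, all observing the same order.

```python
from dataclasses import dataclass, field

@dataclass
class Record:
    seq_num: int  # assigned by the stream at append time, not by the writer
    body: str

@dataclass
class Stream:
    # Hypothetical in-process stand-in for an S2 stream.
    records: list = field(default_factory=list)

    def append(self, body: str) -> int:
        # The stream assigns the sequence number, so every reader
        # observes the same deterministic order.
        rec = Record(seq_num=len(self.records), body=body)
        self.records.append(rec)
        return rec.seq_num

    def read_from(self, seq_num: int) -> list:
        # Replay everything at or after seq_num; a real reader would
        # then keep following live appends once caught up.
        return [r for r in self.records if r.seq_num >= seq_num]

stream = Stream()
for body in ("tick-1", "tick-2", "tick-3"):
    stream.append(body)

# Two independent readers, different starting points, identical suffix order.
reader_a = [r.body for r in stream.read_from(0)]
reader_b = [r.body for r in stream.read_from(1)]
print(reader_a)  # ['tick-1', 'tick-2', 'tick-3']
print(reader_b)  # ['tick-2', 'tick-3']
```

Because sequencing happens in exactly one place, fanning out to more readers never changes what any reader sees.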
Simple example
Here’s a simple demonstration of the core concept using the CLI. You’d typically use one of our SDKs to build real applications, but this shows the basic idea. Write a log stream, then issue a read-only access token – for example `I7oAAAAAAABo+9HGXPSurWahOECn5Q21Nf698JSaUh1nmYWG` – which can then be provided to your readers.
Read historically with timestamp bounds (as milliseconds since the Unix epoch):
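To make the timestamp-bounded read concrete, here is a conceptual sketch rather than the actual S2 CLI or API: each record carries a millisecond-since-epoch timestamp assigned at append time, and a bounded read returns only the records whose timestamps fall within the requested window, in stream order. The `read_between` helper and the record layout are invented for illustration.

```python
# Records as a stream would hold them: append order, with a
# millisecond-precision Unix timestamp assigned at append time.
records = [
    {"ts": 1_700_000_000_000, "body": "a"},
    {"ts": 1_700_000_001_000, "body": "b"},
    {"ts": 1_700_000_002_000, "body": "c"},
]

def read_between(records, start_ms, end_ms):
    """Return records with start_ms <= ts < end_ms, preserving stream order."""
    return [r for r in records if start_ms <= r["ts"] < end_ms]

window = read_between(records, 1_700_000_000_500, 1_700_000_002_000)
print([r["body"] for r in window])  # ['b']
```

Because timestamps are monotone along the stream, a real implementation can seek to the start bound rather than scanning every record.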

