Streams grow when records are appended to them. Streams are an “append-only” data structure: records are always added to the end, or tail, of the stream. When you append records, S2 responds with an acknowledgement only once your data is fully durable. The acknowledgement describes the position your records occupy in the stream, as well as the sequence number that will be assigned to the next appended record. For example:
{
  "start": { "seq_num": 42, "timestamp": 1713812735000 },
  "end":   { "seq_num": 44, "timestamp": 1713812735012 },
  "tail":  { "seq_num": 44, "timestamp": 1713812735012 }
}
  • start — position of the first record appended.
  • end — one past the last record appended (so end.seq_num - start.seq_num = number of records).
  • tail — current tail of the stream. Can exceed end if there are concurrent appends.
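Putting those fields together, a client can derive how many records landed and whether other writers appended concurrently. A minimal sketch in TypeScript, using the field names from the example acknowledgement above (the interface is illustrative, not the SDK's type):

```typescript
// Shape of an append acknowledgement, mirroring the example above.
interface StreamPosition {
  seq_num: number;
  timestamp: number;
}

interface AppendAck {
  start: StreamPosition;
  end: StreamPosition;
  tail: StreamPosition;
}

const ack: AppendAck = {
  start: { seq_num: 42, timestamp: 1713812735000 },
  end:   { seq_num: 44, timestamp: 1713812735012 },
  tail:  { seq_num: 44, timestamp: 1713812735012 },
};

// end is one past the last appended record, so the difference is the count.
const recordsAppended = ack.end.seq_num - ack.start.seq_num; // 2

// tail can exceed end when other writers appended after this batch.
const concurrentAppends = ack.tail.seq_num - ack.end.seq_num; // 0
```

Here `tail.seq_num` equals `end.seq_num`, so no other appends landed between this batch and the acknowledgement.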

Batches

The append API accepts batches of records. A single batch can contain up to 1000 records or 1 MiB of data. For payloads larger than the 1 MiB record size limit, the typical approach is to store the data externally (e.g. in object storage) and append a pointer to it as a record. You can also serialize large messages across multiple records — see this blog post for patterns and examples with the TypeScript SDK. Streams are rate-limited to 200 batches per second, per stream, so for high throughput, send batches closer to the size limit rather than many small ones.
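The pointer pattern mentioned above can be sketched as follows. The object-store and append calls here are in-memory stand-ins, not real S2 SDK or object-storage APIs; the pointer record's JSON shape is likewise an assumption, chosen for illustration:

```typescript
// In-memory stand-ins for object storage and the stream append call.
const objectStore = new Map<string, Uint8Array>();
const stream: Uint8Array[] = [];

const ONE_MIB = 1024 * 1024;

function putObject(key: string, data: Uint8Array): void {
  objectStore.set(key, data);
}

function appendRecord(body: Uint8Array): void {
  if (body.byteLength > ONE_MIB) throw new Error("record exceeds 1 MiB limit");
  stream.push(body);
}

let nextId = 0;
function appendLarge(payload: Uint8Array): void {
  if (payload.byteLength <= ONE_MIB) {
    appendRecord(payload); // small enough: append directly
    return;
  }
  // Too large for one record: store externally, append a small pointer.
  const key = `payloads/${nextId++}`;
  putObject(key, payload);
  const pointer = JSON.stringify({ kind: "pointer", key, size: payload.byteLength });
  appendRecord(new TextEncoder().encode(pointer));
}

appendLarge(new Uint8Array(16));          // appended directly
appendLarge(new Uint8Array(2 * ONE_MIB)); // stored externally, pointer appended
```

Readers then dereference pointer records by fetching the external object before processing, which keeps individual records small while supporting arbitrarily large payloads.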

Auto-batching and the Producer API

The SDKs provide a Producer API that handles batching automatically. You submit individual records, and the producer groups them into batches based on configurable thresholds (linger time, record count, byte size). See Tuning for details on batching parameters and session-level performance.
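To make the thresholds concrete, here is a minimal sketch of what such an auto-batcher does internally. The class and its parameter names are illustrative, not the SDK's actual Producer API; the linger timer is noted in a comment rather than implemented, to keep the sketch synchronous:

```typescript
// Illustrative auto-batcher: groups submitted records into batches bounded
// by a record count and a byte size, mirroring the S2 batch limits.
class Batcher {
  private pending: Uint8Array[] = [];
  private pendingBytes = 0;
  readonly flushed: Uint8Array[][] = [];

  constructor(
    private maxRecords = 1000,       // S2 limit: up to 1000 records per batch
    private maxBytes = 1024 * 1024,  // ...or up to 1 MiB of data
  ) {}

  submit(record: Uint8Array): void {
    // Flush first if adding this record would exceed the byte threshold.
    if (this.pendingBytes + record.byteLength > this.maxBytes) this.flush();
    this.pending.push(record);
    this.pendingBytes += record.byteLength;
    if (this.pending.length >= this.maxRecords) this.flush();
  }

  // A real producer also flushes when the linger timer fires and on close.
  flush(): void {
    if (this.pending.length === 0) return;
    this.flushed.push(this.pending); // stand-in for one append call
    this.pending = [];
    this.pendingBytes = 0;
  }
}

const b = new Batcher(3); // tiny record threshold to show the grouping
for (let i = 0; i < 7; i++) b.submit(new Uint8Array(10));
b.flush(); // on close: flush the remainder
// b.flushed now holds batches of 3, 3, and 1 records
```

Fuller batches amortize the per-batch rate limit, which is why larger thresholds generally yield higher throughput at the cost of added latency from lingering.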
