Sessions
For sustained throughput, use sessions instead of individual unary calls. Sessions maintain a persistent connection and enable pipelining: multiple batches in flight simultaneously, with an ordering guarantee.
- Append sessions pipeline batches with strict ordering. If any batch fails, subsequent batches won’t become durable. See SDK: Append Session.
- Read sessions stream records continuously and handle reconnection on transient failures. See SDK: Read Session.
This contrasts with individual append() calls: concurrent unary calls can also achieve high throughput, but the order in which they become durable is not guaranteed.
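The pipelining-with-ordering behavior can be sketched as a small state machine: batches are sent without waiting for the previous acknowledgment, acks arrive in submit order, and a failure fails everything behind it. This is a minimal illustrative sketch, not the SDK's actual implementation; the class and method names are hypothetical.

```python
from collections import deque

class AppendSession:
    """Hypothetical sketch of a pipelined append session.

    Batches are sent immediately (pipelining), acknowledgments arrive in
    strict submit order, and a failed batch fails all batches behind it.
    """

    def __init__(self, transport_send):
        self.send = transport_send   # sends one batch over the connection
        self.inflight = deque()      # batches submitted but not yet acked

    def submit(self, batch):
        self.send(batch)             # returns immediately: next batch can
        self.inflight.append(batch)  # go out before this one is acked

    def on_ack(self):
        # Acks match submit order, so the oldest in-flight batch is durable.
        return self.inflight.popleft()

    def on_error(self, err):
        # Ordering guarantee: once a batch fails, no later batch becomes
        # durable, so every remaining in-flight batch is failed too.
        failed, self.inflight = list(self.inflight), deque()
        return failed
```

For example, if three batches are in flight and the second fails, both the second and third are reported as failed, so the caller can retry them in order.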
Batching
Streams support up to 200 batches per second, so throughput is maximized by sending larger batches (up to 1000 records or 1 MiB per batch). The SDKs offer auto-batching utilities. Key parameters:

| Option | Default | Description |
|---|---|---|
| linger | 5ms | How long to wait for more records before flushing a partial batch |
| maxBatchRecords | 1000 | Flush when the batch hits this many records |
| maxBatchBytes | 1 MiB | Flush when the batch hits this size |
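The flush conditions above can be sketched as follows. This is an illustrative implementation of the batching logic, assuming a `Batcher` class and a flush callback that are hypothetical, not part of any SDK:

```python
import time

class Batcher:
    """Hypothetical auto-batcher: flushes when a batch hits the record
    limit, the byte limit, or when the linger timer expires."""

    def __init__(self, flush, linger=0.005, max_batch_records=1000,
                 max_batch_bytes=1024 * 1024):
        self.flush_fn = flush
        self.linger = linger
        self.max_batch_records = max_batch_records
        self.max_batch_bytes = max_batch_bytes
        self.batch = []
        self.batch_bytes = 0
        self.deadline = None

    def submit(self, record: bytes):
        # A record that would overflow the byte limit flushes the batch first.
        if self.batch and self.batch_bytes + len(record) > self.max_batch_bytes:
            self.flush()
        if not self.batch:
            # First record of a new batch starts the linger timer.
            self.deadline = time.monotonic() + self.linger
        self.batch.append(record)
        self.batch_bytes += len(record)
        if (len(self.batch) >= self.max_batch_records
                or self.batch_bytes >= self.max_batch_bytes):
            self.flush()

    def poll(self):
        # Call periodically: flushes a partial batch once linger expires.
        if self.batch and time.monotonic() >= self.deadline:
            self.flush()

    def flush(self):
        if self.batch:
            self.flush_fn(self.batch)
            self.batch, self.batch_bytes, self.deadline = [], 0, None
```

For instance, with `max_batch_records=3`, submitting seven records produces full batches of 3, 3, and (after a final flush or linger expiry) 1.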
Producer API
The Producer API wraps sessions and auto-batching into a record-oriented interface. You submit individual records and get back per-record tickets; the producer handles batching, pipelining, backpressure, and ordering transparently. Use the Producer API when:
- Records arrive one at a time (from HTTP requests, message queues, etc.)
- You want per-record durability confirmation
- You don’t want to manage batch boundaries yourself
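The per-record ticket pattern can be sketched with futures: each submitted record gets a future that resolves when the batch containing it is acknowledged. This is a hypothetical illustration of the pattern, not the SDK's Producer API; the names `Producer`, `submit`, and `append_batch` are assumptions.

```python
from concurrent.futures import Future

class Producer:
    """Hypothetical producer sketch: submit() returns a per-record ticket
    (a Future) that resolves once the batch holding the record is durable."""

    def __init__(self, append_batch, max_batch_records=1000):
        self.append_batch = append_batch   # e.g. an append session's send
        self.max_batch_records = max_batch_records
        self.pending = []                  # (record, ticket) pairs

    def submit(self, record):
        ticket = Future()
        self.pending.append((record, ticket))
        if len(self.pending) >= self.max_batch_records:
            self.flush()
        return ticket

    def flush(self):
        if not self.pending:
            return
        records = [r for r, _ in self.pending]
        # Blocks until durable in this simplified sketch; a real producer
        # would pipeline and resolve tickets asynchronously.
        ack = self.append_batch(records)
        for _, ticket in self.pending:
            ticket.set_result(ack)
        self.pending = []
```

The caller never sees batch boundaries: it submits one record, holds the ticket, and awaits durability confirmation for that record alone.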
Backpressure
Append sessions track how much data is in flight (submitted but not yet acknowledged). When you hit the configured limits, submit() blocks until capacity frees up.
| Option | Default | Description |
|---|---|---|
| maxInflightBytes | 5 MiB | Maximum unacknowledged bytes before blocking |
| maxInflightBatches | None | Optional limit on unacknowledged batches |
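The blocking behavior amounts to a byte-counting gate: submissions wait while unacknowledged bytes would exceed the limit, and acknowledgments free capacity. A minimal sketch, assuming a hypothetical `InflightGate` helper (not an SDK type):

```python
import threading

class InflightGate:
    """Hypothetical backpressure gate: acquire() blocks while adding the
    record's bytes would exceed max_inflight_bytes; ack() frees capacity."""

    def __init__(self, max_inflight_bytes=5 * 1024 * 1024):
        self.max_inflight_bytes = max_inflight_bytes
        self.inflight = 0
        self.cond = threading.Condition()

    def acquire(self, nbytes):
        # Called on submit: blocks until there is room for nbytes.
        with self.cond:
            while self.inflight + nbytes > self.max_inflight_bytes:
                self.cond.wait()
            self.inflight += nbytes

    def ack(self, nbytes):
        # Called when a batch is acknowledged: wakes blocked submitters.
        with self.cond:
            self.inflight -= nbytes
            self.cond.notify_all()
```

A `maxInflightBatches`-style limit would follow the same shape with a batch counter instead of a byte counter.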

