Reliability and performance are constant areas of focus for us. In the coming months, you can expect a public status page that tracks how we are doing against these objectives.

In Progress

Stream processing connectors

UI for basins and streams

Easily manage your basins and streams from the S2 dashboard.

Compression

Support for zstd and gzip compression in the S2 API and SDKs.
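
As a rough sketch of how this might surface over HTTP, assuming standard content negotiation (the endpoint URL, payload shape, and zstd opt-in header below are illustrative assumptions, not the final design):

```rust
// Hypothetical sketch: zstd-compressing an append request over the S2 REST API.
// The URL, payload shape, and header-based opt-in are assumptions for illustration.
use reqwest::header::{CONTENT_ENCODING, CONTENT_TYPE};

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let body = serde_json::json!({ "records": [{ "body": "aGVsbG8sIHdvcmxk" }] });
    // Compress the JSON payload with zstd (level 3) before sending.
    let compressed = zstd::encode_all(serde_json::to_vec(&body)?.as_slice(), 3)?;

    reqwest::Client::new()
        .post("https://example-basin.b.aws.s2.dev/v1/streams/my-stream/records") // hypothetical URL
        .header(CONTENT_TYPE, "application/json")
        .header(CONTENT_ENCODING, "zstd") // opt in to compressed request bodies
        .bearer_auth(std::env::var("S2_ACCESS_TOKEN")?)
        .body(compressed)
        .send()
        .await?
        .error_for_status()?;
    Ok(())
}
```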

Session heartbeats

Improved robustness of append and read sessions with support for application-level heartbeats.
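
To make the failure mode concrete, here is a minimal sketch of how a client could treat heartbeats as a liveness signal on a long-lived session; the Frame type and the 10-second window are illustrative assumptions, not S2's protocol:

```rust
// Conceptual sketch: heartbeats as a liveness signal on a long-lived read session.
// The Frame type and 10-second window are illustrative assumptions.
use std::time::Duration;
use tokio::sync::mpsc;

enum Frame {
    Record(Vec<u8>),
    Heartbeat,
}

async fn consume(mut rx: mpsc::Receiver<Frame>) {
    loop {
        // If neither a record nor a heartbeat arrives within the window,
        // treat the session as dead and let the caller reconnect.
        match tokio::time::timeout(Duration::from_secs(10), rx.recv()).await {
            Ok(Some(Frame::Record(body))) => println!("record: {} bytes", body.len()),
            Ok(Some(Frame::Heartbeat)) => {} // keeps the session alive; nothing to process
            Ok(None) => break,               // session closed cleanly
            Err(_) => {
                eprintln!("no traffic within liveness window; reconnecting");
                break;
            }
        }
    }
}

#[tokio::main]
async fn main() {
    let (tx, rx) = mpsc::channel(16);
    tx.send(Frame::Heartbeat).await.unwrap();
    tx.send(Frame::Record(b"hello".to_vec())).await.unwrap();
    drop(tx); // closing the sender ends the session cleanly
    consume(rx).await;
}
```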

Timestamps

Records will support a client- or service-assigned timestamp. You will be able to read from a stream starting at a particular timestamp, in addition to the existing support for logical time, i.e. sequence numbers.
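
As an illustration of the two addressing modes (the type and field names are assumptions, not the SDK's API):

```rust
// Illustrative only: modeling a read's start position once timestamps land,
// alongside today's sequence-number (logical time) addressing.
enum ReadStart {
    /// Existing: logical time, i.e. a sequence number.
    SeqNum(u64),
    /// Planned: first record whose timestamp is at or after this value
    /// (e.g. milliseconds since the Unix epoch).
    Timestamp(u64),
}

fn main() {
    for start in [ReadStart::SeqNum(42), ReadStart::Timestamp(1_700_000_000_000)] {
        match start {
            ReadStart::SeqNum(n) => println!("read records with seq_num >= {n}"),
            ReadStart::Timestamp(ts) => println!("read records with timestamp >= {ts} ms"),
        }
    }
}
```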

Planned

Emulator

An open-source, in-memory emulator of the S2 API that can easily be used for integration testing. A pre-built Docker image for the emulator will work with tools like Testcontainers, and Rust applications will also be able to embed it directly.
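
For instance, a test might spin up the emulator with the Rust testcontainers crate. This sketch assumes the 0.15-era blocking API, and the image name and port are placeholders since the image does not exist yet:

```rust
// Sketch of integration testing against the emulator via testcontainers
// (Rust crate, 0.15-era API). The image name "s2/emulator" and the port
// are placeholders, not a published artifact.
use testcontainers::{clients::Cli, GenericImage};

#[test]
fn app_logic_against_emulator() {
    let docker = Cli::default();
    let container =
        docker.run(GenericImage::new("s2/emulator", "latest").with_exposed_port(4243));
    let endpoint = format!("http://localhost:{}", container.get_host_port_ipv4(4243));
    // Point your S2 client at `endpoint` and exercise application code here.
    assert!(endpoint.starts_with("http://"));
}
```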

Stream encryption

Authenticated encryption of records at S2's edge service, using a stream-specific encryption key.
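
Conceptually this is standard AEAD. A minimal sketch with the aes-gcm crate models the semantics only; S2's actual key management and implementation will differ:

```rust
// Minimal AEAD sketch with the `aes-gcm` crate (v0.10 API): each stream gets
// its own key, and records are encrypted and authenticated before storage.
// This models the semantics only, not S2's implementation.
use aes_gcm::aead::{Aead, AeadCore, KeyInit, OsRng};
use aes_gcm::Aes256Gcm;

fn main() {
    // In S2 this key would be stream-specific and held by the edge service.
    let key = Aes256Gcm::generate_key(OsRng);
    let cipher = Aes256Gcm::new(&key);

    // A fresh nonce per record; AES-GCM authenticates as well as encrypts.
    let nonce = Aes256Gcm::generate_nonce(&mut OsRng);
    let ciphertext = cipher.encrypt(&nonce, b"record body".as_ref()).unwrap();

    // Decryption fails if the ciphertext or nonce has been tampered with.
    let plaintext = cipher.decrypt(&nonce, ciphertext.as_ref()).unwrap();
    assert_eq!(plaintext, b"record body");
}
```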

Visibility into usage and availability

Dashboard enhancements to surface historical and real-time usage, as well as availability metrics for API requests.

Stream lifecycle automation

  • Create-on-append: implicitly create a stream on its first append, using the basin's default stream config.
  • Delete-on-empty: delete a stream once its creation time is older than a threshold and it has no retained records (both knobs are sketched below).
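
Sketched below is one shape these knobs could take in basin configuration; every name here is an assumption rather than S2's actual API:

```rust
// Illustrative shape of lifecycle automation in basin config; all names
// are assumptions for the sake of the sketch.
use std::time::Duration;

struct StreamConfig {
    retention_age: Option<Duration>,
}

struct BasinConfig {
    /// Applied when a stream is created implicitly by create-on-append.
    default_stream_config: StreamConfig,
    /// Create a stream automatically on its first append.
    create_stream_on_append: bool,
    /// Delete a stream once it is older than this threshold and has no
    /// retained records.
    delete_stream_on_empty_after: Option<Duration>,
}

fn main() {
    let config = BasinConfig {
        default_stream_config: StreamConfig {
            retention_age: Some(Duration::from_secs(7 * 24 * 3600)),
        },
        create_stream_on_append: true,
        delete_stream_on_empty_after: Some(Duration::from_secs(24 * 3600)),
    };
    println!("create-on-append: {}", config.create_stream_on_append);
    println!(
        "delete-on-empty after: {:?} (given no retained records)",
        config.delete_stream_on_empty_after
    );
    let _ = config.default_stream_config.retention_age;
}
```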

Stream lifecycle events

Basin-level stream of all stream lifecycle events, i.e. CreateStream, ReconfigureStream, and DeleteStream.

Key-based compaction

Currently, you can configure age-based record retention or trim a stream explicitly. As an alternative, you will be able to configure the name of a header whose value represents the key for compaction. This will provide effectively infinite retention with automatic cleanup of older records for each key, inspired by Kafka's log compaction semantics.
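
A minimal sketch of the semantics, not S2's implementation: among records sharing a compaction key, only the most recent survives.

```rust
use std::collections::HashMap;

// Model of key-based compaction: for each compaction key, keep only the
// latest record; records in between are eligible for cleanup.
fn compact(records: Vec<(String, String)>) -> Vec<(String, String)> {
    let mut latest: HashMap<String, usize> = HashMap::new();
    for (i, (key, _)) in records.iter().enumerate() {
        latest.insert(key.clone(), i); // later records win
    }
    records
        .into_iter()
        .enumerate()
        .filter(|(i, (key, _))| latest[key] == *i)
        .map(|(_, rec)| rec)
        .collect()
}

fn main() {
    let records = vec![
        ("user:1".into(), "v1".into()),
        ("user:2".into(), "v1".into()),
        ("user:1".into(), "v2".into()),
    ];
    // After compaction, only user:1 => v2 and user:2 => v1 remain.
    for (key, value) in compact(records) {
        println!("{key} => {value}");
    }
}
```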

Dedicated cells

S2 has a cellular architecture, and by default basins are placed in a multi-tenant cell. We will add support for dedicated cells that can only be used by a specific account.

Exploring

Let us know if you are interested in anything listed here (or anything we have not noted!) so we can prioritize it and potentially partner with you.

  • Fine-grained authorization, e.g. role-based access control.
  • Delegated authentication, e.g. OpenID Connect, or pre-signed URLs.
  • Continuous export to your object storage bucket in an open format.

Later

We intend to get there, but cannot dive into these just yet:

  • Kafka wire-compatibility.
  • Clouds other than AWS.
  • Basins that can span multiple regions or clouds.
  • Native storage class: under 5 milliseconds to read or append.