Reliability and developer experience are constant areas of focus for us.

Beyond that, here is what we are working on and exploring.

In Progress

Granular authentication

Currently, access tokens generated from the S2 dashboard give root-level access. You will be able to issue access tokens scoped to basin(s) and stream(s) (exact or name prefix) with specific permissions. The goal is to enable fine-grained authorization, as well as direct access from your clients without a reverse proxy.
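To illustrate the kind of semantics we have in mind, here is a toy sketch in Python of checking a prefix-scoped token. The scope shape, `*` suffix for prefixes, and permission names are all hypothetical, not the actual S2 token format.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class TokenScope:
    basin: str                    # exact basin name, or a prefix ending in "*"
    stream: str                   # exact stream name, or a prefix ending in "*"
    permissions: frozenset

def matches(pattern: str, name: str) -> bool:
    """Exact match, or prefix match when the pattern ends in '*'."""
    if pattern.endswith("*"):
        return name.startswith(pattern[:-1])
    return name == pattern

def allows(scope: TokenScope, basin: str, stream: str, op: str) -> bool:
    """True if the scope permits `op` on this basin/stream pair."""
    return (op in scope.permissions
            and matches(scope.basin, basin)
            and matches(scope.stream, stream))

scope = TokenScope(basin="prod-*", stream="orders/*",
                   permissions=frozenset({"read"}))
allows(scope, "prod-eu", "orders/2024", "read")    # True
allows(scope, "prod-eu", "orders/2024", "append")  # False
```

A token carrying such a scope could then be presented by a client directly, with the edge service enforcing the check instead of a reverse proxy.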

Visibility into usage and availability

We are building metering infrastructure, which you will experience as better visibility into your usage and the service's availability.

Pub/Sub

Stay tuned!

Planned

Usage limits per access token

Time-windowed limits on how much usage a specific access token can accrue in a region.
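The enforcement mechanism and units are not decided here, but a sliding-window counter captures the idea: usage events age out of the window, and a request is rejected when admitting it would exceed the limit. A minimal sketch, with all names illustrative:

```python
from collections import deque

class WindowedLimit:
    """Allow at most `max_units` of usage within any trailing window."""

    def __init__(self, max_units: int, window_seconds: float):
        self.max_units = max_units
        self.window = window_seconds
        self.events = deque()   # (timestamp, units) pairs, oldest first
        self.total = 0

    def try_consume(self, units: int, now: float) -> bool:
        # Drop usage that has aged out of the trailing window.
        while self.events and self.events[0][0] <= now - self.window:
            _, old_units = self.events.popleft()
            self.total -= old_units
        if self.total + units > self.max_units:
            return False
        self.events.append((now, units))
        self.total += units
        return True

limit = WindowedLimit(max_units=100, window_seconds=60)
limit.try_consume(60, now=0.0)    # True: 60 of 100 used
limit.try_consume(60, now=1.0)    # False: would exceed the window's budget
limit.try_consume(60, now=61.0)   # True: the first event has aged out
```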

Emulator

An open-source, in-memory emulator of the S2 API that can be easily used for integration testing. We will make Docker images available, and Rust applications will also be able to embed the emulator directly.
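The spirit of an in-memory emulator, far simpler than the real API surface, is a stream you can append to and read from by sequence number entirely in-process. A toy Python sketch (the actual emulator will be Rust and will mirror the S2 API, which this does not):

```python
class InMemoryStream:
    """Toy in-memory stream: records are addressed by sequence number."""

    def __init__(self):
        self.records = []   # index in this list == sequence number

    def append(self, *bodies: bytes) -> int:
        """Append one or more records; return the first assigned seq num."""
        start = len(self.records)
        self.records.extend(bodies)
        return start

    def read(self, start_seq: int, limit: int = 100):
        """Read up to `limit` records starting at `start_seq`."""
        end = min(start_seq + limit, len(self.records))
        return [(seq, self.records[seq]) for seq in range(start_seq, end)]

stream = InMemoryStream()
stream.append(b"a", b"b")   # returns 0
stream.append(b"c")         # returns 2
stream.read(1)              # [(1, b"b"), (2, b"c")]
```

An integration test can exercise application logic against such an object without any network or Docker dependency.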

Stream encryption

Authenticated encryption of records at the edge service of S2 with a stream-specific encryption key.
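One common way to obtain a stream-specific key is to derive it from a master key using HMAC-SHA-256 as a pseudorandom function, keyed by the basin and stream names. This is purely illustrative; S2 has not published its key-derivation or encryption scheme:

```python
import hashlib
import hmac

def stream_key(master_key: bytes, basin: str, stream: str) -> bytes:
    """Derive a 32-byte stream-specific key from a master key.

    Illustrative HMAC-based derivation; not S2's actual scheme.
    """
    context = f"{basin}/{stream}".encode()
    return hmac.new(master_key, context, hashlib.sha256).digest()

key = stream_key(b"master-secret", "my-basin", "my-stream")
len(key)   # 32
```

The derived key would then feed an AEAD cipher (e.g. AES-GCM or ChaCha20-Poly1305) so each record is both encrypted and integrity-protected at the edge.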

Reading from a timestamp

S2 added support for monotonic record timestamps. We want to allow users to read from a timestamp, in addition to the existing support for logical time, i.e. sequence numbers.
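Because timestamps are monotonic within a stream, finding the first record at or after a target timestamp reduces to a binary search over the stream, which then yields a sequence number to read from. A sketch of the client-visible semantics (not the server implementation):

```python
import bisect

def seq_for_timestamp(timestamps, target):
    """Sequence number of the first record with timestamp >= target.

    `timestamps` holds one monotonic timestamp per sequence number.
    """
    return bisect.bisect_left(timestamps, target)

ts = [100, 100, 105, 230, 231]   # timestamp of each record, by seq num
seq_for_timestamp(ts, 105)  # -> 2
seq_for_timestamp(ts, 200)  # -> 3: first record after the gap
```

A result equal to the stream length would mean the timestamp is past the tail, i.e. start reading from the next record to arrive.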

Key-based compaction

Currently, you can configure age-based record retention, or trim a stream explicitly. As an alternative, you will be able to configure the name of a header whose value represents the compaction key for a record. This will provide infinite retention with automatic cleanup of older records for a key, inspired by Kafka's semantics for log compaction.
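The essence of compaction is: for each distinct value of the configured header, keep only the most recent record. A toy Python sketch, with the header name "compaction-key" and the record shape purely illustrative:

```python
def compact(records):
    """Keep the last record per compaction key, preserving stream order.

    Each record is a dict with "headers" and "body"; illustrative only.
    """
    latest = {}   # compaction key -> index of the newest record for it
    for seq, rec in enumerate(records):
        latest[rec["headers"]["compaction-key"]] = seq
    return [records[seq] for seq in sorted(latest.values())]

records = [
    {"headers": {"compaction-key": b"user-1"}, "body": b"v1"},
    {"headers": {"compaction-key": b"user-2"}, "body": b"v1"},
    {"headers": {"compaction-key": b"user-1"}, "body": b"v2"},
]
compact(records)   # keeps user-2's v1 and user-1's v2, in stream order
```

Real compaction runs lazily over old segments rather than rewriting the whole stream, but the retained set is the same.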

Dedicated cells

S2 has a cellular architecture, and by default basins are placed in a multi-tenant cell. We will add support for dedicated cells that can only be used by a specific account.

Exploring

Let us know if you are interested in anything here (or something not noted!) so we can prioritize, and potentially partner with you.

  • S2 cell in your cloud account.
  • Basin-level stream of all stream lifecycle events.
  • Continuous export to your object storage bucket in an open format.

Later

We intend to get there, but cannot yet dive into:

  • Kafka wire-compatibility.
  • Clouds other than AWS.
  • Basins that can span multiple regions or clouds.
  • Native storage class — under 5 milliseconds to read or append.