In Progress

v1 REST API

We will share more as it shapes up.

Maturity

Moving towards Beta status with transparent SLAs; currently the service is in public Preview.

Visibility into usage and availability

Metering infrastructure, which you will experience as better visibility.

Planned

Automate stream deletion

A stream configuration knob to automatically delete a stream if it has not held any records for some period of time. This is a natural counterpart to automatically creating a stream on append or read.
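
As a rough sketch of what such a knob could look like (the struct and field names below are illustrative assumptions, not the final API):

```rust
use std::time::Duration;

/// Hypothetical stream configuration; the field names are illustrative
/// assumptions, not S2's final API.
#[derive(Debug)]
struct StreamConfig {
    /// Age-based retention, as configurable today.
    retention_age: Option<Duration>,
    /// Planned knob: delete the stream once it has held no records
    /// for this long; `None` means never auto-delete.
    delete_when_empty_for: Option<Duration>,
}

fn main() {
    let config = StreamConfig {
        retention_age: Some(Duration::from_secs(7 * 24 * 3600)),     // 7 days
        delete_when_empty_for: Some(Duration::from_secs(24 * 3600)), // 1 idle day
    };
    println!("{config:?}");
}
```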

Stream encryption

Authenticated encryption of records at S2's edge service, with a stream-specific encryption key.
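
S2 has not published the details of this scheme, but as a general illustration of authenticated encryption with a per-stream key, here is a minimal sketch using the `aes-gcm` crate; everything beyond the AEAD pattern itself is an assumption:

```rust
// [dependencies] aes-gcm = "0.10"
use aes_gcm::{
    aead::{Aead, AeadCore, KeyInit, OsRng},
    Aes256Gcm,
};

fn main() -> Result<(), aes_gcm::Error> {
    // One key per stream (illustrative; S2 would manage keys server-side).
    let stream_key = Aes256Gcm::generate_key(OsRng);
    let cipher = Aes256Gcm::new(&stream_key);

    // AES-GCM is an AEAD: the ciphertext carries an authentication tag,
    // so tampering is detected on decryption.
    let nonce = Aes256Gcm::generate_nonce(&mut OsRng); // 96 bits, unique per record
    let record_body = b"hello, stream".as_ref();
    let sealed = cipher.encrypt(&nonce, record_body)?;
    let opened = cipher.decrypt(&nonce, sealed.as_ref())?;
    assert_eq!(opened, record_body);
    Ok(())
}
```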

Usage limits per access token

Time-windowed limits on how much usage a specific access token can accrue in a region.
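
A minimal sketch of what time-windowed accounting can look like, assuming a fixed window and abstract "usage units" (S2's actual limits, units, and enforcement are not specified here):

```rust
use std::collections::HashMap;
use std::time::{Duration, SystemTime, UNIX_EPOCH};

/// Hypothetical fixed-window usage limiter per access token.
struct UsageLimiter {
    window: Duration,
    max_units_per_window: u64,
    // (token, window index) -> units accrued in that window
    counters: HashMap<(String, u64), u64>,
}

impl UsageLimiter {
    fn try_consume(&mut self, token: &str, units: u64) -> bool {
        let now = SystemTime::now().duration_since(UNIX_EPOCH).unwrap();
        let window_idx = now.as_secs() / self.window.as_secs();
        let used = self.counters.entry((token.to_string(), window_idx)).or_insert(0);
        if *used + units > self.max_units_per_window {
            false // over the limit for this window: reject the request
        } else {
            *used += units;
            true
        }
    }
}

fn main() {
    let mut limiter = UsageLimiter {
        window: Duration::from_secs(3600),
        max_units_per_window: 1_000,
        counters: HashMap::new(),
    };
    assert!(limiter.try_consume("token-a", 400));
    assert!(limiter.try_consume("token-a", 500));
    assert!(!limiter.try_consume("token-a", 200)); // would exceed 1,000 in the hour
}
```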

Massive read fanout

Horizontally scale reads against recent records.

Key-based compaction

Currently you can configure age-based record retention, or trim a stream explicitly. As an alternative, you will be able to configure the name of a header whose value represents the key for compaction. This will provide infinite retention with automatic cleanup of old records for a key, inspired by Kafka’s semantics for log compaction.
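
The compaction rule itself can be sketched in a few lines of Rust. This illustrates the semantics only, not S2's implementation, and the header name `ck` is made up:

```rust
use std::collections::HashMap;

/// Minimal record with headers; one header's value acts as the compaction key.
#[derive(Clone, Debug, PartialEq)]
struct Record {
    headers: Vec<(String, String)>,
    body: String,
}

/// Keep only the latest record per compaction key, preserving the relative
/// order of the survivors (Kafka-style log compaction).
fn compact(records: &[Record], key_header: &str) -> Vec<Record> {
    // Last write wins: remember the final index seen for each key.
    let mut last_idx: HashMap<&str, usize> = HashMap::new();
    for (i, rec) in records.iter().enumerate() {
        if let Some((_, v)) = rec.headers.iter().find(|(name, _)| name == key_header) {
            last_idx.insert(v.as_str(), i);
        }
    }
    records
        .iter()
        .enumerate()
        .filter(|(i, rec)| {
            match rec.headers.iter().find(|(name, _)| name == key_header) {
                Some((_, v)) => last_idx[v.as_str()] == *i,
                None => true, // records without the key header are retained
            }
        })
        .map(|(_, rec)| rec.clone())
        .collect()
}

fn main() {
    let rec = |k: &str, body: &str| Record {
        headers: vec![("ck".to_string(), k.to_string())],
        body: body.to_string(),
    };
    let log = vec![rec("user-1", "v1"), rec("user-2", "v1"), rec("user-1", "v2")];
    let compacted = compact(&log, "ck");
    // Only the latest record for user-1 survives; user-2 is untouched.
    assert_eq!(compacted.len(), 2);
    assert_eq!(compacted[1].body, "v2");
}
```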

Emulator

An open-source, in-memory emulator of the S2 API that can easily be used for integration testing. We will make Docker images available, and Rust applications will also be able to embed the emulator directly.
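
Until the emulator ships, a trivial in-memory stand-in shows the embed-and-test pattern a Rust application could follow; the real emulator will speak the S2 API, and every name below is ours:

```rust
use std::collections::HashMap;

/// Trivial stand-in for the planned emulator: an in-memory map of stream
/// name to records. Only illustrates the embed-and-test pattern.
#[derive(Default)]
struct InMemoryEmulator {
    streams: HashMap<String, Vec<Vec<u8>>>,
}

impl InMemoryEmulator {
    fn append(&mut self, stream: &str, record: &[u8]) -> u64 {
        let records = self.streams.entry(stream.to_string()).or_default();
        records.push(record.to_vec());
        (records.len() - 1) as u64 // sequence number of the appended record
    }

    fn read(&self, stream: &str, seq: u64) -> Option<&[u8]> {
        self.streams.get(stream)?.get(seq as usize).map(Vec::as_slice)
    }
}

#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn append_then_read() {
        let mut s2 = InMemoryEmulator::default();
        let seq = s2.append("logs", b"hello");
        assert_eq!(s2.read("logs", seq), Some(b"hello".as_ref()));
    }
}
```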

Dedicated cells

S2 has a cellular architecture, and by default basins are placed in a multi-tenant cell. We will add support for dedicated cells that can only be used by a specific account.

Subscriptions

Managed key-ordered consumption that allows either (see the sketch after this list):

  • Pull from a group of subscribers.
  • Push to a configured HTTP endpoint.
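
To see why per-key order survives a group of pull subscribers, consider routing every record with a given key to the same group member. A minimal sketch of that routing; nothing here is S2's actual protocol:

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

/// Pick which subscriber in a group receives records for a given key.
/// Routing all records with the same key to the same subscriber is what
/// preserves per-key order across a group of consumers.
fn route(key: &str, group_size: usize) -> usize {
    let mut hasher = DefaultHasher::new();
    key.hash(&mut hasher);
    (hasher.finish() as usize) % group_size
}

fn main() {
    let subscribers = 3;
    for key in ["user-1", "user-2", "user-1", "user-3", "user-1"] {
        // Every "user-1" record lands on the same subscriber, in order.
        println!("{key} -> subscriber {}", route(key, subscribers));
    }
}
```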

Exploring

Let us know if you are interested in anything here (or anything not noted!) so we can prioritize and potentially partner with you.

  • S2 cell in your cloud account.
  • Basin-level stream of all stream lifecycle events.
  • Continuous export to your object storage bucket in an open format.

Later

We intend to get there, but cannot dive into details yet:

  • Kafka API compatibility.
  • Other clouds than AWS.
  • Basins that can span multiple regions or clouds.
  • Native storage class: under 5 milliseconds to read or append.