
🦎 Zilla: Multi-protocol event-native edge/service proxy

Zilla abstracts Apache Kafka® for web applications, IoT clients and microservices. With Zilla, Kafka topics can be securely and reliably exposed via user-defined REST, Server-Sent Events (SSE), MQTT, or gRPC APIs.

Zilla has no external dependencies and does not rely on the Kafka Consumer/Producer API or Kafka Connect. Instead, it natively supports the Kafka wire protocol and uses advanced protocol mediation to establish stateless API entry points into Kafka. Zilla also addresses security enforcement, observability and connection offloading on the data path.

When Zilla is deployed alongside Apache Kafka®, any application or service can seamlessly be made event-driven.

Getting Started

The fastest way to try out Zilla is via the Quickstart, which walks you through publishing and subscribing to Kafka through REST, gRPC, SSE and MQTT API entry points. The Quickstart uses Aklivity’s public Postman Workspace with pre-defined API endpoints and a Docker Compose stack running pre-configured Zilla and Kafka instances to make things as easy as possible.

Key Features

HTTP-Kafka Proxying

Support for both REST and SSE Kafka proxying. Configure an application-centric REST API on top of Kafka for synchronous CRUD operations or correlated request-response over Kafka topics. Make existing OpenAPI service definitions asynchronous and event-driven by mapping them to Kafka. Reliably broadcast and fan out data to clients at web scale.

Find out more
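
As a rough sketch, an http-kafka binding inside the bindings section of zilla.yaml can map REST operations onto Kafka produce and fetch actions. The binding and topic names below are placeholders and the exact option keys are an assumption; the HTTP-Kafka binding reference in the docs has the authoritative schema.

north_http_kafka_mapping:
  type: http-kafka
  kind: proxy
  routes:
    - when:
        - method: POST
          path: /items
      exit: north_kafka_cache_client
      with:
        capability: produce          # POST bodies become Kafka messages on the topic
        topic: items
    - when:
        - method: GET
          path: /items
      exit: north_kafka_cache_client
      with:
        capability: fetch            # GET reads messages back from the cached topic
        topic: items
        merge:
          content-type: application/json   # merge multiple messages into one JSON array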

gRPC-Kafka Proxying

Implement gRPC service definitions from protobuf files to produce and consume messages via Kafka topics.

Find out more
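
A minimal sketch of such a mapping, assuming a hypothetical example.EchoService and two Kafka topics for requests and correlated responses (service, binding and topic names are illustrative, and the option keys should be checked against the gRPC-Kafka binding reference):

north_grpc_kafka_mapping:
  type: grpc-kafka
  kind: proxy
  routes:
    - when:
        - method: example.EchoService/*
      exit: north_kafka_cache_client
      with:
        capability: produce          # gRPC requests are produced to Kafka
        topic: echo-requests
        reply-to: echo-responses     # correlated replies are fetched from this topic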

MQTT-Kafka Proxying

Turn Kafka into an MQTT broker by persisting MQTT messages and client state across Kafka topics.

Find out more
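
A minimal sketch of the mapping, assuming three Kafka topics for MQTT sessions, messages and retained messages (topic and binding names are placeholders; the MQTT-Kafka binding reference documents the exact options):

north_mqtt_kafka_mapping:
  type: mqtt-kafka
  kind: proxy
  options:
    topics:
      sessions: mqtt-sessions       # MQTT client session state
      messages: mqtt-messages       # published MQTT messages
      retained: mqtt-retained       # retained MQTT messages
  exit: north_kafka_cache_client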

Deployment, Performance & Other

  • OpenAPI and AsyncAPI Support — Use OpenAPI and AsyncAPI specifications for configuration and/or validation enforcement.
  • Apicurio and Karapace Schema Registry Integrations — Integration with external schema registries for a variety of data formats.
  • Realtime Cache — Local cache synchronized with Kafka for specific topics, even when no clients are connected. The cache is stateless and recovers automatically. It is consistent across different Zilla instances without peer communication.
  • Filtering — Local cache indexes message key and headers upon retrieval from Kafka, supporting efficiently filtered reads from cached topics.
  • Fan-in, Fan-out — For SSE-Kafka and MQTT-Kafka proxying, the local cache uses a small number of connections to interact with Kafka brokers, independent of the number of connected clients.
  • Authorization — Specific routed topics can be guarded to enforce required client privileges.
  • Helm Chart — Generic Zilla Helm chart available.
  • Auto-reconfigure — Detect changes in zilla.yaml and reconfigure Zilla automatically.
  • Prometheus Integration — Export Zilla metrics to Prometheus for observability and auto-scaling.
  • Declarative Configuration — API mappings and endpoints inside Zilla are declaratively configured via YAML.
  • Kafka Security — Connect Zilla to Kafka over PLAINTEXT, TLS/SSL, TLS/SSL with Client Certificates, SASL/PLAIN, and SASL/SCRAM.
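
For example, the Kafka Security item above could look roughly like the following kafka client binding with SASL/PLAIN credentials. The option names and the environment expression are assumptions for illustration; check the Kafka binding reference before relying on them.

south_kafka_client:
  type: kafka
  kind: client
  options:
    sasl:
      mechanism: plain                    # or a SASL/SCRAM mechanism
      username: my-service
      password: ${{env.KAFKA_PASSWORD}}   # resolved from the environment at startup
  exit: south_tls_client                  # TLS/SSL is handled by a separate tls client binding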

📚 Read the docs

  • Zilla Documentation: Guides, tutorials and references to help understand how to use Zilla and configure it for your use case.
  • Product Roadmap: Check out our plan for upcoming releases.
  • Zilla Examples: A collection of pre-canned Zilla feature demos.
  • Eventful Petstore Demo: See Zilla make the OpenAPI/Swagger Petstore service event-driven by mapping it onto Kafka in just a few lines of YAML.
  • Taxi Demo: A demo of a taxi-based IoT deployment with Zilla, Kafka, OpenAPIs and AsyncAPIs.

How does Zilla work?

Inside Zilla, every protocol, whether it is TCP, TLS, HTTP, Kafka, gRPC, etc., is treated as a stream, so mediating between protocols simplifies to mapping protocol-specific metadata.

Zilla’s declarative configuration defines a routed graph of protocol decoders, transformers, encoders and caches that combine to provide a secure and stateless API entry point into an event-driven architecture. This “routed graph” can be visualized and maintained with the help of the Zilla VS Code extension.
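
Concretely, such a graph is expressed in zilla.yaml as a chain of named bindings, each exiting into the next. The sketch below shows one plausible HTTP-to-Kafka chain with most options and routes omitted; names, ports and option keys are illustrative rather than authoritative.

bindings:
  north_tcp_server:
    type: tcp
    kind: server
    options:
      host: 0.0.0.0
      port: 7114
    exit: north_http_server            # decode TCP bytes into HTTP
  north_http_server:
    type: http
    kind: server
    exit: north_http_kafka_mapping     # hand requests to the HTTP-Kafka mapping
  north_http_kafka_mapping:
    type: http-kafka
    kind: proxy                        # routes with topics and capabilities omitted for brevity
    exit: north_kafka_cache_client
  north_kafka_cache_client:
    type: kafka
    kind: cache_client                 # client side of the local realtime cache
    exit: south_kafka_cache_server
  south_kafka_cache_server:
    type: kafka
    kind: cache_server                 # keeps cached topics synchronized with the brokers
    exit: south_kafka_client
  south_kafka_client:
    type: kafka
    kind: client                       # speaks the Kafka wire protocol
    exit: south_tcp_client
  south_tcp_client:
    type: tcp
    kind: client
    options:
      host: kafka                      # Kafka bootstrap host and port
      port: 9092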

Zilla has been designed from the ground up to be very high-performance. Inside, all data flows over shared memory as streams with back pressure between CPU cores, allowing Zilla to take advantage of modern multi-core hardware. The code base is written in system-level Java and uses low-level, high-performance data structures, with no locks and no object allocation on the data path.

You can get a sense of the internal efficiencies of Zilla by running the BufferBM microbenchmark for the internal data structure that underpins all data flow inside the Zilla runtime.

git clone https://github.com/aklivity/zilla
cd zilla
./mvnw clean install
cd runtime/engine/target
java -jar ./engine-develop-SNAPSHOT-shaded-tests.jar BufferBM

Note: with Java 16 or higher, add --add-opens=java.base/java.io=ALL-UNNAMED just after java to avoid errors caused by reflective access across Java module boundaries when running the benchmark.

Benchmark                  Mode  Cnt         Score        Error  Units
BufferBM.batched          thrpt   15  15315188.949 ± 198360.879  ops/s
BufferBM.multiple         thrpt   15  18366915.039 ± 420092.183  ops/s
BufferBM.multiple:reader  thrpt   15   3884377.984 ± 112128.903  ops/s
BufferBM.multiple:writer  thrpt   15  14482537.055 ± 316551.083  ops/s
BufferBM.single           thrpt   15  15111915.264 ± 294689.110  ops/s

This benchmark was executed on a 2019 MacBook Pro laptop with a 2.3 GHz 8-core Intel Core i9 processor and 16 GB of DDR4 RAM, showing roughly 14-15 million messages per second.

Is Zilla production-ready?

Yes, Zilla has been built with the highest performance and security considerations in mind, and the Zilla engine has been deployed inside enterprise production environments. If you are looking to deploy Zilla for a mission-critical use case and need enterprise support, please contact us.

Does Zilla only work with Apache Kafka?

Currently, yes, although nothing about Zilla is Kafka-specific — Kafka is just another protocol in Zilla's transformation pipeline. Besides expanding on the list of supported protocols and mappings, we are in the process of adding more traditional proxying capabilities, such as rate-limiting and security enforcement, for existing Async and OpenAPI endpoints. See the Zilla Roadmap for more details.

Another REST-Kafka Proxy? How is this one different?

Take a look at our blog post, where we go into detail about how Zilla is different. TL;DR: Zilla supports creating application-style REST APIs on top of Kafka, as opposed to providing just a system-level HTTP API. Most notably, this unlocks correlated request-response over Kafka topics.

What does Zilla's performance look like?

Please see the note above on performance.

What's on the roadmap for Zilla?

Please review the Zilla Roadmap. If you have a request or feedback, we would love to hear it! Get in touch through our community channels.

Looking to contribute to Zilla? Check out the Contributing to Zilla guide. ✨ We value all contributions, whether they are source code, documentation, bug reports, feature requests or feedback!

Many Thanks To Our Contributors!

Zilla is made available under the Aklivity Community License. This is an open source-derived license that gives you the freedom to deploy, modify and run Zilla as you see fit, as long as you are not turning it into a standalone, commercialized “Zilla-as-a-service” offering. Running Zilla in the cloud for your own workloads, production or not, is completely fine.
