November 21, 2023 By Michael Burgess 4 min read

Apache Kafka is a well-known open-source event store and stream-processing platform that has grown to become the de facto standard for data streaming. In this article, developer Michael Burgess provides insight into the concept of schemas and schema management as a way to add value to your event-driven applications on the fully managed Kafka service, IBM Event Streams on IBM Cloud®.

What is a schema?

A schema describes the structure of data.

For example:

A simple Java class modelling an order of some product from an online store might start with fields like:

public class Order {
    private String productName;
    private String productCode;
    private int quantity;
    […]
}

If order objects were being created using this class, and sent to a topic in Kafka, we could describe the structure of those records using a schema such as this Avro schema:

{
  "type": "record",
  "name": "Order",
  "fields": [
    {"name": "productName", "type": "string"},
    {"name": "productCode", "type": "string"},
    {"name": "quantity", "type": "int"}
  ]
}

Why should you use a schema?

Apache Kafka transfers data without validating the information in the messages. It has no visibility of what kind of data is being sent and received, or what data types the messages might contain, and it does not examine the metadata of your messages.

One of the functions of Kafka is to decouple consuming and producing applications, so that they communicate via a Kafka topic rather than directly. This allows them to each work at their own speed, but they still need to agree upon the same data structure; otherwise, the consuming applications have no way to deserialize the data they receive back into something with meaning. The applications all need to share the same assumptions about the structure of the data.

In the scope of Kafka, a schema describes the structure of the data in a message. It defines the fields that need to be present in each message and the types of each field.

This means a schema forms a well-defined contract between a producing application and a consuming application, allowing consuming applications to parse and interpret the data in the messages they receive correctly.
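To make that contract concrete, here is a minimal sketch, using the Apache Avro Java library, of how a producing application might serialize an Order record against the schema above, and how a consuming application turns the bytes back into a record with the same schema. The class and method names are illustrative, not part of any particular product; with a schema registry (covered below), a registry-aware serializer typically does this work for you.

import java.io.ByteArrayOutputStream;
import java.io.IOException;

import org.apache.avro.Schema;
import org.apache.avro.generic.GenericData;
import org.apache.avro.generic.GenericDatumReader;
import org.apache.avro.generic.GenericDatumWriter;
import org.apache.avro.generic.GenericRecord;
import org.apache.avro.io.BinaryDecoder;
import org.apache.avro.io.BinaryEncoder;
import org.apache.avro.io.DecoderFactory;
import org.apache.avro.io.EncoderFactory;

public class OrderSchemaSketch {

    // The Avro schema shown above, shared by the producing and consuming applications.
    private static final Schema ORDER_SCHEMA = new Schema.Parser().parse(
        "{\"type\":\"record\",\"name\":\"Order\",\"fields\":["
        + "{\"name\":\"productName\",\"type\":\"string\"},"
        + "{\"name\":\"productCode\",\"type\":\"string\"},"
        + "{\"name\":\"quantity\",\"type\":\"int\"}]}");

    // Producer side: build a record that matches the schema and serialize it
    // into the byte array that becomes the Kafka message value.
    static byte[] serialize(String productName, String productCode, int quantity) throws IOException {
        GenericRecord order = new GenericData.Record(ORDER_SCHEMA);
        order.put("productName", productName);
        order.put("productCode", productCode);
        order.put("quantity", quantity);

        ByteArrayOutputStream out = new ByteArrayOutputStream();
        BinaryEncoder encoder = EncoderFactory.get().binaryEncoder(out, null);
        new GenericDatumWriter<GenericRecord>(ORDER_SCHEMA).write(order, encoder);
        encoder.flush();
        return out.toByteArray();
    }

    // Consumer side: without the same schema, these bytes cannot be interpreted.
    static GenericRecord deserialize(byte[] bytes) throws IOException {
        BinaryDecoder decoder = DecoderFactory.get().binaryDecoder(bytes, null);
        return new GenericDatumReader<GenericRecord>(ORDER_SCHEMA).read(null, decoder);
    }
}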

What is a schema registry?

A schema registry supports your Kafka cluster by providing a repository for managing and validating schemas within that cluster. It acts as a database for storing your schemas and provides an interface for managing the schema lifecycle and retrieving schemas. A schema registry also validates evolution of schemas.
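As an illustration of how applications use the registry, a producing application is typically pointed at it through serializer configuration rather than by calling the registry directly. The sketch below assumes a Confluent-compatible Avro serde and uses placeholder endpoints; the exact serializer class, URL and credentials depend on the registry you are using, so consult its documentation (Event Streams on IBM Cloud provides its own instructions for connecting to its schema registry).

import java.util.Properties;

import org.apache.avro.generic.GenericRecord;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class RegistryAwareProducerSketch {

    public static KafkaProducer<String, GenericRecord> createProducer() {
        Properties props = new Properties();
        // Placeholder connection details; substitute your own cluster and registry values.
        props.put("bootstrap.servers", "broker-1.example.com:9093");
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        // Assumption: a Confluent-compatible Avro serializer; your registry may supply a different serde class.
        props.put("value.serializer", "io.confluent.kafka.serializers.KafkaAvroSerializer");
        props.put("schema.registry.url", "https://my-schema-registry.example.com");
        return new KafkaProducer<>(props);
    }

    public static void sendOrder(KafkaProducer<String, GenericRecord> producer, GenericRecord order) {
        // The registry-aware serializer looks up (or registers) the Order schema in the
        // registry and validates each record against it before the message is sent.
        producer.send(new ProducerRecord<>("orders", order.get("productCode").toString(), order));
    }
}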

Optimize your Kafka environment by using a schema registry.

A schema registry is essentially an agreement on the structure of your data within your Kafka environment. By having a consistent store of the data formats used by your applications, you avoid common mistakes that can occur when building applications, such as poor data quality and inconsistencies between your producing and consuming applications that may eventually lead to data corruption. Having a well-managed schema registry is not just a technical necessity; it also contributes to the strategic goal of treating data as a valuable product and helps tremendously on your data-as-a-product journey.

Using a schema registry increases the quality of your data and keeps it consistent by enforcing rules for schema evolution. So as well as ensuring data consistency between produced and consumed messages, a schema registry ensures that your messages remain compatible as schema versions change over time. Over the lifetime of a business, it is very likely that the format of the messages exchanged by the applications supporting the business will need to change. For example, the Order class in the example schema we used earlier might gain a new status field, or the product code field might be replaced by a combination of department number and product number, or other similar changes. The result is that the schema of the objects in our business domain is continually evolving, so you need to be able to ensure agreement on the schema of messages in any particular topic at any given time.

There are various patterns for schema evolution:

  • Forward Compatibility: where the producing applications can be updated to a new version of the schema, and all consuming applications will be able to continue to consume messages while waiting to be migrated to the new version.
  • Backward Compatibility: where consuming applications can be migrated to a new version of the schema first, and are able to continue to consume messages produced in the old format while producing applications are migrated.
  • Full Compatibility: when schemas are both forward and backward compatible.

A schema registry is able to enforce rules for schema evolution, allowing you to guarantee either forward, backward or full compatibility of new schema versions and preventing incompatible schema versions from being introduced.
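As an illustrative example of a change that satisfies full compatibility, adding the new status field mentioned earlier with a default value means consumers still using the old schema can simply ignore the extra field, while consumers on the new schema can fill in the default when reading older messages (the default value shown here is an assumption for illustration). A registry enforcing compatibility rules would accept this evolved version of the Order schema:

{
  "type": "record",
  "name": "Order",
  "fields": [
    {"name": "productName", "type": "string"},
    {"name": "productCode", "type": "string"},
    {"name": "quantity", "type": "int"},
    {"name": "status", "type": "string", "default": "CREATED"}
  ]
}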

By providing a repository of versions of schemas used within a Kafka cluster, past and present, a schema registry simplifies adherence to data governance and data quality policies, since it provides a convenient way to track and audit changes to your topic data formats.

What’s next?

In summary, a schema registry plays a crucial role in managing schema evolution, versioning and the consistency of data in distributed systems, ultimately supporting interoperability between different components. Event Streams on IBM Cloud provides a Schema Registry as part of its Enterprise plan. Ensure your environment is optimized by utilizing this feature on the fully managed Kafka offering on IBM Cloud to build intelligent and responsive applications that react to events in real time.

  • Provision an instance of Event Streams on IBM Cloud here.
  • Learn how to use the Event Streams Schema Registry here.
  • Learn more about Kafka and its use cases here.
  • For any challenges in set up, see our Getting Started Guide and FAQs.