Updated March 6, 2023
Introduction to Kafka Version
A Kafka version is a particular release of Kafka that differs in certain respects from earlier or later releases. Each version marks a stage in Kafka's development, in the same way that each new form of a file or document marks a revision. There are a few properties defined specific to the Kafka version, such as CURRENT_KAFKA_VERSION and CURRENT_MESSAGE_FORMAT_VERSION: CURRENT_KAFKA_VERSION refers to the version you are upgrading from, and CURRENT_MESSAGE_FORMAT_VERSION refers to the message format version currently in use.
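During a rolling upgrade, these placeholders are substituted into the broker configuration. A minimal sketch, assuming an upgrade from version 2.3 (the version numbers are illustrative):

```
# server.properties during a rolling upgrade
# CURRENT_KAFKA_VERSION -> the version you are upgrading from
inter.broker.protocol.version=2.3
# CURRENT_MESSAGE_FORMAT_VERSION -> message format currently in use
log.message.format.version=2.3
```

Once all brokers run the new code, both values are bumped to the new version, one broker restart at a time.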
Different Kafka Versions and Their Specific Features
Given below are the different Kafka versions and their specific features:
1. 0.5.0
This version of Kafka aimed to provide a publish-subscribe system that can handle stream data and processing on a consumer-scale web site.
2. 0.6.0
Version 0.6.0 added new producer APIs, SyncProducer and AsyncProducer. The main purpose was to wrap all producer functionality in a single API.
3. 0.7.0
This version introduced compression on the producer for some or all topics: the producer writes data in compressed format to the server, and consumers are made compression-aware. Trading a bit of CPU for reduced disk and network activity is an excellent deal.
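Producer-side compression is enabled entirely through producer configuration; a sketch, shown with a modern producer configuration (the broker address is illustrative, and 0.7-era clients configured connections differently):

```
# producer.properties
bootstrap.servers=localhost:9092
# Compress batches on the producer; consumers decompress transparently.
compression.type=gzip
```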
4. 0.8.0
This release added replication to Kafka, which provides higher availability and durability: any successfully published message can be consumed even in the presence of server failures. A migration tool uses a 0.7 Kafka consumer to consume messages and send them to a 0.8 target cluster without any downtime.
5. 0.8.1 & 0.8.2
These releases support rolling upgrades: since only one broker at a time acts as the controller, you can bring brokers down one at a time, update the code, and restart them.
6. 0.9.0
This version introduced an inter-broker protocol change that allows brokers of different versions to run within one cluster, which reduces cluster downtime during an upgrade. Before upgrading, set inter.broker.protocol.version to the existing version on all brokers, so that brokers continue to use the old protocol to communicate with the other brokers. The Java producer has been the recommended option since 0.9.0.0.
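The rolling-upgrade procedure this enables can be sketched as a sequence of configuration changes (the version numbers are illustrative):

```
# Step 1: on every broker, pin the protocol to the old version,
#         then upgrade the broker code one broker at a time.
inter.broker.protocol.version=0.8.2

# Step 2: once all brokers run the new code, bump the protocol
#         and restart the brokers one at a time.
inter.broker.protocol.version=0.9.0.0
```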
7. 0.10.0
Because a new protocol was added in the prior release, upgrading the Kafka cluster is required. This version introduced a new client library called Kafka Streams, used for stream processing on data stored in Kafka topics. The new client library is not backward compatible; it works only with brokers at version 0.10.x and upward. The Scala producers have been deprecated since 0.10.0, and the Java consumer has been the recommended option since 0.10.0.
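A minimal Kafka Streams topology might look like the following sketch. It uses the modern StreamsBuilder API rather than the original 0.10.0 classes, assumes the kafka-streams library on the classpath, and the application id, broker address, and topic names are illustrative:

```java
import java.util.Properties;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;

public class StreamsSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "demo-app");
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
        props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());

        StreamsBuilder builder = new StreamsBuilder();
        // Read from one topic, upper-case each value, write to another.
        builder.<String, String>stream("input-topic")
               .mapValues(v -> v.toUpperCase())
               .to("output-topic");

        new KafkaStreams(builder.build(), props).start();
    }
}
```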
8. 0.10.1
In this version, log retention was modified to be based on the largest timestamp of the messages in a log segment. This change can make startup longer for brokers with a large number of log segments; the log loading time can be reduced by increasing the num.recovery.threads.per.data.dir property.
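A sketch of the relevant broker setting (the value of 8 is illustrative; it should roughly match the available disks or cores):

```
# server.properties
# More threads -> faster log segment loading/recovery on startup.
num.recovery.threads.per.data.dir=8
```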
9. 0.10.2
The Streams API can now manage its own topics, avoiding direct modification of ZooKeeper. Backward compatibility with older brokers also improved: for example, version 0.10.2 clients can talk to version 0.10.0 or newer brokers.
10. 0.11.0
A new user-header interface with read and write access was provided: the Headers API exposes headers on ProducerRecord and ConsumerRecord via the headers() method. Several classes, such as Cluster, were deprecated, and the ExtendedSerializer and ExtendedDeserializer interfaces were introduced to support serialization and deserialization of headers. The Scala consumers have been deprecated since 0.11.0.0. Binary compatibility of these interfaces, which was accidentally broken in 1.0.0, was later restored.
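Reading and writing headers through the headers() accessor might look like the following sketch (it assumes the kafka-clients library on the classpath; the topic, key, value, and header name are illustrative):

```java
import java.nio.charset.StandardCharsets;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.header.Headers;

public class HeaderSketch {
    public static void main(String[] args) {
        ProducerRecord<String, String> record =
            new ProducerRecord<>("events", "key", "value");

        // Write a header on the record...
        record.headers().add("trace-id", "abc-123".getBytes(StandardCharsets.UTF_8));

        // ...and read headers back via the headers() method.
        Headers headers = record.headers();
        headers.forEach(h -> System.out.println(h.key()));
    }
}
```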
11. 1.1.0
The dependency between the Kafka artifact and Log4j has been removed. Kafka Streams became more advanced in dealing with exceptions and communicating with clusters.
12. 2.0.0
The minimum supported Java version changed from Java 7 to Java 8. Certificate handling was updated so that hostname verification is performed, as with HTTPS. The Scala consumers and Scala producers, which were deprecated in previous releases, have been completely removed. Support for OAuth 2 bearer-token authentication was implemented as part of this version.
13. 2.1.0
This Kafka version supports Java 11. Since Kafka previously had no built-in UUID serializer/deserializer, UUIDs could not be used out of the box and had to be converted either to a String or to a byte[]; this release adds UUID serialization support, the String representation of a UUID being common across platforms and programming languages. Changes to the network threads, which are responsible for establishing connections with clients and other brokers and performing authentication with them, were implemented as part of this release, reducing exposure to denial-of-service and other security attacks.
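Before the built-in support, the UUID-to-byte[] round trip could be written by hand with standard-library classes. A minimal sketch (the class and method names here are our own, not Kafka's):

```java
import java.nio.ByteBuffer;
import java.util.UUID;

public class UuidCodec {
    // Encode a UUID as 16 big-endian bytes.
    public static byte[] toBytes(UUID uuid) {
        ByteBuffer buf = ByteBuffer.allocate(16);
        buf.putLong(uuid.getMostSignificantBits());
        buf.putLong(uuid.getLeastSignificantBits());
        return buf.array();
    }

    // Decode 16 bytes back into a UUID.
    public static UUID fromBytes(byte[] bytes) {
        ByteBuffer buf = ByteBuffer.wrap(bytes);
        return new UUID(buf.getLong(), buf.getLong());
    }

    public static void main(String[] args) {
        UUID original = UUID.randomUUID();
        UUID roundTripped = fromBytes(toBytes(original));
        System.out.println(original.equals(roundTripped)); // prints "true"
    }
}
```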
14. 2.2.0
The default consumer group id changed from the empty string ("") to null. Also, the command-line script bin/kafka-topics.sh can now connect directly to brokers with --bootstrap-server. SSL principal customization was improved as well: previously, even for a minor change, you had to build and maintain a custom principal-builder class and package and deploy the jar to all brokers. Built-in SSL principal-builder configs/rules now allow the SSL principal name to be customized instead.
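For example, listing and creating topics directly against a broker (the broker address and topic name are illustrative):

```shell
# Connect straight to a broker instead of ZooKeeper
bin/kafka-topics.sh --bootstrap-server localhost:9092 --list

bin/kafka-topics.sh --bootstrap-server localhost:9092 \
    --create --topic demo-topic --partitions 3 --replication-factor 1
```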
15. 2.3.0
Availability increased with the introduction of a new rebalancing protocol for Kafka Connect. A new feature called static membership was added so that consumers can survive rolling restarts of servers during upgrades. This version added two options, --jmx-ssl-enable and --jmx-auth-prop, which pass an environment map to JmxTool. Previously, the AlterConfigs RPC required the full configuration of a topic, broker, or other resource to be specified when altering it; the new RPC operates incrementally, modifying only the configuration values that are specified.
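Static membership is enabled on the consumer side by assigning each instance a stable identifier; a sketch (the group and instance names are illustrative):

```
# consumer.properties
group.id=my-group
# A unique, stable id per consumer instance enables static membership,
# so a rolling restart does not trigger a rebalance.
group.instance.id=consumer-instance-1
# Restarts must complete within the session timeout.
session.timeout.ms=30000
```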
16. 2.4.0
This was the latest release when this list was compiled. The ZooKeeper version was upgraded from 3.4.14 to 3.5.6. The command-line tool bin/kafka-preferred-replica-election.sh was replaced with bin/kafka-leader-election.sh. The constructor GroupAuthorizationException(String) is used instead of TopicAuthorizationException(String) to specify an exception message. An incremental rebalance protocol for the Kafka consumer was partially implemented, reducing the time for scaling out/down operations and benefiting heavy, stateful consumers such as Kafka Streams applications. Accidental modification of task configurations in a Connect cluster was reduced by adding authentication and authorization to an internal REST endpoint used by Connect workers.
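The replacement tool is invoked along these lines (the broker address is illustrative):

```shell
# Trigger preferred-replica leader election for every partition
bin/kafka-leader-election.sh --bootstrap-server localhost:9092 \
    --election-type preferred --all-topic-partitions
```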
Recommended Articles
This is a guide to Kafka versions. Here we discussed the introduction and the different Kafka versions with their specific features. You may also have a look at the following articles to learn more –