This is an automated email from the ASF dual-hosted git repository.
schofielaj pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/kafka.git
The following commit(s) were added to refs/heads/trunk by this push:
new 016a4a6c4c0 KAFKA-19353: Upgrade note and initial docs for KIP-932
(#19863)
016a4a6c4c0 is described below
commit 016a4a6c4c0e0fac3b91468eb71c366b60b2b15e
Author: Andrew Schofield <[email protected]>
AuthorDate: Tue Jun 3 13:23:11 2025 +0100
KAFKA-19353: Upgrade note and initial docs for KIP-932 (#19863)
This is the initial documentation for the KIP-932 preview in AK 4.1. The aim
is to get minimal docs in before the cutoff. Longer term, more
comprehensive documentation will be provided for AK 4.2.
The PR includes:
* Generation of group-level configuration documentation
* Add link to KafkaShareConsumer to API docs
* Add a summary of share group rationale to design docs
* Add basic operations information for share groups to ops docs
* Add upgrade note describing arrival of KIP-932 preview in 4.1
Reviewers: Apoorv Mittal <[email protected]>
---------
Co-authored-by: Apoorv Mittal <[email protected]>
---
build.gradle | 9 ++-
docs/api.html | 36 ++++++++----
docs/configuration.html | 59 ++++++++++---------
docs/design.html | 43 ++++++++++++--
docs/ops.html | 66 ++++++++++++++++++++--
docs/toc.html | 41 ++++++++------
docs/upgrade.html | 7 +++
.../kafka/coordinator/group/GroupConfig.java | 4 ++
8 files changed, 200 insertions(+), 65 deletions(-)
diff --git a/build.gradle b/build.gradle
index 9b756b736f2..968eda48e88 100644
--- a/build.gradle
+++ b/build.gradle
@@ -1145,6 +1145,13 @@ project(':core') {
standardOutput = new File(generatedDocsDir,
"topic_config.html").newOutputStream()
}
+ task genGroupConfigDocs(type: JavaExec) {
+ classpath = sourceSets.main.runtimeClasspath
+ mainClass = 'org.apache.kafka.coordinator.group.GroupConfig'
+ if( !generatedDocsDir.exists() ) { generatedDocsDir.mkdirs() }
+ standardOutput = new File(generatedDocsDir,
"group_config.html").newOutputStream()
+ }
+
task genConsumerMetricsDocs(type: JavaExec) {
classpath = sourceSets.test.runtimeClasspath
mainClass = 'org.apache.kafka.clients.consumer.internals.ConsumerMetrics'
@@ -1161,7 +1168,7 @@ project(':core') {
task siteDocsTar(dependsOn: ['genProtocolErrorDocs', 'genProtocolTypesDocs',
'genProtocolApiKeyDocs', 'genProtocolMessageDocs',
'genAdminClientConfigDocs',
'genProducerConfigDocs', 'genConsumerConfigDocs',
- 'genKafkaConfigDocs', 'genTopicConfigDocs',
+ 'genKafkaConfigDocs', 'genTopicConfigDocs',
'genGroupConfigDocs',
':connect:runtime:genConnectConfigDocs',
':connect:runtime:genConnectTransformationDocs',
':connect:runtime:genConnectPredicateDocs',
':connect:runtime:genSinkConnectorConfigDocs',
':connect:runtime:genSourceConnectorConfigDocs',
diff --git a/docs/api.html b/docs/api.html
index 5842548503e..e35d79ca097 100644
--- a/docs/api.html
+++ b/docs/api.html
@@ -30,10 +30,10 @@
The Producer API allows applications to send streams of data to topics
in the Kafka cluster.
<p>
- Examples showing how to use the producer are given in the
+ Examples of using the producer are shown in the
<a
href="/{{version}}/javadoc/index.html?org/apache/kafka/clients/producer/KafkaProducer.html"
title="Kafka {{dotVersion}} Javadoc">javadocs</a>.
<p>
- To use the producer, you can use the following maven dependency:
+ To use the producer, add the following Maven dependency to your project:
<pre class="line-numbers"><code class="language-xml"><dependency>
<groupId>org.apache.kafka</groupId>
@@ -45,26 +45,40 @@
The Consumer API allows applications to read streams of data from
topics in the Kafka cluster.
<p>
- Examples showing how to use the consumer are given in the
+ Examples of using the consumer are shown in the
<a
href="/{{version}}/javadoc/index.html?org/apache/kafka/clients/consumer/KafkaConsumer.html"
title="Kafka {{dotVersion}} Javadoc">javadocs</a>.
<p>
- To use the consumer, you can use the following maven dependency:
+ To use the consumer, add the following Maven dependency to your project:
<pre class="line-numbers"><code class="language-xml"><dependency>
<groupId>org.apache.kafka</groupId>
<artifactId>kafka-clients</artifactId>
<version>{{fullDotVersion}}</version>
</dependency></code></pre>
- <h3 class="anchor-heading"><a id="streamsapi"
class="anchor-link"></a><a href="#streamsapi">2.3 Streams API</a></h3>
+ <h3 class="anchor-heading"><a id="shareconsumerapi"
class="anchor-link"></a><a href="#shareconsumerapi">2.3 Share Consumer API
(Preview)</a></h3>
+
+ The Share Consumer API (Preview) enables applications within a share
group to cooperatively consume and process data from Kafka topics.
+ <p>
+ Examples of using the share consumer are shown in the
+ <a
href="/{{version}}/javadoc/index.html?org/apache/kafka/clients/consumer/KafkaShareConsumer.html"
title="Kafka {{dotVersion}} Javadoc">javadocs</a>.
+ <p>
+ To use the share consumer, add the following Maven dependency to your
project:
+ <pre class="line-numbers"><code class="language-xml"><dependency>
+ <groupId>org.apache.kafka</groupId>
+ <artifactId>kafka-clients</artifactId>
+ <version>{{fullDotVersion}}</version>
+</dependency></code></pre>
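As a sketch of how the preview API might be used (the topic name, group name, and property values here are illustrative, and the preview API may change before the final release):

```java
import java.time.Duration;
import java.util.List;
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaShareConsumer;

public class ShareConsumerExample {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        // Consumers configured with the same group.id cooperatively consume the topic.
        props.put("group.id", "my-share-group");
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

        try (KafkaShareConsumer<String, String> consumer = new KafkaShareConsumer<>(props)) {
            consumer.subscribe(List.of("topic1"));
            while (true) {
                // Records returned by poll() are acquired with a time-limited lock.
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(1000));
                for (ConsumerRecord<String, String> record : records) {
                    System.out.printf("offset=%d value=%s%n", record.offset(), record.value());
                }
                // With implicit acknowledgement, the delivered records are marked as
                // successfully processed when the acknowledgements are committed.
                consumer.commitSync();
            }
        }
    }
}
```

This sketch requires a running broker with `share.version=1` enabled, so it is not runnable standalone.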
+
+ <h3 class="anchor-heading"><a id="streamsapi"
class="anchor-link"></a><a href="#streamsapi">2.4 Streams API</a></h3>
The <a href="/{{version}}/documentation/streams">Streams</a> API allows
transforming streams of data from input topics to output topics.
<p>
- Examples showing how to use this library are given in the
+ Examples of using this library are shown in the
<a
href="/{{version}}/javadoc/index.html?org/apache/kafka/streams/KafkaStreams.html"
title="Kafka {{dotVersion}} Javadoc">javadocs</a>.
<p>
Additional documentation on using the Streams API is available <a
href="/{{version}}/documentation/streams">here</a>.
<p>
- To use Kafka Streams you can use the following maven dependency:
+ To use Kafka Streams, add the following Maven dependency to your
project:
<pre class="line-numbers"><code class="language-xml"><dependency>
<groupId>org.apache.kafka</groupId>
@@ -75,7 +89,7 @@
<p>
When using Scala you may optionally include the
<code>kafka-streams-scala</code> library. Additional documentation on using
the Kafka Streams DSL for Scala is available <a
href="/{{version}}/documentation/streams/developer-guide/dsl-api.html#scala-dsl">in
the developer guide</a>.
<p>
- To use Kafka Streams DSL for Scala {{scalaVersion}} you can use the
following maven dependency:
+ To use Kafka Streams DSL for Scala {{scalaVersion}}, add the following
Maven dependency to your project:
<pre class="line-numbers"><code class="language-xml"><dependency>
<groupId>org.apache.kafka</groupId>
@@ -83,7 +97,7 @@
<version>{{fullDotVersion}}</version>
</dependency></code></pre>
- <h3 class="anchor-heading"><a id="connectapi"
class="anchor-link"></a><a href="#connectapi">2.4 Connect API</a></h3>
+ <h3 class="anchor-heading"><a id="connectapi"
class="anchor-link"></a><a href="#connectapi">2.5 Connect API</a></h3>
The Connect API allows implementing connectors that continually pull
from some source data system into Kafka or push from Kafka into some sink data
system.
<p>
@@ -92,11 +106,11 @@
Those who want to implement custom connectors can see the <a
href="/{{version}}/javadoc/index.html?org/apache/kafka/connect" title="Kafka
{{dotVersion}} Javadoc">javadoc</a>.
<p>
- <h3 class="anchor-heading"><a id="adminapi" class="anchor-link"></a><a
href="#adminapi">2.5 Admin API</a></h3>
+ <h3 class="anchor-heading"><a id="adminapi" class="anchor-link"></a><a
href="#adminapi">2.6 Admin API</a></h3>
The Admin API supports managing and inspecting topics, brokers, acls,
and other Kafka objects.
<p>
- To use the Admin API, add the following Maven dependency:
+ To use the Admin API, add the following Maven dependency to your
project:
<pre class="line-numbers"><code class="language-xml"><dependency>
<groupId>org.apache.kafka</groupId>
<artifactId>kafka-clients</artifactId>
diff --git a/docs/configuration.html b/docs/configuration.html
index d38dfce2aab..c922d92f105 100644
--- a/docs/configuration.html
+++ b/docs/configuration.html
@@ -28,7 +28,7 @@
<li><code>controller.quorum.bootstrap.servers</code>
</ul>
- Topic-level configurations and defaults are discussed in more detail <a
href="#topicconfigs">below</a>.
+ Topic configurations and defaults are discussed in more detail <a
href="#topicconfigs">below</a>.
<!--#include virtual="generated/kafka_config.html" -->
@@ -185,7 +185,7 @@
Inter-broker listener must be configured using the static broker
configuration <code>inter.broker.listener.name</code>
or <code>security.inter.broker.protocol</code>.
- <h3 class="anchor-heading"><a id="topicconfigs" class="anchor-link"></a><a
href="#topicconfigs">3.2 Topic-Level Configs</a></h3>
+ <h3 class="anchor-heading"><a id="topicconfigs" class="anchor-link"></a><a
href="#topicconfigs">3.2 Topic Configs</a></h3>
Configurations pertinent to topics have both a server default as well an
optional per-topic override. If no per-topic configuration is given the server
default is used. The override can be set at topic creation time by giving one
or more <code>--config</code> options. This example creates a topic named
<i>my-topic</i> with a custom max message size and flush rate:
<pre><code class="language-bash">$ bin/kafka-topics.sh --bootstrap-server
localhost:9092 --create --topic my-topic --partitions 1 \
@@ -201,60 +201,65 @@
<pre><code class="language-bash">$ bin/kafka-configs.sh --bootstrap-server
localhost:9092 --entity-type topics --entity-name my-topic
--alter --delete-config max.message.bytes</code></pre>
- The following are the topic-level configurations. The server's default
configuration for this property is given under the Server Default Property
heading. A given server default config value only applies to a topic if it does
not have an explicit topic config override.
+ Below is the topic configuration. The server's default configuration for
this property is given under the Server Default Property heading. A given
server default config value only applies to a topic if it does not have an
explicit topic config override.
<!--#include virtual="generated/topic_config.html" -->
- <h3 class="anchor-heading"><a id="producerconfigs"
class="anchor-link"></a><a href="#producerconfigs">3.3 Producer Configs</a></h3>
+ <h3 class="anchor-heading"><a id="groupconfigs" class="anchor-link"></a><a
href="#groupconfigs">3.3 Group Configs</a></h3>
- Below is the configuration of the producer:
+ Below is the group configuration:
+ <!--#include virtual="generated/group_config.html" -->
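Group configurations can also be altered programmatically. The following is a sketch using the admin client (it assumes a broker at localhost:9092, an existing group named my-share-group, and that group-level configurations are addressed with the GROUP config resource type):

```java
import java.util.List;
import java.util.Map;
import java.util.Properties;

import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.AlterConfigOp;
import org.apache.kafka.clients.admin.ConfigEntry;
import org.apache.kafka.common.config.ConfigResource;

public class GroupConfigExample {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        try (Admin admin = Admin.create(props)) {
            // Group-level configuration is a config resource of type GROUP.
            ConfigResource group = new ConfigResource(ConfigResource.Type.GROUP, "my-share-group");
            // Raise the record lock duration for this group to 60 seconds.
            AlterConfigOp op = new AlterConfigOp(
                new ConfigEntry("share.record.lock.duration.ms", "60000"),
                AlterConfigOp.OpType.SET);
            admin.incrementalAlterConfigs(Map.of(group, List.of(op))).all().get();
        }
    }
}
```

This sketch requires a running broker, so it is not runnable standalone.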
+
+ <h3 class="anchor-heading"><a id="producerconfigs"
class="anchor-link"></a><a href="#producerconfigs">3.4 Producer Configs</a></h3>
+
+ Below is the producer configuration:
<!--#include virtual="generated/producer_config.html" -->
- <h3 class="anchor-heading"><a id="consumerconfigs"
class="anchor-link"></a><a href="#consumerconfigs">3.4 Consumer Configs</a></h3>
+ <h3 class="anchor-heading"><a id="consumerconfigs"
class="anchor-link"></a><a href="#consumerconfigs">3.5 Consumer Configs</a></h3>
- Below is the configuration for the consumer:
+ Below is the consumer and share consumer configuration:
<!--#include virtual="generated/consumer_config.html" -->
- <h3 class="anchor-heading"><a id="connectconfigs" class="anchor-link"></a><a
href="#connectconfigs">3.5 Kafka Connect Configs</a></h3>
- Below is the configuration of the Kafka Connect framework.
+ <h3 class="anchor-heading"><a id="connectconfigs" class="anchor-link"></a><a
href="#connectconfigs">3.6 Kafka Connect Configs</a></h3>
+ Below is the Kafka Connect framework configuration.
<!--#include virtual="generated/connect_config.html" -->
- <h4 class="anchor-heading"><a id="sourceconnectconfigs"
class="anchor-link"></a><a href="#sourceconnectconfigs">3.5.1 Source Connector
Configs</a></h4>
- Below is the configuration of a source connector.
+ <h4 class="anchor-heading"><a id="sourceconnectconfigs"
class="anchor-link"></a><a href="#sourceconnectconfigs">3.6.1 Source Connector
Configs</a></h4>
+ Below is the source connector configuration.
<!--#include virtual="generated/source_connector_config.html" -->
- <h4 class="anchor-heading"><a id="sinkconnectconfigs"
class="anchor-link"></a><a href="#sinkconnectconfigs">3.5.2 Sink Connector
Configs</a></h4>
- Below is the configuration of a sink connector.
+ <h4 class="anchor-heading"><a id="sinkconnectconfigs"
class="anchor-link"></a><a href="#sinkconnectconfigs">3.6.2 Sink Connector
Configs</a></h4>
+ Below is the sink connector configuration.
<!--#include virtual="generated/sink_connector_config.html" -->
- <h3 class="anchor-heading"><a id="streamsconfigs" class="anchor-link"></a><a
href="#streamsconfigs">3.6 Kafka Streams Configs</a></h3>
- Below is the configuration of the Kafka Streams client library.
+ <h3 class="anchor-heading"><a id="streamsconfigs" class="anchor-link"></a><a
href="#streamsconfigs">3.7 Kafka Streams Configs</a></h3>
+ Below is the Kafka Streams client library configuration.
<!--#include virtual="generated/streams_config.html" -->
- <h3 class="anchor-heading"><a id="adminclientconfigs"
class="anchor-link"></a><a href="#adminclientconfigs">3.7 Admin Configs</a></h3>
- Below is the configuration of the Kafka Admin client library.
+ <h3 class="anchor-heading"><a id="adminclientconfigs"
class="anchor-link"></a><a href="#adminclientconfigs">3.8 Admin Configs</a></h3>
+ Below is the Kafka Admin client library configuration.
<!--#include virtual="generated/admin_client_config.html" -->
- <h3 class="anchor-heading"><a id="mirrormakerconfigs"
class="anchor-link"></a><a href="#mirrormakerconfigs">3.8 MirrorMaker
Configs</a></h3>
+ <h3 class="anchor-heading"><a id="mirrormakerconfigs"
class="anchor-link"></a><a href="#mirrormakerconfigs">3.9 MirrorMaker
Configs</a></h3>
Below is the configuration of the connectors that make up MirrorMaker 2.
- <h4 class="anchor-heading"><a id="mirrormakercommonconfigs"
class="anchor-link"></a><a href="#mirrormakercommonconfigs">3.8.1 MirrorMaker
Common Configs</a></h4>
- Below are the common configuration properties that apply to all three
connectors.
+ <h4 class="anchor-heading"><a id="mirrormakercommonconfigs"
class="anchor-link"></a><a href="#mirrormakercommonconfigs">3.9.1 MirrorMaker
Common Configs</a></h4>
+ Below is the common configuration that applies to all three connectors.
<!--#include virtual="generated/mirror_connector_config.html" -->
- <h4 class="anchor-heading"><a id="mirrormakersourceconfigs"
class="anchor-link"></a><a href="#mirrormakersourceconfigs">3.8.2 MirrorMaker
Source Configs</a></h4>
+ <h4 class="anchor-heading"><a id="mirrormakersourceconfigs"
class="anchor-link"></a><a href="#mirrormakersourceconfigs">3.9.2 MirrorMaker
Source Configs</a></h4>
Below is the configuration of MirrorMaker 2 source connector for replicating
topics.
<!--#include virtual="generated/mirror_source_config.html" -->
- <h4 class="anchor-heading"><a id="mirrormakercheckpointconfigs"
class="anchor-link"></a><a href="#mirrormakercheckpointconfigs">3.8.3
MirrorMaker Checkpoint Configs</a></h4>
+ <h4 class="anchor-heading"><a id="mirrormakercheckpointconfigs"
class="anchor-link"></a><a href="#mirrormakercheckpointconfigs">3.9.3
MirrorMaker Checkpoint Configs</a></h4>
Below is the configuration of MirrorMaker 2 checkpoint connector for
emitting consumer offset checkpoints.
<!--#include virtual="generated/mirror_checkpoint_config.html" -->
- <h4 class="anchor-heading"><a id="mirrormakerheartbeatconfigs"
class="anchor-link"></a><a href="#mirrormakerheartbeatconfigs">3.8.4
MirrorMaker HeartBeat Configs</a></h4>
+ <h4 class="anchor-heading"><a id="mirrormakerheartbeatconfigs"
class="anchor-link"></a><a href="#mirrormakerheartbeatconfigs">3.9.4
MirrorMaker HeartBeat Configs</a></h4>
Below is the configuration of MirrorMaker 2 heartbeat connector for checking
connectivity between connectors and clusters.
<!--#include virtual="generated/mirror_heartbeat_config.html" -->
- <h3 class="anchor-heading"><a id="systemproperties"
class="anchor-link"></a><a href="#systemproperties">3.9 System
Properties</a></h3>
+ <h3 class="anchor-heading"><a id="systemproperties"
class="anchor-link"></a><a href="#systemproperties">3.10 System
Properties</a></h3>
Kafka supports some configuration that can be enabled through Java system
properties. System properties are usually set by passing the -D flag to the
Java virtual machine in which Kafka components are running.
Below are the supported system properties.
<ul class="config-list">
@@ -298,14 +303,14 @@
</li>
</ul>
- <h3 class="anchor-heading"><a id="tieredstorageconfigs"
class="anchor-link"></a><a href="#tieredstorageconfigs">3.10 Tiered Storage
Configs</a></h3>
- Below are the configuration properties for Tiered Storage.
+ <h3 class="anchor-heading"><a id="tieredstorageconfigs"
class="anchor-link"></a><a href="#tieredstorageconfigs">3.11 Tiered Storage
Configs</a></h3>
+ Below is the Tiered Storage configuration.
<!--#include virtual="generated/remote_log_manager_config.html" -->
<!--#include virtual="generated/remote_log_metadata_manager_config.html" -->
<h3 class="anchor-heading">
<a id="config_providers" class="anchor-link"></a>
- <a href="#config_providers">3.11 Configuration Providers</a>
+ <a href="#config_providers">3.12 Configuration Providers</a>
</h3>
<p>
diff --git a/docs/design.html b/docs/design.html
index 4c686e68f52..6d15ba11ec9 100644
--- a/docs/design.html
+++ b/docs/design.html
@@ -344,7 +344,42 @@
group will rebalance and fetch the last committed offset, which has the
effect of rewinding back to the state before the transaction aborted.
Alternatively, a more sophisticated application (such as the
transactional message copier) can choose not to use
<code>KafkaConsumer.committed</code> to retrieve the committed offset from
Kafka, and then <code>KafkaConsumer.seek</code> to rewind the current position.
- <h3 class="anchor-heading"><a id="replication" class="anchor-link"></a><a
href="#replication">4.8 Replication</a></h3>
+ <h3 class="anchor-heading"><a id="sharegroups" class="anchor-link"></a><a
href="#sharegroups">4.8 Share Groups</a></h3>
+ <p>
+ Share groups are available as a preview in Apache Kafka 4.1.
+ <p>
+ Share groups are a new type of group, existing alongside traditional
consumer groups. Share groups enable Kafka consumers to cooperatively consume
and process records from topics.
+ They offer an alternative to traditional consumer groups, particularly
when applications require finer-grained sharing of partitions and records.
+ <p>
+ The fundamental differences between a share group and a consumer group are:
+ <ul>
+ <li>The consumers within a share group cooperatively consume records,
and partitions may be assigned to multiple consumers.</li>
+ <li>The number of consumers in a share group can exceed the number of
partitions in a topic.</li>
+ <li>Records are acknowledged individually, though the system is
optimized for batch processing to improve efficiency.</li>
+ <li>Delivery attempts to consumers in a share group are counted, which
enables automated handling of unprocessable records.</li>
+ </ul>
+ <p>
+ All consumers in the same share group subscribed to the same topic will
cooperatively consume the records of that topic. If a topic is accessed by
consumers in multiple share groups, each share group
+ consumes from that topic independently of the others.
+ <p>
+ Each consumer can dynamically set its list of subscribed topics. In
practice, all consumers within a share group typically subscribe to the same
topic or topics.
+ <p>
+ When a consumer in a share group fetches records, it receives available
records from any of the topic-partitions matching its subscriptions. Records
are acquired for delivery to this consumer with a time-limited
+ acquisition lock. While a record is acquired, it is unavailable to other
consumers.
+ <p>By default, the lock duration is 30 seconds, but you can control it
using the group configuration parameter
<code>share.record.lock.duration.ms</code>. The lock is released automatically
once its
+ duration elapses, making the record available to another consumer. A
consumer holding the lock can handle the record in the following ways:
+ <ul>
+ <li>Acknowledge successful processing of the record.</li>
+ <li>Release the record, making it available for another delivery
attempt.</li>
+ <li>Reject the record, indicating it's unprocessable and preventing
further delivery attempts for that record.</li>
+ <li>Do nothing, in which case the lock is automatically released when
its duration expires.</li>
+ </ul>
+ <p>
+ The Kafka cluster limits the number of records that can be acquired by
consumers for each topic-partition within a share group. Once this limit is
reached, fetch operations temporarily yield no further records
+ until the number of acquired records decreases (as locks naturally time
out). This limit is controlled by the broker configuration property
<code>group.share.partition.max.record.locks</code>. By limiting
+ the duration of the acquisition lock and automatically releasing the
locks, the broker ensures delivery progresses even in the presence of consumer
failures.
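The acquisition-lock lifecycle described above can be sketched as a small state machine. This model is purely illustrative; the class and method names are hypothetical and do not reflect broker internals:

```java
import java.util.Optional;

// Illustrative model of one record's acquisition lock in a share group.
// All names here are hypothetical, not the broker's real implementation.
class AcquiredRecord {
    enum Outcome { ACKNOWLEDGED, RELEASED, REJECTED }

    private final long lockDurationMs;          // cf. share.record.lock.duration.ms
    private Optional<String> lockOwner = Optional.empty();
    private long lockDeadlineMs;
    private boolean rejected = false;           // rejected records are never redelivered

    AcquiredRecord(long lockDurationMs) {
        this.lockDurationMs = lockDurationMs;
    }

    // A consumer acquires the record unless another consumer's lock is still
    // live or the record has been rejected as unprocessable.
    boolean tryAcquire(String consumerId, long nowMs) {
        if (rejected) return false;
        if (lockOwner.isPresent() && nowMs < lockDeadlineMs) return false;
        lockOwner = Optional.of(consumerId);
        lockDeadlineMs = nowMs + lockDurationMs;
        return true;
    }

    // Acknowledging, releasing, or rejecting clears the lock explicitly;
    // doing nothing simply lets the deadline pass, after which tryAcquire
    // succeeds for another consumer.
    void settle(Outcome outcome) {
        if (outcome == Outcome.REJECTED) rejected = true;
        lockOwner = Optional.empty();
    }
}
```

The timeout path in `tryAcquire` is what guarantees progress: even if a consumer crashes while holding locks, its records become available again once the deadlines pass.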
+
+ <h3 class="anchor-heading"><a id="replication" class="anchor-link"></a><a
href="#replication">4.9 Replication</a></h3>
<p>
Kafka replicates the log for each topic's partitions across a configurable
number of servers (you can set this replication factor on a topic-by-topic
basis). This allows automatic failover to these replicas when a
server in the cluster fails so messages remain available in the presence
of failures.
@@ -475,7 +510,7 @@
replicas have received the message. For example, if a topic is configured
with only two replicas and one fails (i.e., only one in sync replica remains),
then writes that specify acks=all will succeed. However, these
writes could be lost if the remaining replica also fails.
- Although this ensures maximum availability of the partition, this behavior
may be undesirable to some users who prefer durability over availability.
Therefore, we provide two topic-level configurations that can be
+ Although this ensures maximum availability of the partition, this behavior
may be undesirable to some users who prefer durability over availability.
Therefore, we provide two topic configurations that can be
used to prefer message durability over availability:
<ol>
<li> Disable unclean leader election - if all replicas become
unavailable, then the partition will remain unavailable until the most recent
leader becomes available again. This effectively prefers unavailability
@@ -499,7 +534,7 @@
The result is that we are able to batch together many of the required
leadership change notifications which makes the election process far cheaper
and faster for a large number
of partitions. If the controller itself fails, then another controller
will be elected.
- <h3 class="anchor-heading"><a id="compaction" class="anchor-link"></a><a
href="#compaction">4.9 Log Compaction</a></h3>
+ <h3 class="anchor-heading"><a id="compaction" class="anchor-link"></a><a
href="#compaction">4.10 Log Compaction</a></h3>
Log compaction ensures that Kafka will always retain at least the last
known value for each message key within the log of data for a single topic
partition. It addresses use cases and scenarios such as restoring
state after application crashes or system failure, or reloading caches
after application restarts during operational maintenance. Let's dive into
these use cases in more detail and then describe how compaction works.
@@ -627,7 +662,7 @@
<p>
Further cleaner configurations are described <a
href="/documentation.html#brokerconfigs">here</a>.
- <h3 class="anchor-heading"><a id="design_quotas"
class="anchor-link"></a><a href="#design_quotas">4.10 Quotas</a></h3>
+ <h3 class="anchor-heading"><a id="design_quotas"
class="anchor-link"></a><a href="#design_quotas">4.11 Quotas</a></h3>
<p>
Kafka cluster has the ability to enforce quotas on requests to control the
broker resources used by clients. Two types
of client quotas can be enforced by Kafka brokers for each group of
clients sharing a quota:
diff --git a/docs/ops.html b/docs/ops.html
index 6b317142159..247c7c00529 100644
--- a/docs/ops.html
+++ b/docs/ops.html
@@ -82,7 +82,7 @@
You can also set this to false, but you will then need to manually restore
leadership to the restored replicas by running the command:
<pre><code class="language-bash">$ bin/kafka-leader-election.sh
--bootstrap-server localhost:9092 --election-type preferred
--all-topic-partitions</code></pre>
- <h4 class="anchor-heading"><a id="basic_ops_racks"
class="anchor-link"></a><a href="#basic_ops_racks">Balancing Replicas Across
Racks</a></h4>
+ <h4 class="anchor-heading"><a id="basic_ops_racks"
class="anchor-link"></a><a href="#basic_ops_racks">Balancing replicas across
racks</a></h4>
The rack awareness feature spreads replicas of the same partition across
different racks. This extends the guarantees Kafka provides for broker-failure
to cover rack-failure, limiting the risk of data loss should all the brokers on
a rack fail at once. The feature can also be applied to other broker groupings
such as availability zones in EC2.
<p></p>
You can specify that a broker belongs to a particular rack by adding a
property to the broker config:
@@ -107,7 +107,18 @@ my-topic 0 2
4 2
my-topic 1 2 3 1
consumer-1-029af89c-873c-4751-a720-cefd41a669d6 /127.0.0.1
consumer-1
my-topic 2 2 3 1
consumer-2-42c1abd4-e3b2-425d-a8bb-e1ea49b29bb2 /127.0.0.1
consumer-2</code></pre>
- <h4 class="anchor-heading"><a id="basic_ops_consumer_group"
class="anchor-link"></a><a href="#basic_ops_consumer_group">Managing Consumer
Groups</a></h4>
+ <h4 class="anchor-heading"><a id="basic_ops_groups"
class="anchor-link"></a><a href="#basic_ops_groups">Managing groups</a></h4>
+
+ With the GroupCommand tool, we can list groups of all types, including
consumer groups, share groups, and streams groups. Each type of group has its
own tool for administering groups of that type.
+
+ For example, to list all groups in the cluster:
+
+ <pre><code class="language-bash">$ bin/kafka-groups.sh --bootstrap-server
localhost:9092 --list
+GROUP TYPE PROTOCOL
+my-consumer-group Consumer consumer
+my-share-group Share share</code></pre>
+
+ <h4 class="anchor-heading"><a id="basic_ops_consumer_group"
class="anchor-link"></a><a href="#basic_ops_consumer_group">Managing consumer
groups</a></h4>
With the ConsumerGroupCommand tool, we can list, describe, or delete the
consumer groups. The consumer group can be deleted manually, or automatically
when the last committed offset for that group expires. Manual deletion works
only if the group does not have any active members.
@@ -213,11 +224,58 @@ Deletion of requested consumer groups ('my-group',
'my-other-group') was success
<p>
For example, to reset offsets of a consumer group to the latest offset:
- <pre><code class="language-bash">$ bin/kafka-consumer-groups.sh
--bootstrap-server localhost:9092 --reset-offsets --group consumergroup1
--topic topic1 --to-latest
+ <pre><code class="language-bash">$ bin/kafka-consumer-groups.sh
--bootstrap-server localhost:9092 --reset-offsets --group my-group --topic
topic1 --to-latest
TOPIC PARTITION NEW-OFFSET
topic1 0 0</code></pre>
+ <h4 class="anchor-heading"><a id="basic_ops_share_group"
class="anchor-link"></a><a href="#basic_ops_share_group">Managing share
groups</a></h4>
+
+ NOTE: Apache Kafka 4.1 ships with a preview of share groups, which is not
enabled by default. To enable share groups, use the
<code>kafka-features.sh</code> tool to upgrade to <code>share.version=1</code>.
+ For more information, please read the <a
href="https://cwiki.apache.org/confluence/x/CIq3FQ">release notes</a>.
+
<p>
+ Use the ShareGroupCommand tool to list, describe, or delete the share
groups. Only share groups without any active members can be deleted.
+
+ For example, to list all share groups in a cluster:
+
+ <pre><code class="language-bash">$ bin/kafka-share-groups.sh
--bootstrap-server localhost:9092 --list
+my-share-group</code></pre>
+
+ To view the current start offset, use the "--describe" option:
+
+ <pre><code class="language-bash">$ bin/kafka-share-groups.sh
--bootstrap-server localhost:9092 --describe --group my-share-group
+GROUP TOPIC PARTITION START-OFFSET
+my-share-group topic1 0 4</code></pre>
+
+ NOTE: The admin client needs DESCRIBE access to all the topics used in the
group.
+
+ There are several "--describe" options that provide more detailed information
about a share group:
+ <ul>
+ <li>--members: Describes active members in the share group.
+ <pre><code class="language-bash">bin/kafka-share-groups.sh
--bootstrap-server localhost:9092 --describe --group my-share-group --members
+GROUP CONSUMER-ID HOST CLIENT-ID
#PARTITIONS ASSIGNMENT
+my-share-group 94wrSQNmRda9Q6sk6jMO6Q /127.0.0.1 console-share-consumer
1 topic1:0
+my-share-group EfI0sha8QSKSrL_-I_zaTA /127.0.0.1 console-share-consumer
1 topic1:0</code></pre>
+ You can see that both members have been assigned the same partition,
which they are sharing.
+ </li>
+ <li>--offsets: The default describe option. This provides the same output
as the "--describe" option.</li>
+ <li>--state: Describes a summary of the state of the share group.
+ <pre><code class="language-bash">bin/kafka-share-groups.sh
--bootstrap-server localhost:9092 --describe --group my-share-group --state
+GROUP COORDINATOR (ID) STATE #MEMBERS
+my-share-group localhost:9092 (1) Stable 2</code></pre>
+ </li>
+ </ul>
+
+ <p>To delete the offsets of individual topics in the share group, use the
"--delete-offsets" option:
+
+ <pre><code class="language-bash">$ bin/kafka-share-groups.sh
--bootstrap-server localhost:9092 --delete-offsets --group my-share-group
--topic topic1
+TOPIC STATUS
+topic1 Successful</code></pre>
+
+ <p>To delete one or more share groups, use the "--delete" option:
+
+ <pre><code class="language-bash">$ bin/kafka-share-groups.sh
--bootstrap-server localhost:9092 --delete --group my-share-group
+Deletion of requested share groups ('my-share-group') was
successful.</code></pre>
<h4 class="anchor-heading"><a id="basic_ops_cluster_expansion"
class="anchor-link"></a><a href="#basic_ops_cluster_expansion">Expanding your
cluster</a></h4>
@@ -352,7 +410,7 @@ Reassignment of partition [foo,0] is completed</code></pre>
Topic:foo PartitionCount:1 ReplicationFactor:3 Configs:
Topic: foo Partition: 0 Leader: 5 Replicas: 5,6,7 Isr:
5,6,7</code></pre>
- <h4 class="anchor-heading"><a id="rep-throttle" class="anchor-link"></a><a
href="#rep-throttle">Limiting Bandwidth Usage during Data Migration</a></h4>
+ <h4 class="anchor-heading"><a id="rep-throttle" class="anchor-link"></a><a
href="#rep-throttle">Limiting bandwidth usage during data migration</a></h4>
Kafka lets you apply a throttle to replication traffic, setting an upper
bound on the bandwidth used to move replicas from machine to machine and from
disk to disk. This is useful when rebalancing a cluster, adding or removing
brokers or adding or removing disks, as it limits the impact these
data-intensive operations will have on users.
<p></p>
There are two interfaces that can be used to engage a throttle. The
simplest, and safest, is to apply a throttle when invoking the
kafka-reassign-partitions.sh, but kafka-configs.sh can also be used to view and
alter the throttle values directly.
diff --git a/docs/toc.html b/docs/toc.html
index c42961cf7fb..ccc824c4962 100644
--- a/docs/toc.html
+++ b/docs/toc.html
@@ -36,29 +36,31 @@
<ul>
<li><a href="#producerapi">2.1 Producer API</a>
<li><a href="#consumerapi">2.2 Consumer API</a>
- <li><a href="#streamsapi">2.3 Streams API</a>
- <li><a href="#connectapi">2.4 Connect API</a>
- <li><a href="#adminapi">2.5 Admin API</a>
+ <li><a href="#shareconsumerapi">2.3 Share Consumer API
(Preview)</a>
+ <li><a href="#streamsapi">2.4 Streams API</a>
+ <li><a href="#connectapi">2.5 Connect API</a>
+ <li><a href="#adminapi">2.6 Admin API</a>
</ul>
<li><a href="#configuration">3. Configuration</a>
<ul>
<li><a href="#brokerconfigs">3.1 Broker Configs</a>
<li><a href="#topicconfigs">3.2 Topic Configs</a>
- <li><a href="#producerconfigs">3.3 Producer Configs</a>
- <li><a href="#consumerconfigs">3.4 Consumer Configs</a>
- <li><a href="#connectconfigs">3.5 Kafka Connect Configs</a>
+ <li><a href="#groupconfigs">3.3 Group Configs</a>
+ <li><a href="#producerconfigs">3.4 Producer Configs</a>
+ <li><a href="#consumerconfigs">3.5 Consumer Configs</a>
+ <li><a href="#connectconfigs">3.6 Kafka Connect Configs</a>
<ul>
<li><a href="#sourceconnectconfigs">Source Connector
Configs</a>
<li><a href="#sinkconnectconfigs">Sink Connector
Configs</a>
</ul>
- <li><a href="#streamsconfigs">3.6 Kafka Streams Configs</a>
- <li><a href="#adminclientconfigs">3.7 AdminClient Configs</a>
- <li><a href="#mirrormakerconfigs">3.8 MirrorMaker Configs</a>
- <li><a href="#systemproperties">3.9 System Properties</a>
- <li><a href="#tieredstorageconfigs">3.10 Tiered Storage
Configs</a>
- <li><a href="#config_providers">3.11 Configuration
Providers</a>
+ <li><a href="#streamsconfigs">3.7 Kafka Streams Configs</a>
+ <li><a href="#adminclientconfigs">3.8 AdminClient Configs</a>
+ <li><a href="#mirrormakerconfigs">3.9 MirrorMaker Configs</a>
+ <li><a href="#systemproperties">3.10 System Properties</a>
+ <li><a href="#tieredstorageconfigs">3.11 Tiered Storage
Configs</a>
+ <li><a href="#config_providers">3.12 Configuration
Providers</a>
<ul>
<li><a href="#using_config_providers">Using
Configuration Providers</a>
<li><a
href="#directory_config_provider">DirectoryConfigProvider</a>
@@ -77,9 +79,10 @@
<li><a href="#theconsumer">4.5 The Consumer</a>
<li><a href="#semantics">4.6 Message Delivery Semantics</a>
<li><a href="#usingtransactions">4.7 Using Transactions</a>
- <li><a href="#replication">4.8 Replication</a>
- <li><a href="#compaction">4.9 Log Compaction</a>
- <li><a href="#design_quotas">4.10 Quotas</a>
+ <li><a href="#sharegroups">4.8 Share Groups</a>
+ <li><a href="#replication">4.9 Replication</a>
+ <li><a href="#compaction">4.10 Log Compaction</a>
+ <li><a href="#design_quotas">4.11 Quotas</a>
</ul>
<li><a href="#implementation">5. Implementation</a>
@@ -99,14 +102,16 @@
<li><a href="#basic_ops_modify_topic">Modifying
topics</a>
<li><a href="#basic_ops_restarting">Graceful
shutdown</a>
<li><a href="#basic_ops_leader_balancing">Balancing
leadership</a>
- <li><a href="#basic_ops_racks">Balancing Replicas
Across Racks</a>
+ <li><a href="#basic_ops_racks">Balancing replicas
across racks</a>
<li><a href="#basic_ops_mirror_maker">Mirroring data
between clusters</a>
<li><a href="#basic_ops_consumer_lag">Checking
consumer position</a>
- <li><a href="#basic_ops_consumer_group">Managing
Consumer Groups</a>
+ <li><a href="#basic_ops_groups">Managing groups</a>
+ <li><a href="#basic_ops_consumer_group">Managing
consumer groups</a>
+ <li><a href="#basic_ops_share_group">Managing share
groups</a>
<li><a href="#basic_ops_cluster_expansion">Expanding
your cluster</a>
<li><a
href="#basic_ops_decommissioning_brokers">Decommissioning brokers</a>
<li><a
href="#basic_ops_increase_replication_factor">Increasing replication factor</a>
- <li><a href="#rep-throttle">Limiting Bandwidth Usage
during Data Migration</a>
+ <li><a href="#rep-throttle">Limiting bandwidth usage
during data migration</a>
<li><a href="#quotas">Setting quotas</a>
</ul>
diff --git a/docs/upgrade.html b/docs/upgrade.html
index 5f3ec6d0c07..ad011aa2587 100644
--- a/docs/upgrade.html
+++ b/docs/upgrade.html
@@ -24,6 +24,13 @@
<h5><a id="upgrade_4_1_0" href="#upgrade_4_1_0">Upgrading Servers to 4.1.0
from any version 3.3.x through 4.0.x</a></h5>
<h6><a id="upgrade_410_notable" href="#upgrade_410_notable">Notable
changes in 4.1.0</a></h6>
<ul>
+ <li>
+ Apache Kafka 4.1 ships with a preview of Queues for Kafka (<a
href="https://cwiki.apache.org/confluence/x/4hA0Dw">KIP-932</a>). This feature
introduces a new kind of group, the share
group, as an alternative to consumer groups. Consumers
in a share group cooperatively consume records from topics, without assigning
each partition to just one consumer.
+ Share groups also introduce per-record acknowledgement and
counting of delivery attempts. Use share groups in cases where records are
processed one at a time, rather than as part
+ of an ordered stream. To enable share groups, use the
<code>kafka-features.sh</code> tool to upgrade to <code>share.version=1</code>.
+ For more information, please read the <a
href="https://cwiki.apache.org/confluence/x/CIq3FQ">release notes</a>.
+ </li>
<li><b>Common</b>
<ul>
<li>
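To make the upgrade note above concrete, enabling the preview with kafka-features.sh might look like the following (the bootstrap address is illustrative; check the tool's --help output on your build for the exact syntax):

```shell
# Inspect which feature versions the cluster currently supports and
# has finalized.
bin/kafka-features.sh --bootstrap-server localhost:9092 describe

# Enable the Queues for Kafka preview by raising the share.version
# feature level, as described in the upgrade note.
bin/kafka-features.sh --bootstrap-server localhost:9092 \
  upgrade --feature share.version=1
```

These commands require a running 4.1 cluster and are shown as a sketch of the procedure, not a transcript from this commit.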
diff --git
a/group-coordinator/src/main/java/org/apache/kafka/coordinator/group/GroupConfig.java
b/group-coordinator/src/main/java/org/apache/kafka/coordinator/group/GroupConfig.java
index c7569fdf47a..565d492507b 100644
---
a/group-coordinator/src/main/java/org/apache/kafka/coordinator/group/GroupConfig.java
+++
b/group-coordinator/src/main/java/org/apache/kafka/coordinator/group/GroupConfig.java
@@ -392,4 +392,8 @@ public final class GroupConfig extends AbstractConfig {
throw new IllegalArgumentException("Unknown Share isolation level:
" + shareIsolationLevel);
}
}
+
+ public static void main(String[] args) {
+ System.out.println(CONFIG.toHtml(4, config -> "groupconfigs_" +
config));
+ }
}
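The new main method above is what the genGroupConfigDocs Gradle task added in build.gradle invokes: it prints the group configuration table as HTML into the generated docs directory. A plausible way to regenerate that output locally (the task path follows from the build.gradle hunk, which defines the task in project(':core')):

```shell
# Run the doc-generation task added in this commit; it executes
# GroupConfig.main and captures stdout into the generated docs directory.
./gradlew :core:genGroupConfigDocs
```

This mirrors the existing genTopicConfigDocs-style tasks that feed the generated configuration sections of the site docs.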