This is an automated email from the ASF dual-hosted git repository.

manikumar pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/kafka.git


The following commit(s) were added to refs/heads/trunk by this push:
     new fe1a8e9b2a0 MINOR: Fix typos in protocol.md (#21524)
fe1a8e9b2a0 is described below

commit fe1a8e9b2a0619cf736325b04e3468fbc4073b10
Author: Moshe Blumberg <[email protected]>
AuthorDate: Wed Feb 25 09:48:56 2026 +0000

    MINOR: Fix typos in protocol.md (#21524)
    
    Corrected typos in the protocol documentation regarding client behavior
    and API version handling.
    
    Reviewers: Manikumar Reddy <[email protected]>
---
 docs/design/protocol.md | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/docs/design/protocol.md b/docs/design/protocol.md
index 31fa16b2a66..87966feee10 100644
--- a/docs/design/protocol.md
+++ b/docs/design/protocol.md
@@ -52,7 +52,7 @@ Kafka is a partitioned system so not all servers have the complete data set. Ins
 
 All systems of this nature have the question of how a particular piece of data is assigned to a particular partition. Kafka clients directly control this assignment, the brokers themselves enforce no particular semantics of which messages should be published to a particular partition. Rather, to publish messages the client directly addresses messages to a particular partition, and when fetching messages, fetches from a particular partition. If two clients want to use the same partitionin [...]
 
-These requests to publish or fetch data must be sent to the broker that is currently acting as the leader for a given partition. This condition is enforced by the broker, so a request for a particular partition to the wrong broker will result in an the NotLeaderForPartition error code (described below).
+These requests to publish or fetch data must be sent to the broker that is currently acting as the leader for a given partition. This condition is enforced by the broker, so a request for a particular partition to the wrong broker will result in the NotLeaderForPartition error code (described below).
 
 How can the client find out which topics exist, what partitions they have, and which brokers currently host those partitions so that it can direct its requests to the right hosts? This information is dynamic, so you can't just configure each client with some static mapping file. Instead all Kafka brokers can answer a metadata request that describes the current state of the cluster: what topics there are, which partitions those topics have, which broker is the leader for those partitions, [...]
 
@@ -79,7 +79,7 @@ Partitioning really serves two purposes in Kafka:
 
 For a given use case you may care about only one of these or both.
 
-To accomplish simple load balancing a simple approach would be for the client to just round robin requests over all brokers. Another alternative, in an environment where there are many more producers than brokers, would be to have each client chose a single partition at random and publish to that. This later strategy will result in far fewer TCP connections.
+To accomplish simple load balancing a simple approach would be for the client to just round robin requests over all brokers. Another alternative, in an environment where there are many more producers than brokers, would be to have each client choose a single partition at random and publish to that. This latter strategy will result in far fewer TCP connections.
 
 Semantic partitioning means using some key in the message to assign messages to partitions. For example if you were processing a click message stream you might want to partition the stream by the user id so that all data for a particular user would go to a single consumer. To accomplish this the client can take a key associated with the message and use some hash of this key to choose the partition to which to deliver the message.
 
@@ -113,7 +113,7 @@ The following sequence may be used by a client to obtain supported API versions
  2. On receiving `ApiVersionsRequest`, a broker returns its full list of supported ApiKeys and versions regardless of current authentication state (e.g., before SASL authentication on an SASL listener, do note that no Kafka protocol requests may take place on an SSL listener before the SSL handshake is finished). If this is considered to leak information about the broker version a workaround is to use SSL with client authentication which is performed at an earlier stage of the connectio [...]
  3. If multiple versions of an API are supported by broker and client, clients are recommended to use the latest version supported by the broker and itself.
  4. Deprecation of a protocol version is done by marking an API version as deprecated in the protocol documentation.
-  5. Supported API versions obtained from a broker are only valid for the connection on which that information is obtained. In the event of disconnection, the client should obtain the information from the broker again, as the broker might have been upgraded/downgraded in the mean time.
+  5. Supported API versions obtained from a broker are only valid for the connection on which that information is obtained. In the event of disconnection, the client should obtain the information from the broker again, as the broker might have been upgraded/downgraded in the meantime.
 
 
 
