kamalcph commented on code in PR #19336:
URL: https://github.com/apache/kafka/pull/19336#discussion_r2022646920


##########
docs/ops.html:
##########
@@ -4169,6 +4170,7 @@ <h4 class="anchor-heading"><a id="tiered_storage_limitation" class="anchor-link"
    <li>Deleting tiered storage enabled topics is required before disabling tiered storage at the broker level</li>

Review Comment:
   Please also reword this sentence:
   
   Deleting tiered storage enabled topics ... -> Disabling tiered storage enabled topics ...
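   
   For context, disabling tiering at the topic level would look something like the sketch below using the Admin API. This is only an illustrative sketch, assuming a release where per-topic disablement (KIP-950) is available; the topic name and bootstrap address are placeholders.
   
   ```
   import java.util.List;
   import java.util.Map;
   import java.util.Properties;
   
   import org.apache.kafka.clients.admin.Admin;
   import org.apache.kafka.clients.admin.AdminClientConfig;
   import org.apache.kafka.clients.admin.AlterConfigOp;
   import org.apache.kafka.clients.admin.ConfigEntry;
   import org.apache.kafka.common.config.ConfigResource;
   
   public class DisableTieredTopic {
       public static void main(String[] args) throws Exception {
           Properties props = new Properties();
           props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder
           try (Admin admin = Admin.create(props)) {
               ConfigResource topic = new ConfigResource(ConfigResource.Type.TOPIC, "my-tiered-topic"); // placeholder
               // Turn tiering off for this topic before disabling it broker-wide.
               AlterConfigOp disable = new AlterConfigOp(
                   new ConfigEntry("remote.storage.enable", "false"), AlterConfigOp.OpType.SET);
               admin.incrementalAlterConfigs(Map.of(topic, List.of(disable))).all().get();
           }
       }
   }
   ```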



##########
clients/src/main/java/org/apache/kafka/clients/consumer/ConsumerConfig.java:
##########
@@ -195,7 +195,10 @@ public class ConsumerConfig extends AbstractConfig {
             "Records are fetched in batches by the consumer, and if the first record batch in the first non-empty partition of the fetch is larger than " +
             "this value, the record batch will still be returned to ensure that the consumer can make progress. As such, this is not an absolute maximum. " +
             "The maximum record batch size accepted by the broker is defined via <code>message.max.bytes</code> (broker config) or " +
-            "<code>max.message.bytes</code> (topic config). Note that the consumer performs multiple fetches in parallel.";
+            "<code>max.message.bytes</code> (topic config). A fetch request consists of many partitions, and there is another setting that controls how much " +
+            "data is returned for each partition in a fetch request - see <code>max.partition.fetch.bytes</code>. Note that there is a current limitation when " +
+            "performing remote reads from tiered storage (KIP-405) - only one partition out of the fetch request is fetched from the remote store (KAFKA-14915). " +
+            "Note also that the consumer performs multiple fetches in parallel.";

Review Comment:
   > Note also that the consumer performs multiple fetches in parallel.
   
   This statement is unclear to me. Does this refer to: 
   
   1. The consumer reading from multiple brokers as the partitions get spread out?
   2. Multiple consumers with different client-ids reading from a broker?
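   
   If it is (1), the practical implication is memory: with one in-flight fetch per broker, the buffered data can be a multiple of `fetch.max.bytes`. A rough back-of-the-envelope sketch under that interpretation (the broker count here is made up for illustration):
   
   ```
   // Back-of-the-envelope estimate under interpretation (1): one in-flight
   // fetch per broker that leads any of this consumer's assigned partitions.
   public class FetchMemorySketch {
       public static void main(String[] args) {
           long fetchMaxBytes = 50L * 1024 * 1024; // fetch.max.bytes default (50 MB)
           int brokersFetchedFrom = 6;             // hypothetical assignment spread
           long worstCaseBuffered = fetchMaxBytes * brokersFetchedFrom;
           // 6 parallel fetches of up to 50 MB each -> up to 300 MB buffered
           System.out.printf("worst-case buffered fetch data: %d MB%n",
                   worstCaseBuffered / (1024 * 1024));
       }
   }
   ```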
   



##########
clients/src/main/java/org/apache/kafka/clients/consumer/ConsumerConfig.java:
##########
@@ -220,7 +223,9 @@ public class ConsumerConfig extends AbstractConfig {
             "partition of the fetch is larger than this limit, the " +
             "batch will still be returned to ensure that the consumer can make progress. The maximum record batch size " +
             "accepted by the broker is defined via <code>message.max.bytes</code> (broker config) or " +
-            "<code>max.message.bytes</code> (topic config). See " + FETCH_MAX_BYTES_CONFIG + " for limiting the consumer request size.";
+            "<code>max.message.bytes</code> (topic config). See " + FETCH_MAX_BYTES_CONFIG + " for limiting the consumer request size. " +
+            "Consider increasing this limit especially in the cases of remote storage reads (KIP-405), because currently only " +

Review Comment:
   > Consider increasing this limit
   
   Could this be changed to the following?
   ```
   Consider increasing the <code>max.partition.fetch.bytes</code> limit ...
   ```
   
   If the user increases `fetch.max.bytes` beyond the 50 MB default, it might cause GC pressure due to holding a large contiguous byte array.
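   
   To make that concrete, a minimal sketch of the suggested tuning, leaving `fetch.max.bytes` at its default and raising only the per-partition limit (the 20 MB value and the other settings here are illustrative placeholders, not recommendations from this PR):
   
   ```
   import java.util.Properties;
   
   import org.apache.kafka.clients.consumer.ConsumerConfig;
   import org.apache.kafka.clients.consumer.KafkaConsumer;
   import org.apache.kafka.common.serialization.ByteArrayDeserializer;
   
   public class TieredReadConsumer {
       public static void main(String[] args) {
           Properties props = new Properties();
           props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder
           props.put(ConsumerConfig.GROUP_ID_CONFIG, "tiered-read-group");       // placeholder
           props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, ByteArrayDeserializer.class.getName());
           props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, ByteArrayDeserializer.class.getName());
           // Raise only the per-partition limit; fetch.max.bytes keeps its
           // 52428800 (50 MB) default, avoiding one huge contiguous buffer.
           props.put(ConsumerConfig.MAX_PARTITION_FETCH_BYTES_CONFIG, "20971520"); // 20 MB
           try (KafkaConsumer<byte[], byte[]> consumer = new KafkaConsumer<>(props)) {
               // subscribe/poll as usual ...
           }
       }
   }
   ```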


