[ https://issues.apache.org/jira/browse/KAFKA-15653?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17780569#comment-17780569 ]

ASF GitHub Bot commented on KAFKA-15653:
----------------------------------------

ijuma commented on code in PR #564:
URL: https://github.com/apache/kafka-site/pull/564#discussion_r1375131837


##########
36/upgrade.html:
##########
@@ -116,6 +116,12 @@ <h5><a id="upgrade_360_notable" href="#upgrade_360_notable">Notable changes in 3
             For more information about the early access tiered storage feature, please check <a href="https://cwiki.apache.org/confluence/display/KAFKA/KIP-405%3A+Kafka+Tiered+Storage">KIP-405</a> and
             <a href="https://cwiki.apache.org/confluence/display/KAFKA/Kafka+Tiered+Storage+Early+Access+Release+Notes">Tiered Storage Early Access Release Note</a>.
         </li>
+        <li>Transaction partition verification (<a href="https://cwiki.apache.org/confluence/display/KAFKA/KIP-890%3A+Transactions+Server-Side+Defense">KIP-890</a>)
+            has been added to data partitions to prevent hanging transactions. Workloads with compression can experience InvalidRecordExceptions and UnknownServerExceptions.
+            For workloads with compression, <code>transaction.partition.verification.enable</code> should be set to false. Note that the default for 3.6 is true.

Review Comment:
   I'd just say "This feature can be disabled by setting..." (instead of "For workloads with compression...").
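
For reference, the static form of this workaround is a broker configuration entry. A minimal sketch (not part of the PR), assuming it is added to each broker's server.properties and the brokers are restarted:

{noformat}
# Workaround for KAFKA-15653 on 3.6.0: disable KIP-890 transaction partition verification.
# The default in 3.6 is true.
transaction.partition.verification.enable=false
{noformat}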



##########
36/upgrade.html:
##########
@@ -116,6 +116,12 @@ <h5><a id="upgrade_360_notable" href="#upgrade_360_notable">Notable changes in 3
             For more information about the early access tiered storage feature, please check <a href="https://cwiki.apache.org/confluence/display/KAFKA/KIP-405%3A+Kafka+Tiered+Storage">KIP-405</a> and
             <a href="https://cwiki.apache.org/confluence/display/KAFKA/Kafka+Tiered+Storage+Early+Access+Release+Notes">Tiered Storage Early Access Release Note</a>.
         </li>
+        <li>Transaction partition verification (<a href="https://cwiki.apache.org/confluence/display/KAFKA/KIP-890%3A+Transactions+Server-Side+Defense">KIP-890</a>)
+            has been added to data partitions to prevent hanging transactions. Workloads with compression can experience InvalidRecordExceptions and UnknownServerExceptions.
+            For workloads with compression, <code>transaction.partition.verification.enable</code> should be set to false. Note that the default for 3.6 is true.
+            The configuration can also be updated dynamically and is applied to the broker.
+            This will be fixed in a future release. See <a href="https://issues.apache.org/jira/browse/KAFKA-15653">KAFKA-15653</a> for more details.

Review Comment:
   Can we say it will be fixed in 3.6.1?
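
For reference, the dynamic update mentioned in the hunk above can be applied without a broker restart, either with the kafka-configs.sh tool or programmatically through the Admin client. A minimal Java sketch (not part of the PR), assuming a cluster reachable at localhost:9092 and targeting the cluster-wide default broker config:

{noformat}
import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.AlterConfigOp;
import org.apache.kafka.clients.admin.ConfigEntry;
import org.apache.kafka.common.config.ConfigResource;

import java.util.Collection;
import java.util.Collections;
import java.util.Map;
import java.util.Properties;

public class DisablePartitionVerification {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        // Assumption: adjust the bootstrap servers for your cluster.
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        try (Admin admin = Admin.create(props)) {
            // An empty resource name addresses the cluster-wide default dynamic broker config.
            ConfigResource brokerDefaults = new ConfigResource(ConfigResource.Type.BROKER, "");
            AlterConfigOp disableVerification = new AlterConfigOp(
                    new ConfigEntry("transaction.partition.verification.enable", "false"),
                    AlterConfigOp.OpType.SET);
            Map<ConfigResource, Collection<AlterConfigOp>> update =
                    Collections.singletonMap(brokerDefaults, Collections.singletonList(disableVerification));
            admin.incrementalAlterConfigs(update).all().get();
        }
    }
}
{noformat}

The equivalent kafka-configs.sh invocation uses --alter with --entity-type brokers and --entity-default.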





> NPE in ChunkedBytesStream
> -------------------------
>
>                 Key: KAFKA-15653
>                 URL: https://issues.apache.org/jira/browse/KAFKA-15653
>             Project: Kafka
>          Issue Type: Bug
>          Components: producer 
>    Affects Versions: 3.6.0
>         Environment: Docker container on a Linux laptop, using the latest 
> release.
>            Reporter: Travis Bischel
>            Assignee: Justine Olshan
>            Priority: Major
>         Attachments: repro.sh
>
>
> When looping franz-go integration tests, I received an UNKNOWN_SERVER_ERROR 
> from producing. The broker logs for the failing request:
>  
> {noformat}
> [2023-10-19 22:29:58,160] ERROR [ReplicaManager broker=2] Error processing append operation on partition 2fa8995d8002fbfe68a96d783f26aa2c5efc15368bf44ed8f2ab7e24b41b9879-24 (kafka.server.ReplicaManager)
> java.lang.NullPointerException
>       at org.apache.kafka.common.utils.ChunkedBytesStream.<init>(ChunkedBytesStream.java:89)
>       at org.apache.kafka.common.record.CompressionType$3.wrapForInput(CompressionType.java:105)
>       at org.apache.kafka.common.record.DefaultRecordBatch.recordInputStream(DefaultRecordBatch.java:273)
>       at org.apache.kafka.common.record.DefaultRecordBatch.compressedIterator(DefaultRecordBatch.java:277)
>       at org.apache.kafka.common.record.DefaultRecordBatch.skipKeyValueIterator(DefaultRecordBatch.java:352)
>       at org.apache.kafka.storage.internals.log.LogValidator.validateMessagesAndAssignOffsetsCompressed(LogValidator.java:358)
>       at org.apache.kafka.storage.internals.log.LogValidator.validateMessagesAndAssignOffsets(LogValidator.java:165)
>       at kafka.log.UnifiedLog.append(UnifiedLog.scala:805)
>       at kafka.log.UnifiedLog.appendAsLeader(UnifiedLog.scala:719)
>       at kafka.cluster.Partition.$anonfun$appendRecordsToLeader$1(Partition.scala:1313)
>       at kafka.cluster.Partition.appendRecordsToLeader(Partition.scala:1301)
>       at kafka.server.ReplicaManager.$anonfun$appendToLocalLog$6(ReplicaManager.scala:1210)
>       at scala.collection.StrictOptimizedMapOps.map(StrictOptimizedMapOps.scala:28)
>       at scala.collection.StrictOptimizedMapOps.map$(StrictOptimizedMapOps.scala:27)
>       at scala.collection.mutable.HashMap.map(HashMap.scala:35)
>       at kafka.server.ReplicaManager.appendToLocalLog(ReplicaManager.scala:1198)
>       at kafka.server.ReplicaManager.appendEntries$1(ReplicaManager.scala:754)
>       at kafka.server.ReplicaManager.$anonfun$appendRecords$18(ReplicaManager.scala:874)
>       at kafka.server.ReplicaManager.$anonfun$appendRecords$18$adapted(ReplicaManager.scala:874)
>       at kafka.server.KafkaRequestHandler$.$anonfun$wrap$3(KafkaRequestHandler.scala:73)
>       at kafka.server.KafkaRequestHandler.run(KafkaRequestHandler.scala:130)
>       at java.base/java.lang.Thread.run(Unknown Source)
> {noformat}
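
For context, the upgrade note quoted above calls out workloads that combine transactions and compression. A minimal Java-client sketch of that kind of workload (illustrative only; the reporter's client is franz-go, and the topic name, bootstrap address, and compression codec here are assumptions):

{noformat}
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

import java.util.Properties;

public class CompressedTransactionalProducer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        // Compressed batches plus transactions: the combination called out in the upgrade note above.
        props.put(ProducerConfig.COMPRESSION_TYPE_CONFIG, "gzip");
        props.put(ProducerConfig.TRANSACTIONAL_ID_CONFIG, "repro-producer");
        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            producer.initTransactions();
            producer.beginTransaction();
            producer.send(new ProducerRecord<>("events", "key", "value"));
            producer.commitTransaction();
        }
    }
}
{noformat}

Per the review thread above, affected clusters can set transaction.partition.verification.enable to false until the fix lands.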



