[ https://issues.apache.org/jira/browse/KAFKA-15096?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17734119#comment-17734119 ]
Josep Prat commented on KAFKA-15096:
------------------------------------
I cherry-picked this to 3.3, 3.4 and 3.5

> CVE 2023-34455 - Vulnerability identified with Apache kafka
> -----------------------------------------------------------
>
>                 Key: KAFKA-15096
>                 URL: https://issues.apache.org/jira/browse/KAFKA-15096
>             Project: Kafka
>          Issue Type: Bug
>    Affects Versions: 3.3.0, 3.4.0, 3.3.1, 3.3.2, 3.5.0, 3.4.1
>            Reporter: Sasikumar Muthukrishnan Sampath
>            Assignee: Manyanda Chitimbo
>            Priority: Major
>             Fix For: 3.5.1
>
>
> A new vulnerability, CVE-2023-34455, has been identified in an Apache Kafka dependency. The vulnerability comes from snappy-java:1.1.8.4. Version 1.1.10.1 contains a patch for this issue; please upgrade the snappy-java version to fix it.
>
> snappy-java is a fast compressor/decompressor for Java. Due to the use of an unchecked chunk length, an unrecoverable fatal error can occur in versions prior to 1.1.10.1.
> The code in the function hasNextChunk in the file SnappyInputStream.java checks whether a given stream has more chunks to read. It does that by attempting to read 4 bytes. If it wasn't possible to read the 4 bytes, the function returns false. Otherwise, if 4 bytes were available, the code treats them as the length of the next chunk.
> In the case that the `compressed` variable is null, a byte array is allocated with the size given by the input data. Since the code doesn't validate the `chunkSize` variable, it is possible to pass a negative number (such as 0xFFFFFFFF, which is -1), which will cause the code to raise a `java.lang.NegativeArraySizeException` exception. A worse case would happen when passing a huge positive value (such as 0x7FFFFFFF), which would raise the fatal `java.lang.OutOfMemoryError` error.

--
This message was sent by Atlassian Jira
(v8.20.10#820010)
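The unchecked-chunk-length pattern described above can be sketched in a few lines of Java. This is a minimal, simplified illustration (class name, `MAX_CHUNK_SIZE` bound, and method names are hypothetical, not the actual snappy-java code): the 4 header bytes are interpreted as a big-endian int and used directly as an allocation size, versus a variant that validates the value first.

```java
import java.nio.ByteBuffer;

public class ChunkLengthDemo {
    // Hypothetical upper bound on a legal chunk; the real fix in snappy-java
    // similarly rejects sizes outside a sane range.
    static final int MAX_CHUNK_SIZE = 512 * 1024 * 1024;

    // Vulnerable pattern (simplified): trust the 4 header bytes as-is.
    static byte[] allocateUnchecked(byte[] header) {
        int chunkSize = ByteBuffer.wrap(header).getInt();
        // chunkSize = -1 -> NegativeArraySizeException;
        // chunkSize = 0x7FFFFFFF -> likely OutOfMemoryError.
        return new byte[chunkSize];
    }

    // Patched pattern: validate before allocating.
    static byte[] allocateChecked(byte[] header) {
        int chunkSize = ByteBuffer.wrap(header).getInt();
        if (chunkSize < 0 || chunkSize > MAX_CHUNK_SIZE) {
            throw new IllegalArgumentException("invalid chunk size: " + chunkSize);
        }
        return new byte[chunkSize];
    }

    public static void main(String[] args) {
        byte[] evil = {(byte) 0xFF, (byte) 0xFF, (byte) 0xFF, (byte) 0xFF}; // -1
        try {
            allocateUnchecked(evil);
        } catch (NegativeArraySizeException e) {
            System.out.println("unchecked: NegativeArraySizeException");
        }
        try {
            allocateChecked(evil);
        } catch (IllegalArgumentException e) {
            System.out.println("checked: " + e.getMessage());
        }
    }
}
```

The point of the check is that attacker-controlled length fields must be bounds-tested before they drive an allocation; this is what the 1.1.10.1 patch adds.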