[jira] [Commented] (CASSANDRA-16895) Support Java 17
[ https://issues.apache.org/jira/browse/CASSANDRA-16895?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17450255#comment-17450255 ] Gil Tene commented on CASSANDRA-16895:
--

[~benedict] I expect that multiple OpenJDK distros will end up doing this if it is deemed useful. That's what happens with OpenJFX currently, and C* is not the only example of needing this sort of Nashorn packaging. I agree that depending on a specific JDK distro is not a good course. If packaging Nashorn with C* is not workable, and its functionality is required, then instructing users on how they can get Nashorn to use with their C* would be needed/warranted, spelling out the various options (either get it themselves from e.g. Maven Central and install it in some prescribed location, or simply use a JDK package that has it pre-bundled).

> Support Java 17
> ---
>
> Key: CASSANDRA-16895
> URL: https://issues.apache.org/jira/browse/CASSANDRA-16895
> Project: Cassandra
> Issue Type: Task
> Components: Build
> Reporter: Ekaterina Dimitrova
> Assignee: Ekaterina Dimitrova
> Priority: Normal
>
> This ticket is intended to group all issues found to support Java 17 in the future.
> Upgrade steps:
> * [Dependencies|https://mvnrepository.com/artifact/org.apache.cassandra/cassandra-all/4.0.1] to be updated (not all, but at least those that require an update in order to work with Java 17)
> * More encapsulated JDK internal APIs. Some of the issues might be solved with the dependency updates
> * Currently trunk compiles if we remove the Nashorn dependency (ant script tag, used for the test environment; UDFs). The Oracle recommendation to use nashorn-core won't work for the project as it is under GPL 2.0. Most probably we will opt for graal-sdk, licensed under UPL
> * All tests to be cleaned
> * CI environment to be set up

--
This message was sent by Atlassian Jira (v8.20.1#820001)
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org
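For the "get it themselves from Maven Central" option discussed above, the coordinates are the ones linked elsewhere in this thread (org.openjdk.nashorn:nashorn-core; 15.3 was the version current at the time of these comments, so check Maven Central for newer releases). A user's build file would declare something like:

```xml
<!-- Standalone Nashorn engine for Java 11+ (coordinates from this thread;
     version shown is the one linked here and may be outdated) -->
<dependency>
  <groupId>org.openjdk.nashorn</groupId>
  <artifactId>nashorn-core</artifactId>
  <version>15.3</version>
</dependency>
```

The resulting jar (plus its ASM transitive dependencies) could then be dropped into whatever "prescribed location" the project ends up documenting.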
[jira] [Commented] (CASSANDRA-16895) Support Java 17
[ https://issues.apache.org/jira/browse/CASSANDRA-16895?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17449705#comment-17449705 ] Gil Tene commented on CASSANDRA-16895:
--

[~benedict]: If popular and free OpenJDK distribution packages were available pre-bundled with JDK+Nashorn, would that help keep your packaging simple while allowing people to continue to use Nashorn for e.g. UDFs? E.g. Zulu builds of OpenJDK already make pre-bundled JDK+FX packages freely available for similar reasons.
[jira] [Commented] (CASSANDRA-16895) Support Java 17
[ https://issues.apache.org/jira/browse/CASSANDRA-16895?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17449652#comment-17449652 ] Gil Tene commented on CASSANDRA-16895:
--

It's worth noting that while Nashorn is no longer packaged with the JDK, a standalone Nashorn for Java 11+ is very much available. See the [Nashorn Engine|https://github.com/openjdk/nashorn] and e.g. [coordinates on maven central|https://search.maven.org/artifact/org.openjdk.nashorn/nashorn-core/15.3/jar].
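One reason the standalone artifact is a drop-in for most users: it registers through the standard JSR-223 mechanism, so the same lookup works whether Nashorn ships with the JDK (8-14) or comes from the nashorn-core jar on the classpath. The class below is an illustrative sketch, not Cassandra code; the toy script stands in for the kind of thing a script UDF does.

```java
import javax.script.ScriptEngine;
import javax.script.ScriptEngineManager;

// Illustrative sketch: resolving Nashorn via JSR-223. The lookup is identical
// for the JDK-bundled engine (Java 8-14) and the standalone
// org.openjdk.nashorn:nashorn-core artifact (Java 11+).
public class NashornLookup {
    public static void main(String[] args) throws Exception {
        ScriptEngine js = new ScriptEngineManager().getEngineByName("nashorn");
        if (js == null) {
            // Neither the built-in nor the standalone engine is present.
            System.out.println("nashorn: absent (add org.openjdk.nashorn:nashorn-core to the classpath)");
            return;
        }
        // A toy UDF-style evaluation; name and script are hypothetical.
        Object result = js.eval("function double_it(x) { return 2 * x; } double_it(21)");
        System.out.println("nashorn: present, result = " + result);
    }
}
```

On a JDK 15+ runtime without the jar this prints the "absent" diagnostic rather than failing, which is roughly the user experience the packaging discussion above is trying to smooth over.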
[jira] [Commented] (CASSANDRA-14284) Chunk checksum test needs to occur before uncompress to avoid JVM crash
[ https://issues.apache.org/jira/browse/CASSANDRA-14284?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16417573#comment-16417573 ] Gil Tene commented on CASSANDRA-14284:
--

The patch for 2.1 has an issue, I think: 2.1 (unlike the later versions) seems to support checksumming of either the compressed or the uncompressed data (depending on what metadata.hasPostCompressionAdlerChecksums indicates). Only the checksum test of the compressed data can be moved to before the uncompress; the checksum in the uncompressed case has to remain after the uncompress.

> Chunk checksum test needs to occur before uncompress to avoid JVM crash
> ---
>
> Key: CASSANDRA-14284
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14284
> Project: Cassandra
> Issue Type: Bug
> Components: Core
> Environment: The check-only-after-doing-the-decompress logic appears to be in all current releases.
> Here are some samples at different evolution points:
> 3.11.2:
> https://github.com/apache/cassandra/blob/cassandra-3.11.2/src/java/org/apache/cassandra/io/util/CompressedChunkReader.java#L146
> https://github.com/apache/cassandra/blob/cassandra-3.11.2/src/java/org/apache/cassandra/io/util/CompressedChunkReader.java#L207
> 3.5:
> https://github.com/apache/cassandra/blob/cassandra-3.5/src/java/org/apache/cassandra/io/compress/CompressedRandomAccessReader.java#L135
> https://github.com/apache/cassandra/blob/cassandra-3.5/src/java/org/apache/cassandra/io/compress/CompressedRandomAccessReader.java#L196
> 2.1.17:
> https://github.com/apache/cassandra/blob/cassandra-2.1.17/src/java/org/apache/cassandra/io/compress/CompressedRandomAccessReader.java#L122
> Reporter: Gil Tene
> Assignee: Benjamin Lerer
> Priority: Major
>
> While checksums are (generally) performed on compressed data, the checksum test when reading is currently (in all variants of C* 2.x, 3.x I've looked at) done [on the compressed data] only after the uncompress operation has completed.
> The issue here is that LZ4_decompress_fast (as documented in e.g. https://github.com/lz4/lz4/blob/dev/lib/lz4.h#L214) can result in memory overruns when provided with malformed source data. This in turn can (and does, e.g. in CASSANDRA-13757) lead to JVM crashes during the uncompress of corrupted chunks. The checksum operation would obviously detect the issue, but we'd never get to it if the JVM crashes first.
> Moving the checksum test of the compressed data to before the uncompress operation (in cases where the checksum is done on compressed data) will resolve this issue.

--
This message was sent by Atlassian JIRA (v7.6.3#76005)
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org
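The 2.1 constraint described in the comment can be sketched in a few lines. This is illustrative only, not the actual Cassandra reader: the JDK's Inflater stands in for LZ4, Adler32 matches the "post-compression Adler" case, and the boolean parameter mirrors what metadata.hasPostCompressionAdlerChecksums would indicate.

```java
import java.util.Arrays;
import java.util.zip.Adler32;
import java.util.zip.DataFormatException;
import java.util.zip.Deflater;
import java.util.zip.Inflater;

// Sketch of the ordering fix (not Cassandra code): when the checksum covers
// the compressed bytes, verify it BEFORE decompressing so that corrupt input
// never reaches the decompressor; when it covers the uncompressed bytes, the
// check necessarily stays after the decompress.
public class ChunkReadSketch {

    static long adler32(byte[] data, int len) {
        Adler32 a = new Adler32();
        a.update(data, 0, len);
        return a.getValue();
    }

    static byte[] readChunk(byte[] compressed, int uncompressedLen,
                            long expectedChecksum,
                            boolean checksumIsOnCompressedData) throws DataFormatException {
        // Compressed-data checksum mode: this is the test that can move first.
        if (checksumIsOnCompressedData
                && adler32(compressed, compressed.length) != expectedChecksum)
            throw new DataFormatException("corrupt chunk (caught before uncompress)");

        Inflater inf = new Inflater();
        inf.setInput(compressed);
        byte[] out = new byte[uncompressedLen];
        int n = inf.inflate(out);
        inf.end();

        // Uncompressed-data checksum mode (the other 2.1 variant) has to run
        // after the uncompress, as the comment above notes.
        if (!checksumIsOnCompressedData
                && adler32(out, n) != expectedChecksum)
            throw new DataFormatException("corrupt chunk (caught after uncompress)");
        return out;
    }

    public static void main(String[] args) throws Exception {
        byte[] src = "chunk payload chunk payload".getBytes();
        Deflater d = new Deflater();
        d.setInput(src);
        d.finish();
        byte[] buf = new byte[256];
        byte[] comp = Arrays.copyOf(buf, d.deflate(buf));
        d.end();
        byte[] back = readChunk(comp, src.length, adler32(comp, comp.length), true);
        System.out.println("round trip ok: " + Arrays.equals(back, src));
    }
}
```

A pure-Java Inflater throws on bad input rather than crashing, so the ordering matters less here than with native LZ4; the sketch only shows where each check sits.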
[jira] [Updated] (CASSANDRA-14284) Chunk checksum test needs to occur before uncompress to avoid JVM crash
[ https://issues.apache.org/jira/browse/CASSANDRA-14284?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Gil Tene updated CASSANDRA-14284: - Description: While checksums are (generally) performed on compressed data, the checksum test when reading is currently (in all variants of C* 2.x, 3.x I've looked at) done [on the compressed data] only after the uncompress operation has completed. The issue here is that LZ4_decompress_fast (as documented in e.g. [https://github.com/lz4/lz4/blob/dev/lib/lz4.h#L214]) can result in memory overruns when provided with malformed source data. This in turn can (and does, e.g. in CASSANDRA-13757) lead to JVM crashes during the uncompress of corrupted chunks. The checksum operation would obviously detect the issue, but we'd never get to it if the JVM crashes first. Moving the checksum test of the compressed data to before the uncompress operation (in cases where the checksum is done on compressed data) will resolve this issue. - The check-only-after-doing-the-decompress logic appears to be in all current releases. 
Here are some samples at different evolution points: 3.11.2: [https://github.com/apache/cassandra/blob/cassandra-3.5/src/java/org/apache/cassandra/io/compress/CompressedRandomAccessReader.java#L135] [https://github.com/apache/cassandra/blob/cassandra-3.5/src/java/org/apache/cassandra/io/compress/CompressedRandomAccessReader.java#L198] 3.5: [https://github.com/apache/cassandra/blob/cassandra-3.5/src/java/org/apache/cassandra/io/compress/CompressedRandomAccessReader.java#L135] [https://github.com/apache/cassandra/blob/cassandra-3.5/src/java/org/apache/cassandra/io/compress/CompressedRandomAccessReader.java#L196] 2.1.17: [https://github.com/apache/cassandra/blob/cassandra-2.1.17/src/java/org/apache/cassandra/io/compress/CompressedRandomAccessReader.java#L122] was: While checksums are (generally) performed on compressed data, the checksum test when reading is currently (in all variants of C* 2.x, 3.x I've looked at) done [on the compressed data] only after the uncompress operation has completed. The issue here is that LZ4_decompress_fast (as documented in e.g. [https://github.com/lz4/lz4/blob/dev/lib/lz4.h#L214]) can result in memory overruns when provided with malformed source data. This in turn can (and does, e.g. in CASSANDRA-13757) lead to JVM crashes during the uncompress of corrupted chunks. The checksum operation would obviously detect the issue, but we'd never get to it if the JVM crashes first. Moving the checksum test of the compressed data to before the uncompress operation (in cases where the checksum is done on compressed data) will resolve this issue. > Chunk checksum test needs to occur before uncompress to avoid JVM crash > --- > > Key: CASSANDRA-14284 > URL: https://issues.apache.org/jira/browse/CASSANDRA-14284 > Project: Cassandra > Issue Type: Bug > Components: Core > Environment: The check-only-after-doing-the-decompress logic appears > to be in all current releases. 
> Here are some samples at different evolution points: > 3.11.2: > https://github.com/apache/cassandra/blob/cassandra-3.5/src/java/org/apache/cassandra/io/compress/CompressedRandomAccessReader.java#L135 > https://github.com/apache/cassandra/blob/cassandra-3.5/src/java/org/apache/cassandra/io/compress/CompressedRandomAccessReader.java#L198 > 3.5: > > [https://github.com/apache/cassandra/blob/cassandra-3.5/src/java/org/apache/cassandra/io/compress/CompressedRandomAccessReader.java#L135] > https://github.com/apache/cassandra/blob/cassandra-3.5/src/java/org/apache/cassandra/io/compress/CompressedRandomAccessReader.java#L196 > 2.1.17: > > [https://github.com/apache/cassandra/blob/cassandra-2.1.17/src/java/org/apache/cassandra/io/compress/CompressedRandomAccessReader.java#L122] > >Reporter: Gil Tene >Priority: Major > > While checksums are (generally) performed on compressed data, the checksum > test when reading is currently (in all variants of C* 2.x, 3.x I've looked > at) done [on the compressed data] only after the uncompress operation has > completed. > The issue here is that LZ4_decompress_fast (as documented in e.g. > [https://github.com/lz4/lz4/blob/dev/lib/lz4.h#L214]) can result in memory > overruns when provided with malformed source data. This in turn can (and > does, e.g. in CASSANDRA-13757) lead to JVM crashes during the uncompress of > corrupted chunks. The checksum operation would obviously detect the issue, > but we'd never get to it if the JVM crashes first. > Moving the checksum test of the compressed data to before the uncompress > operation (in cases where the checksum is done on compressed data) will > resolve this issue. > - > The check-only-after-doing-the-decompress logic appears to be in all current releases.
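The fix proposed in this ticket (validate the chunk checksum before handing the bytes to the decompressor) can be sketched as follows. This is a minimal illustration, not Cassandra's actual CompressedRandomAccessReader code: the class and method names are invented, java.util.zip's Inflater stands in for LZ4's native fast path, and CRC32 stands in for the configured chunk checksum.

```java
import java.io.IOException;
import java.util.zip.CRC32;
import java.util.zip.DataFormatException;
import java.util.zip.Deflater;
import java.util.zip.Inflater;

public class ChecksumBeforeDecompress {
    // Checksum computed over the first len bytes of the *compressed* payload.
    static long checksum(byte[] data, int len) {
        CRC32 crc = new CRC32();
        crc.update(data, 0, len);
        return crc.getValue();
    }

    // Validate the stored checksum BEFORE decompressing, so a corrupted
    // chunk is rejected up front instead of being fed to a decompressor
    // that may overrun memory on malformed input.
    static byte[] readChunk(byte[] compressed, int compressedLen,
                            long storedChecksum, int uncompressedLen)
            throws IOException, DataFormatException {
        if (checksum(compressed, compressedLen) != storedChecksum)
            throw new IOException("chunk checksum mismatch; refusing to decompress");
        Inflater inflater = new Inflater();
        inflater.setInput(compressed, 0, compressedLen);
        byte[] out = new byte[uncompressedLen];
        int n = inflater.inflate(out);
        inflater.end();
        if (n != uncompressedLen)
            throw new IOException("unexpected decompressed length: " + n);
        return out;
    }

    public static void main(String[] args) throws Exception {
        byte[] original = "the quick brown fox jumps over the lazy dog".getBytes("UTF-8");

        // Build a "chunk": compressed payload plus a checksum of the compressed bytes.
        Deflater deflater = new Deflater();
        deflater.setInput(original);
        deflater.finish();
        byte[] compressed = new byte[256];
        int clen = deflater.deflate(compressed);
        deflater.end();
        long crc = checksum(compressed, clen);

        // An intact chunk round-trips normally.
        byte[] back = readChunk(compressed, clen, crc, original.length);
        System.out.println(java.util.Arrays.equals(back, original)); // true

        // A corrupted chunk now fails the checksum test before any decompression runs.
        compressed[3] ^= 0x55;
        try {
            readChunk(compressed, clen, crc, original.length);
        } catch (IOException e) {
            System.out.println("rejected: " + e.getMessage());
        }
    }
}
```

The ordering is the whole point: when the checksum covers the compressed bytes, checking it first turns a would-be native crash (as in CASSANDRA-13757) into an ordinary, catchable read error.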
-- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For additional commands, e-mail: commits-h...@cassandra.apache.org