[
https://issues.apache.org/jira/browse/CASSANDRA-14773?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Vladimir Bukhtoyarov updated CASSANDRA-14773:
---------------------------------------------
Description:
In the scope of CASSANDRA-13444, compaction was significantly improved from the CPU
and memory perspective. However, this improvement introduced a bug in rounding.
When rounding an expiration time that is close to
*Cell.MAX_DELETION_TIME* (which is just *Integer.MAX_VALUE*), a math overflow
happens (because in the scope of CASSANDRA-13444 the data type for the point was changed
from Long to Integer in order to reduce the memory footprint). As a result the point
becomes negative and acts as a silent poison for internal structures of
StreamingTombstoneHistogramBuilder such as *DistanceHolder* and *DataHolder*. Then,
depending on the point intervals:
* The TombstoneHistogram produces wrong values when the interval between points is less
than binSize; this is not critical.
* Compaction crashes with ArrayIndexOutOfBoundsException when the number of point
intervals is greater than binSize; this case is very critical.
Pull request [https://github.com/apache/cassandra/pull/273]
reproduces the issue and provides the fix.
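For illustration, below is a minimal sketch of the overflow. This is not the actual
Cassandra code: the names *roundUpInt*, *roundUpSaturated* and *roundSeconds* and the
exact rounding formula are assumptions made for the example; the real logic lives in
StreamingTombstoneHistogramBuilder, and *Cell.MAX_DELETION_TIME* equals
*Integer.MAX_VALUE* as noted above.
{noformat}
public class RoundingOverflowDemo
{
    // Stand-in for Cell.MAX_DELETION_TIME, which is Integer.MAX_VALUE.
    static final int MAX_DELETION_TIME = Integer.MAX_VALUE;

    // Rounds a point up to the next multiple of roundSeconds in int arithmetic.
    // Near Integer.MAX_VALUE the multiplication overflows and the result wraps
    // around to a negative value.
    static int roundUpInt(int point, int roundSeconds)
    {
        return (point / roundSeconds + 1) * roundSeconds;
    }

    // One possible remedy: do the arithmetic in long and saturate at
    // MAX_DELETION_TIME so the rounded point can never become negative.
    static int roundUpSaturated(int point, int roundSeconds)
    {
        long rounded = ((long) point / roundSeconds + 1) * roundSeconds;
        return (int) Math.min(rounded, MAX_DELETION_TIME);
    }

    public static void main(String[] args)
    {
        int point = MAX_DELETION_TIME - 2; // expiration time close to Cell.MAX_DELETION_TIME
        int roundSeconds = 60;

        System.out.println(roundUpInt(point, roundSeconds));       // -2147483596 (overflow)
        System.out.println(roundUpSaturated(point, roundSeconds)); // 2147483647 (clamped)
    }
}
{noformat}
A point that wraps to a negative value sorts before every legitimate point, which is
consistent with the "point2 should follow point1" assertion and the
ArrayIndexOutOfBoundsException shown in the stack traces below.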
The stacktrace when running *testMathOverflowDuringRoundingOfLargeTimestamp*
(on the codebase without the fix) without the -ea JVM flag:
{noformat}
java.lang.ArrayIndexOutOfBoundsException
	at java.lang.System.arraycopy(Native Method)
	at org.apache.cassandra.utils.streamhist.StreamingTombstoneHistogramBuilder$DistanceHolder.add(StreamingTombstoneHistogramBuilder.java:208)
	at org.apache.cassandra.utils.streamhist.StreamingTombstoneHistogramBuilder.flushValue(StreamingTombstoneHistogramBuilder.java:140)
	at org.apache.cassandra.utils.streamhist.StreamingTombstoneHistogramBuilder$$Lambda$1/1967205423.consume(Unknown Source)
	at org.apache.cassandra.utils.streamhist.StreamingTombstoneHistogramBuilder$Spool.forEach(StreamingTombstoneHistogramBuilder.java:574)
	at org.apache.cassandra.utils.streamhist.StreamingTombstoneHistogramBuilder.flushHistogram(StreamingTombstoneHistogramBuilder.java:124)
	at org.apache.cassandra.utils.streamhist.StreamingTombstoneHistogramBuilder.build(StreamingTombstoneHistogramBuilder.java:184)
	at org.apache.cassandra.utils.streamhist.StreamingTombstoneHistogramBuilderTest.testMathOverflowDuringRoundingOfLargeTimestamp(StreamingTombstoneHistogramBuilderTest.java:183)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:497)
	at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:44)
	at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:15)
	at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:41)
	at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:20)
	at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:28)
	at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:31)
	at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70)
	at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:44)
	at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:180)
	at org.junit.runners.ParentRunner.access$000(ParentRunner.java:41)
	at org.junit.runners.ParentRunner$1.evaluate(ParentRunner.java:173)
	at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:28)
	at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:31)
	at org.junit.runners.ParentRunner.run(ParentRunner.java:220)
	at org.junit.runner.JUnitCore.run(JUnitCore.java:159)
	at com.intellij.junit4.JUnit4IdeaTestRunner.startRunnerWithArgs(JUnit4IdeaTestRunner.java:68)
	at com.intellij.rt.execution.junit.IdeaTestRunner$Repeater.startRunnerWithArgs(IdeaTestRunner.java:47)
	at com.intellij.rt.execution.junit.JUnitStarter.prepareStreamsAndStart(JUnitStarter.java:242)
	at com.intellij.rt.execution.junit.JUnitStarter.main(JUnitStarter.java:70)
{noformat}
The stacktrace when running *testMathOverflowDuringRoundingOfLargeTimestamp*
(on the codebase without the fix) with the -ea JVM flag enabled:
{noformat}
java.lang.AssertionError: point2 should follow point1
	at org.apache.cassandra.utils.streamhist.StreamingTombstoneHistogramBuilder$DataHolder.merge(StreamingTombstoneHistogramBuilder.java:324)
	at org.apache.cassandra.utils.streamhist.StreamingTombstoneHistogramBuilder.mergeBin(StreamingTombstoneHistogramBuilder.java:158)
	at org.apache.cassandra.utils.streamhist.StreamingTombstoneHistogramBuilder.flushValue(StreamingTombstoneHistogramBuilder.java:145)
	at org.apache.cassandra.utils.streamhist.StreamingTombstoneHistogramBuilder$$Lambda$1/1967205423.consume(Unknown Source)
	at org.apache.cassandra.utils.streamhist.StreamingTombstoneHistogramBuilder$Spool.forEach(StreamingTombstoneHistogramBuilder.java:574)
	at org.apache.cassandra.utils.streamhist.StreamingTombstoneHistogramBuilder.flushHistogram(StreamingTombstoneHistogramBuilder.java:124)
	at org.apache.cassandra.utils.streamhist.StreamingTombstoneHistogramBuilder.build(StreamingTombstoneHistogramBuilder.java:184)
	at org.apache.cassandra.utils.streamhist.StreamingTombstoneHistogramBuilderTest.testMathOverflowDuringRoundingOfLargeTimestamp(StreamingTombstoneHistogramBuilderTest.java:183)
{noformat}
was:
In the scope of CASSANDRA-13444, compaction was significantly improved from the CPU
and memory perspective. However, this improvement introduced a bug in rounding.
When rounding an expiration time that is close to
*Cell.MAX_DELETION_TIME* (which is just *Integer.MAX_VALUE*), a math overflow
happens. As a result the point becomes negative and acts as a silent poison for internal
structures of StreamingTombstoneHistogramBuilder such as *DistanceHolder* and
*DataHolder*. Then, depending on the point intervals:
* The TombstoneHistogram produces wrong values when the interval between points is less
than binSize; this is not critical.
* Compaction crashes with ArrayIndexOutOfBoundsException when the number of point
intervals is greater than binSize; this case is very critical.
Pull request [https://github.com/apache/cassandra/pull/273]
reproduces the issue and provides the fix.
> Overflow of 32-bit integer during compaction.
> ---------------------------------------------
>
> Key: CASSANDRA-14773
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14773
> Project: Cassandra
> Issue Type: Bug
> Components: Compaction
> Reporter: Vladimir Bukhtoyarov
> Priority: Critical
> Fix For: 4.x
>