[ https://issues.apache.org/jira/browse/CASSANDRA-20570?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=18013826#comment-18013826 ]
Nikhil edited comment on CASSANDRA-20570 at 8/14/25 4:47 AM:
-------------------------------------------------------------

Or can we use `LeveledCompactionStrategy.validateOptions` instead of `LeveledManifest.maxBytesForLevel` [here_1|https://github.com/apache/cassandra/blob/trunk/test/unit/org/apache/cassandra/utils/CassandraGenerators.java#L587]?

was (Author: JIRAUSER306929):
Or can use `LeveledCompactionStrategy.validateOptions` instead of `LeveledManifest.maxBytesForLevel` [here_1|https://github.com/apache/cassandra/blob/trunk/test/unit/org/apache/cassandra/utils/CassandraGenerators.java#L587]
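For illustration only, a minimal sketch of what that generator-side filtering could look like, assuming the usual static `validateOptions(Map<String, String>)` entry point on the strategy class and the standard `sstable_size_in_mb` / `fanout_size` option names; the helper class and its wiring are hypothetical, and this only rejects bad combinations once the overflow check proposed by this ticket actually lives in `validateOptions`:

{code}
import java.util.HashMap;
import java.util.Map;

import org.apache.cassandra.db.compaction.LeveledCompactionStrategy;
import org.apache.cassandra.exceptions.ConfigurationException;

// Hypothetical generator-side helper: let the strategy's own validation decide whether a
// generated (sstable_size_in_mb, fanout_size) pair is acceptable, instead of re-deriving
// the LeveledManifest.maxBytesForLevel math inside CassandraGenerators.
final class LcsOptionFilter
{
    static boolean isAcceptable(long sstableSizeInMiB, int fanoutSize)
    {
        Map<String, String> options = new HashMap<>();
        options.put("sstable_size_in_mb", Long.toString(sstableSizeInMiB));
        options.put("fanout_size", Integer.toString(fanoutSize));
        try
        {
            LeveledCompactionStrategy.validateOptions(options);
            return true;
        }
        catch (ConfigurationException e)
        {
            return false; // ask the generator to retry with different values
        }
    }
}
{code}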
> Leveled Compaction doesn't validate maxBytesForLevel when the table is altered/created
> ---------------------------------------------------------------------------------------
>
>                 Key: CASSANDRA-20570
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-20570
>             Project: Apache Cassandra
>          Issue Type: Improvement
>          Components: Local/Compaction
>            Reporter: David Capwell
>            Assignee: Nikhil
>            Priority: Normal
>         Attachments: ci_summary-cassandra-4.0-67a822379d00898af6071f444e43113e7c4a8f2a.html,
>                      ci_summary-cassandra-4.1-3c53118393f75ae3808ba553284cbd6c2a28fddf.html,
>                      ci_summary-cassandra-5.0-7515ff6ca5ec8ecd064e7e61f8eac3297288f8f7.html,
>                      ci_summary-trunk-9dadf7c0999632305578d1ca710f54b690fa4792.html,
>                      result_details-cassandra-4.0-67a822379d00898af6071f444e43113e7c4a8f2a.tar.gz,
>                      result_details-cassandra-5.0-7515ff6ca5ec8ecd064e7e61f8eac3297288f8f7.tar.gz,
>                      result_details-trunk-9dadf7c0999632305578d1ca710f54b690fa4792.tar.gz
>
>          Time Spent: 4h 10m
>  Remaining Estimate: 0h
>
> In fuzz testing I hit this fun error message
> {code}
> java.lang.RuntimeException: Repair job has failed with the error message: Repair command #1 failed with error At most 9223372036854775807 bytes may be in a compaction level; your maxSSTableSize must be absurdly high to compute 4.915501902751334E24. Check the logs on the repair participants for further details
>     at org.apache.cassandra.tools.RepairRunner.progress(RepairRunner.java:187)
> {code}
> I was able to create a table, write data to it, and it only ever had issues once I did a repair…
> The error comes from
> {code}
> INFO [node2_Repair-Task:1] 2025-04-18 12:08:14,920 SubstituteLogger.java:169 - ERROR 19:08:14 Repair 138143d4-1dd2-11b2-ba62-f56ce069092d failed:
> java.lang.RuntimeException: At most 9223372036854775807 bytes may be in a compaction level; your maxSSTableSize must be absurdly high to compute 4.915501902751334E24
>     at org.apache.cassandra.db.compaction.LeveledManifest.maxBytesForLevel(LeveledManifest.java:191)
>     at org.apache.cassandra.db.compaction.LeveledManifest.maxBytesForLevel(LeveledManifest.java:182)
>     at org.apache.cassandra.db.compaction.LeveledManifest.getEstimatedTasks(LeveledManifest.java:651)
>     at org.apache.cassandra.db.compaction.LeveledManifest.getEstimatedTasks(LeveledManifest.java:640)
>     at org.apache.cassandra.db.compaction.LeveledManifest.getEstimatedTasks(LeveledManifest.java:635)
>     at org.apache.cassandra.db.compaction.LeveledCompactionStrategy.getEstimatedRemainingTasks(LeveledCompactionStrategy.java:276)
>     at org.apache.cassandra.db.compaction.CompactionStrategyManager.getEstimatedRemainingTasks(CompactionStrategyManager.java:1147)
>     at org.apache.cassandra.metrics.CompactionMetrics$1.getValue(CompactionMetrics.java:81)
>     at org.apache.cassandra.metrics.CompactionMetrics$1.getValue(CompactionMetrics.java:73)
>     at org.apache.cassandra.db.compaction.CompactionManager.getPendingTasks(CompactionManager.java:2368)
>     at org.apache.cassandra.service.ActiveRepairService.verifyCompactionsPendingThreshold(ActiveRepairService.java:659)
>     at org.apache.cassandra.service.ActiveRepairService.prepareForRepair(ActiveRepairService.java:672)
>     at org.apache.cassandra.repair.RepairCoordinator.prepare(RepairCoordinator.java:486)
>     at org.apache.cassandra.repair.RepairCoordinator.runMayThrow(RepairCoordinator.java:329)
>     at org.apache.cassandra.repair.RepairCoordinator.run(RepairCoordinator.java:280)
> {code}
> Which has this logic
> {code}
> double bytes = Math.pow(levelFanoutSize, level) * maxSSTableSizeInBytes;
> if (bytes > Long.MAX_VALUE)
>     throw new RuntimeException("At most " + Long.MAX_VALUE + " bytes may be in a compaction level; your maxSSTableSize must be absurdly high to compute " + bytes);
> {code}
> The fuzz test had the following inputs
> {code}
> level = 8
> levelFanoutSize = 90
> maxSSTableSizeInBytes = 1141899264
> {code}
> Given that the max level is known (it's 8, and is hard coded), we can do this calculation during create table / alter table to make sure that we don't blow up later on
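For a concrete sense of the overflow: with the fuzz inputs above, 90^8 ≈ 4.30×10^15, and multiplying by 1,141,899,264 bytes gives ≈ 4.92×10^24, far beyond Long.MAX_VALUE ≈ 9.22×10^18, which is exactly the 4.915501902751334E24 the error reports. Below is a minimal sketch of the kind of up-front check the description asks for, assuming a hard-coded top level of 8; the class name, method name, and wiring are illustrative, not the actual patch:

{code}
import org.apache.cassandra.exceptions.ConfigurationException;

// Illustrative sketch only: reject option combinations whose highest level would need more
// than Long.MAX_VALUE bytes, using the same formula as LeveledManifest.maxBytesForLevel.
final class LeveledOptionsPrecheck
{
    // The highest LCS level is hard coded in Cassandra (L0..L8); 8 is used here as an
    // illustrative constant rather than a reference to the real field.
    private static final int MAX_LEVEL = 8;

    static void validate(int levelFanoutSize, long maxSSTableSizeInBytes) throws ConfigurationException
    {
        double bytes = Math.pow(levelFanoutSize, MAX_LEVEL) * maxSSTableSizeInBytes;
        if (bytes > Long.MAX_VALUE)
            throw new ConfigurationException(String.format(
                "sstable_size_in_mb / fanout_size are too large: level %d would hold %s bytes, " +
                "but at most %d bytes may be in a compaction level",
                MAX_LEVEL, Double.toString(bytes), Long.MAX_VALUE));
    }
}
{code}

Run at CREATE/ALTER TABLE validation time (for example from the strategy's `validateOptions`), a check like `validate(90, 1141899264L)` would fail immediately with a ConfigurationException instead of surfacing later from the repair path.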