[jira] [Updated] (CASSANDRA-20570) Leveled Compaction doesn't validate maxBytesForLevel when the table is altered/created

2026-01-09 Thread David Capwell (Jira)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-20570?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Capwell updated CASSANDRA-20570:
--
Attachment: ci_summary-trunk-2713d7e71db0b5e88aa36e1a479937749bf97741.html

result_details-trunk-2713d7e71db0b5e88aa36e1a479937749bf97741.tar.gz

> Leveled Compaction doesn't validate maxBytesForLevel when the table is 
> altered/created
> --
>
> Key: CASSANDRA-20570
> URL: https://issues.apache.org/jira/browse/CASSANDRA-20570
> Project: Apache Cassandra
>  Issue Type: Improvement
>  Components: Local/Compaction
>Reporter: David Capwell
>Assignee: Nikhil
>Priority: Normal
> Fix For: 4.0.20, 4.1.11, 5.0.7, 6.x
>
> Attachments: 
> ci_summary-cassandra-4.0-67a822379d00898af6071f444e43113e7c4a8f2a.html, 
> ci_summary-cassandra-4.0-deb0f7cc12572cc162fce91033665fb4db2a98ac.html, 
> ci_summary-cassandra-4.1-3c53118393f75ae3808ba553284cbd6c2a28fddf.html, 
> ci_summary-cassandra-4.1-7e21a1a245d52b3d0766adb51f8885a6b15b5cac.html, 
> ci_summary-cassandra-5.0-7515ff6ca5ec8ecd064e7e61f8eac3297288f8f7.html, 
> ci_summary-cassandra-5.0-e696ffe44016c25f8a2b94131fd73b2187ed26f7.html, 
> ci_summary-trunk-2713d7e71db0b5e88aa36e1a479937749bf97741.html, 
> ci_summary-trunk-4c36a1ea5a27597251fc4221041fbb75a70488e2.html, 
> ci_summary-trunk-9dadf7c0999632305578d1ca710f54b690fa4792.html, 
> result_details-cassandra-4.0-67a822379d00898af6071f444e43113e7c4a8f2a.tar.gz, 
> result_details-cassandra-4.0-deb0f7cc12572cc162fce91033665fb4db2a98ac.tar.gz, 
> result_details-cassandra-5.0-7515ff6ca5ec8ecd064e7e61f8eac3297288f8f7.tar.gz, 
> result_details-cassandra-5.0-e696ffe44016c25f8a2b94131fd73b2187ed26f7.tar.gz, 
> result_details-trunk-2713d7e71db0b5e88aa36e1a479937749bf97741.tar.gz, 
> result_details-trunk-4c36a1ea5a27597251fc4221041fbb75a70488e2.tar.gz, 
> result_details-trunk-9dadf7c0999632305578d1ca710f54b690fa4792.tar.gz
>
>  Time Spent: 5h 10m
>  Remaining Estimate: 0h
>
> In fuzz testing I hit this fun error message
> {code}
> java.lang.RuntimeException: Repair job has failed with the error message: 
> Repair command #1 failed with error At most 9223372036854775807 bytes may be 
> in a compaction level; your maxSSTableSize must be absurdly high to compute 
> 4.915501902751334E24. Check the logs on the repair participants for further 
> details
>   at 
> org.apache.cassandra.tools.RepairRunner.progress(RepairRunner.java:187)
> {code}
> I was able to create a table, write data to it, and it only ever had issues 
> once I did a repair…
> The error comes from
> {code}
> INFO  [node2_Repair-Task:1] 2025-04-18 12:08:14,920 SubstituteLogger.java:169 
> - ERROR 19:08:14 Repair 138143d4-1dd2-11b2-ba62-f56ce069092d failed:
> java.lang.RuntimeException: At most 9223372036854775807 bytes may be in a 
> compaction level; your maxSSTableSize must be absurdly high to compute 
> 4.915501902751334E24
>   at 
> org.apache.cassandra.db.compaction.LeveledManifest.maxBytesForLevel(LeveledManifest.java:191)
>   at 
> org.apache.cassandra.db.compaction.LeveledManifest.maxBytesForLevel(LeveledManifest.java:182)
>   at 
> org.apache.cassandra.db.compaction.LeveledManifest.getEstimatedTasks(LeveledManifest.java:651)
>   at 
> org.apache.cassandra.db.compaction.LeveledManifest.getEstimatedTasks(LeveledManifest.java:640)
>   at 
> org.apache.cassandra.db.compaction.LeveledManifest.getEstimatedTasks(LeveledManifest.java:635)
>   at 
> org.apache.cassandra.db.compaction.LeveledCompactionStrategy.getEstimatedRemainingTasks(LeveledCompactionStrategy.java:276)
>   at 
> org.apache.cassandra.db.compaction.CompactionStrategyManager.getEstimatedRemainingTasks(CompactionStrategyManager.java:1147)
>   at 
> org.apache.cassandra.metrics.CompactionMetrics$1.getValue(CompactionMetrics.java:81)
>   at 
> org.apache.cassandra.metrics.CompactionMetrics$1.getValue(CompactionMetrics.java:73)
>   at 
> org.apache.cassandra.db.compaction.CompactionManager.getPendingTasks(CompactionManager.java:2368)
>   at 
> org.apache.cassandra.service.ActiveRepairService.verifyCompactionsPendingThreshold(ActiveRepairService.java:659)
>   at 
> org.apache.cassandra.service.ActiveRepairService.prepareForRepair(ActiveRepairService.java:672)
>   at 
> org.apache.cassandra.repair.RepairCoordinator.prepare(RepairCoordinator.java:486)
>   at 
> org.apache.cassandra.repair.RepairCoordinator.runMayThrow(RepairCoordinator.java:329)
>   at 
> org.apache.cassandra.repair.RepairCoordinator.run(RepairCoordinator.java:280)
> {code}
> Which has this logic
> {code}
> double bytes = Math.pow(levelFanoutSize, level) * maxSSTableSizeInBytes;
> if (bytes > Long.MAX_VALUE)
>     throw new RuntimeException("At most " + Long.MAX_VALUE + " bytes may be 
> in a compaction level; your maxSSTableSize must be absurdly high to compute " 
> + bytes);
> {code}
> The fuzz test had the following inputs
> {code}
> level = 8
> levelFanoutSize = 90
> maxSSTableSizeInBytes = 1141899264
> {code}
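To see why those inputs trip the check, here is a standalone sketch (plain Java, mirroring the snippet above rather than the actual LeveledManifest class) run with the fuzz-test values:

```java
// Simplified sketch of the maxBytesForLevel overflow check, fed the
// fuzz-test inputs from this ticket. Not the actual Cassandra class.
public class MaxBytesForLevelDemo
{
    static double maxBytesForLevel(int level, int levelFanoutSize, long maxSSTableSizeInBytes)
    {
        double bytes = Math.pow(levelFanoutSize, level) * maxSSTableSizeInBytes;
        if (bytes > Long.MAX_VALUE)
            throw new RuntimeException("At most " + Long.MAX_VALUE + " bytes may be in a compaction level; " +
                                       "your maxSSTableSize must be absurdly high to compute " + bytes);
        return bytes;
    }

    public static void main(String[] args)
    {
        try
        {
            // 90^8 * 1141899264 is roughly 4.9E24, far above Long.MAX_VALUE (~9.22E18)
            maxBytesForLevel(8, 90, 1141899264L);
            System.out.println("no overflow");
        }
        catch (RuntimeException e)
        {
            System.out.println("rejected: " + e.getMessage());
        }
    }
}
```

With fanout 90, level 8 alone multiplies the sstable size by 90^8 ≈ 4.3E15, so any non-trivial maxSSTableSize overflows the long range; the combination is invalid from the moment the table options are set, but nothing rejects it until repair reads the compaction metrics.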

[jira] [Updated] (CASSANDRA-20570) Leveled Compaction doesn't validate maxBytesForLevel when the table is altered/created

2026-01-09 Thread David Capwell (Jira)



David Capwell updated CASSANDRA-20570:
--
Resolution: Fixed
Status: Resolved  (was: Ready to Commit)

The test fix is 


[jira] [Updated] (CASSANDRA-20570) Leveled Compaction doesn't validate maxBytesForLevel when the table is altered/created

2026-01-09 Thread David Capwell (Jira)



David Capwell updated CASSANDRA-20570:
--
Reviewers: Ariel Weisberg, David Capwell, guo Maxwell  (was: David Capwell, guo Maxwell)
   Status: Review In Progress  (was: Patch Available)


[jira] [Updated] (CASSANDRA-20570) Leveled Compaction doesn't validate maxBytesForLevel when the table is altered/created

2026-01-09 Thread David Capwell (Jira)



David Capwell updated CASSANDRA-20570:
--
Status: Ready to Commit  (was: Review In Progress)

+1 from Ariel


[jira] [Updated] (CASSANDRA-20570) Leveled Compaction doesn't validate maxBytesForLevel when the table is altered/created

2026-01-09 Thread David Capwell (Jira)



David Capwell updated CASSANDRA-20570:
--
Status: Patch Available  (was: Open)


[jira] [Updated] (CASSANDRA-20570) Leveled Compaction doesn't validate maxBytesForLevel when the table is altered/created

2025-11-17 Thread Sam Tunnicliffe (Jira)



Sam Tunnicliffe updated CASSANDRA-20570:

Resolution: (was: Fixed)
Status: Open  (was: Resolved)

I'm still seeing failures in fuzz tests on trunk with this fix committed, e.g.
in {{MultiNodeTableWalkWithoutReadRepairTest}}:
{code:java}
accord.utils.Property$PropertyError: Property error detected:
Seed = -3702943409353269651
Examples = 10
Pure = true
Error: 
nodetool command [repair, ks1, tbl, --skip-paxos, --dc-parallel, 
--optimise-streams] was not successful
stdout:
[2025-11-17 13:29:06,192] Starting repair command #1 
(138145d2-1dd2-11b2-98a3-4543c3f011ca), repairing keyspace ks1 with repair 
options (parallelism: dc_parallel, primary range: false, incremental: true, job 
threads: 1, ColumnFamilies: [tbl], dataCenters: [], hosts: [], previewKind: 
NONE, # of ranges: 3, pull repair: false, force repair: false, optimise 
streams: true, ignore unreplicated keyspaces: false, repairData: true, 
repairPaxos: false, dontPurgeTombstones: false, repairAccord: true)
[2025-11-17 13:29:06,248] Repair command #1 failed with error At most 
9223372036854775807 bytes may be in a compaction level; your maxSSTableSize 
must be absurdly high to compute 4.733037027712932E20
[2025-11-17 13:29:06,261] Repair command #1 finished with error

stderr:
error: Repair job has failed with the error message: Repair command #1 
failed with error At most 9223372036854775807 bytes may be in a compaction 
level; your maxSSTableSize must be absurdly high to compute 
4.733037027712932E20. Check the logs on the repair participants for further 
details
-- StackTrace --
java.lang.RuntimeException: Repair job has failed with the error 
message: Repair command #1 failed with error At most 9223372036854775807 bytes 
may be in a compaction level; your maxSSTableSize must be absurdly high to 
compute 4.733037027712932E20. Check the logs on the repair participants for 
further details
at 
org.apache.cassandra.tools.RepairRunner.progress(RepairRunner.java:198)
at 
org.apache.cassandra.utils.progress.jmx.JMXNotificationProgressListener.handleNotification(JMXNotificationProgressListener.java:77)
at 
java.management/javax.management.NotificationBroadcasterSupport.handleNotification(NotificationBroadcasterSupport.java:275)
at 
java.management/javax.management.NotificationBroadcasterSupport$SendNotifJob.run(NotificationBroadcasterSupport.java:352)
at 
org.apache.cassandra.concurrent.ExecutionFailure$1.run(ExecutionFailure.java:138)
at 
java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1136)
at 
java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635)
at 
io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
at java.base/java.lang.Thread.run(Thread.java:840)
{code}
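As the ticket title says, the validation belongs at CREATE/ALTER time rather than deep inside repair. A hypothetical schema-time check could look like the following (class, method, and option names are illustrative and do not reflect Cassandra's actual validation code):

```java
// Hypothetical up-front validation of LCS options at CREATE/ALTER time.
// All names here are illustrative, not Cassandra's actual API.
public class LeveledOptionsValidator
{
    // LCS manifests go up to level 8 in practice (assumption for this sketch)
    static final int MAX_LEVEL = 8;

    static void validate(int fanoutSize, long maxSSTableSizeInBytes)
    {
        // Same arithmetic as maxBytesForLevel, applied to the deepest level,
        // so invalid combinations fail at schema change instead of at repair.
        double topLevelBytes = Math.pow(fanoutSize, MAX_LEVEL) * maxSSTableSizeInBytes;
        if (topLevelBytes > Long.MAX_VALUE)
            throw new IllegalArgumentException(
                "fanout_size=" + fanoutSize + " with sstable size " + maxSSTableSizeInBytes +
                " bytes overflows the per-level byte limit");
    }

    public static void main(String[] args)
    {
        validate(10, 160L << 20);      // defaults (fanout 10, 160 MiB): accepted
        try
        {
            validate(90, 1141899264L); // the fuzz-test combination: rejected up front
        }
        catch (IllegalArgumentException e)
        {
            System.out.println("rejected at schema change: " + e.getMessage());
        }
    }
}
```

Note the failure above reappeared with a smaller product (4.733037027712932E20), so whatever bound the committed fix enforces, the repair-time check can still be reached for some option combinations.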


[jira] [Updated] (CASSANDRA-20570) Leveled Compaction doesn't validate maxBytesForLevel when the table is altered/created

2025-11-07 Thread David Capwell (Jira)



David Capwell updated CASSANDRA-20570:
--
Attachment: 
ci_summary-cassandra-4.0-deb0f7cc12572cc162fce91033665fb4db2a98ac.html

ci_summary-cassandra-4.1-7e21a1a245d52b3d0766adb51f8885a6b15b5cac.html

ci_summary-cassandra-5.0-e696ffe44016c25f8a2b94131fd73b2187ed26f7.html
ci_summary-trunk-4c36a1ea5a27597251fc4221041fbb75a70488e2.html

result_details-cassandra-4.0-deb0f7cc12572cc162fce91033665fb4db2a98ac.tar.gz

result_details-cassandra-5.0-e696ffe44016c25f8a2b94131fd73b2187ed26f7.tar.gz

result_details-trunk-4c36a1ea5a27597251fc4221041fbb75a70488e2.tar.gz


[jira] [Updated] (CASSANDRA-20570) Leveled Compaction doesn't validate maxBytesForLevel when the table is altered/created

2025-11-07 Thread David Capwell (Jira)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-20570?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Capwell updated CASSANDRA-20570:
--
  Fix Version/s: 4.0.20
 4.1.11
 5.0.7
 6.x
Source Control Link: 
https://github.com/apache/cassandra/commit/61014f2ae7cd2c3126042275e627f6a90560b5ef
 Resolution: Fixed
 Status: Resolved  (was: Ready to Commit)

> Leveled Compaction doesn't validate maxBytesForLevel when the table is 
> altered/created
> --
>
> Key: CASSANDRA-20570
> URL: https://issues.apache.org/jira/browse/CASSANDRA-20570
> Project: Apache Cassandra
>  Issue Type: Improvement
>  Components: Local/Compaction
>Reporter: David Capwell
>Assignee: Nikhil
>Priority: Normal
> Fix For: 4.0.20, 4.1.11, 5.0.7, 6.x
>
> Attachments: 
> ci_summary-cassandra-4.0-67a822379d00898af6071f444e43113e7c4a8f2a.html, 
> ci_summary-cassandra-4.1-3c53118393f75ae3808ba553284cbd6c2a28fddf.html, 
> ci_summary-cassandra-5.0-7515ff6ca5ec8ecd064e7e61f8eac3297288f8f7.html, 
> ci_summary-trunk-9dadf7c0999632305578d1ca710f54b690fa4792.html, 
> result_details-cassandra-4.0-67a822379d00898af6071f444e43113e7c4a8f2a.tar.gz, 
> result_details-cassandra-5.0-7515ff6ca5ec8ecd064e7e61f8eac3297288f8f7.tar.gz, 
> result_details-trunk-9dadf7c0999632305578d1ca710f54b690fa4792.tar.gz
>
>  Time Spent: 4h 40m
>  Remaining Estimate: 0h
>
> In fuzz testing I hit this fun error message
> {code}
> java.lang.RuntimeException: Repair job has failed with the error message: Repair command #1 failed with error At most 9223372036854775807 bytes may be in a compaction level; your maxSSTableSize must be absurdly high to compute 4.915501902751334E24. Check the logs on the repair participants for further details
>   at org.apache.cassandra.tools.RepairRunner.progress(RepairRunner.java:187)
> {code}
> I was able to create a table, write data to it, and it only ever had issues 
> once I did a repair…
> The error comes from
> {code}
> INFO  [node2_Repair-Task:1] 2025-04-18 12:08:14,920 SubstituteLogger.java:169 - ERROR 19:08:14 Repair 138143d4-1dd2-11b2-ba62-f56ce069092d failed:
> java.lang.RuntimeException: At most 9223372036854775807 bytes may be in a compaction level; your maxSSTableSize must be absurdly high to compute 4.915501902751334E24
>   at org.apache.cassandra.db.compaction.LeveledManifest.maxBytesForLevel(LeveledManifest.java:191)
>   at org.apache.cassandra.db.compaction.LeveledManifest.maxBytesForLevel(LeveledManifest.java:182)
>   at org.apache.cassandra.db.compaction.LeveledManifest.getEstimatedTasks(LeveledManifest.java:651)
>   at org.apache.cassandra.db.compaction.LeveledManifest.getEstimatedTasks(LeveledManifest.java:640)
>   at org.apache.cassandra.db.compaction.LeveledManifest.getEstimatedTasks(LeveledManifest.java:635)
>   at org.apache.cassandra.db.compaction.LeveledCompactionStrategy.getEstimatedRemainingTasks(LeveledCompactionStrategy.java:276)
>   at org.apache.cassandra.db.compaction.CompactionStrategyManager.getEstimatedRemainingTasks(CompactionStrategyManager.java:1147)
>   at org.apache.cassandra.metrics.CompactionMetrics$1.getValue(CompactionMetrics.java:81)
>   at org.apache.cassandra.metrics.CompactionMetrics$1.getValue(CompactionMetrics.java:73)
>   at org.apache.cassandra.db.compaction.CompactionManager.getPendingTasks(CompactionManager.java:2368)
>   at org.apache.cassandra.service.ActiveRepairService.verifyCompactionsPendingThreshold(ActiveRepairService.java:659)
>   at org.apache.cassandra.service.ActiveRepairService.prepareForRepair(ActiveRepairService.java:672)
>   at org.apache.cassandra.repair.RepairCoordinator.prepare(RepairCoordinator.java:486)
>   at org.apache.cassandra.repair.RepairCoordinator.runMayThrow(RepairCoordinator.java:329)
>   at org.apache.cassandra.repair.RepairCoordinator.run(RepairCoordinator.java:280)
> {code}
> Which has this logic
> {code}
> double bytes = Math.pow(levelFanoutSize, level) * maxSSTableSizeInBytes;
> if (bytes > Long.MAX_VALUE)
>     throw new RuntimeException("At most " + Long.MAX_VALUE + " bytes may be in a compaction level; your maxSSTableSize must be absurdly high to compute " + bytes);
> {code}
> The fuzz test had the following inputs
> {code}
> level = 8
> levelFanoutSize = 90
> maxSSTableSizeInBytes = 1141899264
> {code}
> Given that the max level is known (it's 8, and is hard-coded), we can do this 
> calculation during CREATE TABLE / ALTER TABLE to make sure that we don't blow 
> up later on.
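The validation proposed in the description could look roughly like the sketch below: apply the same formula as LeveledManifest.maxBytesForLevel at the deepest level when the compaction options are set, so bad combinations are rejected at CREATE/ALTER time instead of surfacing later during repair. The class name, exception type, and option names in the message are illustrative assumptions, not the committed Cassandra implementation.

```java
// Hypothetical sketch of the proposed up-front check. LeveledManifest
// hard-codes a maximum of 8 levels, so if level 8's byte budget fits in a
// long, every level does.
public final class LeveledOptionsValidator
{
    private static final int MAX_LEVEL = 8; // hard-coded in LeveledManifest

    public static void validate(long maxSSTableSizeInBytes, int levelFanoutSize)
    {
        // Same arithmetic as maxBytesForLevel, evaluated at the deepest level.
        double bytes = Math.pow(levelFanoutSize, MAX_LEVEL) * maxSSTableSizeInBytes;
        if (bytes > Long.MAX_VALUE)
            throw new IllegalArgumentException(
                "sstable_size_in_mb * fanout_size^" + MAX_LEVEL + " exceeds " +
                Long.MAX_VALUE + " bytes (computed " + bytes + ")");
    }

    public static void main(String[] args)
    {
        // Defaults (160 MiB sstables, fanout 10): ~1.7e16 bytes, accepted.
        validate(160L << 20, 10);

        // The fuzz-test inputs: 90^8 * 1141899264 ~ 4.9e24 > Long.MAX_VALUE.
        try
        {
            validate(1141899264L, 90);
        }
        catch (IllegalArgumentException e)
        {
            System.out.println("rejected: " + e.getMessage());
        }
    }
}
```

With this in place, the fuzz-test combination (fanout 90, ~1.1 GB sstables) fails the DDL statement immediately rather than poisoning repair's pending-compaction estimate.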



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

--

[jira] [Updated] (CASSANDRA-20570) Leveled Compaction doesn't validate maxBytesForLevel when the table is altered/created

2025-11-07 Thread David Capwell (Jira)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-20570?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Capwell updated CASSANDRA-20570:
--
Status: Ready to Commit  (was: Changes Suggested)


-
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]



[jira] [Updated] (CASSANDRA-20570) Leveled Compaction doesn't validate maxBytesForLevel when the table is altered/created

2025-07-29 Thread guo Maxwell (Jira)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-20570?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

guo Maxwell updated CASSANDRA-20570:

Status: Changes Suggested  (was: Ready to Commit)




[jira] [Updated] (CASSANDRA-20570) Leveled Compaction doesn't validate maxBytesForLevel when the table is altered/created

2025-07-17 Thread David Capwell (Jira)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-20570?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Capwell updated CASSANDRA-20570:
--
Attachment: 
ci_summary-cassandra-4.0-67a822379d00898af6071f444e43113e7c4a8f2a.html

ci_summary-cassandra-4.1-3c53118393f75ae3808ba553284cbd6c2a28fddf.html

ci_summary-cassandra-5.0-7515ff6ca5ec8ecd064e7e61f8eac3297288f8f7.html
ci_summary-trunk-9dadf7c0999632305578d1ca710f54b690fa4792.html

result_details-cassandra-4.0-67a822379d00898af6071f444e43113e7c4a8f2a.tar.gz

result_details-cassandra-5.0-7515ff6ca5ec8ecd064e7e61f8eac3297288f8f7.tar.gz

result_details-trunk-9dadf7c0999632305578d1ca710f54b690fa4792.tar.gz


[jira] [Updated] (CASSANDRA-20570) Leveled Compaction doesn't validate maxBytesForLevel when the table is altered/created

2025-07-10 Thread guo Maxwell (Jira)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-20570?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

guo Maxwell updated CASSANDRA-20570:

Status: Review In Progress  (was: Needs Committer)




[jira] [Updated] (CASSANDRA-20570) Leveled Compaction doesn't validate maxBytesForLevel when the table is altered/created

2025-07-10 Thread guo Maxwell (Jira)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-20570?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

guo Maxwell updated CASSANDRA-20570:

Status: Ready to Commit  (was: Review In Progress)




[jira] [Updated] (CASSANDRA-20570) Leveled Compaction doesn't validate maxBytesForLevel when the table is altered/created

2025-07-09 Thread guo Maxwell (Jira)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-20570?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

guo Maxwell updated CASSANDRA-20570:

Status: Needs Committer  (was: Review In Progress)

> Leveled Compaction doesn't validate maxBytesForLevel when the table is 
> altered/created
> --
>
> Key: CASSANDRA-20570
> URL: https://issues.apache.org/jira/browse/CASSANDRA-20570
> Project: Apache Cassandra
>  Issue Type: Improvement
>  Components: Local/Compaction
>Reporter: David Capwell
>Assignee: Nikhil
>Priority: Normal
>  Time Spent: 3h
>  Remaining Estimate: 0h
>
> In fuzz testing I hit this fun error message
> {code}
> java.lang.RuntimeException: Repair job has failed with the error message: 
> Repair command #1 failed with error At most 9223372036854775807 bytes may be 
> in a compaction level; your maxSSTableSize must be absurdly high to compute 
> 4.915501902751334E24. Check the logs on the repair participants for further 
> details
>   at 
> org.apache.cassandra.tools.RepairRunner.progress(RepairRunner.java:187)
> {code}
> I was able to create a table, write data to it, and it only ever had issues 
> once I did a repair…
> The error comes from
> {code}
> INFO  [node2_Repair-Task:1] 2025-04-18 12:08:14,920 SubstituteLogger.java:169 
> - ERROR 19:08:14 Repair 138143d4-1dd2-11b2-ba62-f56ce069092d failed:
> java.lang.RuntimeException: At most 9223372036854775807 bytes may be in a 
> compaction level; your maxSSTableSize must be absurdly high to compute 
> 4.915501902751334E24
>   at 
> org.apache.cassandra.db.compaction.LeveledManifest.maxBytesForLevel(LeveledManifest.java:191)
>   at 
> org.apache.cassandra.db.compaction.LeveledManifest.maxBytesForLevel(LeveledManifest.java:182)
>   at 
> org.apache.cassandra.db.compaction.LeveledManifest.getEstimatedTasks(LeveledManifest.java:651)
>   at 
> org.apache.cassandra.db.compaction.LeveledManifest.getEstimatedTasks(LeveledManifest.java:640)
>   at 
> org.apache.cassandra.db.compaction.LeveledManifest.getEstimatedTasks(LeveledManifest.java:635)
>   at 
> org.apache.cassandra.db.compaction.LeveledCompactionStrategy.getEstimatedRemainingTasks(LeveledCompactionStrategy.java:276)
>   at 
> org.apache.cassandra.db.compaction.CompactionStrategyManager.getEstimatedRemainingTasks(CompactionStrategyManager.java:1147)
>   at 
> org.apache.cassandra.metrics.CompactionMetrics$1.getValue(CompactionMetrics.java:81)
>   at 
> org.apache.cassandra.metrics.CompactionMetrics$1.getValue(CompactionMetrics.java:73)
>   at 
> org.apache.cassandra.db.compaction.CompactionManager.getPendingTasks(CompactionManager.java:2368)
>   at 
> org.apache.cassandra.service.ActiveRepairService.verifyCompactionsPendingThreshold(ActiveRepairService.java:659)
>   at 
> org.apache.cassandra.service.ActiveRepairService.prepareForRepair(ActiveRepairService.java:672)
>   at 
> org.apache.cassandra.repair.RepairCoordinator.prepare(RepairCoordinator.java:486)
>   at 
> org.apache.cassandra.repair.RepairCoordinator.runMayThrow(RepairCoordinator.java:329)
>   at 
> org.apache.cassandra.repair.RepairCoordinator.run(RepairCoordinator.java:280)
> {code}
> Which has this logic
> {code}
> double bytes = Math.pow(levelFanoutSize, level) * maxSSTableSizeInBytes;
> if (bytes > Long.MAX_VALUE)
> throw new RuntimeException("At most " + Long.MAX_VALUE + " bytes may be 
> in a compaction level; your maxSSTableSize must be absurdly high to compute " 
> + bytes);
> {code}
> The fuzz test had the following inputs:
> {code}
> level = 8
> levelFanoutSize = 90
> maxSSTableSizeInBytes = 1141899264
> {code}
> Given that the max level is known (it's 8, and hard-coded), we can do this 
> calculation during CREATE TABLE / ALTER TABLE to make sure that we don't 
> blow up later on.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]



[jira] [Updated] (CASSANDRA-20570) Leveled Compaction doesn't validate maxBytesForLevel when the table is altered/created

2025-07-07 Thread David Capwell (Jira)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-20570?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Capwell updated CASSANDRA-20570:
--
Reviewers: David Capwell, guo Maxwell  (was: guo Maxwell)




[jira] [Updated] (CASSANDRA-20570) Leveled Compaction doesn't validate maxBytesForLevel when the table is altered/created

2025-07-06 Thread guo Maxwell (Jira)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-20570?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

guo Maxwell updated CASSANDRA-20570:

Reviewers: guo Maxwell
   Status: Review In Progress  (was: Patch Available)




[jira] [Updated] (CASSANDRA-20570) Leveled Compaction doesn't validate maxBytesForLevel when the table is altered/created

2025-07-06 Thread guo Maxwell (Jira)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-20570?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

guo Maxwell updated CASSANDRA-20570:

Test and Documentation Plan: ut
 Status: Patch Available  (was: In Progress)




[jira] [Updated] (CASSANDRA-20570) Leveled Compaction doesn't validate maxBytesForLevel when the table is altered/created

2025-04-18 Thread David Capwell (Jira)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-20570?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Capwell updated CASSANDRA-20570:
--
Change Category: Operability
 Complexity: Low Hanging Fruit
 Status: Open  (was: Triage Needed)
