[
https://issues.apache.org/jira/browse/CASSANDRA-20787?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Isaac Reath updated CASSANDRA-20787:
------------------------------------
Test and Documentation Plan:
* Modified distributed test for this guardrail to ensure that this config path
is exercised.
* Run through reproduction instructions with and without patch applied.
Status: Patch Available (was: Open)
> Cassandra crashes on first boot with data_disk_usage_max_disk_size set when
> data directory is not created
> ---------------------------------------------------------------------------------------------------------
>
> Key: CASSANDRA-20787
> URL: https://issues.apache.org/jira/browse/CASSANDRA-20787
> Project: Apache Cassandra
> Issue Type: Bug
> Components: Feature/Guardrails
> Reporter: Isaac Reath
> Assignee: Isaac Reath
> Priority: Normal
> Fix For: 4.1.x, 5.0.x, 6.x
>
> Time Spent: 0.5h
> Remaining Estimate: 0h
>
> When starting a new node whose data directory has not yet been created (e.g.
> $CASSANDRA_HOME/data/data), with data_disk_usage_max_disk_size set to any
> value, Cassandra will crash with the following exception:
> ERROR [main] 2025-07-14 15:34:21,880 CassandraDaemon.java:900 - Exception encountered during startup
> java.lang.RuntimeException: Cannot get data directories grouped by file store
>     at org.apache.cassandra.service.disk.usage.DiskUsageMonitor.dataDirectoriesGroupedByFileStore(DiskUsageMonitor.java:202)
>     at org.apache.cassandra.service.disk.usage.DiskUsageMonitor.totalDiskSpace(DiskUsageMonitor.java:209)
>     at org.apache.cassandra.config.GuardrailsOptions.validateDataDiskUsageMaxDiskSize(GuardrailsOptions.java:786)
>     at org.apache.cassandra.config.GuardrailsOptions.<init>(GuardrailsOptions.java:83)
>     at org.apache.cassandra.config.DatabaseDescriptor.applyGuardrails(DatabaseDescriptor.java:1000)
>     at org.apache.cassandra.config.DatabaseDescriptor.applyAll(DatabaseDescriptor.java:396)
>     at org.apache.cassandra.config.DatabaseDescriptor.daemonInitialization(DatabaseDescriptor.java:204)
>     at org.apache.cassandra.config.DatabaseDescriptor.daemonInitialization(DatabaseDescriptor.java:188)
>     at org.apache.cassandra.service.CassandraDaemon.applyConfig(CassandraDaemon.java:797)
>     at org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:740)
>     at org.apache.cassandra.service.CassandraDaemon.main(CassandraDaemon.java:878)
> Caused by: java.nio.file.NoSuchFileException: /path/to/data
>     at java.base/sun.nio.fs.UnixException.translateToIOException(UnixException.java:92)
>     at java.base/sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:111)
>     at java.base/sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:116)
>     at java.base/sun.nio.fs.UnixFileStore.devFor(UnixFileStore.java:61)
>     at java.base/sun.nio.fs.UnixFileStore.<init>(UnixFileStore.java:68)
>     at java.base/sun.nio.fs.LinuxFileStore.<init>(LinuxFileStore.java:49)
>     at java.base/sun.nio.fs.LinuxFileSystemProvider.getFileStore(LinuxFileSystemProvider.java:51)
>     at java.base/sun.nio.fs.LinuxFileSystemProvider.getFileStore(LinuxFileSystemProvider.java:39)
>     at java.base/sun.nio.fs.UnixFileSystemProvider.getFileStore(UnixFileSystemProvider.java:373)
>     at java.base/java.nio.file.Files.getFileStore(Files.java:1488)
>     at org.apache.cassandra.service.disk.usage.DiskUsageMonitor.dataDirectoriesGroupedByFileStore(DiskUsageMonitor.java:196)
>     ... 10 common frames omitted
>
> This happens because DatabaseDescriptor#applyGuardrails() is called before
> any call to DatabaseDescriptor#createAllDirectories occurs. Calling
> createAllDirectories before the call to applyGuardrails fixes this issue.
>
> Reproduction steps:
> * Set any value in cassandra.yaml for data_disk_usage_max_disk_size
> * Start Cassandra with bin/cassandra -f
> * The crash should be observed on startup.
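The root cause can be reproduced outside Cassandra with a minimal sketch (the class name and temp-directory layout below are illustrative, not from the patch): java.nio.file.Files.getFileStore throws NoSuchFileException when the target directory does not yet exist, and succeeds once the directory is created first, which is why ordering createAllDirectories before applyGuardrails resolves the crash.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

public class FileStoreCheck
{
    // Returns true if resolving the file store for the given path fails,
    // mirroring what DiskUsageMonitor.dataDirectoriesGroupedByFileStore
    // runs into when the data directory is missing at startup.
    static boolean failsOnMissingDir(Path dir)
    {
        try
        {
            Files.getFileStore(dir);
            return false;
        }
        catch (IOException e)
        {
            return true; // NoSuchFileException for a path that was never created
        }
    }

    public static void main(String[] args) throws IOException
    {
        // Stand-in for $CASSANDRA_HOME/data/data on a fresh node.
        Path dataDir = Files.createTempDirectory("cassandra-demo").resolve("data").resolve("data");

        System.out.println("before create: fails=" + failsOnMissingDir(dataDir));

        // What createAllDirectories effectively does; after this the
        // guardrail's total-disk-space calculation can succeed.
        Files.createDirectories(dataDir);

        System.out.println("after create:  fails=" + failsOnMissingDir(dataDir));
    }
}
```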
--
This message was sent by Atlassian Jira
(v8.20.10#820010)
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]