> Can I just stop the node, move the 56Go SSTable (so I guess the Summary,
> TOC, Digest, Statistics, CompressionInfo, Data, Index and Filter files)
> and restart the node?

Yes, this should work, and the service should stay up as long as all operations are performed following this rule: Consistency Level < Replication Factor (for example, with RF = 3, reads and writes at ONE or QUORUM still succeed while one node is down). SSTables are loaded at start time, no matter which disk they sit on. But luckily you don't have to trust me on this one, as it should be easy enough to test in a dev environment.
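For the move itself, here is a minimal Python sketch of the idea, to be run only while the node is fully stopped. The data directory paths, keyspace/table name, and generation number are hypothetical placeholders (the "-ka-" part is the 2.1-era SSTable format marker); please adapt it and try it on a dev cluster first.

#!/usr/bin/env python
"""Sketch: move every component file of one SSTable generation
between two JBOD data directories while the node is stopped.
Paths, names and the generation number below are hypothetical."""
import glob
import os
import shutil

# Hypothetical JBOD data directories, as listed under
# data_file_directories in cassandra.yaml, down to the table directory.
SRC = "/data1/cassandra/data/my_keyspace/my_table-<table_id>"
DST = "/data2/cassandra/data/my_keyspace/my_table-<table_id>"

# In 2.1 the components are named like
#   my_keyspace-my_table-ka-1234-Data.db, -Index.db, -Filter.db,
#   -Summary.db, -TOC.txt, -Digest.sha1, -Statistics.db,
#   -CompressionInfo.db
# so matching on the generation number picks all of them up.
GENERATION = "1234"  # hypothetical generation number

def move_sstable(src_dir, dst_dir, generation):
    pattern = os.path.join(src_dir, "*-ka-%s-*" % generation)
    components = glob.glob(pattern)
    if not components:
        raise SystemExit("no SSTable components match %s" % pattern)
    if not os.path.isdir(dst_dir):
        os.makedirs(dst_dir)
    for path in components:
        print("moving %s -> %s" % (path, dst_dir))
        shutil.move(path, dst_dir)

if __name__ == "__main__":
    # Only run this with the Cassandra process stopped on this node.
    move_sstable(SRC, DST, GENERATION)

Once the node is back up, the SSTable should be opened from its new location, and checking usage on both disks should confirm the rebalance.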
> within a 3-node C* 2.1.4 cluster

I really recommend you move to C* 2.1.17: minor upgrades tend to be safe, while running the early minor versions of a major Cassandra version is a known source of problems. Many things were fixed or improved between C* 2.1.4 and C* 2.1.17.

C*heers,
-----------------------
Alain Rodriguez - @arodream - al...@thelastpickle.com
France

The Last Pickle - Apache Cassandra Consulting
http://www.thelastpickle.com

2017-05-12 11:52 GMT+01:00 Axel Colin de Verdiere <axel...@gmail.com>:

> Hello!
>
> I'm experiencing a data imbalance issue with one of my nodes within a
> 3-node C* 2.1.4 cluster. All of them are using JBOD (2 physical disks),
> and this particular node seems to have recently run a relatively big
> compaction (I'm using STCS), creating a 56Go SSTable file, which results
> in one of the disks being 94% used and the other only 34%. I've looked
> around for similar issues, and this was supposed to be fixed in 2.1.3
> (CASSANDRA-7386 <https://issues.apache.org/jira/browse/CASSANDRA-7386>).
> DSE docs
> <https://support.datastax.com/hc/en-us/articles/204226239-DSE-can-run-out-of-disk-space-using-JBOD-even-with-disks-available->
> suggest stopping the node and moving some SSTables around between the
> disks to force a better balance, while trying to make as few moves as
> possible. Can I just stop the node, move the 56Go SSTable (so I guess the
> Summary, TOC, Digest, Statistics, CompressionInfo, Data, Index and Filter
> files) and restart the node?
>
> Thanks a lot for your help,
> Best,
>
> Axel