It depends on what consistency level you use for reads/writes, and whether
you do deletes.

The real danger is that there may have been a tombstone on the drive that
failed, covering data on the disks that remain, where the delete happened
longer ago than gc_grace_seconds. If you simply yank the disk, that data will
come back to life. (It's also possible that some data temporarily reverts to
a previous state for some queries; that reversion can be fixed with nodetool
repair, but the resurrection can't be undone.) If you don't do deletes, this
is not a problem. If data coming back to life poses no danger to you, then
you're probably OK as well.
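
For reference, gc_grace_seconds is a per-table setting (the default is 864000
seconds, i.e. 10 days); once a tombstone is older than that, compaction is
allowed to purge it, which is what makes the resurrection permanent. A quick
way to check the value, as a sketch assuming cqlsh access ('ks' and 'mytable'
are placeholder names):

    # Check how long tombstones are retained for a given table
    cqlsh -e "SELECT gc_grace_seconds FROM system_schema.tables \
      WHERE keyspace_name='ks' AND table_name='mytable';"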

CASSANDRA-6696 dramatically lowers this risk, if you're using a new enough
version of Cassandra.
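
For what it's worth, the procedure you describe would look roughly like the
sketch below (assuming a package install managed by systemd; the data
directory path is a placeholder):

    # 1. Stop Cassandra on the affected node
    sudo systemctl stop cassandra

    # 2. Edit cassandra.yaml and remove the failed disk's entry from
    #    data_file_directories, e.g. delete the line
    #    "- /mnt/disk3/cassandra/data"

    # 3. Start Cassandra back up
    sudo systemctl start cassandra

    # 4. Repair the node so replicas stream back whatever was lost,
    #    including any data that was only on the failed disk
    nodetool repair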



-- 
Jeff Jirsa


> On Jul 31, 2017, at 1:49 AM, Ioannis Zafiropoulos <john...@gmail.com> wrote:
> 
> Hi All,
> 
> I have a 7-node cluster (version 3.10), each node with 5 disks in JBOD. A 
> few hours ago I had a disk failure on one node. I am wondering if I can:
> 
> - stop Cassandra on that node
> - remove the disk, physically and from cassandra.yaml
> - start Cassandra on that node
> - run repair
> 
> I mean, is it necessary to replace a failed disk instead of just removing it? 
> (assuming that the remaining disks have enough free space)
> 
> Thank you for your help,
> John
> 

