Excellent! Thank you Jeff.
On Mon, Jul 31, 2017 at 10:26 AM, Jeff Jirsa wrote:
3.10 has CASSANDRA-6696 in it, so my understanding is you'll probably be fine
just running repair.
Yes, same risks if you swap drives - before 6696, you want to replace a whole
node if any sstables are damaged or lost (if you do deletes, and if it hurts
you when deleted data comes back to life).
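For the repair step, a plain full repair on that node once it's back up should
be enough - the keyspace name below is just a placeholder:

    nodetool repair --full my_keyspace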
--
I just want to add that we use vnodes=16, if that helps with my questions.
On Mon, Jul 31, 2017 at 9:41 AM, Ioannis Zafiropoulos wrote:
Thank you Jeff for your answer,
I use RF=3 and our clients always connect with QUORUM. So I guess I will be
alright after a repair (?)
Follow-up questions:
- It seems that the risks you're describing would be the same as if I had
replaced the drive with a new, fresh one and run repair, is that correct?
It depends on what consistency level you use for reads/writes, and whether
you do deletes.
The real danger is that there may have been a tombstone on the drive that
failed covering data on the disks that remain, where the delete happened
longer ago than gc-grace - if you simply yank the disk, that deleted data can
come back to life.
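By gc-grace I mean the per-table gc_grace_seconds option (default 864000
seconds, i.e. 10 days). If you want to see what a table uses, something like
this works - keyspace/table names are placeholders:

    cqlsh -e "DESCRIBE TABLE my_keyspace.my_table;" | grep gc_grace_seconds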
Hi All,
I have a 7-node cluster (version 3.10), each node with 5 disks in JBOD.
A few hours ago I had a disk failure on a node. I am wondering if I can:
- stop Cassandra on that node
- remove the disk, physically and from cassandra.yaml
- start Cassandra on that node
- run repair
I mean, is this possible?
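To make the steps above concrete, this is roughly what I have in mind - the
paths below are just examples, not my real layout:

    # cassandra.yaml, before (five JBOD data directories):
    data_file_directories:
        - /data/disk1
        - /data/disk2
        - /data/disk3
        - /data/disk4
        - /data/disk5

    # after dropping the failed disk (say disk3), with Cassandra stopped:
    data_file_directories:
        - /data/disk1
        - /data/disk2
        - /data/disk4
        - /data/disk5

    # then start Cassandra again and run:
    nodetool repair --full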