Re: A cluster (RF=3) not recovering after two nodes are stopped

2019-04-24 Thread Hiroyuki Yamada
Sorry, I didn't mention the version and the configuration. I've tested with C* 3.11.4, and the configuration is mostly left at defaults, except for the replication factor and listen_address for proper networking. Thanks, Hiro
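For reference, a minimal sketch of the two non-default settings Hiro describes, assuming a hypothetical node address (10.0.0.1) and keyspace name (test), neither of which is given in the thread:

    # cassandra.yaml on each node (address is hypothetical)
    listen_address: 10.0.0.1

    -- CQL: a keyspace with replication factor 3
    CREATE KEYSPACE test
      WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 3};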

Re: when the "delete statement" would be deleted?

2019-04-24 Thread onmstester onmstester
Found the answer: it is deleted after gc_grace. I just decreased gc_grace, ran a compaction, and the "marked_deleted" partitions were purged from the sstable.
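A rough sketch of that sequence (the keyspace/table names and the 3600-second grace period are hypothetical examples, not from the thread):

    -- CQL: shorten the tombstone grace period
    ALTER TABLE ks.tbl WITH gc_grace_seconds = 3600;

    # shell: force a major compaction so expired tombstones are purged
    nodetool compact ks tbl

Note that lowering gc_grace_seconds below the repair interval risks deleted data reappearing if a node misses the delete and is repaired after the tombstone has been purged.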

when the "delete statement" would be deleted?

2019-04-24 Thread onmstester onmstester
I just deleted multiple partitions from one of my tables. Dumping the sstables shows that the data was successfully deleted, but the 'marked_deleted' rows for each of the partitions still exist in the sstable and take up storage. Is there any way to get rid of the storage overhead of these delete statements?
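One way to dump sstables and see these tombstones, assuming C* 3.x and a hypothetical data path, is sstabledump:

    # shell: dump an sstable as JSON (path is hypothetical)
    sstabledump /var/lib/cassandra/data/ks/tbl-<id>/mc-1-big-Data.db
    # tombstoned partitions appear with a "deletion_info" block containing
    # "marked_deleted" and "local_delete_time" timestamps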

Re: A cluster (RF=3) not recovering after two nodes are stopped

2019-04-24 Thread Hiroyuki Yamada
Hello Ben, Thank you for the quick reply. I haven't tried that case, but it doesn't recover even after I stop the stress. Thanks, Hiro

Re: A cluster (RF=3) not recovering after two nodes are stopped

2019-04-24 Thread Ben Slater
Is it possible that stress is overloading node 1 so it’s not recovering state properly when node 2 comes up? Have you tried running with a lower load (say 2 or 3 threads)? Cheers, Ben
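For what it's worth, a cassandra-stress run at the lower thread count Ben suggests might look like this (the operation count and node address are hypothetical):

    cassandra-stress write n=100000 -rate threads=2 -node 10.0.0.1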

A cluster (RF=3) not recovering after two nodes are stopped

2019-04-24 Thread Hiroyuki Yamada
Hello, I faced a weird issue when recovering a cluster after two nodes were stopped. It is easily reproducible and looks like a bug or an issue to fix, so let me write down the steps to reproduce. === STEPS TO REPRODUCE === * Create a 3-node cluster with RF=3 - node1 (seed), node2, node3 *
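The preview is cut off here, but the first step could be reproduced locally with, for example, ccm; the tool choice and cluster name are assumptions, and only the version (3.11.4) comes from the thread:

    # shell: spin up a local 3-node C* 3.11.4 cluster (ccm is an assumed tool, not named by the poster)
    ccm create repro -v 3.11.4 -n 3 -s
    # create the RF=3 keyspace, then stop two of the nodes:
    ccm node2 stop
    ccm node3 stop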