Repairs work fine with TWCS, but having a non-expiring row will prevent
tombstones in newer sstables from being purged.
I suspect someone did a manual insert/update without a ttl and that effectively
blocks all other expiring cells from being purged.
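If that's what happened, ttl() can confirm it: it returns null for any cell
written without a TTL. A sketch, with placeholder keyspace/table/column names:

    # ks.sensor_data, id and val are placeholders, not names from this thread.
    # A null in the ttl() column marks a cell that will never expire.
    cqlsh -e "SELECT id, ttl(val), writetime(val) FROM ks.sensor_data LIMIT 100;"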
--
Jeff Jirsa
> On May 3, 2019, at 7:57
Hi Mike,
If you would, please share your compaction settings. More than likely, your
issue comes from one of two causes:
1. You have read_repair_chance set to anything other than 0
2. You’re running repairs on the TWCS CF
Or both….
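If it turns out to be reason 1, zeroing the options on the table is the usual
fix. A sketch with a placeholder table name (these options exist in Cassandra
3.x; they were removed in 4.0):

    cqlsh -e "
      ALTER TABLE ks.sensor_data
        WITH read_repair_chance = 0
        AND dclocal_read_repair_chance = 0;"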
From: Mike Torra [mailto:mto...@salesforce.com.INVALID]
Sent: Friday, May
Thx for the help Paul - there are definitely some details here I still
don't fully understand, but this helped me resolve the problem and know
what to look for in the future :)
On Fri, May 3, 2019 at 12:44 PM Paul Chandler wrote:
> Hi Mike,
>
> For TWCS the sstable can only be deleted when all
Hi Mike,
For TWCS the sstable can only be deleted when all the data in that sstable has
expired, but you had a record without a ttl in it, so that sstable could never
be deleted.
That bit is straightforward; the next bit I remember reading somewhere but
can’t find at the moment to confirm
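A sketch of one way forward, with placeholder names (the right fix depends on
whether that record is still needed):

    # sstableexpiredblockers ships with Cassandra and reports which sstables
    # are blocking others from being dropped:
    sstableexpiredblockers ks sensor_data

    # Either delete the non-expiring record outright...
    cqlsh -e "DELETE FROM ks.sensor_data WHERE id = 'rogue-key';"

    # ...or guard against future writes that omit a TTL (value in seconds):
    cqlsh -e "ALTER TABLE ks.sensor_data WITH default_time_to_live = 604800;"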
This does indeed seem to be a problem of overlapping sstables, but I don't
understand why the data (and number of sstables) just continues to grow
indefinitely. I also don't understand why this problem is only appearing on
some nodes. Is it just a coincidence that the one rogue test row without a ttl
Thank you all.
So, please, bear with me for a second. I'm trying to figure out how data
can be totally lost under the above circumstances when nodes die in two
out of three racks.
You stated "the replica may or may not have made its way to the third
node". Why "may not"?
This is what I
Hi Shalom,
I've run refresh as Nitan suggested, without sstablescrub.
Then I tried a drain/restart on the 3 nodes. The repair is now OK.
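(For reference, a sketch of that sequence with placeholder keyspace/table
names; the restart step depends on how the service is managed:)

    nodetool refresh my_ks my_table    # pick up sstables placed in the data dir
    nodetool drain                     # flush memtables, stop accepting writes
    sudo systemctl restart cassandra   # or however Cassandra is managed
    nodetool repair my_ks              # confirm repair now completes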
Thanks for your help
Simon
On 02/05/2019 16:58, shalom sagges wrote:
Hi Simon,
If you haven't done that already, try to drain and restart the node you