Thanks Jeff. CASSANDRA-6434 is exactly the issue. Do we have a plan/ticket
to get rid of GCGS (and make only_purge_repaired_tombstones default)? Will
it be covered in CASSANDRA-14145?
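For anyone skimming the thread: conceptually, only_purge_repaired_tombstones changes the purge rule from "older than gc_grace_seconds" to "confirmed repaired". A rough sketch of that decision in plain Python (the function and parameter names are invented for illustration and do not match Cassandra's internals):

```python
# Sketch of the two tombstone-purge rules under discussion.
# Names are made up for illustration; this is not Cassandra code.

def purgeable(deletion_time, repaired, now,
              gc_grace_seconds=864000, only_purge_repaired=False):
    if only_purge_repaired:
        # CASSANDRA-6434 semantics: only drop a tombstone once repair
        # has confirmed every replica has seen it.
        return repaired
    # Classic rule: drop it once it is older than gc_grace_seconds,
    # whether or not it ever reached all replicas.
    return now - deletion_time > gc_grace_seconds

# An unrepaired tombstone past gc_grace: purged under the classic rule,
# kept under only_purge_repaired_tombstones.
print(purgeable(0, repaired=False, now=900000))                            # True
print(purgeable(0, repaired=False, now=900000, only_purge_repaired=True))  # False
```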
I created a ticket, CASSANDRA-14543, for replaying hints that contain
purgeable tombstones, which doesn't fix the root cause but
Think he's talking about
https://issues.apache.org/jira/browse/CASSANDRA-6434
Doesn't solve every problem if you don't run repair at all, but if you're
not running repairs, you're nearly guaranteed problems with resurrection
after gcgs anyway.
On Thu, Jun 21, 2018 at 11:33 AM, Jay Zhuang
Yes, I also agree that the user should run (incremental) repair within GCGS
to prevent it from happening.
@Sankalp, would you please point us to the patch from Marcus that you mentioned?
The problem is basically the same as
https://issues.apache.org/jira/browse/CASSANDRA-14145 and
CASSANDRA-11427.
I agree with Stefan that we should use incremental repair and use patches
from Marcus to drop tombstones only from repaired data.
Regarding deep repair, you can bump read repair and run the repair. The
issue is that you will stream a lot of data and your blocking read
repairs will also go up.
We've seen this before but couldn't tie it to GCGS so we ended up
forgetting about it. Now with a reproducible test case things make much
more sense and we should be able to fix this.
Seems that it's most certainly a bug with partition deletions and handling
of GC grace seconds. It seems that the
Hi,
We know that the deleted data may re-appear if repair is not run within
gc_grace_seconds. When the tombstone is not propagated to all nodes, the
data will re-appear. But it also causes the following 2 issues before the
tombstone is compacted away:
a. inconsistent query result
With consistency