[
https://issues.apache.org/jira/browse/CASSANDRA-4905?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13629363#comment-13629363
]
Christian Spriegel commented on CASSANDRA-4905:
-----------------------------------------------
Yeah, repair with TTLed columns can be nasty. Since November, I've seen repairs
streaming up to 90GB of data for a single repair. According to nodetool, this
cluster had no dropped writes. So I would assume it was consistent already.
Before, Sun Dec 23 08:00:01 UTC 2012:
192.168.1.1  datacenter1  rack1  Up  Normal  404.17 GB  33.33%  0
192.168.1.2  datacenter1  rack1  Up  Normal  410.9 GB   33.33%  56713727820156410577229101238628035242
192.168.1.3  datacenter1  rack1  Up  Normal  404.27 GB  33.33%  113427455640312821154458202477256070484
After, Sun Dec 23 12:19:38 UTC 2012:
192.168.1.1  datacenter1  rack1  Up  Normal  497.95 GB  33.33%  0
192.168.1.2  datacenter1  rack1  Up  Normal  413.26 GB  33.33%  56713727820156410577229101238628035242
192.168.1.3  datacenter1  rack1  Up  Normal  449.83 GB  33.33%  113427455640312821154458202477256070484
I'm not saying I want this patch in 1.1. I just wanted to share this rather
spectacular repair :-)
> Repair should exclude gcable tombstones from merkle-tree computation
> --------------------------------------------------------------------
>
> Key: CASSANDRA-4905
> URL: https://issues.apache.org/jira/browse/CASSANDRA-4905
> Project: Cassandra
> Issue Type: Improvement
> Components: Core
> Reporter: Christian Spriegel
> Assignee: Sylvain Lebresne
> Fix For: 1.2.0 beta 3
>
> Attachments: 4905.txt
>
>
> Currently, gcable tombstones get repaired whenever some replicas have already
> compacted them away but others have not, so repair streams data that is about
> to be purged anyway.
> This could be avoided by ignoring all gcable tombstones during merkle tree
> calculation.
> This was discussed with Sylvain on the mailing list:
> http://cassandra-user-incubator-apache-org.3065146.n2.nabble.com/repair-compaction-and-tombstone-rows-td7583481.html
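To illustrate the idea behind the patch (this is a hypothetical sketch, not Cassandra's actual code): a tombstone is "gcable" once its local deletion time plus gc_grace_seconds lies in the past, and such tombstones could simply be skipped while hashing rows into the Merkle tree, so that a replica that has already compacted them away produces the same hash as one that has not. The class and method names below are made up for illustration.

```java
// Hypothetical sketch: decide whether a tombstone should be excluded from
// Merkle-tree computation because it is past gc_grace_seconds.
public class GcableTombstoneFilter {

    // A tombstone is gcable once its grace period has fully elapsed.
    public static boolean isGcable(long localDeletionTimeSecs,
                                   long gcGraceSeconds,
                                   long nowSecs) {
        return localDeletionTimeSecs + gcGraceSeconds < nowSecs;
    }

    public static void main(String[] args) {
        long now = 1_356_249_600L;  // Sun Dec 23 2012 UTC, for illustration
        long gcGrace = 864_000L;    // the default gc_grace_seconds (10 days)

        // Deleted 11 days ago: past grace, would be skipped during hashing.
        System.out.println(isGcable(now - 11 * 86_400L, gcGrace, now));
        // Deleted 1 day ago: still within grace, hashed normally.
        System.out.println(isGcable(now - 86_400L, gcGrace, now));
    }
}
```

With this filter in place, two replicas that differ only in already-purgeable tombstones would compute identical tree hashes and nothing would be streamed for them.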