[
https://issues.apache.org/jira/browse/CASSANDRA-1717?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13079984#comment-13079984
]
T Jake Luciani commented on CASSANDRA-1717:
-------------------------------------------
Right, I think given that we are using block compression it really only makes
sense to do checksums at the block level; I just didn't know what recovery tools
we could build.
Sounds like using the row index we could repair the range containing the bad
block(s) from replicas.
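A minimal sketch of the block-level idea being discussed: compute a checksum over each block when it is written, store it alongside the block, and recompute on read to detect corrupt-but-readable bytes. This is only an illustration (the class and method names are hypothetical, not Cassandra's actual implementation), using CRC32 from the JDK:

```java
import java.util.zip.CRC32;

public class BlockChecksum {
    // Hypothetical helper: CRC32 over a data block. The checksum would be
    // stored with the block on write and recomputed on read.
    static long checksum(byte[] block) {
        CRC32 crc = new CRC32();
        crc.update(block, 0, block.length);
        return crc.getValue();
    }

    public static void main(String[] args) {
        byte[] block = "some column data".getBytes();
        long stored = checksum(block);   // persisted alongside the block
        block[3] ^= 0x01;                // simulate bitrot: bytes still readable
        boolean corrupt = checksum(block) != stored;
        System.out.println(corrupt);     // prints true: corruption detected
    }
}
```

On a mismatch, the read path would flag the block as bad, and (per the comment above) the row index could identify the affected range so it can be repaired from replicas.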
> Cassandra cannot detect corrupt-but-readable column data
> --------------------------------------------------------
>
> Key: CASSANDRA-1717
> URL: https://issues.apache.org/jira/browse/CASSANDRA-1717
> Project: Cassandra
> Issue Type: New Feature
> Components: Core
> Reporter: Jonathan Ellis
> Assignee: Pavel Yaskevich
> Fix For: 1.0
>
> Attachments: checksums.txt
>
>
> Most corruptions of on-disk data due to bitrot render the column (or row)
> unreadable, so the data can be replaced by read repair or anti-entropy. But
> if the corruption leaves the column data readable we do not detect it, and if
> the corruption produces a higher timestamp value, the bad data can even
> resist being overwritten by newer values.
--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira