[ https://issues.apache.org/jira/browse/CASSANDRA-1717?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13079960#comment-13079960 ]
T Jake Luciani commented on CASSANDRA-1717:
-------------------------------------------
Block level and column index level are actually the same size, right? 64KB.
The reason block level isn't ideal to me is that it makes it much harder to
recover / support partial reads, since a block has no context in the file
format. Though if there is corruption with block-level compression, then it's
inherently a block-level problem :)
So what kind of recovery can we support? Can we ever recover from bad blocks,
or just throw an error: "bad blocks found, manual repair required"?
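
To make that concrete, here is a rough sketch of what per-block verification
could look like, assuming a CRC32 stored after each 64KB block (the names here
are invented for illustration, not anything from the Cassandra tree). Note
that on a mismatch all we can do is name the bad block; the replacement data
has to come from a replica via read repair or anti-entropy:

import java.io.IOException;
import java.io.RandomAccessFile;
import java.util.zip.CRC32;

// Hypothetical sketch: each fixed-size block is followed by its 8-byte CRC.
public class ChecksummedBlockReader
{
    static final int BLOCK_SIZE = 64 * 1024;

    // Reads one block and validates its checksum. On mismatch we can
    // only report which block is bad; the block carries no context
    // that would let us reconstruct it locally.
    public static byte[] readBlock(RandomAccessFile file, long blockIndex) throws IOException
    {
        file.seek(blockIndex * (BLOCK_SIZE + 8L)); // 8 trailing bytes hold the CRC
        byte[] block = new byte[BLOCK_SIZE];
        file.readFully(block);
        long storedCrc = file.readLong();

        CRC32 crc = new CRC32();
        crc.update(block, 0, block.length);
        if (crc.getValue() != storedCrc)
            throw new IOException("bad block " + blockIndex + " found, manual repair required");
        return block;
    }
}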
> Cassandra cannot detect corrupt-but-readable column data
> --------------------------------------------------------
>
> Key: CASSANDRA-1717
> URL: https://issues.apache.org/jira/browse/CASSANDRA-1717
> Project: Cassandra
> Issue Type: New Feature
> Components: Core
> Reporter: Jonathan Ellis
> Assignee: Pavel Yaskevich
> Fix For: 1.0
>
> Attachments: checksums.txt
>
>
> Most corruptions of on-disk data due to bitrot render the column (or row)
> unreadable, so the data can be replaced by read repair or anti-entropy. But
> if the corruption keeps column data readable, we do not detect it, and if it
> corrupts to a higher timestamp value, it can even resist being overwritten by
> newer values.
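
To illustrate why a corrupt-but-higher timestamp resists newer writes: column
reconciliation is last-write-wins on the timestamp, so a bit flip that raises
a stored timestamp produces a value that outranks every legitimate write after
it. A minimal sketch (invented names, not the actual reconcile code):

class Column
{
    final byte[] value;
    final long timestamp;

    Column(byte[] value, long timestamp)
    {
        this.value = value;
        this.timestamp = timestamp;
    }

    // Last-write-wins: the higher timestamp is kept. A bit flip in the
    // high bits of a stored timestamp makes the corrupt column win
    // forever, since reads still succeed and nothing checksums the
    // serialized bytes.
    static Column reconcile(Column a, Column b)
    {
        return a.timestamp >= b.timestamp ? a : b;
    }
}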