[
https://issues.apache.org/jira/browse/CASSANDRA-1717?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13081603#comment-13081603
]
Sylvain Lebresne commented on CASSANDRA-1717:
---------------------------------------------
{quote}
bq. We should convert the CRC32 to an int (and only write that) as it is an int
internally (getValue() returns a long only because CRC32 implements the
interface Checksum that require that).
Lets leave that to the ticket for CRC optimization which will allow us to
modify that system-wide
{quote}
Let's not:
* this is completely orthogonal to switching to a drop-in, faster, CRC
implementation.
* it is unclear that we want to make this system-wide. IMHO, it is not worth
breaking commit log compatibility for that, but it is stupid to commit new code
that perpetuates the mistake, only to change it later.
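To illustrate the point above: a CRC32 value fits in 32 bits, so narrowing {{getValue()}} to an int before writing it to disk loses no information and saves 4 bytes per checksum. A minimal sketch (names are illustrative, not Cassandra code):

```java
import java.util.zip.CRC32;

public class CrcIntDemo {
    public static void main(String[] args) {
        // CRC32 is a 32-bit checksum; getValue() returns a long only because
        // the Checksum interface declares it that way. The upper 32 bits are
        // always zero, so a narrowing cast to int is lossless.
        CRC32 crc = new CRC32();
        crc.update("some commit log segment bytes".getBytes());
        long asLong = crc.getValue();
        int asInt = (int) asLong;           // write 4 bytes instead of 8
        // Round-trip on read: widen back with a mask to compare against
        // a freshly computed checksum.
        long recovered = asInt & 0xFFFFFFFFL;
        System.out.println(asLong == recovered); // prints "true"
    }
}
```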
> Cassandra cannot detect corrupt-but-readable column data
> --------------------------------------------------------
>
> Key: CASSANDRA-1717
> URL: https://issues.apache.org/jira/browse/CASSANDRA-1717
> Project: Cassandra
> Issue Type: New Feature
> Components: Core
> Reporter: Jonathan Ellis
> Assignee: Pavel Yaskevich
> Fix For: 1.0
>
> Attachments: CASSANDRA-1717-v2.patch, CASSANDRA-1717.patch,
> checksums.txt
>
>
> Most corruptions of on-disk data due to bitrot render the column (or row)
> unreadable, so the data can be replaced by read repair or anti-entropy. But
> if the corruption keeps column data readable we do not detect it, and if it
> corrupts to a higher timestamp value can even resist being overwritten by
> newer values.