[
https://issues.apache.org/jira/browse/CASSANDRA-6791?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13917270#comment-13917270
]
Jonathan Ellis commented on CASSANDRA-6791:
-------------------------------------------
In the meantime, one workaround is disabling compression on the source CF.
(Remember that if no options are specified, compression is on by default in
1.1+.)
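For illustration only, a sketch of that workaround in CQL3 (the keyspace and table names are placeholders for the affected CF; syntax as in the 1.2/2.0 line, where setting {{sstable_compression}} to the empty string disables compression):
{noformat}
-- hypothetical names; point this at the CF that scrub is choking on
ALTER TABLE myks.mycf
  WITH compression = {'sstable_compression': ''};
{noformat}
New SSTables written by scrub after this change should come out uncompressed, which sidesteps the CompressedSequentialWriter path.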
> CompressedSequentialWriter can write zero-length segments during scrub
> ----------------------------------------------------------------------
>
> Key: CASSANDRA-6791
> URL: https://issues.apache.org/jira/browse/CASSANDRA-6791
> Project: Cassandra
> Issue Type: Bug
> Components: Core
> Reporter: Jonathan Ellis
> Assignee: Marcus Eriksson
> Priority: Minor
> Fix For: 1.2.16, 2.0.6
>
>
> This results in errors like this:
> {noformat}
> java.lang.IllegalArgumentException
> at java.nio.Buffer.limit(Buffer.java:267)
> at org.apache.cassandra.io.compress.CompressedRandomAccessReader.decompressChunk(CompressedRandomAccessReader.java:108)
> at org.apache.cassandra.io.compress.CompressedRandomAccessReader.reBuffer(CompressedRandomAccessReader.java:87)
> at org.apache.cassandra.io.util.RandomAccessReader.seek(RandomAccessReader.java:280)
> {noformat}
> (A zero-length chunk actually turns into a length of -4 in the
> {{compressed.limit(chunk.length)}} call, because no checksum is written for it either.)
> I thought this would come from two bad rows in a row, but it doesn't; the source
> file that scrub was reading when it created this did not contain any such pair. (It
> does have several instances of bad-good-bad, i.e. bad rows separated by exactly one
> good row, which is not large enough to force a new compressed chunk.)
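The arithmetic behind that -4 can be shown with a minimal, self-contained sketch. This is illustrative code, not Cassandra's; it assumes the chunk length is derived as the distance to the next chunk offset minus the 4-byte checksum, consistent with the description above:
{noformat}
import java.nio.ByteBuffer;

public class ZeroLengthChunkDemo
{
    public static void main(String[] args)
    {
        // Two adjacent chunk offsets in the compressed file. A zero-length
        // segment means nothing was written between them -- not even the
        // 4-byte checksum that normally follows the compressed data.
        long chunkOffset = 4096;
        long nextChunkOffset = 4096;

        // Chunk length as computed for decompression: payload only, checksum excluded.
        int chunkLength = (int) (nextChunkOffset - chunkOffset - 4); // -4

        ByteBuffer compressed = ByteBuffer.allocate(65536);
        // Fails the same way as the decompressChunk frame in the trace:
        // Buffer.limit rejects a negative value with IllegalArgumentException.
        compressed.limit(chunkLength);
    }
}
{noformat}
Run as-is, the {{limit(-4)}} call throws {{IllegalArgumentException}} from {{java.nio.Buffer.limit}}, matching the top frames of the stack trace.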