[ https://issues.apache.org/jira/browse/CASSANDRA-6791?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13917269#comment-13917269 ]

Jonathan Ellis commented on CASSANDRA-6791:
-------------------------------------------

Brandon has a copy of the data that can reproduce this, via [~kvaster].

> CompressedSequentialWriter can write zero-length segments during scrub
> ----------------------------------------------------------------------
>
>                 Key: CASSANDRA-6791
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-6791
>             Project: Cassandra
>          Issue Type: Bug
>          Components: Core
>            Reporter: Jonathan Ellis
>            Assignee: Marcus Eriksson
>            Priority: Minor
>             Fix For: 1.2.16, 2.0.6
>
>
> This results in errors like this:
> {noformat}
> java.lang.IllegalArgumentException
>       at java.nio.Buffer.limit(Buffer.java:267)
>       at org.apache.cassandra.io.compress.CompressedRandomAccessReader.decompressChunk(CompressedRandomAccessReader.java:108)
>       at org.apache.cassandra.io.compress.CompressedRandomAccessReader.reBuffer(CompressedRandomAccessReader.java:87)
>       at org.apache.cassandra.io.util.RandomAccessReader.seek(RandomAccessReader.java:280)
> {noformat}
> (This is because a zero-length chunk actually turns into a length of -4 in
> the {{compressed.limit(chunk.length)}} call, since no checksum is written
> for it either; see the sketch after the quoted description below.)
> I thought this would be caused by two bad rows in a row, but it's not: the
> source file from which scrub created this one did not contain any of those.
> (It does, however, contain several instances of bad-good-bad, i.e. bad rows
> separated by exactly one good row that is not large enough to force a new
> compressed chunk.)
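
For illustration, here is a minimal, self-contained Java sketch of how a
zero-length segment leads to the failure above. It is not Cassandra source:
the class, helper, and constant names are hypothetical, and it assumes the
chunk length is derived from two consecutive chunk offsets minus a 4-byte
checksum, as the description suggests.

{noformat}
import java.nio.ByteBuffer;

public class ZeroLengthChunkDemo
{
    // each compressed chunk is normally followed by a 4-byte checksum
    static final int CHECKSUM_BYTES = 4;

    // hypothetical stand-in for the metadata lookup that computes a
    // chunk's compressed length from consecutive chunk offsets
    static int chunkLength(long offset, long nextOffset)
    {
        return (int) (nextOffset - offset - CHECKSUM_BYTES);
    }

    public static void main(String[] args)
    {
        // a zero-length segment: the writer emitted no data and no
        // checksum, so two consecutive chunk offsets are identical
        long offset = 1024;
        long nextOffset = 1024;

        int length = chunkLength(offset, nextOffset); // -4

        ByteBuffer compressed = ByteBuffer.allocate(65536);
        // mirrors compressed.limit(chunk.length) in decompressChunk():
        // a negative limit makes Buffer.limit() throw
        // java.lang.IllegalArgumentException, as in the trace above
        compressed.limit(length);
    }
}
{noformat}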



