[ https://issues.apache.org/jira/browse/CASSANDRA-2296?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13004347#comment-13004347 ]

Jason Harvey commented on CASSANDRA-2296:
-----------------------------------------

Got the following error while restarting *after* I ran the scrub on that same node:

{code}
ERROR [CompactionExecutor:1] 2011-03-08 19:54:48,023 AbstractCassandraDaemon.java (line 114) Fatal exception in thread Thread[CompactionExecutor:1,1,main]
java.io.IOError: java.io.EOFException
        at org.apache.cassandra.io.sstable.SSTableIdentityIterator.<init>(SSTableIdentityIterator.java:117)
        at org.apache.cassandra.io.sstable.SSTableIdentityIterator.<init>(SSTableIdentityIterator.java:67)
        at org.apache.cassandra.io.sstable.SSTableScanner$KeyScanningIterator.next(SSTableScanner.java:179)
        at org.apache.cassandra.io.sstable.SSTableScanner$KeyScanningIterator.next(SSTableScanner.java:144)
        at org.apache.cassandra.io.sstable.SSTableScanner.next(SSTableScanner.java:136)
        at org.apache.cassandra.io.sstable.SSTableScanner.next(SSTableScanner.java:39)
        at org.apache.commons.collections.iterators.CollatingIterator.set(CollatingIterator.java:284)
        at org.apache.commons.collections.iterators.CollatingIterator.least(CollatingIterator.java:326)
        at org.apache.commons.collections.iterators.CollatingIterator.next(CollatingIterator.java:230)
        at org.apache.cassandra.utils.ReducingIterator.computeNext(ReducingIterator.java:68)
        at com.google.common.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:136)
        at com.google.common.collect.AbstractIterator.hasNext(AbstractIterator.java:131)
        at org.apache.commons.collections.iterators.FilterIterator.setNextObject(FilterIterator.java:183)
        at org.apache.commons.collections.iterators.FilterIterator.hasNext(FilterIterator.java:94)
        at org.apache.cassandra.db.CompactionManager.doCompaction(CompactionManager.java:449)
        at org.apache.cassandra.db.CompactionManager$1.call(CompactionManager.java:124)
        at org.apache.cassandra.db.CompactionManager$1.call(CompactionManager.java:94)
        at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
        at java.util.concurrent.FutureTask.run(FutureTask.java:138)
        at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
        at java.lang.Thread.run(Thread.java:662)
{code}

> Scrub resulting in "bloom filter claims to be longer than entire row size" error
> --------------------------------------------------------------------------------
>
>                 Key: CASSANDRA-2296
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-2296
>             Project: Cassandra
>          Issue Type: Bug
>            Reporter: Jason Harvey
>         Attachments: sstable_part1.tar.bz2, sstable_part2.tar.bz2
>
>
> Doing a scrub on a node that I upgraded from 0.7.1 (previously 0.6.8) to 0.7.3. Getting this error multiple times:
> {code}
>  WARN [CompactionExecutor:1] 2011-03-08 18:33:52,513 CompactionManager.java (line 625) Row is unreadable; skipping to next
>  WARN [CompactionExecutor:1] 2011-03-08 18:33:52,514 CompactionManager.java (line 599) Non-fatal error reading row (stacktrace follows)
> java.io.IOError: java.io.EOFException: bloom filter claims to be longer than entire row size
>         at org.apache.cassandra.io.sstable.SSTableIdentityIterator.<init>(SSTableIdentityIterator.java:117)
>         at org.apache.cassandra.db.CompactionManager.doScrub(CompactionManager.java:590)
>         at org.apache.cassandra.db.CompactionManager.access$600(CompactionManager.java:56)
>         at org.apache.cassandra.db.CompactionManager$3.call(CompactionManager.java:195)
>         at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
>         at java.util.concurrent.FutureTask.run(FutureTask.java:138)
>         at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
>         at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
>         at java.lang.Thread.run(Thread.java:662)
> Caused by: java.io.EOFException: bloom filter claims to be longer than entire row size
>         at org.apache.cassandra.io.sstable.IndexHelper.defreezeBloomFilter(IndexHelper.java:113)
>         at org.apache.cassandra.io.sstable.SSTableIdentityIterator.<init>(SSTableIdentityIterator.java:87)
>         ... 8 more
>  WARN [CompactionExecutor:1] 2011-03-08 18:33:52,515 CompactionManager.java (line 625) Row is unreadable; skipping to next
>  INFO [CompactionExecutor:1] 2011-03-08 18:33:53,777 CompactionManager.java (line 637) Scrub of SSTableReader(path='/cassandra/data/reddit/Hide-f-671-Data.db') complete: 254709 rows in new sstable
>  WARN [CompactionExecutor:1] 2011-03-08 18:33:53,777 CompactionManager.java (line 639) Unable to recover 1630 that were skipped.  You can attempt manual recovery from the pre-scrub snapshot.  You can also run nodetool repair to transfer the data from a healthy replica, if any
> {code}
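For context on the quoted trace: the EOFException originates in IndexHelper.defreezeBloomFilter, which validates the serialized bloom filter's declared length against the bytes remaining in the row before deserializing it; a corrupt or truncated row fails that check and gets skipped. A minimal sketch of that kind of length check (hypothetical names and framing, not the actual Cassandra code) might look like:

```java
import java.io.ByteArrayInputStream;
import java.io.DataInputStream;
import java.io.EOFException;
import java.io.IOException;

public class BloomFilterLengthCheck {
    /**
     * Hypothetical sketch: read a length-prefixed bloom filter from a row,
     * refusing to allocate if the declared length exceeds the bytes that
     * can actually remain in the row. dataSize is the serialized row size;
     * bytesRead is how much of the row has been consumed so far.
     */
    static byte[] readBloomFilter(DataInputStream in, long dataSize, long bytesRead)
            throws IOException {
        int filterLength = in.readInt();            // declared filter size
        long remaining = dataSize - bytesRead - 4;  // bytes left after the length field
        if (filterLength <= 0 || filterLength > remaining) {
            // A corrupt/truncated row trips this check; scrub then logs
            // "Row is unreadable; skipping to next" and moves on.
            throw new EOFException("bloom filter claims to be longer than entire row size");
        }
        byte[] filter = new byte[filterLength];
        in.readFully(filter);
        return filter;
    }
}
```

The point of checking before allocating is that a garbage length field would otherwise either cause a huge allocation or an EOF deep inside deserialization; failing fast here is what lets scrub treat the row as skippable rather than fatal.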

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira
