[ https://issues.apache.org/jira/browse/CASSANDRA-5059?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13537736#comment-13537736 ]

Robert Coli commented on CASSANDRA-5059:
----------------------------------------

# java -version
java version "1.6.0_24"
Java(TM) SE Runtime Environment (build 1.6.0_24-b07)
Java HotSpot(TM) 64-Bit Server VM (build 19.1-b02, mixed mode)

(Ubuntu 10.04 on EC2)

Unable to reproduce via the following steps (a shell sketch follows the list):

1) take snappy-compressed sstables from a 1.0.11 cluster
2) define this CF in a "different" keyspace in a 1.1.6 cluster (note: not 1.1.7)
3) put this sstable into the data directory of the 1.1.6 cluster, with the appropriate keyspace prefix
4) run a refresh to pick up the sstables
5) run scrub on this columnfamily

                
> 1.0.11 -> 1.1.7 upgrade results in unusable compressed sstables
> ---------------------------------------------------------------
>
>                 Key: CASSANDRA-5059
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-5059
>             Project: Cassandra
>          Issue Type: Bug
>    Affects Versions: 1.1.7
>         Environment: ubuntu
> sun-java6 6.24-1build0.10.10.1
>            Reporter: Jason Harvey
>         Attachments: LastModified.tar
>
>
> Upgraded a single node in my ring to 1.1.7. The upgrade process went normally
> with no errors. However, as soon as the node joined the ring, it started
> spewing this exception hundreds of times a second:
> {code}
>  WARN [ReadStage:22] 2012-12-12 02:00:56,181 FileUtils.java (line 116) Failed closing org.apache.cassandra.db.columniterator.SSTableSliceIterator@5959baa2
> java.io.IOException: Bad file descriptor
>         at sun.nio.ch.FileDispatcher.preClose0(Native Method)
>         at sun.nio.ch.FileDispatcher.preClose(FileDispatcher.java:59)
>         at sun.nio.ch.FileChannelImpl.implCloseChannel(FileChannelImpl.java:96)
>         at java.nio.channels.spi.AbstractInterruptibleChannel.close(AbstractInterruptibleChannel.java:97)
>         at java.io.FileInputStream.close(FileInputStream.java:258)
>         at org.apache.cassandra.io.compress.CompressedRandomAccessReader.close(CompressedRandomAccessReader.java:131)
>         at sun.nio.ch.FileChannelImpl.implCloseChannel(FileChannelImpl.java:121)
>         at java.nio.channels.spi.AbstractInterruptibleChannel.close(AbstractInterruptibleChannel.java:97)
>         at java.io.RandomAccessFile.close(RandomAccessFile.java:541)
>         at org.apache.cassandra.io.util.RandomAccessReader.close(RandomAccessReader.java:224)
>         at org.apache.cassandra.io.compress.CompressedRandomAccessReader.close(CompressedRandomAccessReader.java:130)
>         at org.apache.cassandra.db.columniterator.SSTableSliceIterator.close(SSTableSliceIterator.java:132)
>         at org.apache.cassandra.io.util.FileUtils.closeQuietly(FileUtils.java:112)
>         at org.apache.cassandra.db.CollationController.collectAllData(CollationController.java:300)
>         at org.apache.cassandra.db.CollationController.getTopLevelColumns(CollationController.java:64)
>         at org.apache.cassandra.db.ColumnFamilyStore.getTopLevelColumns(ColumnFamilyStore.java:1347)
>         at org.apache.cassandra.db.ColumnFamilyStore.getColumnFamily(ColumnFamilyStore.java:1209)
>         at org.apache.cassandra.db.ColumnFamilyStore.getColumnFamily(ColumnFamilyStore.java:1144)
>         at org.apache.cassandra.db.Table.getRow(Table.java:378)
>         at org.apache.cassandra.db.SliceFromReadCommand.getRow(SliceFromReadCommand.java:69)
>         at org.apache.cassandra.db.ReadVerbHandler.doVerb(ReadVerbHandler.java:51)
>         at org.apache.cassandra.net.MessageDeliveryTask.run(MessageDeliveryTask.java:59)
>         at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
>         at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
>         at java.lang.Thread.run(Thread.java:662)
> {code}
> The node was not responding to reads on any CFs, so I was forced to do an 
> emergency roll-back and abandon the upgrade.
> The node has roughly 3800 sstables, both LCS and SizeTiered, as well as
> compressed and uncompressed CFs.
> After some digging on a test node, I've determined that the issue occurs when 
> attempting to read/upgrade/scrub a compressed 1.0.11-generated sstable on 
> 1.1.7.
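The read, upgrade, and scrub paths named in the description can each be exercised directly; a minimal sketch, with placeholder keyspace/CF names (MyKS, MyCF):

{code}
# upgrade path: rewrite sstables that are not on the current format version
nodetool -h localhost upgradesstables MyKS MyCF

# scrub path: validate and rewrite the sstables
nodetool -h localhost scrub MyKS MyCF

# read path: any slice read against the CF, e.g. via cassandra-cli:
#   use MyKS; list MyCF limit 1;
{code}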
