[ https://issues.apache.org/jira/browse/CASSANDRA-5088?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13539470#comment-13539470 ]

Cathy Daw commented on CASSANDRA-5088:
--------------------------------------

I can consistently reproduce this by running upgrade scenarios for DataStax Enterprise (essentially C* 1.1.6 to C* 1.1.8):
* I can't reproduce this going from vanilla C* 1.1.6 to C* 1.1.8 using cassandra-stress (see the sketch after this list)
* I can reproduce this on my Mac using DSE; the Java version is 1.6.0_24
* I can't reproduce this on Ubuntu Precise 64-bit using Java 1.6.0_31
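
For reference, the vanilla attempt was roughly of this shape (a sketch only; the stress tool path, flags, and key counts are illustrative and may differ by version and distribution):

{code}
# populate on vanilla C* 1.1.6 with the legacy stress tool
tools/stress/bin/stress -d 127.0.0.1 -o insert -n 100000

# drain and stop, swap in the 1.1.8 binaries over the same data directories, restart
bin/nodetool -h 127.0.0.1 drain

# read the pre-upgrade data back on 1.1.8
tools/stress/bin/stress -d 127.0.0.1 -o read -n 100000
{code}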

*Pre-Upgrade: run on DSE 2.2.1 / Cassandra 1.1.6*
{code}
~/dse-2.2.1/demos/portfolio_manager/bin/pricer -o INSERT_PRICES
~/dse-2.2.1/demos/portfolio_manager/bin/pricer -o UPDATE_PORTFOLIOS
~/dse-2.2.1/demos/portfolio_manager/bin/pricer -o INSERT_HISTORICAL_PRICES -n 100
~/dse-2.2.1/bin/dse  hive -f ~/dse-2.2.1/demos/portfolio_manager/10_day_loss.q
~/dse-2.2.1/bin/nodetool drain
sudo pkill -9 java

# then restart using C* 1.1.8
{code}

+Below are the related errors observed after the upgrade:+


*Post-Upgrade: read CF created pre-upgrade*
{code}
ERROR [Thrift:3] 2012-12-25 18:53:22,139 AbstractCassandraDaemon.java (line 135) Exception in thread Thread[Thrift:3,5,main]
java.io.IOError: java.io.IOException: Bad file descriptor
        at org.apache.cassandra.utils.MergeIterator.close(MergeIterator.java:65)
        at org.apache.cassandra.db.ColumnFamilyStore$2.close(ColumnFamilyStore.java:1411)
        at org.apache.cassandra.db.ColumnFamilyStore.filter(ColumnFamilyStore.java:1490)
        at org.apache.cassandra.db.ColumnFamilyStore.getRangeSlice(ColumnFamilyStore.java:1435)
        at org.apache.cassandra.service.RangeSliceVerbHandler.executeLocally(RangeSliceVerbHandler.java:50)
        at org.apache.cassandra.service.StorageProxy.getRangeSlice(StorageProxy.java:876)
        at org.apache.cassandra.thrift.CassandraServer.get_range_slices(CassandraServer.java:705)
        at com.datastax.bdp.server.DseServer.get_range_slices(DseServer.java:1087)
        at org.apache.cassandra.thrift.Cassandra$Processor$get_range_slices.getResult(Cassandra.java:3083)
        at org.apache.cassandra.thrift.Cassandra$Processor$get_range_slices.getResult(Cassandra.java:3071)
        at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:32)
        at org.apache.thrift.TBaseProcessor.process(TBaseProcessor.java:34)
        at com.datastax.bdp.transport.server.ClientSocketAwareProcessor.process(ClientSocketAwareProcessor.java:43)
        at org.apache.cassandra.thrift.CustomTThreadPoolServer$WorkerProcess.run(CustomTThreadPoolServer.java:192)
{code}
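
The read that trips the trace above is any range query against the pre-upgrade column family, which is served by get_range_slices. A minimal way to exercise it, assuming the portfolio demo schema (the keyspace/CF names and the cassandra-cli path are the demo defaults and may differ):

{code}
# range read over a CF created before the upgrade; this is routed through
# CassandraServer.get_range_slices, where the IOError above surfaces
~/dse-2.2.1/bin/cassandra-cli -h localhost <<'EOF'
use PortfolioDemo;
list Portfolios limit 10;
EOF
{code}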

*Post-Upgrade: running upgradesstables*
{code}
Error occured while upgrading the sstables for keyspace HiveMetaStore
java.util.concurrent.ExecutionException: java.io.IOError: java.io.IOException: Bad file descriptor
        at java.util.concurrent.FutureTask$Sync.innerGet(FutureTask.java:222)
        at java.util.concurrent.FutureTask.get(FutureTask.java:83)
        at org.apache.cassandra.db.compaction.CompactionManager.performAllSSTableOperation(CompactionManager.java:226)
        at org.apache.cassandra.db.compaction.CompactionManager.performSSTableRewrite(CompactionManager.java:242)
        at org.apache.cassandra.db.ColumnFamilyStore.sstablesRewrite(ColumnFamilyStore.java:983)
        at org.apache.cassandra.service.StorageService.upgradeSSTables(StorageService.java:1789)
{code}
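
For reference, the run above was invoked via nodetool, scoped to the keyspace named in the error (host and install path are illustrative):

{code}
# rewrite the HiveMetaStore sstables into the current format;
# this is the operation that fails with the ExecutionException above
~/dse-2.2.1/bin/nodetool -h localhost upgradesstables HiveMetaStore
{code}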

*Post-Upgrade: running nodetool scrub*
{code}
WARN [CompactionExecutor:23] 2012-12-25 14:42:50,024 FileUtils.java (line 116) Failed closing /var/lib/cassandra/data/cfs/inode/cfs-inode-hf-1-Data.db - chunk length 65536, data length 48193.
java.io.IOException: Bad file descriptor
        at sun.nio.ch.FileDispatcher.preClose0(Native Method)
        at sun.nio.ch.FileDispatcher.preClose(FileDispatcher.java:59)
        at sun.nio.ch.FileChannelImpl.implCloseChannel(FileChannelImpl.java:96)
        at java.nio.channels.spi.AbstractInterruptibleChannel.close(AbstractInterruptibleChannel.java:97)
        at java.io.FileInputStream.close(FileInputStream.java:258)
        at org.apache.cassandra.io.compress.CompressedRandomAccessReader.close(CompressedRandomAccessReader.java:131)
        at sun.nio.ch.FileChannelImpl.implCloseChannel(FileChannelImpl.java:121)
        at java.nio.channels.spi.AbstractInterruptibleChannel.close(AbstractInterruptibleChannel.java:97)
        at java.io.RandomAccessFile.close(RandomAccessFile.java:541)
        at org.apache.cassandra.io.util.RandomAccessReader.close(RandomAccessReader.java:224)
        at org.apache.cassandra.io.compress.CompressedRandomAccessReader.close(CompressedRandomAccessReader.java:130)
        at org.apache.cassandra.io.util.FileUtils.closeQuietly(FileUtils.java:112)
        at org.apache.cassandra.db.compaction.Scrubber.close(Scrubber.java:306)
        at org.apache.cassandra.db.compaction.CompactionManager.scrubOne(CompactionManager.java:500)
        at org.apache.cassandra.db.compaction.CompactionManager.doScrub(CompactionManager.java:485)
        at org.apache.cassandra.db.compaction.CompactionManager.access$300(CompactionManager.java:69)
        at org.apache.cassandra.db.compaction.CompactionManager$4.perform(CompactionManager.java:235)
        at org.apache.cassandra.db.compaction.CompactionManager$3.call(CompactionManager.java:205)
{code}
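
The scrub can also be narrowed to the column family named in the failing path (keyspace cfs, CF inode); again, host and install path are illustrative:

{code}
# scrub only the CFS inode column family referenced in
# /var/lib/cassandra/data/cfs/inode/cfs-inode-hf-1-Data.db
~/dse-2.2.1/bin/nodetool -h localhost scrub cfs inode
{code}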

                
> Major compaction IOException in 1.1.8
> -------------------------------------
>
>                 Key: CASSANDRA-5088
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-5088
>             Project: Cassandra
>          Issue Type: Bug
>    Affects Versions: 1.1.8
>            Reporter: Karl Mueller
>
> Upgraded 1.1.6 to 1.1.8.
> Now I'm trying to do a major compaction, and seeing this:
> ERROR [CompactionExecutor:129] 2012-12-22 10:33:44,217 AbstractCassandraDaemon.java (line 135) Exception in thread Thread[CompactionExecutor:129,1,RMI Runtime]
> java.io.IOError: java.io.IOException: Bad file descriptor
>         at org.apache.cassandra.utils.MergeIterator.close(MergeIterator.java:65)
>         at org.apache.cassandra.db.compaction.CompactionTask.execute(CompactionTask.java:195)
>         at org.apache.cassandra.db.compaction.CompactionManager$7.runMayThrow(CompactionManager.java:298)
>         at org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:30)
>         at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:441)
>         at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
>         at java.util.concurrent.FutureTask.run(FutureTask.java:138)
>         at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
>         at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
>         at java.lang.Thread.run(Thread.java:619)
> Caused by: java.io.IOException: Bad file descriptor
>         at sun.nio.ch.FileDispatcher.preClose0(Native Method)
>         at sun.nio.ch.FileDispatcher.preClose(FileDispatcher.java:59)
>         at sun.nio.ch.FileChannelImpl.implCloseChannel(FileChannelImpl.java:96)
>         at java.nio.channels.spi.AbstractInterruptibleChannel.close(AbstractInterruptibleChannel.java:97)
>         at java.io.FileInputStream.close(FileInputStream.java:258)
>         at org.apache.cassandra.io.compress.CompressedRandomAccessReader.close(CompressedRandomAccessReader.java:131)
>         at sun.nio.ch.FileChannelImpl.implCloseChannel(FileChannelImpl.java:121)
>         at java.nio.channels.spi.AbstractInterruptibleChannel.close(AbstractInterruptibleChannel.java:97)
>         at java.io.RandomAccessFile.close(RandomAccessFile.java:541)
>         at org.apache.cassandra.io.util.RandomAccessReader.close(RandomAccessReader.java:224)
>         at org.apache.cassandra.io.compress.CompressedRandomAccessReader.close(CompressedRandomAccessReader.java:130)
>         at org.apache.cassandra.io.sstable.SSTableScanner.close(SSTableScanner.java:89)
>         at org.apache.cassandra.utils.MergeIterator.close(MergeIterator.java:61)
>         ... 9 more

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira
