[jira] [Created] (CASSANDRA-5059) 1.0.11 - 1.1.7 upgrade results Bad file descriptor exception
Jason Harvey created CASSANDRA-5059:
---------------------------------------

             Summary: 1.0.11 - 1.1.7 upgrade results Bad file descriptor exception
                 Key: CASSANDRA-5059
                 URL: https://issues.apache.org/jira/browse/CASSANDRA-5059
             Project: Cassandra
          Issue Type: Bug
    Affects Versions: 1.1.7
         Environment: ubuntu
sun-java6 6.24-1build0.10.10.1
            Reporter: Jason Harvey


Upgraded a single node in my ring to 1.1.7. Upgrade process went normally with no errors. However, as soon as the node joined the ring, it started spewing this exception hundreds of times a second:

{code}
WARN [ReadStage:22] 2012-12-12 02:00:56,181 FileUtils.java (line 116) Failed closing org.apache.cassandra.db.columniterator.SSTableSliceIterator@5959baa2
java.io.IOException: Bad file descriptor
    at sun.nio.ch.FileDispatcher.preClose0(Native Method)
    at sun.nio.ch.FileDispatcher.preClose(FileDispatcher.java:59)
    at sun.nio.ch.FileChannelImpl.implCloseChannel(FileChannelImpl.java:96)
    at java.nio.channels.spi.AbstractInterruptibleChannel.close(AbstractInterruptibleChannel.java:97)
    at java.io.FileInputStream.close(FileInputStream.java:258)
    at org.apache.cassandra.io.compress.CompressedRandomAccessReader.close(CompressedRandomAccessReader.java:131)
    at sun.nio.ch.FileChannelImpl.implCloseChannel(FileChannelImpl.java:121)
    at java.nio.channels.spi.AbstractInterruptibleChannel.close(AbstractInterruptibleChannel.java:97)
    at java.io.RandomAccessFile.close(RandomAccessFile.java:541)
    at org.apache.cassandra.io.util.RandomAccessReader.close(RandomAccessReader.java:224)
    at org.apache.cassandra.io.compress.CompressedRandomAccessReader.close(CompressedRandomAccessReader.java:130)
    at org.apache.cassandra.db.columniterator.SSTableSliceIterator.close(SSTableSliceIterator.java:132)
    at org.apache.cassandra.io.util.FileUtils.closeQuietly(FileUtils.java:112)
    at org.apache.cassandra.db.CollationController.collectAllData(CollationController.java:300)
    at org.apache.cassandra.db.CollationController.getTopLevelColumns(CollationController.java:64)
    at org.apache.cassandra.db.ColumnFamilyStore.getTopLevelColumns(ColumnFamilyStore.java:1347)
    at org.apache.cassandra.db.ColumnFamilyStore.getColumnFamily(ColumnFamilyStore.java:1209)
    at org.apache.cassandra.db.ColumnFamilyStore.getColumnFamily(ColumnFamilyStore.java:1144)
    at org.apache.cassandra.db.Table.getRow(Table.java:378)
    at org.apache.cassandra.db.SliceFromReadCommand.getRow(SliceFromReadCommand.java:69)
    at org.apache.cassandra.db.ReadVerbHandler.doVerb(ReadVerbHandler.java:51)
    at org.apache.cassandra.net.MessageDeliveryTask.run(MessageDeliveryTask.java:59)
    at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
    at java.lang.Thread.run(Thread.java:662)
{code}

The node was not responding to reads on any CFs, so I was forced to do an emergency roll-back and abandon the upgrade.

Node has roughly 3800 sstables. Both LCS and SizeTiered, as well as compressed and uncompressed CFs. Looks like the exception might have something to do with compression? Verified that the service was not bumping into any open file descriptor limitations.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira
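[Editorial note: one detail worth keeping in mind when reading the trace above is that the java.nio spec makes channel close idempotent, so calling close() twice on the *same* channel object cannot by itself produce EBADF. A minimal sketch (hypothetical class name, not Cassandra code) demonstrating that spec behavior, which suggests the descriptor here is being released through a second, independent wrapper:]

```java
import java.io.File;
import java.io.FileInputStream;
import java.nio.channels.FileChannel;

// Per AbstractInterruptibleChannel.close(): "If the channel has already been
// closed then invoking this method has no effect." So double-closing one
// channel object is a harmless no-op; EBADF from preClose0 points at the
// native fd being invalidated through a different code path.
public class DoubleCloseDemo {
    static boolean closeTwiceIsNoOp() throws Exception {
        File tmp = File.createTempFile("demo", ".dat");
        tmp.deleteOnExit();
        FileInputStream in = new FileInputStream(tmp);
        FileChannel ch = in.getChannel();
        ch.close();
        ch.close(); // second close: specified no-op, no exception thrown
        return true;
    }

    public static void main(String[] args) throws Exception {
        System.out.println(closeTwiceIsNoOp());
    }
}
```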
[jira] [Updated] (CASSANDRA-5059) 1.0.11 - 1.1.7 upgrade results in Bad file descriptor exception
[ https://issues.apache.org/jira/browse/CASSANDRA-5059?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Jason Harvey updated CASSANDRA-5059:
------------------------------------

    Summary: 1.0.11 - 1.1.7 upgrade results in Bad file descriptor exception  (was: 1.0.11 - 1.1.7 upgrade results Bad file descriptor exception)
[jira] [Commented] (CASSANDRA-5059) 1.0.11 - 1.1.7 upgrade results in Bad file descriptor exception
[ https://issues.apache.org/jira/browse/CASSANDRA-5059?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13529830#comment-13529830 ]

Jason Harvey commented on CASSANDRA-5059:
-----------------------------------------

I was just able to reproduce this in testing by upgrading a node with a single compressed CF.

I also attempted to run upgradesstables on that CF after upgrading, and the following exception occurred:

{code}
Error occured while upgrading the sstables for keyspace reddit
java.util.concurrent.ExecutionException: java.io.IOError: java.io.IOException: Bad file descriptor
    at java.util.concurrent.FutureTask$Sync.innerGet(FutureTask.java:222)
    at java.util.concurrent.FutureTask.get(FutureTask.java:83)
    at org.apache.cassandra.db.compaction.CompactionManager.performAllSSTableOperation(CompactionManager.java:218)
    at org.apache.cassandra.db.compaction.CompactionManager.performSSTableRewrite(CompactionManager.java:234)
    at org.apache.cassandra.db.ColumnFamilyStore.sstablesRewrite(ColumnFamilyStore.java:983)
    at org.apache.cassandra.service.StorageService.upgradeSSTables(StorageService.java:1788)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at com.sun.jmx.mbeanserver.StandardMBeanIntrospector.invokeM2(StandardMBeanIntrospector.java:93)
    at com.sun.jmx.mbeanserver.StandardMBeanIntrospector.invokeM2(StandardMBeanIntrospector.java:27)
    at com.sun.jmx.mbeanserver.MBeanIntrospector.invokeM(MBeanIntrospector.java:208)
    at com.sun.jmx.mbeanserver.PerInterface.invoke(PerInterface.java:120)
    at com.sun.jmx.mbeanserver.MBeanSupport.invoke(MBeanSupport.java:262)
    at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.invoke(DefaultMBeanServerInterceptor.java:836)
    at com.sun.jmx.mbeanserver.JmxMBeanServer.invoke(JmxMBeanServer.java:761)
    at javax.management.remote.rmi.RMIConnectionImpl.doOperation(RMIConnectionImpl.java:1427)
    at javax.management.remote.rmi.RMIConnectionImpl.access$200(RMIConnectionImpl.java:72)
    at javax.management.remote.rmi.RMIConnectionImpl$PrivilegedOperation.run(RMIConnectionImpl.java:1265)
    at javax.management.remote.rmi.RMIConnectionImpl.doPrivilegedOperation(RMIConnectionImpl.java:1360)
    at javax.management.remote.rmi.RMIConnectionImpl.invoke(RMIConnectionImpl.java:788)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at sun.rmi.server.UnicastServerRef.dispatch(UnicastServerRef.java:305)
    at sun.rmi.transport.Transport$1.run(Transport.java:159)
    at java.security.AccessController.doPrivileged(Native Method)
    at sun.rmi.transport.Transport.serviceCall(Transport.java:155)
    at sun.rmi.transport.tcp.TCPTransport.handleMessages(TCPTransport.java:535)
    at sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run0(TCPTransport.java:790)
    at sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run(TCPTransport.java:649)
    at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
    at java.lang.Thread.run(Thread.java:662)
Caused by: java.io.IOError: java.io.IOException: Bad file descriptor
    at org.apache.cassandra.utils.MergeIterator.close(MergeIterator.java:65)
    at org.apache.cassandra.db.compaction.CompactionTask.execute(CompactionTask.java:195)
    at org.apache.cassandra.db.compaction.CompactionManager$4.perform(CompactionManager.java:246)
    at org.apache.cassandra.db.compaction.CompactionManager$2.call(CompactionManager.java:197)
    at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
    at java.util.concurrent.FutureTask.run(FutureTask.java:138)
    ... 3 more
Caused by: java.io.IOException: Bad file descriptor
    at sun.nio.ch.FileDispatcher.preClose0(Native Method)
    at sun.nio.ch.FileDispatcher.preClose(FileDispatcher.java:59)
    at sun.nio.ch.FileChannelImpl.implCloseChannel(FileChannelImpl.java:96)
    at java.nio.channels.spi.AbstractInterruptibleChannel.close(AbstractInterruptibleChannel.java:97)
    at java.io.FileInputStream.close(FileInputStream.java:258)
    at org.apache.cassandra.io.compress.CompressedRandomAccessReader.close(CompressedRandomAccessReader.java:131)
    ...
{code}
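[Editorial note: both traces show `CompressedRandomAccessReader.close` twice in a row (lines 130 and 131), which suggests the reader's underlying resources are torn down twice, the second time through a descriptor the OS may already have reclaimed. A defensive sketch of the usual guard, with hypothetical names — an illustration of the idiom, not the actual Cassandra fix:]

```java
import java.io.Closeable;
import java.io.File;
import java.io.IOException;
import java.io.RandomAccessFile;

// Hypothetical illustration of an idempotent close() guard: the first call
// releases the file; later calls become no-ops instead of touching a
// descriptor that may already have been returned to the OS and reused.
public class GuardedReader implements Closeable {
    private final RandomAccessFile file;
    private boolean closed;

    public GuardedReader(File f) throws IOException {
        this.file = new RandomAccessFile(f, "r");
    }

    public synchronized boolean isClosed() {
        return closed;
    }

    @Override
    public synchronized void close() throws IOException {
        if (closed)
            return; // already released; never close the descriptor twice
        closed = true;
        file.close();
    }

    public static void main(String[] args) throws Exception {
        File tmp = File.createTempFile("guarded", ".db");
        tmp.deleteOnExit();
        GuardedReader r = new GuardedReader(tmp);
        r.close();
        r.close(); // safe: guarded no-op
        System.out.println(r.isClosed());
    }
}
```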
[jira] [Comment Edited] (CASSANDRA-5059) 1.0.11 - 1.1.7 upgrade results in Bad file descriptor exception
[ https://issues.apache.org/jira/browse/CASSANDRA-5059?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13529830#comment-13529830 ]

Jason Harvey edited comment on CASSANDRA-5059 at 12/12/12 11:04 AM:
--------------------------------------------------------------------

I was just able to reproduce this in testing by upgrading a node with a single size-tiered compressed CF.

I also attempted to run upgradesstables on that CF after upgrading, and the following exception occurred: [stack trace identical to the previous comment]
[jira] [Comment Edited] (CASSANDRA-5059) 1.0.11 - 1.1.7 upgrade results in Bad file descriptor exception
[ https://issues.apache.org/jira/browse/CASSANDRA-5059?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13529830#comment-13529830 ]

Jason Harvey edited comment on CASSANDRA-5059 at 12/12/12 11:07 AM:
--------------------------------------------------------------------

I was just able to reproduce this in testing by upgrading a node with a single size-tiered compressed CF copied from our production ring.

I also attempted to run upgradesstables on that CF after upgrading, and the following exception occurred: [stack trace identical to the previous comment]
[jira] [Commented] (CASSANDRA-5059) 1.0.11 - 1.1.7 upgrade results in Bad file descriptor exception
[ https://issues.apache.org/jira/browse/CASSANDRA-5059?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13529837#comment-13529837 ]

Jason Harvey commented on CASSANDRA-5059:
-----------------------------------------

Attempted a scrub. The scrub command did finish, but each sstable threw an exception. Any reads on the CF continued to fail after the scrub finished. Attempted a restart for the hell of it, too.

{code}
WARN [CompactionExecutor:10] 2012-12-12 03:05:39,764 FileUtils.java (line 116) Failed closing /var/lib/cassandra/data/reddit/LastModified/reddit-LastModified-hd-11679-Data.db - chunk length 65536, data length 206995038.
java.io.IOException: Bad file descriptor
    at sun.nio.ch.FileDispatcher.preClose0(Native Method)
    at sun.nio.ch.FileDispatcher.preClose(FileDispatcher.java:59)
    at sun.nio.ch.FileChannelImpl.implCloseChannel(FileChannelImpl.java:96)
    at java.nio.channels.spi.AbstractInterruptibleChannel.close(AbstractInterruptibleChannel.java:97)
    at java.io.FileInputStream.close(FileInputStream.java:258)
    at org.apache.cassandra.io.compress.CompressedRandomAccessReader.close(CompressedRandomAccessReader.java:131)
    at sun.nio.ch.FileChannelImpl.implCloseChannel(FileChannelImpl.java:121)
    at java.nio.channels.spi.AbstractInterruptibleChannel.close(AbstractInterruptibleChannel.java:97)
    at java.io.RandomAccessFile.close(RandomAccessFile.java:541)
    at org.apache.cassandra.io.util.RandomAccessReader.close(RandomAccessReader.java:224)
    at org.apache.cassandra.io.compress.CompressedRandomAccessReader.close(CompressedRandomAccessReader.java:130)
    at org.apache.cassandra.io.util.FileUtils.closeQuietly(FileUtils.java:112)
    at org.apache.cassandra.db.compaction.Scrubber.close(Scrubber.java:306)
    at org.apache.cassandra.db.compaction.CompactionManager.scrubOne(CompactionManager.java:492)
    at org.apache.cassandra.db.compaction.CompactionManager.doScrub(CompactionManager.java:477)
    at org.apache.cassandra.db.compaction.CompactionManager.access$300(CompactionManager.java:71)
    at org.apache.cassandra.db.compaction.CompactionManager$3.perform(CompactionManager.java:227)
    at org.apache.cassandra.db.compaction.CompactionManager$2.call(CompactionManager.java:197)
    at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
    at java.util.concurrent.FutureTask.run(FutureTask.java:138)
    at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
    at java.lang.Thread.run(Thread.java:662)
{code}
[jira] [Comment Edited] (CASSANDRA-5059) 1.0.11 - 1.1.7 upgrade results in Bad file descriptor exception
[ https://issues.apache.org/jira/browse/CASSANDRA-5059?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13529837#comment-13529837 ]

Jason Harvey edited comment on CASSANDRA-5059 at 12/12/12 11:09 AM:
--------------------------------------------------------------------

Attempted a scrub. The scrub command did finish, but each sstable threw an exception. Any reads on the CF continued to fail after the scrub finished. Attempted a restart for the hell of it, too. [stack trace identical to the previous comment; the edit corrected a mistyped closing {code} tag]
[jira] [Commented] (CASSANDRA-5059) 1.0.11 - 1.1.7 upgrade results in Bad file descriptor exception
[ https://issues.apache.org/jira/browse/CASSANDRA-5059?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13529843#comment-13529843 ]

Jason Harvey commented on CASSANDRA-5059:
-----------------------------------------

Using the sstablescrub command while the service is offline results in the same exception path (excluding the thread stuff) as the online scrub attempt.
[jira] [Commented] (CASSANDRA-5059) 1.0.11 - 1.1.7 upgrade results in Bad file descriptor exception
[ https://issues.apache.org/jira/browse/CASSANDRA-5059?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13529872#comment-13529872 ]

Jason Harvey commented on CASSANDRA-5059:
-----------------------------------------

Found a work-around. Running on a 1.0.11 test node, I changed the CF to non-compressed and scrubbed. The subsequent test upgrade to 1.1.7 worked just fine. Not super viable for us in production, due to the number of compressed sstables that we have.

I also attempted just scrubbing the compressed CF while running 1.0.11. This did not resolve the upgrade issue.
[jira] [Updated] (CASSANDRA-5059) 1.0.11 - 1.1.7 upgrade results in unusable compressed sstables
[ https://issues.apache.org/jira/browse/CASSANDRA-5059?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Jason Harvey updated CASSANDRA-5059:
------------------------------------

    Description:
Upgraded a single node in my ring to 1.1.7. Upgrade process went normally with no errors. However, as soon as the node joined the ring, it started spewing the "Bad file descriptor" exception hundreds of times a second (full stack trace quoted above).

The node was not responding to reads on any CFs, so I was forced to do an emergency roll-back and abandon the upgrade. Node has roughly 3800 sstables. Both LCS and SizeTiered, as well as compressed and uncompressed CFs.

After some digging on a test node, I've determined that the issue occurs when attempting to read/upgrade/scrub a compressed 1.0.11-generated sstable on 1.1.7.
[jira] [Commented] (CASSANDRA-5059) 1.0.11 - 1.1.7 upgrade results in unusable compressed sstables
[ https://issues.apache.org/jira/browse/CASSANDRA-5059?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13529998#comment-13529998 ]

Jonathan Ellis commented on CASSANDRA-5059:
-------------------------------------------

Do you have any non-sensitive-data sstables you can attach here? Failing that, can you email me one?
[jira] [Commented] (CASSANDRA-5020) Time to switch back to byte[] internally?
[ https://issues.apache.org/jira/browse/CASSANDRA-5020?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13530164#comment-13530164 ]

Vijay commented on CASSANDRA-5020:
----------------------------------

How about a custom/light-weight implementation of BB, with HBB-like and DBB-like extensions? HBB can just wrap byte[], whereas DBB/MMBB can wrap unsafe.

Time to switch back to byte[] internally?
-----------------------------------------

                Key: CASSANDRA-5020
                URL: https://issues.apache.org/jira/browse/CASSANDRA-5020
            Project: Cassandra
         Issue Type: Improvement
         Components: Core
           Reporter: Jonathan Ellis
            Fix For: 2.0

We switched to ByteBuffer for column names and values back in 0.7, which gave us a short-term performance boost on mmap'd reads, but we gave that up when we switched to refcounted sstables in 1.0. (Refcounting all the way up the read path would be too painful, so we copy into an on-heap buffer when reading from an sstable, then release the reference.) A HeapByteBuffer wastes a lot of memory compared to a byte[] (5 more ints, a long, and a boolean).

The hard problem here is how to do the arena allocation we do on writes, which has been very successful in reducing STW CMS from heap fragmentation. ByteBuffer is a good fit there.
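The "5 more ints, a long, and a boolean" figure works out to 29 bytes of extra instance fields per buffer, before headers. A quick back-of-envelope check; the 16-byte object header and 8-byte alignment below are assumptions about a typical 64-bit HotSpot layout, not measurements:

```python
# Back-of-envelope check of the figure in the issue text: a HeapByteBuffer
# carries "5 more ints, a long, and a boolean" of fields beyond a bare byte[].
# Header size and alignment are ASSUMED (typical 64-bit HotSpot), not measured.
INT, LONG, BOOL = 4, 8, 1
extra_fields = 5 * INT + LONG + BOOL   # 29 bytes of extra instance fields

HEADER = 16                            # assumed per-object header size

def padded(size, align=8):
    # Round up to the assumed 8-byte JVM object alignment.
    return (size + align - 1) // align * align

# Each HeapByteBuffer adds its own header plus the extra fields, on top of
# the wrapped byte[]'s own header and length word.
per_buffer_overhead = padded(HEADER + extra_fields)

assert extra_fields == 29
assert per_buffer_overhead == 48
```

With millions of small column names and values in memory, roughly 48 assumed bytes of per-buffer overhead is why wrapping a plain byte[] (or a slimmer custom wrapper, as suggested above) looks attractive.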
[jira] [Commented] (CASSANDRA-5020) Time to switch back to byte[] internally?
[ https://issues.apache.org/jira/browse/CASSANDRA-5020?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13530183#comment-13530183 ]

T Jake Luciani commented on CASSANDRA-5020:
-------------------------------------------

I thought the same but ByteBuffer is an abstract class :( we could build our own wrapper on BB though... I think the bigger win will be CASSANDRA-5019
[jira] [Created] (CASSANDRA-5060) select keyspace_name from system.schema_keyspaces
Tupshin Harper created CASSANDRA-5060:
-----------------------------------------

            Summary: select keyspace_name from system.schema_keyspaces
                Key: CASSANDRA-5060
                URL: https://issues.apache.org/jira/browse/CASSANDRA-5060
            Project: Cassandra
         Issue Type: New Feature
         Components: API
   Affects Versions: 1.2.0 beta 3
           Reporter: Tupshin Harper
           Assignee: Alexey Zotov
           Priority: Minor
            Fix For: 1.2.0 rc1

It is currently possible to describe tables to list the tables in the current keyspace, or list all tables in all keyspaces if you are not currently in a keyspace. It is also possible to enumerate the keyspaces with a cql command to select from the system.schema_columnfamilies. There should be a simple describe keyspaces command that enumerates just the keyspaces and is syntactic sugar for select keyspace_name from schema_keyspaces.
[jira] [Updated] (CASSANDRA-5060) select keyspace_name from system.schema_keyspaces
[ https://issues.apache.org/jira/browse/CASSANDRA-5060?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Brandon Williams updated CASSANDRA-5060:
----------------------------------------

    Reviewer: brandon.williams
[jira] [Commented] (CASSANDRA-4858) Coverage analysis for low-CL queries
[ https://issues.apache.org/jira/browse/CASSANDRA-4858?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13530260#comment-13530260 ]

Jonathan Ellis commented on CASSANDRA-4858:
-------------------------------------------

vnodes make this a lot worse.

Coverage analysis for low-CL queries
------------------------------------

                Key: CASSANDRA-4858
                URL: https://issues.apache.org/jira/browse/CASSANDRA-4858
            Project: Cassandra
         Issue Type: Improvement
         Components: Core
           Reporter: Jonathan Ellis
            Fix For: 2.0

There are many cases where getRangeSlice creates more RangeSliceCommands than it should, because it always creates one for each range returned by getRestrictedRange. Especially for CL.ONE this does not take the replication factor into account and is potentially pretty wasteful. A range slice at CL.ONE on a 3 node cluster with RF=3 should only ever create one RangeSliceCommand.
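The optimization the ticket asks for amounts to coalescing contiguous token ranges that share at least one live replica into a single command. A hedged sketch of that idea (range tuples and replica maps are invented for illustration; the real logic lives in Cassandra's getRangeSlice path, and vnodes multiply the number of restricted ranges, which is why they make this worse):

```python
# Sketch of CASSANDRA-4858's idea: at CL.ONE, contiguous token ranges can be
# served by one command as long as some replica is common to all of them.
def merge_ranges_for_cl_one(ranges, replicas_for):
    merged = []  # list of (covered_ranges, common_replicas)
    for r in ranges:
        reps = set(replicas_for(r))
        if merged:
            covered, common = merged[-1]
            overlap = common & reps
            if overlap:                      # extend the current command
                merged[-1] = (covered + [r], overlap)
                continue
        merged.append(([r], reps))           # start a new command
    return merged

# 3-node cluster, RF=3: every node replicates every range, so a single
# command covers all three restricted ranges instead of three commands.
replicas = {("a", "b"): {"n1", "n2", "n3"},
            ("b", "c"): {"n1", "n2", "n3"},
            ("c", "a"): {"n1", "n2", "n3"}}
cmds = merge_ranges_for_cl_one(list(replicas), replicas.get)
assert len(cmds) == 1

# Disjoint replica sets still need one command per range.
disjoint = {("a", "b"): {"n1"}, ("b", "c"): {"n2"}}
assert len(merge_ranges_for_cl_one(list(disjoint), disjoint.get)) == 2
```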
[jira] [Updated] (CASSANDRA-5060) select keyspace_name from system.schema_keyspaces
[ https://issues.apache.org/jira/browse/CASSANDRA-5060?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Aleksey Yeschenko updated CASSANDRA-5060:
-----------------------------------------

    Attachment: 5060.txt
[jira] [Updated] (CASSANDRA-4858) Coverage analysis for low-CL queries
[ https://issues.apache.org/jira/browse/CASSANDRA-4858?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Jonathan Ellis updated CASSANDRA-4858:
--------------------------------------

    Fix Version/s:     (was: 2.0)
                   1.2.1
         Assignee: Vijay
[jira] [Updated] (CASSANDRA-4858) Coverage analysis for low-CL queries
[ https://issues.apache.org/jira/browse/CASSANDRA-4858?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Tupshin Harper updated CASSANDRA-4858:
--------------------------------------

    Fix Version/s: 2.0
[jira] [Commented] (CASSANDRA-5060) select keyspace_name from system.schema_keyspaces
[ https://issues.apache.org/jira/browse/CASSANDRA-5060?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13530318#comment-13530318 ]

Brandon Williams commented on CASSANDRA-5060:
---------------------------------------------

+1
[jira] [Updated] (CASSANDRA-5060) select keyspace_name from system.schema_keyspaces
[ https://issues.apache.org/jira/browse/CASSANDRA-5060?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Brandon Williams updated CASSANDRA-5060:
----------------------------------------

    Fix Version/s:     (was: 1.2.0 rc1)
                   1.2.0
git commit: cqlsh: add DESCRIBE KEYSPACES command; patch by Aleksey Yeschenko, reviewed by Brandon Williams for CASSANDRA-5060
Updated Branches:
  refs/heads/cassandra-1.2.0 fc5a0cc29 -> f562f0bb1

cqlsh: add DESCRIBE KEYSPACES command; patch by Aleksey Yeschenko, reviewed by Brandon Williams for CASSANDRA-5060

Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/f562f0bb
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/f562f0bb
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/f562f0bb
Branch: refs/heads/cassandra-1.2.0
Commit: f562f0bb1f0a0b9bf6635948bfa11a1f7f4f12dd
Parents: fc5a0cc
Author: Aleksey Yeschenko <alek...@apache.org>
Authored: Thu Dec 13 00:07:26 2012 +0300
Committer: Aleksey Yeschenko <alek...@apache.org>
Committed: Thu Dec 13 00:07:26 2012 +0300
----------------------------------------------------------------------
 CHANGES.txt |  4 ++++
 bin/cqlsh   | 14 +++++++++++++-
 2 files changed, 17 insertions(+), 1 deletions(-)
----------------------------------------------------------------------

http://git-wip-us.apache.org/repos/asf/cassandra/blob/f562f0bb/CHANGES.txt
----------------------------------------------------------------------
diff --git a/CHANGES.txt b/CHANGES.txt
index 115ee45..d73849c 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,3 +1,7 @@
+1.2.0
+ * cqlsh: add DESCRIBE KEYSPACES command (CASSANDRA-5060)
+
+
 1.2-rc1
  * rename rpc_timeout settings to request_timeout (CASSANDRA-5027)
  * add BF with 0.1 FP to LCS by default (CASSANDRA-5029)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/f562f0bb/bin/cqlsh
----------------------------------------------------------------------
diff --git a/bin/cqlsh b/bin/cqlsh
index 611f6af..f74dc42 100755
--- a/bin/cqlsh
+++ b/bin/cqlsh
@@ -211,7 +211,8 @@ cqlsh_extra_syntax_rules = r'''
                      ;
 <describeCommand> ::= ( "DESCRIBE" | "DESC" )
-                    ( "KEYSPACE" ksname=<keyspaceName>?
+                    ( "KEYSPACES"
+                    | "KEYSPACE" ksname=<keyspaceName>?
                     | ( "COLUMNFAMILY" | "TABLE" ) cf=<columnFamilyName>
                     | ( "COLUMNFAMILIES" | "TABLES" )
                     | "SCHEMA"
@@ -1328,6 +1329,11 @@ class Shell(cmd.Cmd):
             out.write('CREATE INDEX %s ON %s (%s);\n'
                          % (col.index_name, cfname, self.cql_protect_name(col.name)))
 
+    def describe_keyspaces(self):
+        print
+        cmd.Cmd.columnize(self, self.get_keyspace_names())
+        print
+
     def describe_keyspace(self, ksname):
         print
         self.print_recreate_keyspace(self.get_keyspace(ksname), sys.stdout)
@@ -1381,6 +1387,10 @@ class Shell(cmd.Cmd):
         Outputs information about the connected Cassandra cluster, or about
         the data stored on it. Use in one of the following ways:
 
+        DESCRIBE KEYSPACES
+
+          Output the names of all keyspaces.
+
         DESCRIBE KEYSPACE [<keyspacename>]
 
           Output CQL commands that could be used to recreate the given
@@ -1416,6 +1426,8 @@ class Shell(cmd.Cmd):
         k.
         what = parsed.matched[1][1].lower()
+        if what == 'keyspaces':
+            self.describe_keyspaces()
         if what == 'keyspace':
             ksname = self.cql_unprotect_name(parsed.get_binding('ksname', ''))
             if not ksname:
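The new describe_keyspaces in the patch leans on cmd.Cmd.columnize from the Python standard library, which writes names in columns to the shell's stdout. A minimal stand-alone sketch of that mechanism (the keyspace names here are invented; the real cqlsh pulls them from system.schema_keyspaces):

```python
import cmd
import io
from contextlib import redirect_stdout

# A stand-in for the cqlsh Shell class, showing how the patched
# DESCRIBE KEYSPACES command formats its output via cmd.Cmd.columnize.
class MiniShell(cmd.Cmd):
    def get_keyspace_names(self):
        # Hypothetical data; cqlsh reads these from system.schema_keyspaces.
        return ["system", "system_auth", "reddit"]

    def describe_keyspaces(self):
        # columnize writes the names in columns to self.stdout
        cmd.Cmd.columnize(self, self.get_keyspace_names())

buf = io.StringIO()
with redirect_stdout(buf):
    MiniShell().describe_keyspaces()
out = buf.getvalue()

assert "system" in out and "reddit" in out
```

columnize handles the column layout (wrapping at the display width) for free, which is why the patch is only a few lines.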
[1/2] git commit: Merge branch 'cassandra-1.2.0' into cassandra-1.2
Updated Branches:
  refs/heads/cassandra-1.2 db9eb04e2 -> 637438355

Merge branch 'cassandra-1.2.0' into cassandra-1.2

Conflicts:
	CHANGES.txt

Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/63743835
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/63743835
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/63743835
Branch: refs/heads/cassandra-1.2
Commit: 637438355c10db4c7a9ae56442a788b36e1bd59e
Parents: db9eb04 f562f0b
Author: Aleksey Yeschenko <alek...@apache.org>
Authored: Thu Dec 13 00:11:01 2012 +0300
Committer: Aleksey Yeschenko <alek...@apache.org>
Committed: Thu Dec 13 00:11:01 2012 +0300
----------------------------------------------------------------------
 CHANGES.txt |  4 ++++
 bin/cqlsh   | 14 +++++++++++++-
 2 files changed, 17 insertions(+), 1 deletions(-)
----------------------------------------------------------------------

http://git-wip-us.apache.org/repos/asf/cassandra/blob/63743835/CHANGES.txt
----------------------------------------------------------------------
diff --cc CHANGES.txt
index 2a70cec,d73849c..ba0177b
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -1,13 -1,7 +1,17 @@@
+1.2.1
+ * Optimize name-based queries to use ArrayBackedSortedColumns (CASSANDRA-5043)
+ * Fall back to old manifest if most recent is unparseable (CASSANDRA-5041)
+ * pool [Compressed]RandomAccessReader objects on the partitioned read path
+   (CASSANDRA-4942)
+ * Add debug logging to list filenames processed by Directories.migrateFile
+   method (CASSANDRA-4939)
+ * Expose black-listed directories via JMX (CASSANDRA-4848)
+
+
+ 1.2.0
+ * cqlsh: add DESCRIBE KEYSPACES command (CASSANDRA-5060)
+
  1.2-rc1
  * rename rpc_timeout settings to request_timeout (CASSANDRA-5027)
  * add BF with 0.1 FP to LCS by default (CASSANDRA-5029)
[2/2] git commit: cqlsh: add DESCRIBE KEYSPACES command; patch by Aleksey Yeschenko, reviewed by Brandon Williams for CASSANDRA-5060
cqlsh: add DESCRIBE KEYSPACES command; patch by Aleksey Yeschenko, reviewed by Brandon Williams for CASSANDRA-5060

Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/f562f0bb
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/f562f0bb
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/f562f0bb
Branch: refs/heads/cassandra-1.2
Commit: f562f0bb1f0a0b9bf6635948bfa11a1f7f4f12dd
Parents: fc5a0cc
Author: Aleksey Yeschenko <alek...@apache.org>
Authored: Thu Dec 13 00:07:26 2012 +0300
Committer: Aleksey Yeschenko <alek...@apache.org>
Committed: Thu Dec 13 00:07:26 2012 +0300
----------------------------------------------------------------------
 CHANGES.txt |  4 ++++
 bin/cqlsh   | 14 +++++++++++++-
 2 files changed, 17 insertions(+), 1 deletions(-)
[3/3] git commit: cqlsh: add DESCRIBE KEYSPACES command; patch by Aleksey Yeschenko, reviewed by Brandon Williams for CASSANDRA-5060
cqlsh: add DESCRIBE KEYSPACES command; patch by Aleksey Yeschenko, reviewed by Brandon Williams for CASSANDRA-5060

Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/f562f0bb
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/f562f0bb
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/f562f0bb

Branch: refs/heads/trunk
Commit: f562f0bb1f0a0b9bf6635948bfa11a1f7f4f12dd
Parents: fc5a0cc
Author: Aleksey Yeschenko alek...@apache.org
Authored: Thu Dec 13 00:07:26 2012 +0300
Committer: Aleksey Yeschenko alek...@apache.org
Committed: Thu Dec 13 00:07:26 2012 +0300

--
 CHANGES.txt |  4
 bin/cqlsh   | 14 +-
 2 files changed, 17 insertions(+), 1 deletions(-)
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/f562f0bb/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 115ee45..d73849c 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,3 +1,7 @@
+1.2.0
+ * cqlsh: add DESCRIBE KEYSPACES command (CASSANDRA-5060)
+
+
 1.2-rc1
 * rename rpc_timeout settings to request_timeout (CASSANDRA-5027)
 * add BF with 0.1 FP to LCS by default (CASSANDRA-5029)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/f562f0bb/bin/cqlsh
--
diff --git a/bin/cqlsh b/bin/cqlsh
index 611f6af..f74dc42 100755
--- a/bin/cqlsh
+++ b/bin/cqlsh
@@ -211,7 +211,8 @@ cqlsh_extra_syntax_rules = r'''
                          ;
 describeCommand ::= ( DESCRIBE | DESC )
-                  ( KEYSPACE ksname=keyspaceName?
+                  ( KEYSPACES
+                  | KEYSPACE ksname=keyspaceName?
                   | ( COLUMNFAMILY | TABLE ) cf=columnFamilyName
                   | ( COLUMNFAMILIES | TABLES )
                   | SCHEMA
@@ -1328,6 +1329,11 @@ class Shell(cmd.Cmd):
             out.write('CREATE INDEX %s ON %s (%s);\n'
                       % (col.index_name, cfname, self.cql_protect_name(col.name)))

+    def describe_keyspaces(self):
+        print
+        cmd.Cmd.columnize(self, self.get_keyspace_names())
+        print
+
     def describe_keyspace(self, ksname):
         print
         self.print_recreate_keyspace(self.get_keyspace(ksname), sys.stdout)
@@ -1381,6 +1387,10 @@ class Shell(cmd.Cmd):
         Outputs information about the connected Cassandra cluster, or about
         the data stored on it. Use in one of the following ways:

+        DESCRIBE KEYSPACES
+
+          Output the names of all keyspaces.
+
         DESCRIBE KEYSPACE [keyspacename]

           Output CQL commands that could be used to recreate the given
@@ -1416,6 +1426,8 @@ class Shell(cmd.Cmd):
         k.
         what = parsed.matched[1][1].lower()
+        if what == 'keyspaces':
+            self.describe_keyspaces()
         if what == 'keyspace':
             ksname = self.cql_unprotect_name(parsed.get_binding('ksname', ''))
             if not ksname:
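The heart of the patch is small: cqlsh prints a blank line, hands the keyspace names to `cmd.Cmd.columnize`, and prints another blank line. A minimal standalone sketch of that behavior (Python 3 syntax; `MiniShell` and its hard-coded keyspace list are hypothetical, not cqlsh code):

```python
import cmd
import io

class MiniShell(cmd.Cmd):
    """Toy shell mimicking the shape of cqlsh's new describe_keyspaces()."""

    def __init__(self, keyspace_names, stdout=None):
        # cmd.Cmd accepts an explicit stdout, which columnize() writes to.
        super().__init__(stdout=stdout)
        self.keyspace_names = keyspace_names

    def get_keyspace_names(self):
        return self.keyspace_names

    def describe_keyspaces(self):
        # Same shape as the patch: blank line, columnized names, blank line.
        print(file=self.stdout)
        self.columnize(self.get_keyspace_names())
        print(file=self.stdout)

out = io.StringIO()
MiniShell(['system', 'system_auth', 'reddit'], stdout=out).describe_keyspaces()
print(out.getvalue())
```

`columnize` lays the names out in terminal-width columns, which is why the patch can stay this short instead of formatting the list itself.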
[2/3] git commit: Merge branch 'cassandra-1.2.0' into cassandra-1.2
Merge branch 'cassandra-1.2.0' into cassandra-1.2

Conflicts:
	CHANGES.txt

Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/63743835
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/63743835
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/63743835

Branch: refs/heads/trunk
Commit: 637438355c10db4c7a9ae56442a788b36e1bd59e
Parents: db9eb04 f562f0b
Author: Aleksey Yeschenko alek...@apache.org
Authored: Thu Dec 13 00:11:01 2012 +0300
Committer: Aleksey Yeschenko alek...@apache.org
Committed: Thu Dec 13 00:11:01 2012 +0300

--
 CHANGES.txt |  4
 bin/cqlsh   | 14 +-
 2 files changed, 17 insertions(+), 1 deletions(-)
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/63743835/CHANGES.txt
--
diff --cc CHANGES.txt
index 2a70cec,d73849c..ba0177b
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -1,13 -1,7 +1,17 @@@
+1.2.1
+ * Optimize name-based queries to use ArrayBackedSortedColumns (CASSANDRA-5043)
+ * Fall back to old manifest if most recent is unparseable (CASSANDRA-5041)
+ * pool [Compressed]RandomAccessReader objects on the partitioned read path
+   (CASSANDRA-4942)
+ * Add debug logging to list filenames processed by Directories.migrateFile
+   method (CASSANDRA-4939)
+ * Expose black-listed directories via JMX (CASSANDRA-4848)
+
+
 1.2.0
+ * cqlsh: add DESCRIBE KEYSPACES command (CASSANDRA-5060)
+
+
 1.2-rc1
 * rename rpc_timeout settings to request_timeout (CASSANDRA-5027)
 * add BF with 0.1 FP to LCS by default (CASSANDRA-5029)
[1/3] git commit: Merge branch 'cassandra-1.2' into trunk
Updated Branches:
  refs/heads/trunk bd572d7d9 -> 65bd017fc

Merge branch 'cassandra-1.2' into trunk

Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/65bd017f
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/65bd017f
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/65bd017f

Branch: refs/heads/trunk
Commit: 65bd017fc8f6ba9c768f9ceb4f36097c22d42157
Parents: bd572d7 6374383
Author: Aleksey Yeschenko alek...@apache.org
Authored: Thu Dec 13 00:12:11 2012 +0300
Committer: Aleksey Yeschenko alek...@apache.org
Committed: Thu Dec 13 00:12:11 2012 +0300

--
 CHANGES.txt |  4
 bin/cqlsh   | 14 +-
 2 files changed, 17 insertions(+), 1 deletions(-)
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/65bd017f/CHANGES.txt
--
[jira] [Commented] (CASSANDRA-5060) select keyspace_name from system.schema_keyspaces
[ https://issues.apache.org/jira/browse/CASSANDRA-5060?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13530333#comment-13530333 ]

Aleksey Yeschenko commented on CASSANDRA-5060:
--

Committed, thanks.

select keyspace_name from system.schema_keyspaces
-

                Key: CASSANDRA-5060
                URL: https://issues.apache.org/jira/browse/CASSANDRA-5060
            Project: Cassandra
         Issue Type: New Feature
         Components: API
   Affects Versions: 1.2.0 beta 3
           Reporter: Tupshin Harper
           Assignee: Aleksey Yeschenko
           Priority: Minor
            Fix For: 1.2.0
        Attachments: 5060.txt

It is currently possible to describe tables to list the tables in the current keyspace, or to list all tables in all keyspaces if you are not currently in a keyspace. It is also possible to enumerate the keyspaces with a CQL command that selects from system.schema_columnfamilies. There should be a simple DESCRIBE KEYSPACES command that enumerates just the keyspaces and is syntactic sugar for select keyspace_name from system.schema_keyspaces.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (CASSANDRA-4858) Coverage analysis for low-CL queries
[ https://issues.apache.org/jira/browse/CASSANDRA-4858?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Tupshin Harper updated CASSANDRA-4858:
--
    Fix Version/s:     (was: 2.0)

Coverage analysis for low-CL queries
-

                Key: CASSANDRA-4858
                URL: https://issues.apache.org/jira/browse/CASSANDRA-4858
            Project: Cassandra
         Issue Type: Improvement
         Components: Core
           Reporter: Jonathan Ellis
           Assignee: Vijay
            Fix For: 1.2.1

There are many cases where getRangeSlice creates more RangeSliceCommands than it should, because it always creates one for each range returned by getRestrictedRange. Especially at CL.ONE this does not take the replication factor into account and is potentially quite wasteful: a range slice at CL.ONE on a 3-node cluster with RF=3 should only ever create one RangeSliceCommand.
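The idea in the ticket can be made concrete with a toy model (not Cassandra's actual implementation; `merge_ranges_by_replicas` and `replicas_for` are hypothetical names): if consecutive token ranges share live replicas, one command can cover all of them at CL.ONE, so a fully replicated 3-node cluster collapses to a single range command.

```python
def merge_ranges_by_replicas(ranges, replicas_for):
    """Merge contiguous (start, end) ranges whose replica sets intersect,
    so a single range-slice command can serve the merged span at CL.ONE."""
    merged = []  # list of ((start, end), common_replicas)
    for rng in ranges:
        reps = set(replicas_for(rng))
        if merged and (merged[-1][1] & reps):
            # Extend the previous span; track only replicas common to both.
            (start, _), prev_reps = merged[-1]
            merged[-1] = ((start, rng[1]), prev_reps & reps)
        else:
            merged.append((rng, reps))
    return [r for r, _ in merged]

ranges = [(0, 100), (100, 200), (200, 300)]

# RF=3 on a 3-node cluster: every range lives on every node, one command suffices.
print(merge_ranges_by_replicas(ranges, lambda rng: {'n1', 'n2', 'n3'}))
# -> [(0, 300)]

# RF=1: each range lives on a different node, so nothing can be merged.
print(merge_ranges_by_replicas(ranges, lambda rng: {'n%d' % (rng[0] // 100 + 1)}))
# -> [(0, 100), (100, 200), (200, 300)]
```

The real coverage analysis also has to respect token-range boundaries and node liveness, but the replica-set intersection above is the core of why RF, not just range count, should bound the number of commands.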
[jira] [Updated] (CASSANDRA-5059) 1.0.11 -> 1.1.7 upgrade results in unusable compressed sstables
[ https://issues.apache.org/jira/browse/CASSANDRA-5059?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Jason Harvey updated CASSANDRA-5059:
--
    Attachment: LastModified.tar

CF containing a single row of fake data which exhibits this issue on 1.1.7.

1.0.11 -> 1.1.7 upgrade results in unusable compressed sstables
---

                Key: CASSANDRA-5059
                URL: https://issues.apache.org/jira/browse/CASSANDRA-5059
            Project: Cassandra
         Issue Type: Bug
   Affects Versions: 1.1.7
        Environment: ubuntu sun-java6 6.24-1build0.10.10.1
           Reporter: Jason Harvey
        Attachments: LastModified.tar

Upgraded a single node in my ring to 1.1.7. Upgrade process went normally with no errors. However, as soon as the node joined the ring, it started spewing this exception hundreds of times a second:

{code}
 WARN [ReadStage:22] 2012-12-12 02:00:56,181 FileUtils.java (line 116) Failed closing org.apache.cassandra.db.columniterator.SSTableSliceIterator@5959baa2
java.io.IOException: Bad file descriptor
	at sun.nio.ch.FileDispatcher.preClose0(Native Method)
	at sun.nio.ch.FileDispatcher.preClose(FileDispatcher.java:59)
	at sun.nio.ch.FileChannelImpl.implCloseChannel(FileChannelImpl.java:96)
	at java.nio.channels.spi.AbstractInterruptibleChannel.close(AbstractInterruptibleChannel.java:97)
	at java.io.FileInputStream.close(FileInputStream.java:258)
	at org.apache.cassandra.io.compress.CompressedRandomAccessReader.close(CompressedRandomAccessReader.java:131)
	at sun.nio.ch.FileChannelImpl.implCloseChannel(FileChannelImpl.java:121)
	at java.nio.channels.spi.AbstractInterruptibleChannel.close(AbstractInterruptibleChannel.java:97)
	at java.io.RandomAccessFile.close(RandomAccessFile.java:541)
	at org.apache.cassandra.io.util.RandomAccessReader.close(RandomAccessReader.java:224)
	at org.apache.cassandra.io.compress.CompressedRandomAccessReader.close(CompressedRandomAccessReader.java:130)
	at org.apache.cassandra.db.columniterator.SSTableSliceIterator.close(SSTableSliceIterator.java:132)
	at org.apache.cassandra.io.util.FileUtils.closeQuietly(FileUtils.java:112)
	at org.apache.cassandra.db.CollationController.collectAllData(CollationController.java:300)
	at org.apache.cassandra.db.CollationController.getTopLevelColumns(CollationController.java:64)
	at org.apache.cassandra.db.ColumnFamilyStore.getTopLevelColumns(ColumnFamilyStore.java:1347)
	at org.apache.cassandra.db.ColumnFamilyStore.getColumnFamily(ColumnFamilyStore.java:1209)
	at org.apache.cassandra.db.ColumnFamilyStore.getColumnFamily(ColumnFamilyStore.java:1144)
	at org.apache.cassandra.db.Table.getRow(Table.java:378)
	at org.apache.cassandra.db.SliceFromReadCommand.getRow(SliceFromReadCommand.java:69)
	at org.apache.cassandra.db.ReadVerbHandler.doVerb(ReadVerbHandler.java:51)
	at org.apache.cassandra.net.MessageDeliveryTask.run(MessageDeliveryTask.java:59)
	at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
	at java.lang.Thread.run(Thread.java:662)
{code}

The node was not responding to reads on any CFs, so I was forced to do an emergency roll-back and abandon the upgrade. Node has roughly 3800 sstables. Both LCS and SizeTiered, as well as compressed and uncompressed CFs.

After some digging on a test node, I've determined that the issue occurs when attempting to read/upgrade/scrub a compressed 1.0.11-generated sstable on 1.1.7.
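One detail worth noticing in the trace: `CompressedRandomAccessReader.close` appears twice (lines 131 and 130), with the channel's `implCloseChannel` between them, i.e. closing the reader re-enters its own close path, and the second release of the file descriptor fails with "Bad file descriptor" (EBADF). The hazard and the usual guard against it can be sketched in Python (a toy illustration under that reading of the trace, not the actual Java fix; `GuardedReader` is a hypothetical name):

```python
import os
import tempfile

class GuardedReader:
    """File wrapper whose close() is safe to re-enter or repeat.
    Without the guard, a second close() would release a descriptor
    that is already dead, the Python analogue of the EBADF above."""

    def __init__(self, path):
        self._fd = os.open(path, os.O_RDONLY)
        self._closed = False

    def close(self):
        if self._closed:      # idempotence guard
            return
        self._closed = True   # flip before releasing, so re-entry is a no-op
        os.close(self._fd)

# Closing twice is harmless with the guard; a raw double os.close()
# on the same fd raises OSError (EBADF) instead.
with tempfile.NamedTemporaryFile(delete=False) as tmp:
    tmp.write(b'data')
reader = GuardedReader(tmp.name)
reader.close()
reader.close()  # no-op thanks to the guard
os.unlink(tmp.name)
```

In Java the same effect is usually achieved by tracking closed state or by making sure only one of the nested resources actually owns the descriptor.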
[jira] [Comment Edited] (CASSANDRA-5059) 1.0.11 -> 1.1.7 upgrade results in unusable compressed sstables
[ https://issues.apache.org/jira/browse/CASSANDRA-5059?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13530457#comment-13530457 ]

Jason Harvey edited comment on CASSANDRA-5059 at 12/12/12 10:47 PM:

I've attached a CF containing a single row of fake data which exhibits this issue on 1.1.7.

was (Author: alienth):
CF containing a single row of fake data which exhibits this issue on 1.1.7.
[jira] [Comment Edited] (CASSANDRA-5059) 1.0.11 -> 1.1.7 upgrade results in unusable compressed sstables
[ https://issues.apache.org/jira/browse/CASSANDRA-5059?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13530457#comment-13530457 ]

Jason Harvey edited comment on CASSANDRA-5059 at 12/12/12 10:54 PM:

[~jbellis] : I've attached a CF containing a single row of fake data which exhibits this issue on 1.1.7.

was (Author: alienth):
I've attached a CF containing a single row of fake data which exhibits this issue on 1.1.7.
[jira] [Commented] (CASSANDRA-5059) 1.0.11 -> 1.1.7 upgrade results in unusable compressed sstables
[ https://issues.apache.org/jira/browse/CASSANDRA-5059?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13530470#comment-13530470 ]

Jason Harvey commented on CASSANDRA-5059:
-

schema for attached CF:

{code}
ColumnFamily: LastModified
  Key Validation Class: org.apache.cassandra.db.marshal.AsciiType
  Default column value validator: org.apache.cassandra.db.marshal.DateType
  Columns sorted by: org.apache.cassandra.db.marshal.UTF8Type
  Row cache size / save period in seconds / keys to save : 0.0/0/all
  Row Cache Provider: org.apache.cassandra.cache.SerializingCacheProvider
  Key cache size / save period in seconds: 20.0/14400
  GC grace seconds: 864000
  Compaction min/max thresholds: 4/32
  Read repair chance: 0.1
  Replicate on write: true
  Bloom Filter FP chance: default
  Built indexes: []
  Compaction Strategy: org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy
  Compression Options:
    sstable_compression: org.apache.cassandra.io.compress.SnappyCompressor
{code}
[jira] [Commented] (CASSANDRA-5059) 1.0.11 -> 1.1.7 upgrade results in unusable compressed sstables
[ https://issues.apache.org/jira/browse/CASSANDRA-5059?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13530475#comment-13530475 ]

Jason Harvey commented on CASSANDRA-5059:
-

I just tested a creation of a brand-new schema with a single Snappy compressed CF (all other options being left default). Issue is still exhibited there.
[jira] [Commented] (CASSANDRA-5059) 1.0.11 -> 1.1.7 upgrade results in unusable compressed sstables
[ https://issues.apache.org/jira/browse/CASSANDRA-5059?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13530495#comment-13530495 ]

Brandon Williams commented on CASSANDRA-5059:
-

These load fine for me. I suspect there's some sort of environmental problem with snappy's JNI.
[jira] [Commented] (CASSANDRA-5059) 1.0.11 -> 1.1.7 upgrade results in unusable compressed sstables
[ https://issues.apache.org/jira/browse/CASSANDRA-5059?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13530511#comment-13530511 ]

Jason Harvey commented on CASSANDRA-5059:
-

Confirmed that this also occurs in my environment on a brand-new DeflateCompressor CF.
[jira] [Comment Edited] (CASSANDRA-5059) 1.0.11 - 1.1.7 upgrade results in unusable compressed sstables
[ https://issues.apache.org/jira/browse/CASSANDRA-5059?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13530511#comment-13530511 ]

Jason Harvey edited comment on CASSANDRA-5059 at 12/13/12 12:01 AM:
--------------------------------------------------------------------

Confirmed that this also occurs in my environment when upgrading a brand-new DeflateCompressor CF.

was (Author: alienth):
Confirmed that this also occurs in my environment on a brand-new DeflateCompressor CF.
git commit: Add gc_grace_seconds to system_auth.users
Updated Branches:
  refs/heads/cassandra-1.2.0 f562f0bb1 -> 1bb78b0c4

Add gc_grace_seconds to system_auth.users

Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/1bb78b0c
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/1bb78b0c
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/1bb78b0c

Branch: refs/heads/cassandra-1.2.0
Commit: 1bb78b0c45f410588189932ae968791f659deb59
Parents: f562f0b
Author: Aleksey Yeschenko <alek...@apache.org>
Authored: Thu Dec 13 04:32:41 2012 +0300
Committer: Aleksey Yeschenko <alek...@apache.org>
Committed: Thu Dec 13 04:32:41 2012 +0300

----------------------------------------------------------------------
 .../org/apache/cassandra/config/CFMetaData.java |    2 +-
 1 files changed, 1 insertions(+), 1 deletions(-)
----------------------------------------------------------------------

http://git-wip-us.apache.org/repos/asf/cassandra/blob/1bb78b0c/src/java/org/apache/cassandra/config/CFMetaData.java
----------------------------------------------------------------------
diff --git a/src/java/org/apache/cassandra/config/CFMetaData.java b/src/java/org/apache/cassandra/config/CFMetaData.java
index 9d2e013..3a2e2b6 100644
--- a/src/java/org/apache/cassandra/config/CFMetaData.java
+++ b/src/java/org/apache/cassandra/config/CFMetaData.java
@@ -222,7 +222,7 @@ public final class CFMetaData
     public static final CFMetaData AuthUsersCf = compile(18, "CREATE TABLE " + Auth.USERS_CF + " ("
                                                              + "name text PRIMARY KEY,"
                                                              + "super boolean"
-                                                             + ");", Auth.AUTH_KS);
+                                                             + ") WITH gc_grace_seconds=864000;", Auth.AUTH_KS);

     public enum Caching {
[1/2] git commit: Merge branch 'cassandra-1.2.0' into cassandra-1.2
Updated Branches:
  refs/heads/cassandra-1.2 637438355 -> 40762aec0

Merge branch 'cassandra-1.2.0' into cassandra-1.2

Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/40762aec
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/40762aec
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/40762aec

Branch: refs/heads/cassandra-1.2
Commit: 40762aec09288fc13c317381d83b067d6294808e
Parents: 6374383 1bb78b0
Author: Aleksey Yeschenko <alek...@apache.org>
Authored: Thu Dec 13 04:35:43 2012 +0300
Committer: Aleksey Yeschenko <alek...@apache.org>
Committed: Thu Dec 13 04:35:43 2012 +0300

----------------------------------------------------------------------
 .../org/apache/cassandra/config/CFMetaData.java |    2 +-
 1 files changed, 1 insertions(+), 1 deletions(-)
----------------------------------------------------------------------
[2/2] git commit: Add gc_grace_seconds to system_auth.users
Add gc_grace_seconds to system_auth.users

Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/1bb78b0c
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/1bb78b0c
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/1bb78b0c

Branch: refs/heads/cassandra-1.2
Commit: 1bb78b0c45f410588189932ae968791f659deb59
Parents: f562f0b
Author: Aleksey Yeschenko <alek...@apache.org>
Authored: Thu Dec 13 04:32:41 2012 +0300
Committer: Aleksey Yeschenko <alek...@apache.org>
Committed: Thu Dec 13 04:32:41 2012 +0300

----------------------------------------------------------------------
 .../org/apache/cassandra/config/CFMetaData.java |    2 +-
 1 files changed, 1 insertions(+), 1 deletions(-)
----------------------------------------------------------------------
[1/3] git commit: Merge branch 'cassandra-1.2' into trunk
Updated Branches:
  refs/heads/trunk 65bd017fc -> 32ad73bcd

Merge branch 'cassandra-1.2' into trunk

Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/32ad73bc
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/32ad73bc
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/32ad73bc

Branch: refs/heads/trunk
Commit: 32ad73bcd4063151eb741b05f759b07f0fb87877
Parents: 65bd017 40762ae
Author: Aleksey Yeschenko <alek...@apache.org>
Authored: Thu Dec 13 04:36:23 2012 +0300
Committer: Aleksey Yeschenko <alek...@apache.org>
Committed: Thu Dec 13 04:36:23 2012 +0300

----------------------------------------------------------------------
 .../org/apache/cassandra/config/CFMetaData.java |    2 +-
 1 files changed, 1 insertions(+), 1 deletions(-)
----------------------------------------------------------------------

http://git-wip-us.apache.org/repos/asf/cassandra/blob/32ad73bc/src/java/org/apache/cassandra/config/CFMetaData.java
----------------------------------------------------------------------
[2/3] git commit: Merge branch 'cassandra-1.2.0' into cassandra-1.2
Merge branch 'cassandra-1.2.0' into cassandra-1.2

Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/40762aec
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/40762aec
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/40762aec

Branch: refs/heads/trunk
Commit: 40762aec09288fc13c317381d83b067d6294808e
Parents: 6374383 1bb78b0
Author: Aleksey Yeschenko <alek...@apache.org>
Authored: Thu Dec 13 04:35:43 2012 +0300
Committer: Aleksey Yeschenko <alek...@apache.org>
Committed: Thu Dec 13 04:35:43 2012 +0300

----------------------------------------------------------------------
 .../org/apache/cassandra/config/CFMetaData.java |    2 +-
 1 files changed, 1 insertions(+), 1 deletions(-)
----------------------------------------------------------------------
[3/3] git commit: Add gc_grace_seconds to system_auth.users
Add gc_grace_seconds to system_auth.users

Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/1bb78b0c
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/1bb78b0c
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/1bb78b0c

Branch: refs/heads/trunk
Commit: 1bb78b0c45f410588189932ae968791f659deb59
Parents: f562f0b
Author: Aleksey Yeschenko <alek...@apache.org>
Authored: Thu Dec 13 04:32:41 2012 +0300
Committer: Aleksey Yeschenko <alek...@apache.org>
Committed: Thu Dec 13 04:32:41 2012 +0300

----------------------------------------------------------------------
 .../org/apache/cassandra/config/CFMetaData.java |    2 +-
 1 files changed, 1 insertions(+), 1 deletions(-)
----------------------------------------------------------------------
[jira] [Updated] (CASSANDRA-4847) Bad disk causes death of node despite disk_failure_policy
[ https://issues.apache.org/jira/browse/CASSANDRA-4847?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Kirk True updated CASSANDRA-4847:
---------------------------------
    Attachment: trunk-4847.txt

Bad disk causes death of node despite disk_failure_policy
---------------------------------------------------------

                Key: CASSANDRA-4847
                URL: https://issues.apache.org/jira/browse/CASSANDRA-4847
            Project: Cassandra
         Issue Type: Bug
         Components: Core
   Affects Versions: 1.2.0 beta 1
           Reporter: Kirk True
           Assignee: Kirk True
        Attachments: trunk-4847.txt

Steps:
# Create a bad disk via device mapper
# Specify the good disk and the bad disk as data directories
# Set {{disk_failure_policy}} to {{best_effort}} in cassandra.yaml
# Start the node

Expected: attempts to create system directories fail (as expected) on the bad disk, and it is added to the blacklisted directories.

Actual: node startup aborts due to an uncaught error:

{noformat}
FSWriteError in /mnt/bad_disk/system_traces/sessions
    at org.apache.cassandra.io.util.FileUtils.createDirectory(FileUtils.java:258)
    at org.apache.cassandra.db.Directories.init(Directories.java:104)
    at org.apache.cassandra.db.Directories.create(Directories.java:90)
    at org.apache.cassandra.db.ColumnFamilyStore.scrubDataDirectories(ColumnFamilyStore.java:404)
    at org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:227)
    at org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:393)
    at org.apache.cassandra.service.CassandraDaemon.main(CassandraDaemon.java:436)
Caused by: java.io.IOException: Failed to mkdirs /mnt/bad_disk/system_traces/sessions
    ... 7 more
{noformat}
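The fix the ticket asks for amounts to catching the directory-creation failure and blacklisting that directory instead of letting the error propagate and kill startup. Below is a minimal, self-contained sketch of that pattern; the class and method names are hypothetical and are not Cassandra's actual Directories/BlacklistedDirectories API:

```java
import java.io.File;
import java.util.HashSet;
import java.util.Set;

// Hypothetical sketch of best_effort behavior: a failed mkdirs on one
// data directory blacklists that directory and lets startup continue,
// rather than aborting the node. Names are illustrative only.
public class BestEffortDirectories {
    private final Set<File> blacklisted = new HashSet<>();

    // Returns true if the directory exists or was created; false if it
    // could not be created and has been blacklisted.
    public boolean tryCreateDirectory(File dir) {
        if (blacklisted.contains(dir))
            return false;
        if (dir.isDirectory() || dir.mkdirs())
            return true;
        blacklisted.add(dir); // bad disk: record it and keep going
        return false;
    }

    public Set<File> blacklistedDirectories() {
        return blacklisted;
    }
}
```

The key design point is that the caller (startup, flush, compaction) checks the boolean result and routes work to the remaining good directories, instead of relying on an exception unwinding the whole daemon.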
[jira] [Assigned] (CASSANDRA-3848) EmbeddedCassandraService needs a stop() method
[ https://issues.apache.org/jira/browse/CASSANDRA-3848?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Kirk True reassigned CASSANDRA-3848:
------------------------------------
    Assignee: (was: Kirk True)

EmbeddedCassandraService needs a stop() method
----------------------------------------------

                Key: CASSANDRA-3848
                URL: https://issues.apache.org/jira/browse/CASSANDRA-3848
            Project: Cassandra
         Issue Type: Improvement
         Components: Core
           Reporter: David Hawthorne
           Priority: Trivial

I just need a stop() method in EmbeddedCassandraService so I can shut it down as part of my unit tests, so I can test failure behavior.
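What the ticket asks for is a symmetric start()/stop() lifecycle for an embedded service. The sketch below shows that shape with a plain background thread standing in for the Cassandra daemon; it is not the actual EmbeddedCassandraService API, just the pattern a unit test would want:

```java
// Hypothetical sketch of a start()/stop() lifecycle for an embedded
// service, with a plain worker thread standing in for the daemon.
public class EmbeddedServiceSketch {
    private Thread worker;
    private volatile boolean running;

    public synchronized void start() {
        if (running)
            return; // idempotent: already started
        running = true;
        worker = new Thread(() -> {
            while (running) {
                try {
                    Thread.sleep(10); // stand-in for serving requests
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                    return;
                }
            }
        });
        worker.setDaemon(true);
        worker.start();
    }

    public synchronized void stop() throws InterruptedException {
        running = false;
        if (worker != null) {
            worker.interrupt();
            worker.join(); // block until fully shut down
            worker = null;
        }
    }

    public boolean isRunning() {
        return running;
    }
}
```

A test can then call stop() in teardown (or mid-test, to exercise failure handling) and assert the service is really down before the next case starts.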
[jira] [Commented] (CASSANDRA-4858) Coverage analysis for low-CL queries
[ https://issues.apache.org/jira/browse/CASSANDRA-4858?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13530765#comment-13530765 ]

Vijay commented on CASSANDRA-4858:
----------------------------------

The problem is that we have to scan the nodes in token order so we don't break the existing APIs; if we do so, then we are sending a lot more requests, and waiting on more responses, than the number of nodes. It's highly unlikely we will be able to query contiguous ranges from the same node. Still thinking of a better way.

Coverage analysis for low-CL queries
------------------------------------

                Key: CASSANDRA-4858
                URL: https://issues.apache.org/jira/browse/CASSANDRA-4858
            Project: Cassandra
         Issue Type: Improvement
         Components: Core
           Reporter: Jonathan Ellis
           Assignee: Vijay
            Fix For: 1.2.1

There are many cases where getRangeSlice creates more RangeSliceCommands than it should, because it always creates one for each range returned by getRestrictedRange. Especially for CL.ONE this does not take the replication factor into account and is potentially pretty wasteful. A range slice at CL.ONE on a 3-node cluster with RF=3 should only ever create one RangeSliceCommand.
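The coverage idea in the ticket can be illustrated with a toy model: walk the restricted ranges in token order and merge consecutive ranges whose replica sets still intersect, so each group can be served by one command sent to a shared replica. On a 3-node RF=3 cluster every range shares all three replicas, so the whole scan collapses to a single group. This is a hypothetical sketch, not Cassandra's getRangeSlice code; replicas are represented as plain integers:

```java
import java.util.ArrayList;
import java.util.List;

// Toy model of coverage analysis for low-CL range slices: consecutive
// token ranges are merged while their replica sets still overlap, so
// one RangeSliceCommand-like request can cover the whole group.
public class RangeCoverage {
    // Input: for each restricted range (in token order), its replicas.
    // Output: one surviving replica set per merged group of ranges.
    static List<List<Integer>> mergeByReplica(List<List<Integer>> replicasPerRange) {
        List<List<Integer>> groups = new ArrayList<>();
        List<Integer> current = null;
        for (List<Integer> replicas : replicasPerRange) {
            if (current != null) {
                // Intersect with the running group's replica set.
                List<Integer> common = new ArrayList<>(current);
                common.retainAll(replicas);
                if (!common.isEmpty()) {
                    current = common; // group extends over this range too
                    groups.set(groups.size() - 1, current);
                    continue;
                }
            }
            // No shared replica: start a new group for this range.
            current = new ArrayList<>(replicas);
            groups.add(current);
        }
        return groups;
    }
}
```

Vijay's objection maps directly onto this model: the merge only helps when adjacent token ranges actually share replicas, which real placement strategies make unlikely once the ring has more nodes than the replication factor.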