[jira] [Commented] (CASSANDRA-8624) Cassandra Cluster's Status Inconsistency Strangely
[ https://issues.apache.org/jira/browse/CASSANDRA-8624?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14368387#comment-14368387 ]

ZhongYu commented on CASSANDRA-8624:

Maybe it was a network issue, since we haven't seen it again.

> Cassandra Cluster's Status Inconsistency Strangely
>
> Key: CASSANDRA-8624
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8624
> Project: Cassandra
> Issue Type: Bug
> Components: Tools
> Environment: Cassandra 1.2.11
> Reporter: ZhongYu
> Priority: Minor
> Attachments: QQ截图20150115125254.png
>
> We found a strange phenomenon in our Cassandra cluster's status: the nodes
> disagree about each other's status, and the inconsistency follows an
> interesting pattern. See the following example:
> There are 5 nodes (pc17, pc19, pc21, pc23, pc25) in the cluster, and every
> node's seed list is "pc17, pc19, pc21, pc23, pc25". At one moment:
> pc17 saw all others UP;
> pc19 saw pc17 DN, others UP;
> pc21 saw pc17, pc19 DN, others UP;
> pc23 saw pc17, pc19, pc21 DN, others UP;
> pc25 saw pc17, pc19, pc21, pc23 DN, only itself UP.
> See the attached screenshot.

--
This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (CASSANDRA-8642) Cassandra crashed after stress test of write
[ https://issues.apache.org/jira/browse/CASSANDRA-8642?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14368386#comment-14368386 ]

ZhongYu commented on CASSANDRA-8642:

We haven't reproduced this issue on 2.1.3 yet.

> Cassandra crashed after stress test of write
>
> Key: CASSANDRA-8642
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8642
> Project: Cassandra
> Issue Type: Bug
> Components: Core
> Environment: Cassandra 2.1.2, single-node cluster, Ubuntu, 8-core CPU,
> 16GB memory (heap size 8G), VMware virtual machine
> Reporter: ZhongYu
> Fix For: 2.1.4
> Attachments: QQ拼音截图未命名.png
>
> While performing a write stress test with YCSB, Cassandra crashed. I looked
> at the logs, and the last (and only) entry was:
> {code}
> WARN [SharedPool-Worker-25] 2015-01-18 17:35:16,611 AbstractTracingAwareExecutorService.java:169 - Uncaught exception on thread Thread[SharedPool-Worker-25,5,main]: {}
> java.lang.InternalError: a fault occurred in a recent unsafe memory access operation in compiled Java code
>     at org.apache.cassandra.utils.concurrent.OpOrder$Group.isBlockingSignal(OpOrder.java:302) ~[apache-cassandra-2.1.2.jar:2.1.2]
>     at org.apache.cassandra.utils.memory.MemtableAllocator$SubAllocator.allocate(MemtableAllocator.java:177) ~[apache-cassandra-2.1.2.jar:2.1.2]
>     at org.apache.cassandra.utils.memory.SlabAllocator.allocate(SlabAllocator.java:82) ~[apache-cassandra-2.1.2.jar:2.1.2]
>     at org.apache.cassandra.utils.memory.ContextAllocator.allocate(ContextAllocator.java:57) ~[apache-cassandra-2.1.2.jar:2.1.2]
>     at org.apache.cassandra.utils.memory.ContextAllocator.clone(ContextAllocator.java:47) ~[apache-cassandra-2.1.2.jar:2.1.2]
>     at org.apache.cassandra.utils.memory.MemtableBufferAllocator.clone(MemtableBufferAllocator.java:61) ~[apache-cassandra-2.1.2.jar:2.1.2]
>     at org.apache.cassandra.db.Memtable.put(Memtable.java:174) ~[apache-cassandra-2.1.2.jar:2.1.2]
>     at org.apache.cassandra.db.ColumnFamilyStore.apply(ColumnFamilyStore.java:1126) ~[apache-cassandra-2.1.2.jar:2.1.2]
>     at org.apache.cassandra.db.Keyspace.apply(Keyspace.java:388) ~[apache-cassandra-2.1.2.jar:2.1.2]
>     at org.apache.cassandra.db.Keyspace.apply(Keyspace.java:351) ~[apache-cassandra-2.1.2.jar:2.1.2]
>     at org.apache.cassandra.db.Mutation.apply(Mutation.java:214) ~[apache-cassandra-2.1.2.jar:2.1.2]
>     at org.apache.cassandra.service.StorageProxy$7.runMayThrow(StorageProxy.java:999) ~[apache-cassandra-2.1.2.jar:2.1.2]
>     at org.apache.cassandra.service.StorageProxy$LocalMutationRunnable.run(StorageProxy.java:2117) ~[apache-cassandra-2.1.2.jar:2.1.2]
>     at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471) ~[na:1.7.0_71]
>     at org.apache.cassandra.concurrent.AbstractTracingAwareExecutorService$FutureTask.run(AbstractTracingAwareExecutorService.java:164) ~[apache-cassandra-2.1.2.jar:2.1.2]
>     at org.apache.cassandra.concurrent.SEPWorker.run(SEPWorker.java:105) [apache-cassandra-2.1.2.jar:2.1.2]
>     at java.lang.Thread.run(Thread.java:745) [na:1.7.0_71]
> {code}
[jira] [Commented] (CASSANDRA-8642) Cassandra crashed after stress test of write
[ https://issues.apache.org/jira/browse/CASSANDRA-8642?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14283562#comment-14283562 ]

ZhongYu commented on CASSANDRA-8642:

JDK 1.7.0_71, Ubuntu 12.04 LTS 64-bit.

> Cassandra crashed after stress test of write
>
> Key: CASSANDRA-8642
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8642
> Project: Cassandra
> Issue Type: Bug
> Components: Core
> Environment: Cassandra 2.1.2, single-node cluster, Ubuntu, 8-core CPU,
> 16GB memory (heap size 8G), VMware virtual machine
> Reporter: ZhongYu
> Fix For: 2.1.3
> Attachments: QQ拼音截图未命名.png
[jira] [Updated] (CASSANDRA-8642) Cassandra crashed after stress test of write
[ https://issues.apache.org/jira/browse/CASSANDRA-8642?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

ZhongYu updated CASSANDRA-8642:

    Component/s: Core

> Cassandra crashed after stress test of write
>
> Key: CASSANDRA-8642
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8642
> Project: Cassandra
> Issue Type: Bug
> Components: Core
> Environment: Cassandra 2.1.2, single-node cluster, Ubuntu, 8-core CPU,
> 16GB memory (heap size 8G), VMware virtual machine
> Reporter: ZhongYu
> Attachments: QQ拼音截图未命名.png
[jira] [Created] (CASSANDRA-8642) Cassandra crashed after stress test of write
ZhongYu created CASSANDRA-8642:

Summary: Cassandra crashed after stress test of write
Key: CASSANDRA-8642
URL: https://issues.apache.org/jira/browse/CASSANDRA-8642
Project: Cassandra
Issue Type: Bug
Environment: Cassandra 2.1.2, single-node cluster, Ubuntu, 8-core CPU, 16GB memory (heap size 8G), VMware virtual machine
Reporter: ZhongYu
Attachments: QQ拼音截图未命名.png

While performing a write stress test with YCSB, Cassandra crashed. The last (and only) entry in the log was an uncaught java.lang.InternalError ("a fault occurred in a recent unsafe memory access operation in compiled Java code") on thread SharedPool-Worker-25, thrown from OpOrder$Group.isBlockingSignal (OpOrder.java:302) during a memtable allocation; the full stack trace is identical to the one quoted in the comments above.
[jira] [Commented] (CASSANDRA-8624) Cassandra Cluster's Status Inconsistency Strangely
[ https://issues.apache.org/jira/browse/CASSANDRA-8624?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14281662#comment-14281662 ]

ZhongYu commented on CASSANDRA-8624:

It's not that; the network environment was fine the whole time. The only unusual thing during that period was that we were running a stress test on the cluster.

> Cassandra Cluster's Status Inconsistency Strangely
>
> Key: CASSANDRA-8624
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8624
> Project: Cassandra
> Issue Type: Bug
> Components: Tools
> Environment: Cassandra 1.2.11
> Reporter: ZhongYu
> Priority: Minor
> Attachments: QQ截图20150115125254.png
[jira] [Commented] (CASSANDRA-8625) LIST USERS and LIST PERMISSIONS command in cqlsh return "Keyspace None not found."
[ https://issues.apache.org/jira/browse/CASSANDRA-8625?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14281660#comment-14281660 ]

ZhongYu commented on CASSANDRA-8625:

Nice!

> LIST USERS and LIST PERMISSIONS command in cqlsh return "Keyspace None not found."
>
> Key: CASSANDRA-8625
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8625
> Project: Cassandra
> Issue Type: Bug
> Components: Tools
> Environment: cqlsh 5.0.1 | Cassandra 2.1.2 | CQL spec 3.2.0 | Native protocol v3
> Reporter: ZhongYu
>
> With Cassandra authorization and authentication enabled, the LIST USERS and
> LIST PERMISSIONS commands in cqlsh do not work; they always return
> "Keyspace None not found." For example, after logging in as the superuser
> "cassandra" and creating some users:
>
> cassandra@cqlsh> list users;
> Keyspace None not found.
> cassandra@cqlsh> list all permissions;
> Keyspace None not found.
[jira] [Commented] (CASSANDRA-8625) LIST USERS and LIST PERMISSIONS command in cqlsh return "Keyspace None not found."
[ https://issues.apache.org/jira/browse/CASSANDRA-8625?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14281130#comment-14281130 ]

ZhongYu commented on CASSANDRA-8625:

Is this bug fixed in 2.1.2?

> LIST USERS and LIST PERMISSIONS command in cqlsh return "Keyspace None not found."
>
> Key: CASSANDRA-8625
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8625
> Project: Cassandra
> Issue Type: Bug
> Components: Tools
> Environment: cqlsh 5.0.1 | Cassandra 2.1.2 | CQL spec 3.2.0 | Native protocol v3
> Reporter: ZhongYu
[jira] [Created] (CASSANDRA-8625) LIST USERS and LIST PERMISSIONS command in cqlsh return "Keyspace None not found."
ZhongYu created CASSANDRA-8625:

Summary: LIST USERS and LIST PERMISSIONS command in cqlsh return "Keyspace None not found."
Key: CASSANDRA-8625
URL: https://issues.apache.org/jira/browse/CASSANDRA-8625
Project: Cassandra
Issue Type: Bug
Components: Tools
Environment: cqlsh 5.0.1 | Cassandra 2.1.2 | CQL spec 3.2.0 | Native protocol v3
Reporter: ZhongYu

With Cassandra authorization and authentication enabled, the LIST USERS and LIST PERMISSIONS commands in cqlsh do not work; they always return "Keyspace None not found." For example, after logging in as the superuser "cassandra" and creating some users:

cassandra@cqlsh> list users;
Keyspace None not found.
cassandra@cqlsh> list all permissions;
Keyspace None not found.
[jira] [Created] (CASSANDRA-8624) Cassandra Cluster's Status Inconsistency Strangely
ZhongYu created CASSANDRA-8624:

Summary: Cassandra Cluster's Status Inconsistency Strangely
Key: CASSANDRA-8624
URL: https://issues.apache.org/jira/browse/CASSANDRA-8624
Project: Cassandra
Issue Type: Bug
Components: Tools
Environment: Cassandra 1.2.11
Reporter: ZhongYu
Priority: Minor
Attachments: QQ截图20150115125254.png

We found a strange phenomenon in our Cassandra cluster's status: the nodes disagree about each other's status, and the inconsistency follows an interesting pattern. See the following example:

There are 5 nodes (pc17, pc19, pc21, pc23, pc25) in the cluster, and every node's seed list is "pc17, pc19, pc21, pc23, pc25". At one moment:

pc17 saw all others UP;
pc19 saw pc17 DN, others UP;
pc21 saw pc17, pc19 DN, others UP;
pc23 saw pc17, pc19, pc21 DN, others UP;
pc25 saw pc17, pc19, pc21, pc23 DN, only itself UP.

See the attached screenshot.
[jira] [Commented] (CASSANDRA-494) add remove_slice to the api
[ https://issues.apache.org/jira/browse/CASSANDRA-494?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14151103#comment-14151103 ]

ZhongYu commented on CASSANDRA-494:

Why not implement this feature? We are having trouble deleting timestamp-like columns: there are too many columns to load into the client, and deleting data by reading it first is really slow. It took us 10 days to delete 1,000,000,000 timestamp-style columns across about 1000 CFs (each CF has on average 1 row). If we could delete columns by range, I think the same operation could finish in several minutes.

> add remove_slice to the api
>
> Key: CASSANDRA-494
> URL: https://issues.apache.org/jira/browse/CASSANDRA-494
> Project: Cassandra
> Issue Type: Improvement
> Components: Core
> Reporter: Dan Di Spaltro
> Priority: Minor
>
> It would be nice to mimic how get_slice works for removing values.

--
This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (CASSANDRA-7664) IndexOutOfBoundsException thrown during repair
[ https://issues.apache.org/jira/browse/CASSANDRA-7664?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

ZhongYu updated CASSANDRA-7664:

    Environment:
        RHEL 6.1
        Cassandra 1.2.3 - 1.2.18
    was:
        RHEL 6.1
        Cassandra 1.2.3

> IndexOutOfBoundsException thrown during repair
>
> Key: CASSANDRA-7664
> URL: https://issues.apache.org/jira/browse/CASSANDRA-7664
> Project: Cassandra
> Issue Type: Bug
> Components: Core
> Environment: RHEL 6.1, Cassandra 1.2.3 - 1.2.18
> Reporter: ZhongYu
>
> I was running the repair command with a moderate read and write load at the
> same time, and found tens of IndexOutOfBoundsExceptions in the system log,
> as follows:
> {quote}
> ERROR [Thread-6056] 2013-05-22 14:47:59,416 CassandraDaemon.java (line 132) Exception in thread Thread[Thread-6056,5,main]
> java.lang.IndexOutOfBoundsException
>     at sun.nio.ch.ChannelInputStream.read(ChannelInputStream.java:75)
>     at org.apache.cassandra.streaming.compress.CompressedInputStream$Reader.runMayThrow(CompressedInputStream.java:151)
>     at org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28)
>     at java.lang.Thread.run(Thread.java:662)
> {quote}
> I read the source of CompressedInputStream.java and found that it will
> indeed throw an IndexOutOfBoundsException in the following situation:
> {code:title=CompressedInputStream.java|borderStyle=solid}
> // Part of CompressedInputStream.java, starting from line 139
> protected void runMayThrow() throws Exception
> {
>     byte[] compressedWithCRC;
>     while (chunks.hasNext())
>     {
>         CompressionMetadata.Chunk chunk = chunks.next();
>         int readLength = chunk.length + 4; // read with CRC
>         compressedWithCRC = new byte[readLength];
>         int bufferRead = 0;
>         while (bufferRead < readLength)
>             bufferRead += source.read(compressedWithCRC, bufferRead, readLength - bufferRead);
>         dataBuffer.put(compressedWithCRC);
>     }
> }
> {code}
> If the read function reads nothing because the end of the stream has been
> reached, it returns -1, so bufferRead can go negative. On the next
> iteration, read will throw an IndexOutOfBoundsException because bufferRead
> is negative.

--
This message was sent by Atlassian JIRA (v6.2#6252)
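The failure mode described in the report above (source.read() returns -1 at end of stream, bufferRead goes negative, and the next read call throws IndexOutOfBoundsException) can be illustrated with a small standalone sketch. This is not the actual patch applied to Cassandra; the class and method names below (ReadFullyDemo, readFully) are hypothetical, and the code only shows the EOF guard the quoted loop is missing.

```java
import java.io.ByteArrayInputStream;
import java.io.EOFException;
import java.io.IOException;
import java.io.InputStream;

public class ReadFullyDemo {

    // Hypothetical replacement for the quoted loop: read exactly len bytes
    // into buf, checking read()'s return value so an end-of-stream (-1)
    // can never drive the buffer offset negative.
    static void readFully(InputStream source, byte[] buf, int len) throws IOException {
        int bufferRead = 0;
        while (bufferRead < len) {
            int n = source.read(buf, bufferRead, len - bufferRead);
            if (n < 0) // stream ended before the chunk was complete
                throw new EOFException("EOF after " + bufferRead + " of " + len + " bytes");
            bufferRead += n;
        }
    }

    public static void main(String[] args) throws IOException {
        // A truncated source: only 5 bytes available, but the chunk claims 8.
        InputStream truncated = new ByteArrayInputStream(new byte[]{1, 2, 3, 4, 5});
        try {
            readFully(truncated, new byte[8], 8);
        } catch (EOFException e) {
            // The unguarded loop would instead add -1 to bufferRead and fail
            // later with an IndexOutOfBoundsException from the negative offset.
            System.out.println("EOFException as expected: " + e.getMessage());
        }
        // prints: EOFException as expected: EOF after 5 of 8 bytes
    }
}
```

Failing fast with EOFException at the point of truncation also preserves the byte count read so far, which makes a truncated stream much easier to diagnose than the downstream IndexOutOfBoundsException.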
[jira] [Updated] (CASSANDRA-7664) IndexOutOfBoundsException thrown during repair
[ https://issues.apache.org/jira/browse/CASSANDRA-7664?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

ZhongYu updated CASSANDRA-7664:

    Environment:
        RHEL 6.1
        Cassandra 1.2.3
    was:
        RHEL 6.1
        Cassandra 1.2.18

> IndexOutOfBoundsException thrown during repair
>
> Key: CASSANDRA-7664
> URL: https://issues.apache.org/jira/browse/CASSANDRA-7664
> Project: Cassandra
> Issue Type: Bug
> Components: Core
> Environment: RHEL 6.1, Cassandra 1.2.3
> Reporter: ZhongYu
[jira] [Updated] (CASSANDRA-7664) IndexOutOfBoundsException thrown during repair
[ https://issues.apache.org/jira/browse/CASSANDRA-7664?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

ZhongYu updated CASSANDRA-7664:

    Description: (minor wording and line-break edits; the analysis text is otherwise identical to the description quoted in full in the first CASSANDRA-7664 entry above)

> IndexOutOfBoundsException thrown during repair
>
> Key: CASSANDRA-7664
> URL: https://issues.apache.org/jira/browse/CASSANDRA-7664
> Project: Cassandra
> Issue Type: Bug
> Components: Core
> Environment: RHEL 6.1, Cassandra 1.2.18
> Reporter: ZhongYu
[jira] [Updated] (CASSANDRA-7664) IndexOutOfBoundsException thrown during repair
[jira] [Updated] (CASSANDRA-7664) IndexOutOfBoundsException thrown during repair
[ https://issues.apache.org/jira/browse/CASSANDRA-7664?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] ZhongYu updated CASSANDRA-7664:
---
Description:
I was running the repair command with a moderate read and write load at the same time, and found tens of IndexOutOfBoundsExceptions in the system log, as follows:
{quote}
ERROR [Thread-6056] 2013-05-22 14:47:59,416 CassandraDaemon.java (line 132) Exception in thread Thread[Thread-6056,5,main]
java.lang.IndexOutOfBoundsException
	at sun.nio.ch.ChannelInputStream.read(ChannelInputStream.java:75)
	at org.apache.cassandra.streaming.compress.CompressedInputStream$Reader.runMayThrow(CompressedInputStream.java:151)
	at org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28)
	at java.lang.Thread.run(Thread.java:662)
{quote}
I read the source of CompressedInputStream.java and found that it is certain to throw an IndexOutOfBoundsException in the following situation:
{code:title=CompressedInputStream.java|borderStyle=solid}
// Part of CompressedInputStream.java, starting from line 139
protected void runMayThrow() throws Exception
{
    byte[] compressedWithCRC;
    while (chunks.hasNext())
    {
        CompressionMetadata.Chunk chunk = chunks.next();
        int readLength = chunk.length + 4; // read with CRC
        compressedWithCRC = new byte[readLength];
        int bufferRead = 0;
        while (bufferRead < readLength)
            bufferRead += source.read(compressedWithCRC, bufferRead, readLength - bufferRead);
        dataBuffer.put(compressedWithCRC);
    }
}
{code}
If read() reads nothing because the end of the stream has been reached, it returns -1, so bufferRead can become negative. On the next loop iteration, read() then throws an IndexOutOfBoundsException because the offset bufferRead is negative.

> IndexOutOfBoundsException thrown during repair
> ----------------------------------------------
>
>                 Key: CASSANDRA-7664
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-7664
>             Project: Cassandra
>          Issue Type: Bug
>          Components: Core
>         Environment: RHEL 6.1
>                      Cassandra 1.2.18
>            Reporter: ZhongYu
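The failure mode described above can be avoided by treating a read() return value of -1 as end-of-stream rather than adding it to the running offset. The sketch below is not the actual Cassandra patch, just a minimal illustration under that assumption; ReadFullyExample and its readFully helper are hypothetical names.

```java
import java.io.ByteArrayInputStream;
import java.io.EOFException;
import java.io.IOException;
import java.io.InputStream;

public class ReadFullyExample {
    // Hypothetical helper illustrating the fix: a read() return of -1 signals
    // end-of-stream and must never be added to the running offset, otherwise
    // the offset goes negative and the next read() throws
    // IndexOutOfBoundsException.
    static void readFully(InputStream source, byte[] buf, int length) throws IOException {
        int bufferRead = 0;
        while (bufferRead < length) {
            int n = source.read(buf, bufferRead, length - bufferRead);
            if (n < 0)
                throw new EOFException("stream ended after " + bufferRead + " of " + length + " bytes");
            bufferRead += n;
        }
    }

    public static void main(String[] args) throws IOException {
        // A complete stream fills the buffer normally.
        byte[] buf = new byte[4];
        readFully(new ByteArrayInputStream(new byte[]{1, 2, 3, 4}), buf, 4);

        // A truncated stream now fails fast with EOFException instead of
        // looping on with a negative offset.
        boolean eof = false;
        try {
            readFully(new ByteArrayInputStream(new byte[]{1, 2}), new byte[4], 4);
        } catch (EOFException e) {
            eof = true;
        }
        System.out.println(eof); // prints "true"
    }
}
```

This fail-fast behavior also surfaces a genuinely truncated stream (for example, a peer dropping mid-transfer) as a clear EOFException rather than an unrelated-looking IndexOutOfBoundsException.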
[jira] [Created] (CASSANDRA-7664) IndexOutOfBoundsException thrown during repair
ZhongYu created CASSANDRA-7664:
---
             Summary: IndexOutOfBoundsException thrown during repair
                 Key: CASSANDRA-7664
                 URL: https://issues.apache.org/jira/browse/CASSANDRA-7664
             Project: Cassandra
          Issue Type: Bug
          Components: Core
         Environment: RHEL 6.1
                      Cassandra 1.2.18
            Reporter: ZhongYu

-- This message was sent by Atlassian JIRA (v6.2#6252)