[jira] [Assigned] (CASSANDRA-6952) Cannot bind variables to USE statements

2014-11-17 Thread Benjamin Lerer (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6952?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Benjamin Lerer reassigned CASSANDRA-6952:
-

Assignee: Benjamin Lerer

 Cannot bind variables to USE statements
 ---

 Key: CASSANDRA-6952
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6952
 Project: Cassandra
  Issue Type: New Feature
  Components: Core
Reporter: Matt Stump
Assignee: Benjamin Lerer
Priority: Minor
  Labels: cql3

 Attempting to bind a variable for a USE query results in a syntax error.
 Example Invocation:
 {code}
 ResultSet result = session.execute("USE ?", "system");
 {code}
 Error:
 {code}
 ERROR SYNTAX_ERROR: line 1:4 no viable alternative at input '?', v=2
 {code}
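
 A driver-side sketch of the usual workaround (illustrative only - USE does 
 not accept bind markers, so the keyspace is either chosen at connect time or 
 interpolated from a caller-validated name):
 {code}
 // Hedged sketch (Java Driver): pick the keyspace when connecting...
 Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
 Session session = cluster.connect("system");
 // ...or interpolate a keyspace name the caller has already validated.
 String keyspace = "system";   // assumed to be checked against known keyspaces
 session.execute("USE " + keyspace);
 {code}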





[jira] [Commented] (CASSANDRA-8306) exception in nodetool enablebinary

2014-11-17 Thread Brandon Williams (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8306?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14214432#comment-14214432
 ] 

Brandon Williams commented on CASSANDRA-8306:
-

Why not make step 4 before step 1?

 exception in nodetool enablebinary
 --

 Key: CASSANDRA-8306
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8306
 Project: Cassandra
  Issue Type: Bug
Reporter: Rafał Furmański
 Attachments: system.log.zip


 I was trying to add a new node (db4) to an existing cluster - with no luck. I 
 can't see any errors in system.log. nodetool status shows that the node has 
 been joining the cluster for many hours. Attaching error and cluster info:
 {code}
 root@db4:~# nodetool enablebinary
 error: Error starting native transport: null
 -- StackTrace --
 java.lang.RuntimeException: Error starting native transport: null
   at org.apache.cassandra.service.StorageService.startNativeTransport(StorageService.java:350)
   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
   at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
   at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
   at java.lang.reflect.Method.invoke(Method.java:606)
   at sun.reflect.misc.Trampoline.invoke(MethodUtil.java:75)
   at sun.reflect.GeneratedMethodAccessor2.invoke(Unknown Source)
   at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
   at java.lang.reflect.Method.invoke(Method.java:606)
   at sun.reflect.misc.MethodUtil.invoke(MethodUtil.java:279)
   at com.sun.jmx.mbeanserver.StandardMBeanIntrospector.invokeM2(StandardMBeanIntrospector.java:112)
   at com.sun.jmx.mbeanserver.StandardMBeanIntrospector.invokeM2(StandardMBeanIntrospector.java:46)
   at com.sun.jmx.mbeanserver.MBeanIntrospector.invokeM(MBeanIntrospector.java:237)
   at com.sun.jmx.mbeanserver.PerInterface.invoke(PerInterface.java:138)
   at com.sun.jmx.mbeanserver.MBeanSupport.invoke(MBeanSupport.java:252)
   at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.invoke(DefaultMBeanServerInterceptor.java:819)
   at com.sun.jmx.mbeanserver.JmxMBeanServer.invoke(JmxMBeanServer.java:801)
   at javax.management.remote.rmi.RMIConnectionImpl.doOperation(RMIConnectionImpl.java:1487)
   at javax.management.remote.rmi.RMIConnectionImpl.access$300(RMIConnectionImpl.java:97)
   at javax.management.remote.rmi.RMIConnectionImpl$PrivilegedOperation.run(RMIConnectionImpl.java:1328)
   at javax.management.remote.rmi.RMIConnectionImpl.doPrivilegedOperation(RMIConnectionImpl.java:1420)
   at javax.management.remote.rmi.RMIConnectionImpl.invoke(RMIConnectionImpl.java:848)
   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
   at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
   at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
   at java.lang.reflect.Method.invoke(Method.java:606)
   at sun.rmi.server.UnicastServerRef.dispatch(UnicastServerRef.java:322)
   at sun.rmi.transport.Transport$1.run(Transport.java:177)
   at sun.rmi.transport.Transport$1.run(Transport.java:174)
   at java.security.AccessController.doPrivileged(Native Method)
   at sun.rmi.transport.Transport.serviceCall(Transport.java:173)
   at sun.rmi.transport.tcp.TCPTransport.handleMessages(TCPTransport.java:556)
   at sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run0(TCPTransport.java:811)
   at sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run(TCPTransport.java:670)
   at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
   at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
   at java.lang.Thread.run(Thread.java:745)
 {code}
 {code}
 root@db4:~# nodetool describecluster
 Cluster Information:
   Name: Production Cluster
   Snitch: org.apache.cassandra.locator.DynamicEndpointSnitch
   Partitioner: org.apache.cassandra.dht.Murmur3Partitioner
   Schema versions:
   b7e98bb9-717f-3f59-bac4-84bc19544e90: [10.195.15.163, 
 10.195.15.162, 10.195.15.167, 10.195.15.166]
 {code}
 {code}
 root@db4:~# nodetool status
 Datacenter: Ashburn
 ===
 Status=Up/Down
 |/ State=Normal/Leaving/Joining/Moving
 --  Address        Load      Tokens  Owns  Host ID                               Rack
 UN  10.195.15.163  12.05 GB  256     ?     0a9f478c-80b5-4c15-8b2e-e27df6684c69  RAC1
 UN  10.195.15.162  12.8 GB   256     ?     c18d2218-ef84-4165-9c3a-05f592f512e9  RAC1
 UJ  10.195.15.167  18.61 GB  256     ?     0d3999d9-1e33-4407-bbbd-10cf0a93b3ba  

[jira] [Commented] (CASSANDRA-8306) exception in nodetool enablebinary

2014-11-17 Thread Rafał Furmański (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8306?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14214436#comment-14214436
 ] 

Rafał Furmański commented on CASSANDRA-8306:


Because step 1 creates all the necessary folders, like /etc/cassandra?

 exception in nodetool enablebinary
 --

 Key: CASSANDRA-8306
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8306
 Project: Cassandra
  Issue Type: Bug
Reporter: Rafał Furmański
 Attachments: system.log.zip



[jira] [Commented] (CASSANDRA-8306) exception in nodetool enablebinary

2014-11-17 Thread Brandon Williams (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8306?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14214449#comment-14214449
 ] 

Brandon Williams commented on CASSANDRA-8306:
-

That's too difficult?

 exception in nodetool enablebinary
 --

 Key: CASSANDRA-8306
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8306
 Project: Cassandra
  Issue Type: Bug
Reporter: Rafał Furmański
 Attachments: system.log.zip



[jira] [Commented] (CASSANDRA-8306) exception in nodetool enablebinary

2014-11-17 Thread Rafał Furmański (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8306?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14214451#comment-14214451
 ] 

Rafał Furmański commented on CASSANDRA-8306:


Of course not. I was just following the documentation.

 exception in nodetool enablebinary
 --

 Key: CASSANDRA-8306
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8306
 Project: Cassandra
  Issue Type: Bug
Reporter: Rafał Furmański
 Attachments: system.log.zip



[jira] [Commented] (CASSANDRA-7981) Refactor SelectStatement

2014-11-17 Thread Benjamin Lerer (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7981?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14214469#comment-14214469
 ] 

Benjamin Lerer commented on CASSANDRA-7981:
---

I updated the 
[branch|https://github.com/blerer/cassandra/compare/CASSANDRA-7981].
The patch addresses most of the review comments.
{quote}The restriction class hierarchy is confusing, but I don't see an obvious 
way to simplify it{quote}
I did not find a way to simplify it either, but I am also not fully sure what 
makes it confusing - I have probably worked with it too much already. Maybe 
[~slebresne] will have an idea of how we can improve things.

{quote}There seems to be a conflict between the {{mergeWith(Restriction)}} 
signatures in {{Restriction}} and {{Restrictions}}.{quote}
That is because {{PrimaryKeyRestrictions}} is a composite of 
{{Restriction}}s.
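
To illustrate (a simplified sketch with assumed signatures, not the exact 
patch code): a composite that is both a {{Restriction}} and a collection of 
{{Restrictions}} inherits both {{mergeWith}} contracts, and reconciles them 
with a covariant return type.
{code}
interface Restriction
{
    Restriction mergeWith(Restriction other);
}

interface Restrictions
{
    Restrictions mergeWith(Restriction restriction);
}

// The composite implements both interfaces, so its override must return a
// type that is a subtype of both declared return types.
interface PrimaryKeyRestrictions extends Restriction, Restrictions
{
    @Override
    PrimaryKeyRestrictions mergeWith(Restriction restriction);
}
{code}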

The problem with such a big refactoring is that in the end you lose part of 
your ability to see whether the changes you have made really improve the 
readability of the code compared to the original version. 


 Refactor SelectStatement
 

 Key: CASSANDRA-7981
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7981
 Project: Cassandra
  Issue Type: Bug
Reporter: Benjamin Lerer
Assignee: Benjamin Lerer
 Fix For: 3.0


 The current state of the SelectStatement code makes fixing some issues or 
 adding new functionality really hard. It also contains some functionality 
 that we would like to reuse in ModificationStatement but cannot for the 
 moment.
 Ideally I would like to:
 * Perform as much validation as possible on Relations instead of performing 
 it on Restrictions, as it will help with problems like the one of 
 CASSANDRA-6075 (I believe that by clearly separating validation and 
 Restrictions building we will also make the code a lot clearer)
 * Provide a way to easily merge restrictions on the same columns, as needed 
 for CASSANDRA-7016
 * Have a preparation logic (validation + pre-processing) that we can easily 
 reuse for Delete statements (CASSANDRA-6237)
 * Make the code much easier to read and safer to modify.





[jira] [Commented] (CASSANDRA-7981) Refactor SelectStatement

2014-11-17 Thread Benjamin Lerer (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7981?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14214473#comment-14214473
 ] 

Benjamin Lerer commented on CASSANDRA-7981:
---

Forgot to mention one important thing:
the refactoring breaks IN ordering. The results are now returned in the 
natural order, not in the order specified in the IN condition. 
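
For example (hypothetical {{ks.users}} table, illustration only):
{code}
// Hedged sketch (Java Driver): after the refactoring the rows no longer
// follow the IN list order (3, 1, 2) but the server's natural order.
ResultSet rs = session.execute("SELECT id FROM ks.users WHERE id IN (3, 1, 2)");
for (Row row : rs)
    System.out.println(row.getInt("id"));
{code}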

 Refactor SelectStatement
 

 Key: CASSANDRA-7981
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7981
 Project: Cassandra
  Issue Type: Bug
Reporter: Benjamin Lerer
Assignee: Benjamin Lerer
 Fix For: 3.0







[jira] [Commented] (CASSANDRA-6952) Cannot bind variables to USE statements

2014-11-17 Thread Aleksey Yeschenko (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6952?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14214483#comment-14214483
 ] 

Aleksey Yeschenko commented on CASSANDRA-6952:
--

FWIW I still think it's a bad idea, consistency-wise, to implement this, and we 
shouldn't do so.

 Cannot bind variables to USE statements
 ---

 Key: CASSANDRA-6952
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6952
 Project: Cassandra
  Issue Type: New Feature
  Components: Core
Reporter: Matt Stump
Assignee: Benjamin Lerer
Priority: Minor
  Labels: cql3






[jira] [Created] (CASSANDRA-8325) Cassandra 2.1.x fails to start on FreeBSD (JVM crash)

2014-11-17 Thread Leonid Shalupov (JIRA)
Leonid Shalupov created CASSANDRA-8325:
--

 Summary: Cassandra 2.1.x fails to start on FreeBSD (JVM crash)
 Key: CASSANDRA-8325
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8325
 Project: Cassandra
  Issue Type: Bug
 Environment: FreeBSD 10.0 with openjdk version 1.7.0_71, 64-Bit 
Server VM
Reporter: Leonid Shalupov
 Attachments: hs_err_pid1856.log

See attached error file after JVM crash

{quote}
FreeBSD xxx.intellij.net 10.0-RELEASE FreeBSD 10.0-RELEASE #0 r260789: Thu Jan 
16 22:34:59 UTC 2014 r...@snap.freebsd.org:/usr/obj/usr/src/sys/GENERIC  
amd64
{quote}

{quote}
 % java -version
openjdk version "1.7.0_71"
OpenJDK Runtime Environment (build 1.7.0_71-b14)
OpenJDK 64-Bit Server VM (build 24.71-b01, mixed mode)
{quote}







[jira] [Commented] (CASSANDRA-8325) Cassandra 2.1.x fails to start on FreeBSD (JVM crash)

2014-11-17 Thread Leonid Shalupov (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8325?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14214527#comment-14214527
 ] 

Leonid Shalupov commented on CASSANDRA-8325:


Related thread in FreeBSD mailing list: 
http://lists.freebsd.org/pipermail/freebsd-stable/2014-October/080834.html

{quote}
Looks like it might be a Cassandra bug, as it's calling 
sun.misc.Unsafe.getByte(), which can crash the JVM with bad addresses.
{quote}
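
For reference, a minimal stand-alone sketch (not from the ticket) of how an 
unchecked {{sun.misc.Unsafe.getByte()}} call can take down the JVM instead of 
throwing:
{code}
import java.lang.reflect.Field;
import sun.misc.Unsafe;

public class UnsafeCrash
{
    public static void main(String[] args) throws Exception
    {
        Field f = Unsafe.class.getDeclaredField("theUnsafe");
        f.setAccessible(true);
        Unsafe unsafe = (Unsafe) f.get(null);
        // Dereferencing an arbitrary native address typically ends in SIGSEGV,
        // producing an hs_err_pid*.log like the one attached to this issue.
        unsafe.getByte(1L);
    }
}
{code}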

 Cassandra 2.1.x fails to start on FreeBSD (JVM crash)
 -

 Key: CASSANDRA-8325
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8325
 Project: Cassandra
  Issue Type: Bug
 Environment: FreeBSD 10.0 with openjdk version 1.7.0_71, 64-Bit 
 Server VM
Reporter: Leonid Shalupov
 Attachments: hs_err_pid1856.log







[jira] [Updated] (CASSANDRA-8325) Cassandra 2.1.x fails to start on FreeBSD (JVM crash)

2014-11-17 Thread Leonid Shalupov (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8325?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Leonid Shalupov updated CASSANDRA-8325:
---
Attachment: system.log

 Cassandra 2.1.x fails to start on FreeBSD (JVM crash)
 -

 Key: CASSANDRA-8325
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8325
 Project: Cassandra
  Issue Type: Bug
 Environment: FreeBSD 10.0 with openjdk version 1.7.0_71, 64-Bit 
 Server VM
Reporter: Leonid Shalupov
 Attachments: hs_err_pid1856.log, system.log







[jira] [Comment Edited] (CASSANDRA-8325) Cassandra 2.1.x fails to start on FreeBSD (JVM crash)

2014-11-17 Thread Leonid Shalupov (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8325?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14214543#comment-14214543
 ] 

Leonid Shalupov edited comment on CASSANDRA-8325 at 11/17/14 11:09 AM:
---

STR:

1. Get FreeBSD 10.0, install openjdk via pkg install openjdk 7
2. Download and unpack apache-cassandra-2.1.2-bin.tar.gz
3. Run ./cassandra in apache-cassandra-2.1.2/bin
4. In a couple of seconds Cassandra will crash


was (Author: shalupov):
STR:

1. Get FreeBSD 10.1, install openjdk via pkg install openjdk 7
2. Download and unpack apache-cassandra-2.1.2-bin.tar.gz
3. Run ./cassandra in apache-cassandra-2.1.2/bin
4. In a couple of seconds Cassandra will crash

 Cassandra 2.1.x fails to start on FreeBSD (JVM crash)
 -

 Key: CASSANDRA-8325
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8325
 Project: Cassandra
  Issue Type: Bug
 Environment: FreeBSD 10.0 with openjdk version 1.7.0_71, 64-Bit 
 Server VM
Reporter: Leonid Shalupov
 Attachments: hs_err_pid1856.log, system.log







[jira] [Commented] (CASSANDRA-8325) Cassandra 2.1.x fails to start on FreeBSD (JVM crash)

2014-11-17 Thread Leonid Shalupov (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8325?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14214543#comment-14214543
 ] 

Leonid Shalupov commented on CASSANDRA-8325:


STR:

1. Get FreeBSD 10.1, install openjdk via pkg install openjdk 7
2. Download and unpack apache-cassandra-2.1.2-bin.tar.gz
3. Run ./cassandra in apache-cassandra-2.1.2/bin
4. In a couple of seconds Cassandra will crash

 Cassandra 2.1.x fails to start on FreeBSD (JVM crash)
 -

 Key: CASSANDRA-8325
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8325
 Project: Cassandra
  Issue Type: Bug
 Environment: FreeBSD 10.0 with openjdk version 1.7.0_71, 64-Bit 
 Server VM
Reporter: Leonid Shalupov
 Attachments: hs_err_pid1856.log, system.log







[jira] [Updated] (CASSANDRA-7563) UserType, TupleType and collections in UDFs

2014-11-17 Thread Robert Stupp (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-7563?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Stupp updated CASSANDRA-7563:

Attachment: 7563v4.txt

 UserType, TupleType and collections in UDFs
 ---

 Key: CASSANDRA-7563
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7563
 Project: Cassandra
  Issue Type: Bug
Reporter: Robert Stupp
Assignee: Robert Stupp
 Fix For: 3.0

 Attachments: 7563-7740.txt, 7563.txt, 7563v2.txt, 7563v3.txt, 
 7563v4.txt


 * is the Java Driver required as a dependency?
 * is it possible to extract parts of the Java Driver for UDT/TT/coll support?
 * CQL {{DROP TYPE}} must check UDFs
 * must check keyspace access permissions (if those exist)





[jira] [Commented] (CASSANDRA-7563) UserType, TupleType and collections in UDFs

2014-11-17 Thread Robert Stupp (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7563?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14214609#comment-14214609
 ] 

Robert Stupp commented on CASSANDRA-7563:
-

Attached v4 of the patch.

The protocol version stuff got a bit bigger than I expected, but the new tests 
in UFTest pass. They test execution using {{executeInternal}} and via the Java 
Driver with protocol versions 2 and 3.

Calling a UDF with a null value in a collection does not work - the Java 
Driver does not support that. Added a test for it (but with an {{@Ignore}} 
annotation).

I tried to make the new tests a bit more readable - I hope they look better 
now.

Also added support in CQLTester to create UDFs (createFunction, 
createFunctionOverload).
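
For illustration, a rough sketch of how such a helper might be used (the 
signature and function body here are assumptions; the actual API added by the 
patch may differ):
{code}
// Assumed helper: createFunction(keyspace, argTypes, createStatement) returns
// the generated function name, with %s placeholders filled in by CQLTester.
String fName = createFunction(KEYSPACE, "list<int>",
                              "CREATE FUNCTION %s (lst list<int>) " +
                              "RETURNS int LANGUAGE java " +
                              "AS 'return lst == null ? 0 : Integer.valueOf(lst.size());'");
execute("SELECT " + fName + "(some_list) FROM %s");
{code}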

 UserType, TupleType and collections in UDFs
 ---

 Key: CASSANDRA-7563
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7563
 Project: Cassandra
  Issue Type: Bug
Reporter: Robert Stupp
Assignee: Robert Stupp
 Fix For: 3.0

 Attachments: 7563-7740.txt, 7563.txt, 7563v2.txt, 7563v3.txt, 
 7563v4.txt







[jira] [Resolved] (CASSANDRA-8317) ExtendedFilter countCQL3Rows should be exposed to isCQLCount()

2014-11-17 Thread Sylvain Lebresne (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8317?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sylvain Lebresne resolved CASSANDRA-8317.
-
Resolution: Invalid

You're misunderstanding what {{countCQL3Rows}} is. It does not indicate that 
it's a count operation. It's only a flag that tells the storage engine whether 
it should group cells when counting them for the sake of the query limit. 
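
Roughly (an illustrative fragment with hypothetical types, not the real 
storage-engine code):
{code}
// The flag only changes the unit counted against the query limit.
int counted = 0;
for (Row row : partition)
{
    // true: one unit per CQL row; false: one unit per cell (Thrift-style)
    counted += countCQL3Rows ? 1 : row.cellCount();
    if (counted >= limit)
        break;
}
{code}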

 ExtendedFilter countCQL3Rows should be exposed to isCQLCount()
 --

 Key: CASSANDRA-8317
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8317
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: Jacques-Henri Berthemet
Priority: Minor
   Original Estimate: 1h
  Remaining Estimate: 1h

 ExtendedFilter countCQL3Rows should be exposed as isCQLCount(). The goal is 
 that a SecondaryIndexSearcher implementation knows that it just needs to 
 count rows, not load them.
 Something like:
 {code}
 public boolean isCQLCount()
 {
     return countCQL3Rows;
 }
 {code}





[jira] [Commented] (CASSANDRA-8317) ExtendedFilter countCQL3Rows should be exposed to isCQLCount()

2014-11-17 Thread Jacques-Henri Berthemet (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8317?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14214630#comment-14214630
 ] 

Jacques-Henri Berthemet commented on CASSANDRA-8317:


I see, thanks for the info.

 ExtendedFilter countCQL3Rows should be exposed to isCQLCount()
 --

 Key: CASSANDRA-8317
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8317
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: Jacques-Henri Berthemet
Priority: Minor
   Original Estimate: 1h
  Remaining Estimate: 1h






[jira] [Commented] (CASSANDRA-8280) Cassandra crashing on inserting data over 64K into indexed strings

2014-11-17 Thread Sam Tunnicliffe (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8280?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14214634#comment-14214634
 ] 

Sam Tunnicliffe commented on CASSANDRA-8280:


There is already validation for this on the Thrift path (from CASSANDRA-3057), 
but it appears to have been overlooked for CQL. 
Attaching separate patches for 2.0 and 2.1, mainly because CQLTester in 2.1 
makes the unit test simpler but not easily mergeable.
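
For context, a hedged sketch of the kind of check involved (not the actual 
patch): indexed values are serialized with a 16-bit length, so oversized 
values have to be rejected at request validation time.
{code}
// Sketch only (assumes Cassandra's FBUtilities and InvalidRequestException):
// reject any indexed value whose length cannot fit in an unsigned short.
static void validateIndexedValue(ByteBuffer value) throws InvalidRequestException
{
    if (value.remaining() > FBUtilities.MAX_UNSIGNED_SHORT)
        throw new InvalidRequestException(
            String.format("Index expression values may not be larger than %d bytes",
                          FBUtilities.MAX_UNSIGNED_SHORT));
}
{code}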

 Cassandra crashing on inserting data over 64K into indexed strings
 --

 Key: CASSANDRA-8280
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8280
 Project: Cassandra
  Issue Type: Bug
  Components: Core
 Environment: Debian 7, Cassandra 2.1.1, java 1.7.0_60
Reporter: Cristian Marinescu
Assignee: Sam Tunnicliffe
Priority: Critical
 Fix For: 2.1.3


 An attempt to insert 65536 bytes into a field that is a primary index throws 
 (correctly?) the cassandra.InvalidRequest exception. However, inserting the 
 same data *into an indexed field that is not a primary index* works just 
 fine. Cassandra will then crash on the next commit and never recover, so I 
 rated this as Critical as it can be used for DoS attacks.
 Reproduce: see the snippet below:
 {code}
 import uuid
 from cassandra import ConsistencyLevel
 from cassandra import InvalidRequest
 from cassandra.cluster import Cluster
 from cassandra.auth import PlainTextAuthProvider
 from cassandra.policies import ConstantReconnectionPolicy
 from cassandra.cqltypes import UUID

 # DROP KEYSPACE IF EXISTS cs;
 # CREATE KEYSPACE cs WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 1};
 # USE cs;
 # CREATE TABLE test3 (name text, value uuid, sentinel text, PRIMARY KEY (name));
 # CREATE INDEX test3_sentinels ON test3(sentinel);

 class CassandraDemo(object):

     def __init__(self):
         ips = ["127.0.0.1"]
         ap = PlainTextAuthProvider(username="cs", password="cs")
         reconnection_policy = ConstantReconnectionPolicy(20.0, max_attempts=100)
         cluster = Cluster(ips, auth_provider=ap, protocol_version=3,
                           reconnection_policy=reconnection_policy)
         self.session = cluster.connect("cs")

     def exec_query(self, query, args):
         prepared_statement = self.session.prepare(query)
         prepared_statement.consistency_level = ConsistencyLevel.LOCAL_QUORUM
         self.session.execute(prepared_statement, args)

     def bug(self):
         k1 = UUID(str(uuid.uuid4()))
         long_string = "X" * 65536
         query = "INSERT INTO test3 (name, value, sentinel) VALUES (?, ?, ?)"
         args = ("foo", k1, long_string)

         self.exec_query(query, args)
         self.session.execute("DROP KEYSPACE IF EXISTS cs_test", timeout=30)
         self.session.execute("CREATE KEYSPACE cs_test WITH replication = "
                              "{'class': 'SimpleStrategy', 'replication_factor': 1}")

 c = CassandraDemo()
 # first run
 c.bug()
 # second run, Cassandra crashes with java.lang.AssertionError
 c.bug()
 {code}
 And here is the cassandra log:
 {code}
 ERROR [MemtableFlushWriter:3] 2014-11-06 16:44:49,263 CassandraDaemon.java:153 - Exception in thread Thread[MemtableFlushWriter:3,5,main]
 java.lang.AssertionError: 65536
 at org.apache.cassandra.utils.ByteBufferUtil.writeWithShortLength(ByteBufferUtil.java:290) ~[apache-cassandra-2.1.1.jar:2.1.1]
 at org.apache.cassandra.db.ColumnIndex$Builder.maybeWriteRowHeader(ColumnIndex.java:214) ~[apache-cassandra-2.1.1.jar:2.1.1]
 at org.apache.cassandra.db.ColumnIndex$Builder.add(ColumnIndex.java:201) ~[apache-cassandra-2.1.1.jar:2.1.1]
 at org.apache.cassandra.db.ColumnIndex$Builder.build(ColumnIndex.java:142) ~[apache-cassandra-2.1.1.jar:2.1.1]
 at org.apache.cassandra.io.sstable.SSTableWriter.rawAppend(SSTableWriter.java:233) ~[apache-cassandra-2.1.1.jar:2.1.1]
 at org.apache.cassandra.io.sstable.SSTableWriter.append(SSTableWriter.java:218) ~[apache-cassandra-2.1.1.jar:2.1.1]
 at org.apache.cassandra.db.Memtable$FlushRunnable.writeSortedContents(Memtable.java:354) ~[apache-cassandra-2.1.1.jar:2.1.1]
 at org.apache.cassandra.db.Memtable$FlushRunnable.runWith(Memtable.java:312) ~[apache-cassandra-2.1.1.jar:2.1.1]
 at org.apache.cassandra.io.util.DiskAwareRunnable.runMayThrow(DiskAwareRunnable.java:48) ~[apache-cassandra-2.1.1.jar:2.1.1]
 at org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28) ~[apache-cassandra-2.1.1.jar:2.1.1]
 at com.google.common.util.concurrent.MoreExecutors$SameThreadExecutorService.execute(MoreExecutors.java:297) ~[guava-16.0.jar:na]
 at 

[jira] [Updated] (CASSANDRA-8280) Cassandra crashing on inserting data over 64K into indexed strings

2014-11-17 Thread Sam Tunnicliffe (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8280?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sam Tunnicliffe updated CASSANDRA-8280:
---
Attachment: 8280-2.1.txt
8280-2.0.txt

 Cassandra crashing on inserting data over 64K into indexed strings
 --

 Key: CASSANDRA-8280
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8280
 Project: Cassandra
  Issue Type: Bug
  Components: Core
 Environment: Debian 7, Cassandra 2.1.1, java 1.7.0_60
Reporter: Cristian Marinescu
Assignee: Sam Tunnicliffe
Priority: Critical
 Fix For: 2.1.3

 Attachments: 8280-2.0.txt, 8280-2.1.txt



[jira] [Commented] (CASSANDRA-8245) Cassandra nodes periodically die in 2-DC configuration

2014-11-17 Thread Oleg Poleshuk (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8245?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14214638#comment-14214638
 ] 

Oleg Poleshuk commented on CASSANDRA-8245:
--

There are 6 nodes total: 3 per DC. 
The time difference between the 2 DCs is 2 seconds; within one DC it's 1 second.
Anyway, we upgraded to 2.1 and the error is gone.

I would recommend adding additional debug info to FailureDetector; the 
hostname would be much more useful than just "Ignoring interval time". 
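
A minimal sketch of the suggested change (variable names are assumptions, 
based on the 2.0-era debug line in FailureDetector):
{code}
// Before: logger.debug("Ignoring interval time of {}", interval);
// After - include the endpoint so the noisy host can be identified:
logger.debug("Ignoring interval time of {} for {}", interval, ep);
{code}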

 Cassandra nodes periodically die in 2-DC configuration
 --

 Key: CASSANDRA-8245
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8245
 Project: Cassandra
  Issue Type: Bug
  Components: Core
 Environment: Scientific Linux release 6.5
 java version 1.7.0_51
 Cassandra 2.0.9
Reporter: Oleg Poleshuk
Assignee: Brandon Williams
Priority: Minor
 Attachments: stack1.txt, stack2.txt, stack3.txt, stack4.txt, 
 stack5.txt


 We have 2 DCs with 3 nodes in each.
 The second DC periodically has 1-2 nodes down.
 It looks like they lose connectivity with the other nodes, and then Gossiper 
 starts to accumulate tasks until Cassandra dies with an OOM.
 WARN [MemoryMeter:1] 2014-08-12 14:34:59,803 Memtable.java (line 470) setting 
 live ratio to maximum of 64.0 instead of Infinity
  WARN [GossipTasks:1] 2014-08-12 14:44:34,866 Gossiper.java (line 637) Gossip 
 stage has 1 pending tasks; skipping status check (no nodes will be marked 
 down)
  WARN [GossipTasks:1] 2014-08-12 14:44:35,968 Gossiper.java (line 637) Gossip 
 stage has 4 pending tasks; skipping status check (no nodes will be marked 
 down)
  WARN [GossipTasks:1] 2014-08-12 14:44:37,070 Gossiper.java (line 637) Gossip 
 stage has 8 pending tasks; skipping status check (no nodes will be marked 
 down)
  WARN [GossipTasks:1] 2014-08-12 14:44:38,171 Gossiper.java (line 637) Gossip 
 stage has 11 pending tasks; skipping status check (no nodes will be marked 
 down)
 ...
 WARN [GossipTasks:1] 2014-10-06 21:42:51,575 Gossiper.java (line 637) Gossip 
 stage has 1014764 pending tasks; skipping status check (no nodes will be 
 marked down)
  WARN [New I/O worker #13] 2014-10-06 21:54:27,010 Slf4JLogger.java (line 76) 
 Unexpected exception in the selector loop.
 java.lang.OutOfMemoryError: Java heap space
 Also these lines, though I'm not sure they are relevant:
 DEBUG [GossipStage:1] 2014-08-12 11:33:18,801 FailureDetector.java (line 338) 
 Ignoring interval time of 2085963047





[jira] [Updated] (CASSANDRA-8280) Cassandra crashing on inserting data over 64K into indexed strings

2014-11-17 Thread Sam Tunnicliffe (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8280?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sam Tunnicliffe updated CASSANDRA-8280:
---
Attachment: (was: 8280-2.0.txt)

 Cassandra crashing on inserting data over 64K into indexed strings
 --

 Key: CASSANDRA-8280
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8280
 Project: Cassandra
  Issue Type: Bug
  Components: Core
 Environment: Debian 7, Cassandra 2.1.1, java 1.7.0_60
Reporter: Cristian Marinescu
Assignee: Sam Tunnicliffe
Priority: Critical
 Fix For: 2.1.3

 Attachments: 8280-2.1.txt



[jira] [Updated] (CASSANDRA-8280) Cassandra crashing on inserting data over 64K into indexed strings

2014-11-17 Thread Sam Tunnicliffe (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8280?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sam Tunnicliffe updated CASSANDRA-8280:
---
Attachment: 8280-2.1.txt
8280-2.0.txt

 Cassandra crashing on inserting data over 64K into indexed strings
 --

 Key: CASSANDRA-8280
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8280
 Project: Cassandra
  Issue Type: Bug
  Components: Core
 Environment: Debian 7, Cassandra 2.1.1, java 1.7.0_60
Reporter: Cristian Marinescu
Assignee: Sam Tunnicliffe
Priority: Critical
 Fix For: 2.1.3

 Attachments: 8280-2.0.txt, 8280-2.1.txt



[jira] [Updated] (CASSANDRA-8280) Cassandra crashing on inserting data over 64K into indexed strings

2014-11-17 Thread Sam Tunnicliffe (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8280?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sam Tunnicliffe updated CASSANDRA-8280:
---
Attachment: (was: 8280-2.1.txt)

 Cassandra crashing on inserting data over 64K into indexed strings
 --

 Key: CASSANDRA-8280
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8280
 Project: Cassandra
  Issue Type: Bug
  Components: Core
 Environment: Debian 7, Cassandra 2.1.1, java 1.7.0_60
Reporter: Cristian Marinescu
Assignee: Sam Tunnicliffe
Priority: Critical
 Fix For: 2.1.3

 Attachments: 8280-2.0.txt, 8280-2.1.txt


 An attempt to insert 65536 bytes into a field that is the primary key throws (correctly?) the cassandra.InvalidRequest exception. However, inserting the same data *into an indexed field that is not part of the primary key* works just fine. Cassandra then crashes on the next commit and never recovers, so I rated this Critical as it can be used for DoS attacks.
 Reproduce: see the snippet below:
 {code}
import uuid
from cassandra import ConsistencyLevel
from cassandra import InvalidRequest
from cassandra.cluster import Cluster
from cassandra.auth import PlainTextAuthProvider
from cassandra.policies import ConstantReconnectionPolicy
from cassandra.cqltypes import UUID

# DROP KEYSPACE IF EXISTS cs;
# CREATE KEYSPACE cs WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 1};
# USE cs;
# CREATE TABLE test3 (name text, value uuid, sentinel text, PRIMARY KEY (name));
# CREATE INDEX test3_sentinels ON test3(sentinel);

class CassandraDemo(object):

    def __init__(self):
        ips = ["127.0.0.1"]
        ap = PlainTextAuthProvider(username="cs", password="cs")
        reconnection_policy = ConstantReconnectionPolicy(20.0, max_attempts=100)
        cluster = Cluster(ips, auth_provider=ap, protocol_version=3,
                          reconnection_policy=reconnection_policy)
        self.session = cluster.connect("cs")

    def exec_query(self, query, args):
        prepared_statement = self.session.prepare(query)
        prepared_statement.consistency_level = ConsistencyLevel.LOCAL_QUORUM
        self.session.execute(prepared_statement, args)

    def bug(self):
        k1 = UUID(str(uuid.uuid4()))
        long_string = "X" * 65536
        query = "INSERT INTO test3 (name, value, sentinel) VALUES (?, ?, ?);"
        args = ("foo", k1, long_string)

        self.exec_query(query, args)
        self.session.execute("DROP KEYSPACE IF EXISTS cs_test", timeout=30)
        self.session.execute("CREATE KEYSPACE cs_test WITH replication = "
                             "{'class': 'SimpleStrategy', 'replication_factor': 1}")

c = CassandraDemo()
# first run
c.bug()
# second run, Cassandra crashes with java.lang.AssertionError
c.bug()
 {code}
 And here is the cassandra log:
 {code}
ERROR [MemtableFlushWriter:3] 2014-11-06 16:44:49,263 CassandraDaemon.java:153 - Exception in thread Thread[MemtableFlushWriter:3,5,main]
java.lang.AssertionError: 65536
    at org.apache.cassandra.utils.ByteBufferUtil.writeWithShortLength(ByteBufferUtil.java:290) ~[apache-cassandra-2.1.1.jar:2.1.1]
    at org.apache.cassandra.db.ColumnIndex$Builder.maybeWriteRowHeader(ColumnIndex.java:214) ~[apache-cassandra-2.1.1.jar:2.1.1]
    at org.apache.cassandra.db.ColumnIndex$Builder.add(ColumnIndex.java:201) ~[apache-cassandra-2.1.1.jar:2.1.1]
    at org.apache.cassandra.db.ColumnIndex$Builder.build(ColumnIndex.java:142) ~[apache-cassandra-2.1.1.jar:2.1.1]
    at org.apache.cassandra.io.sstable.SSTableWriter.rawAppend(SSTableWriter.java:233) ~[apache-cassandra-2.1.1.jar:2.1.1]
    at org.apache.cassandra.io.sstable.SSTableWriter.append(SSTableWriter.java:218) ~[apache-cassandra-2.1.1.jar:2.1.1]
    at org.apache.cassandra.db.Memtable$FlushRunnable.writeSortedContents(Memtable.java:354) ~[apache-cassandra-2.1.1.jar:2.1.1]
    at org.apache.cassandra.db.Memtable$FlushRunnable.runWith(Memtable.java:312) ~[apache-cassandra-2.1.1.jar:2.1.1]
    at org.apache.cassandra.io.util.DiskAwareRunnable.runMayThrow(DiskAwareRunnable.java:48) ~[apache-cassandra-2.1.1.jar:2.1.1]
    at org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28) ~[apache-cassandra-2.1.1.jar:2.1.1]
    at com.google.common.util.concurrent.MoreExecutors$SameThreadExecutorService.execute(MoreExecutors.java:297) ~[guava-16.0.jar:na]
    at org.apache.cassandra.db.ColumnFamilyStore$Flush.run(ColumnFamilyStore.java:1053) ~[apache-cassandra-2.1.1.jar:2.1.1]
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) ~[na:1.7.0_60]
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
{code}
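
The assertion message is the key detail: {{ByteBufferUtil.writeWithShortLength()}} prefixes a value with a two-byte length when serializing secondary index keys, so the largest value it can write is 65535 bytes, and the oversized sentinel only blows up at memtable flush, after the write was already acknowledged. Until server-side validation covers indexed columns, a client-side guard is a practical workaround; a minimal sketch (the helper and constant names are mine, not driver API):

{code}
MAX_SHORT_LENGTH = 65535  # 2**16 - 1, the writeWithShortLength() ceiling

def check_indexed_value(value):
    """Reject values too large to be serialized as a secondary index key."""
    data = value if isinstance(value, bytes) else value.encode("utf-8")
    if len(data) > MAX_SHORT_LENGTH:
        raise ValueError("indexed value is %d bytes, over the %d-byte limit"
                         % (len(data), MAX_SHORT_LENGTH))
    return value
{code}

Calling {{check_indexed_value(long_string)}} before {{exec_query()}} in the snippet above would turn the node-killing insert into a client-side error.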

[jira] [Updated] (CASSANDRA-8280) Cassandra crashing on inserting data over 64K into indexed strings

2014-11-17 Thread Sam Tunnicliffe (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8280?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sam Tunnicliffe updated CASSANDRA-8280:
---
Attachment: (was: 8280-2.1.txt)


[jira] [Updated] (CASSANDRA-8280) Cassandra crashing on inserting data over 64K into indexed strings

2014-11-17 Thread Sam Tunnicliffe (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8280?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sam Tunnicliffe updated CASSANDRA-8280:
---
Attachment: (was: 8280-2.0.txt)


[jira] [Updated] (CASSANDRA-8280) Cassandra crashing on inserting data over 64K into indexed strings

2014-11-17 Thread Sam Tunnicliffe (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8280?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sam Tunnicliffe updated CASSANDRA-8280:
---
Attachment: 8280-2.1.txt
8280-2.0.txt


[jira] [Commented] (CASSANDRA-8326) Cqlsh cannot connect in trunk

2014-11-17 Thread Philip Thompson (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8326?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14214814#comment-14214814
 ] 

Philip Thompson commented on CASSANDRA-8326:


See https://datastax-oss.atlassian.net/browse/PYTHON-185

 Cqlsh cannot connect in trunk
 -

 Key: CASSANDRA-8326
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8326
 Project: Cassandra
  Issue Type: Bug
Reporter: Philip Thompson
Assignee: Tyler Hobbs
 Fix For: 3.0


 Cqlsh errors with {code}('Unable to connect to any servers', {'127.0.0.1': KeyError('column_aliases',)}){code}. To fix, we need to pull in a newer version of the python driver that doesn't assume certain metadata exists. This was broken by the cassandra commit {{cbbc1191ce1656a92354a4fa3859626cb10083e5}}. A fix should be in version 2.1.3 of the driver.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (CASSANDRA-8326) Cqlsh cannot connect in trunk

2014-11-17 Thread Philip Thompson (JIRA)
Philip Thompson created CASSANDRA-8326:
--

 Summary: Cqlsh cannot connect in trunk
 Key: CASSANDRA-8326
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8326
 Project: Cassandra
  Issue Type: Bug
Reporter: Philip Thompson
Assignee: Tyler Hobbs
 Fix For: 3.0


Cqlsh errors with {code}('Unable to connect to any servers', {'127.0.0.1': KeyError('column_aliases',)}){code}. To fix, we need to pull in a newer version of the python driver that doesn't assume certain metadata exists. This was broken by the cassandra commit {{cbbc1191ce1656a92354a4fa3859626cb10083e5}}. A fix should be in version 2.1.3 of the driver.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
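
The driver-side fix is presumably just defensive metadata parsing; something along these lines, where the function name and row shape are illustrative rather than the driver's actual code:

{code}
import json

def parse_column_aliases(row):
    """Tolerate schema rows that no longer carry 'column_aliases'."""
    raw = row.get("column_aliases")  # absent after the schema table refactor
    return json.loads(raw) if raw else []
{code}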


[jira] [Commented] (CASSANDRA-8326) Cqlsh cannot connect in trunk

2014-11-17 Thread Robert Stupp (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8326?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14214821#comment-14214821
 ] 

Robert Stupp commented on CASSANDRA-8326:
-

Issue introduced by CASSANDRA-8261 (the currently ongoing schema table refactoring).

 Cqlsh cannot connect in trunk
 -

 Key: CASSANDRA-8326
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8326
 Project: Cassandra
  Issue Type: Bug
Reporter: Philip Thompson
Assignee: Tyler Hobbs
 Fix For: 3.0


 Cqlsh errors with {code}('Unable to connect to any servers', {'127.0.0.1': KeyError('column_aliases',)}){code}. To fix, we need to pull in a newer version of the python driver that doesn't assume certain metadata exists. This was broken by the cassandra commit {{cbbc1191ce1656a92354a4fa3859626cb10083e5}}. A fix should be in version 2.1.3 of the driver.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8228) Log malfunctioning host on prepareForRepair

2014-11-17 Thread Yuki Morishita (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8228?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14214822#comment-14214822
 ] 

Yuki Morishita commented on CASSANDRA-8228:
---

There can be multiple nodes failing, so you should use a concurrent Set.
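
Conceptually (the real change is Java inside ActiveRepairService; this Python sketch with invented names just shows the shape of it):

{code}
import threading

class PrepareCallback(object):
    """Collect every endpoint that failed to ack the prepare message."""

    def __init__(self):
        self._lock = threading.Lock()
        self.failed_nodes = set()

    def on_failure(self, endpoint):
        # response callbacks may fire on several threads at once,
        # hence the guarded shared set
        with self._lock:
            self.failed_nodes.add(endpoint)

    def check(self):
        if self.failed_nodes:
            raise RuntimeError("Did not get positive replies from all endpoints."
                               " Failed: %s" % ", ".join(sorted(self.failed_nodes)))
{code}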

 Log malfunctioning host on prepareForRepair
 ---

 Key: CASSANDRA-8228
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8228
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: Juho Mäkinen
Assignee: Rajanarayanan Thottuvaikkatumana
Priority: Trivial
  Labels: lhf
 Attachments: cassandra-trunk-8228.txt


 Repair startup goes through ActiveRepairService.prepareForRepair(), which might result in a "Repair failed with error Did not get positive replies from all endpoints." error, but there is no other logging related to this error.
 It seems that it would be trivial to modify prepareForRepair() to log the host address which caused the error, and thus ease the debugging effort.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-8228) Log malfunctioning host on prepareForRepair

2014-11-17 Thread Rajanarayanan Thottuvaikkatumana (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8228?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajanarayanan Thottuvaikkatumana updated CASSANDRA-8228:

Attachment: (was: cassandra-trunk-8228.txt)

 Log malfunctioning host on prepareForRepair
 ---

 Key: CASSANDRA-8228
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8228
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: Juho Mäkinen
Assignee: Rajanarayanan Thottuvaikkatumana
Priority: Trivial
  Labels: lhf

 Repair startup goes through ActiveRepairService.prepareForRepair(), which might result in a "Repair failed with error Did not get positive replies from all endpoints." error, but there is no other logging related to this error.
 It seems that it would be trivial to modify prepareForRepair() to log the host address which caused the error, and thus ease the debugging effort.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-8228) Log malfunctioning host on prepareForRepair

2014-11-17 Thread Rajanarayanan Thottuvaikkatumana (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8228?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajanarayanan Thottuvaikkatumana updated CASSANDRA-8228:

Attachment: cassandra-trunk-8228.txt

Latest patch for CASSANDRA-8228 against trunk.

 Log malfunctioning host on prepareForRepair
 ---

 Key: CASSANDRA-8228
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8228
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: Juho Mäkinen
Assignee: Rajanarayanan Thottuvaikkatumana
Priority: Trivial
  Labels: lhf
 Attachments: cassandra-trunk-8228.txt


 Repair startup goes through ActiveRepairService.prepareForRepair(), which might result in a "Repair failed with error Did not get positive replies from all endpoints." error, but there is no other logging related to this error.
 It seems that it would be trivial to modify prepareForRepair() to log the host address which caused the error, and thus ease the debugging effort.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8228) Log malfunctioning host on prepareForRepair

2014-11-17 Thread Rajanarayanan Thottuvaikkatumana (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8228?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14214907#comment-14214907
 ] 

Rajanarayanan Thottuvaikkatumana commented on CASSANDRA-8228:
-

[~yukim], the latest patch with the changes you suggested has been uploaded. Please have a look at it. Thanks

 Log malfunctioning host on prepareForRepair
 ---

 Key: CASSANDRA-8228
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8228
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: Juho Mäkinen
Assignee: Rajanarayanan Thottuvaikkatumana
Priority: Trivial
  Labels: lhf
 Attachments: cassandra-trunk-8228.txt


 Repair startup goes through ActiveRepairService.prepareForRepair(), which might result in a "Repair failed with error Did not get positive replies from all endpoints." error, but there is no other logging related to this error.
 It seems that it would be trivial to modify prepareForRepair() to log the host address which caused the error, and thus ease the debugging effort.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8286) Regression in ORDER BY

2014-11-17 Thread Benjamin Lerer (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8286?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14214917#comment-14214917
 ] 

Benjamin Lerer commented on CASSANDRA-8286:
---

Just a few nits:
* The patch for 2.0 does not compile, as ModificationStatement needs to be modified
* I can understand why you used {{processSelection}} and {{selectionsNeedProcessing}}, but as everything else is about functions I find it a bit confusing
* Renaming {{SelectionWithFunctions}} to {{SelectionWithProcessing}} only in version 2.1 is not really consistent
* It would be good if you could also add some unit tests for the ordering behavior with functions (see the sketch below)
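
For the last nit, a dtest-style sketch of what such a test could look like (table layout and assertions are assumed, not taken from the actual suite; {{blobAsInt}}/{{intAsBlob}} stand in for any function applied to the ordered column):

{code}
def order_by_with_function_test(cursor):
    cursor.execute("CREATE TABLE t (k text, c int, v text, PRIMARY KEY (k, c))")
    for i in range(3):
        cursor.execute("INSERT INTO t (k, c, v) VALUES ('key1', %d, 'v')" % i)
    # ordering must keep working when the select clause wraps the column
    rows = list(cursor.execute(
        "SELECT blobAsInt(intAsBlob(c)) FROM t WHERE k = 'key1' ORDER BY c"))
    assert [r[0] for r in rows] == [0, 1, 2]
{code}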

 Regression in ORDER BY
 --

 Key: CASSANDRA-8286
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8286
 Project: Cassandra
  Issue Type: Bug
Reporter: Philip Thompson
Assignee: Tyler Hobbs
  Labels: cql
 Fix For: 2.0.12, 2.1.3

 Attachments: 8286-2.0.txt, 8286-2.1.txt, 8286-trunk.txt


 The dtest {{cql_tests.py:TestCQL.order_by_multikey_test}} is now failing in 2.0:
 http://cassci.datastax.com/job/cassandra-2.0_dtest/lastCompletedBuild/testReport/cql_tests/TestCQL/order_by_multikey_test/history/
 This failure began at the commit for CASSANDRA-8178.
 The error message reads
 {code}
 ======================================================================
 ERROR: order_by_multikey_test (cql_tests.TestCQL)
 ----------------------------------------------------------------------
 Traceback (most recent call last):
   File "/Users/philipthompson/cstar/cassandra-dtest/dtest.py", line 524, in wrapped
     f(obj)
   File "/Users/philipthompson/cstar/cassandra-dtest/cql_tests.py", line 1807, in order_by_multikey_test
     res = cursor.execute("SELECT col1 FROM test WHERE my_id in('key1', 'key2', 'key3') ORDER BY col1;")
   File "/Library/Python/2.7/site-packages/cassandra/cluster.py", line 1281, in execute
     result = future.result(timeout)
   File "/Library/Python/2.7/site-packages/cassandra/cluster.py", line 2771, in result
     raise self._final_exception
 InvalidRequest: code=2200 [Invalid query] message="ORDER BY could not be used on columns missing in select clause."
 {code}
 and occurs at the query {{SELECT col1 FROM test WHERE my_id in('key1', 'key2', 'key3') ORDER BY col1;}}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (CASSANDRA-8327) snapshots taken before repair are not cleared if snapshot fails

2014-11-17 Thread MASSIMO CELLI (JIRA)
MASSIMO CELLI created CASSANDRA-8327:


 Summary: snapshots taken before repair are not cleared if snapshot 
fails
 Key: CASSANDRA-8327
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8327
 Project: Cassandra
  Issue Type: Bug
  Components: Core
 Environment: cassandra 2.0.10.71
Reporter: MASSIMO CELLI
Priority: Minor
 Fix For: 2.0.12


While running the repair service, the following directory was created for the snapshots:
drwxr-xr-x 2 cassandra cassandra 36864 Nov 5 07:47 073d16e0-64c0-11e4-8e9a-7b3d4674c508

but the system.log reports the following error, which suggests the snapshot failed:
ERROR [RMI TCP Connection(3251)-10.150.27.78] 2014-11-05 07:47:55,734 StorageService.java (line 2599) Repair session 073d16e0-64c0-11e4-8e9a-7b3d4674c508 for range (7530018576963469312,7566047373982433280] failed with error java.io.IOException: Failed during snapshot creation.
java.util.concurrent.ExecutionException: java.lang.RuntimeException: java.io.IOException: Failed during snapshot creation.
ERROR [AntiEntropySessions:3312] 2014-11-05 07:47:55,731 RepairSession.java (line 288) [repair #073d16e0-64c0-11e4-8e9a-7b3d4674c508] session completed with the following error java.io.IOException: Failed during snapshot creation.

The problem is that the directories for snapshots that fail are just left on disk and don't get cleaned up. They must be removed manually, which is not ideal.




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-8327) snapshots taken before repair are not cleared if snapshot fails

2014-11-17 Thread MASSIMO CELLI (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8327?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

MASSIMO CELLI updated CASSANDRA-8327:
-
Fix Version/s: (was: 2.0.12)

 snapshots taken before repair are not cleared if snapshot fails
 ---

 Key: CASSANDRA-8327
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8327
 Project: Cassandra
  Issue Type: Bug
  Components: Core
 Environment: cassandra 2.0.10.71
Reporter: MASSIMO CELLI
Priority: Minor

 While running the repair service, the following directory was created for the snapshots:
 drwxr-xr-x 2 cassandra cassandra 36864 Nov 5 07:47 073d16e0-64c0-11e4-8e9a-7b3d4674c508
 but the system.log reports the following error, which suggests the snapshot failed:
 ERROR [RMI TCP Connection(3251)-10.150.27.78] 2014-11-05 07:47:55,734 StorageService.java (line 2599) Repair session 073d16e0-64c0-11e4-8e9a-7b3d4674c508 for range (7530018576963469312,7566047373982433280] failed with error java.io.IOException: Failed during snapshot creation.
 java.util.concurrent.ExecutionException: java.lang.RuntimeException: java.io.IOException: Failed during snapshot creation.
 ERROR [AntiEntropySessions:3312] 2014-11-05 07:47:55,731 RepairSession.java (line 288) [repair #073d16e0-64c0-11e4-8e9a-7b3d4674c508] session completed with the following error java.io.IOException: Failed during snapshot creation.
 The problem is that the directories for snapshots that fail are just left on disk and don't get cleaned up. They must be removed manually, which is not ideal.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8327) snapshots taken before repair are not cleared if snapshot fails

2014-11-17 Thread Nick Bailey (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8327?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14214947#comment-14214947
 ] 

Nick Bailey commented on CASSANDRA-8327:


I think that ideally c* would use a specific directory for snapshots created by repair. Beyond cleaning up when a snapshot fails for some reason, c* can also be restarted while a repair is ongoing and leave these directories behind. With a specific directory, the c* process could simply clean it up when the process starts (see the sketch below).
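
A sketch of that startup sweep, assuming repair snapshots carried a recognizable prefix (both the prefix and the directory layout here are assumptions):

{code}
import os
import shutil

REPAIR_SNAPSHOT_PREFIX = "repair-"  # hypothetical naming convention

def purge_stale_repair_snapshots(data_dir):
    """Delete leftover repair snapshots under every table's snapshots/ dir."""
    for dirpath, dirnames, _ in os.walk(data_dir):
        if os.path.basename(dirpath) != "snapshots":
            continue
        for name in list(dirnames):
            if name.startswith(REPAIR_SNAPSHOT_PREFIX):
                shutil.rmtree(os.path.join(dirpath, name))
                dirnames.remove(name)  # don't descend into what we just deleted
{code}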

 snapshots taken before repair are not cleared if snapshot fails
 ---

 Key: CASSANDRA-8327
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8327
 Project: Cassandra
  Issue Type: Bug
  Components: Core
 Environment: cassandra 2.0.10.71
Reporter: MASSIMO CELLI
Priority: Minor

 While running the repair service, the following directory was created for the snapshots:
 drwxr-xr-x 2 cassandra cassandra 36864 Nov 5 07:47 073d16e0-64c0-11e4-8e9a-7b3d4674c508
 but the system.log reports the following error, which suggests the snapshot failed:
 ERROR [RMI TCP Connection(3251)-10.150.27.78] 2014-11-05 07:47:55,734 StorageService.java (line 2599) Repair session 073d16e0-64c0-11e4-8e9a-7b3d4674c508 for range (7530018576963469312,7566047373982433280] failed with error java.io.IOException: Failed during snapshot creation.
 java.util.concurrent.ExecutionException: java.lang.RuntimeException: java.io.IOException: Failed during snapshot creation.
 ERROR [AntiEntropySessions:3312] 2014-11-05 07:47:55,731 RepairSession.java (line 288) [repair #073d16e0-64c0-11e4-8e9a-7b3d4674c508] session completed with the following error java.io.IOException: Failed during snapshot creation.
 The problem is that the directories for snapshots that fail are just left on disk and don't get cleaned up. They must be removed manually, which is not ideal.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8248) Possible memory leak

2014-11-17 Thread Shawn Kumar (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8248?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14214961#comment-14214961
 ] 

Shawn Kumar commented on CASSANDRA-8248:


Alex, have you determined what caused the resident set to be larger, and are you still seeing this problem? If there are any further details you can provide (are you carrying out incremental repairs? do compactions have any effect?), they would be much appreciated.
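
One number worth extracting from the maps listing quoted below is how much address space is pinned by mappings of already-deleted sstables, since that is what keeps VIRT inflated after the files are unlinked. A quick helper, assuming the standard Linux /proc maps format shown in the report:

{code}
def deleted_map_bytes(pid):
    """Sum the address space of mappings whose backing file was deleted."""
    total = 0
    with open("/proc/%d/maps" % pid) as maps:
        for line in maps:
            if line.rstrip().endswith("(deleted)"):
                start, end = line.split()[0].split("-")
                total += int(end, 16) - int(start, 16)
    return total
{code}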

 Possible memory leak 
 -

 Key: CASSANDRA-8248
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8248
 Project: Cassandra
  Issue Type: Bug
Reporter: Alexander Sterligov
Assignee: Shawn Kumar
 Attachments: thread_dump


 Sometimes during repair Cassandra starts to consume more memory than expected.
 Total amount of data on the node is about 20GB.
 Size of the data directory is 66GB because of snapshots.
 Top reports: 
 {noformat}
   PID USER  PR  NI  VIRT  RES  SHR S %CPU %MEMTIME+  COMMAND
 15724 loadbase  20   0  493g  55g  44g S   28 44.2   4043:24 java
 {noformat}
 In /proc/15724/maps there are a lot of mappings of deleted files:
 {quote}
 7f63a6102000-7f63a6332000 r--s  08:21 9442763
 /ssd/cassandra/data/iss/feedback_history-d32bc7e048c011e49b989bc3e8a5a440/iss-feedback_history-tmplink-ka-328671-Index.db
  (deleted)
 7f63a6332000-7f63a6562000 r--s  08:21 9442763
 /ssd/cassandra/data/iss/feedback_history-d32bc7e048c011e49b989bc3e8a5a440/iss-feedback_history-tmplink-ka-328671-Index.db
  (deleted)
 7f63a6562000-7f63a6792000 r--s  08:21 9442763
 /ssd/cassandra/data/iss/feedback_history-d32bc7e048c011e49b989bc3e8a5a440/iss-feedback_history-tmplink-ka-328671-Index.db
  (deleted)
 7f63a6792000-7f63a69c2000 r--s  08:21 9442763
 /ssd/cassandra/data/iss/feedback_history-d32bc7e048c011e49b989bc3e8a5a440/iss-feedback_history-tmplink-ka-328671-Index.db
  (deleted)
 7f63a69c2000-7f63a6bf2000 r--s  08:21 9442763
 /ssd/cassandra/data/iss/feedback_history-d32bc7e048c011e49b989bc3e8a5a440/iss-feedback_history-tmplink-ka-328671-Index.db
  (deleted)
 7f63a6bf2000-7f63a6e22000 r--s  08:21 9442763
 /ssd/cassandra/data/iss/feedback_history-d32bc7e048c011e49b989bc3e8a5a440/iss-feedback_history-tmplink-ka-328671-Index.db
  (deleted)
 7f63a6e22000-7f63a7052000 r--s  08:21 9442763
 /ssd/cassandra/data/iss/feedback_history-d32bc7e048c011e49b989bc3e8a5a440/iss-feedback_history-tmplink-ka-328671-Index.db
  (deleted)
 7f63a7052000-7f63a7282000 r--s  08:21 9442763
 /ssd/cassandra/data/iss/feedback_history-d32bc7e048c011e49b989bc3e8a5a440/iss-feedback_history-tmplink-ka-328671-Index.db
  (deleted)
 7f63a7282000-7f63a74b2000 r--s  08:21 9442763
 /ssd/cassandra/data/iss/feedback_history-d32bc7e048c011e49b989bc3e8a5a440/iss-feedback_history-tmplink-ka-328671-Index.db
  (deleted)
 7f63a74b2000-7f63a76e2000 r--s  08:21 9442763
 /ssd/cassandra/data/iss/feedback_history-d32bc7e048c011e49b989bc3e8a5a440/iss-feedback_history-tmplink-ka-328671-Index.db
  (deleted)
 7f63a76e2000-7f63a7912000 r--s  08:21 9442763
 /ssd/cassandra/data/iss/feedback_history-d32bc7e048c011e49b989bc3e8a5a440/iss-feedback_history-tmplink-ka-328671-Index.db
  (deleted)
 7f63a7912000-7f63a7b42000 r--s  08:21 9442763
 /ssd/cassandra/data/iss/feedback_history-d32bc7e048c011e49b989bc3e8a5a440/iss-feedback_history-tmplink-ka-328671-Index.db
  (deleted)
 7f63a7b42000-7f63a7d72000 r--s  08:21 9442763
 /ssd/cassandra/data/iss/feedback_history-d32bc7e048c011e49b989bc3e8a5a440/iss-feedback_history-tmplink-ka-328671-Index.db
  (deleted)
 7f63a7d72000-7f63a7fa2000 r--s  08:21 9442763
 /ssd/cassandra/data/iss/feedback_history-d32bc7e048c011e49b989bc3e8a5a440/iss-feedback_history-tmplink-ka-328671-Index.db
  (deleted)
 7f63a7fa2000-7f63a81d2000 r--s  08:21 9442763
 /ssd/cassandra/data/iss/feedback_history-d32bc7e048c011e49b989bc3e8a5a440/iss-feedback_history-tmplink-ka-328671-Index.db
  (deleted)
 7f63a81d2000-7f63a8402000 r--s  08:21 9442763
 /ssd/cassandra/data/iss/feedback_history-d32bc7e048c011e49b989bc3e8a5a440/iss-feedback_history-tmplink-ka-328671-Index.db
  (deleted)
 7f63a8402000-7f63a8622000 r--s  08:21 9442763
 /ssd/cassandra/data/iss/feedback_history-d32bc7e048c011e49b989bc3e8a5a440/iss-feedback_history-tmplink-ka-328671-Index.db
  (deleted)
 7f63a8622000-7f63a8842000 r--s  08:21 9442763
 

[jira] [Created] (CASSANDRA-8328) Expose Thread Pool maximum pool size in metrics

2014-11-17 Thread Chris Lohfink (JIRA)
Chris Lohfink created CASSANDRA-8328:


 Summary: Expose Thread Pool maximum pool size in metrics
 Key: CASSANDRA-8328
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8328
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: Chris Lohfink
Priority: Trivial


The max pool size is exposed in the original (o.a.c.internal/request/transport) metrics from CASSANDRA-5044, but it is not available in the o.a.c.metrics beans. It is a pretty minor change to also expose it there, which gives context to the maximum number of active tasks.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
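
The shape of the change is just one more gauge next to the existing active/pending ones. A Python sketch with invented names (the real code is Java and registers JMX gauges under o.a.c.metrics):

{code}
class GaugeRegistry(object):
    """Stand-in for the metrics registry behind the o.a.c.metrics beans."""
    def __init__(self):
        self._gauges = {}
    def gauge(self, name, fn):
        self._gauges[name] = fn
    def read(self, name):
        return self._gauges[name]()

def register_pool_metrics(registry, pool):
    registry.gauge("ActiveTasks", lambda: pool["active"])
    registry.gauge("PendingTasks", lambda: pool["pending"])
    registry.gauge("MaxPoolSize", lambda: pool["max_size"])  # the newly exposed gauge
{code}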


[jira] [Updated] (CASSANDRA-8328) Expose Thread Pool maximum pool size in metrics

2014-11-17 Thread Chris Lohfink (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8328?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Lohfink updated CASSANDRA-8328:
-
Attachment: 0001-add-max-pool-size-metrics-for-CASSANDRA-8328.patch

 Expose Thread Pool maximum pool size in metrics
 ---

 Key: CASSANDRA-8328
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8328
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: Chris Lohfink
Priority: Trivial
  Labels: metrics
 Attachments: 0001-add-max-pool-size-metrics-for-CASSANDRA-8328.patch


 The max pool size is exposed in the original (o.a.c.internal/request/transport) metrics from CASSANDRA-5044, but it is not available in the o.a.c.metrics beans. It is a pretty minor change to also expose it there, which gives context to the maximum number of active tasks.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8326) Cqlsh cannot connect in trunk

2014-11-17 Thread Philip Thompson (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8326?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14215055#comment-14215055
 ] 

Philip Thompson commented on CASSANDRA-8326:


This python driver issue is also breaking 90% of dtests on trunk; those will be fixed once a 2.1.3 python driver release is out.

 Cqlsh cannot connect in trunk
 -

 Key: CASSANDRA-8326
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8326
 Project: Cassandra
  Issue Type: Bug
Reporter: Philip Thompson
Assignee: Tyler Hobbs
 Fix For: 3.0


 Cqlsh errors with {code}('Unable to connect to any servers', {'127.0.0.1': KeyError('column_aliases',)}){code}. To fix, we need to pull in a newer version of the python driver that doesn't assume certain metadata exists. This was broken by the cassandra commit {{cbbc1191ce1656a92354a4fa3859626cb10083e5}}. A fix should be in version 2.1.3 of the driver.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-7830) Decommissioning fails on a live node

2014-11-17 Thread Donald Smith (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7830?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14215093#comment-14215093
 ] 

Donald Smith commented on CASSANDRA-7830:
-

Yes, I'm seeing this with 2.0.11:
{noformat}
Exception in thread "main" java.lang.UnsupportedOperationException: data is
currently moving to this node; unable to leave the ring
at 
org.apache.cassandra.service.StorageService.decommission(StorageService.java:2912)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at sun.reflect.misc.Trampoline.invoke(MethodUtil.java:75)
at sun.reflect.GeneratedMethodAccessor7.invoke(Unknown Source)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
...
{noformat}
And *nodetool netstats* shows:
{noformat}
dc1-cassandra13.dc01 ~ nodetool netstats
Mode: NORMAL
Restore replica count d7efb410-6c58-11e4-896c-a1382b792927
Read Repair Statistics:
Attempted: 1123
Mismatch (Blocking): 0
Mismatch (Background): 540
Pool NameActive   Pending  Completed
Commandsn/a 0 1494743209
Responses   n/a 1 1651558975
{noformat}
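
The netstats output presumably points at the cause: the lingering Restore replica count operation means the node still sees ranges moving to it, which is exactly the condition the exception names. Roughly this guard, sketched in Python (the real check lives in StorageService.decommission(); names here are illustrative):

{code}
def decommission(pending_ranges, local_endpoint):
    """pending_ranges: mapping of endpoint -> ranges still streaming to it."""
    if pending_ranges.get(local_endpoint):
        # refuse to leave the ring while data is still incoming
        raise RuntimeError("data is currently moving to this node; "
                           "unable to leave the ring")
    # ... otherwise stream data out and announce LEFT ...
{code}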


 Decommissioning fails on a live node
 

 Key: CASSANDRA-7830
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7830
 Project: Cassandra
  Issue Type: Bug
Reporter: Ananthkumar K S

 Exception in thread "main" java.lang.UnsupportedOperationException: data is currently moving to this node; unable to leave the ring
   at org.apache.cassandra.service.StorageService.decommission(StorageService.java:2629)
   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
   at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
   at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
   at java.lang.reflect.Method.invoke(Method.java:601)
   at com.sun.jmx.mbeanserver.StandardMBeanIntrospector.invokeM2(StandardMBeanIntrospector.java:111)
   at com.sun.jmx.mbeanserver.StandardMBeanIntrospector.invokeM2(StandardMBeanIntrospector.java:45)
   at com.sun.jmx.mbeanserver.MBeanIntrospector.invokeM(MBeanIntrospector.java:235)
   at com.sun.jmx.mbeanserver.PerInterface.invoke(PerInterface.java:138)
   at com.sun.jmx.mbeanserver.MBeanSupport.invoke(MBeanSupport.java:250)
   at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.invoke(DefaultMBeanServerInterceptor.java:819)
   at com.sun.jmx.mbeanserver.JmxMBeanServer.invoke(JmxMBeanServer.java:791)
   at javax.management.remote.rmi.RMIConnectionImpl.doOperation(RMIConnectionImpl.java:1486)
   at javax.management.remote.rmi.RMIConnectionImpl.access$300(RMIConnectionImpl.java:96)
   at javax.management.remote.rmi.RMIConnectionImpl$PrivilegedOperation.run(RMIConnectionImpl.java:1327)
   at javax.management.remote.rmi.RMIConnectionImpl.doPrivilegedOperation(RMIConnectionImpl.java:1419)
   at javax.management.remote.rmi.RMIConnectionImpl.invoke(RMIConnectionImpl.java:847)
   at sun.reflect.GeneratedMethodAccessor28.invoke(Unknown Source)
   at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
   at java.lang.reflect.Method.invoke(Method.java:601)
   at sun.rmi.server.UnicastServerRef.dispatch(UnicastServerRef.java:322)
   at sun.rmi.transport.Transport$1.run(Transport.java:177)
   at sun.rmi.transport.Transport$1.run(Transport.java:174)
   at java.security.AccessController.doPrivileged(Native Method)
   at sun.rmi.transport.Transport.serviceCall(Transport.java:173)
   at sun.rmi.transport.tcp.TCPTransport.handleMessages(TCPTransport.java:556)
   at sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run0(TCPTransport.java:811)
   at sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run(TCPTransport.java:670)
   at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
   at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
   at java.lang.Thread.run(Thread.java:722)
 I got the following exception when I was trying to decommission a live node. There is no reference in the manual saying that I need to stop the data coming into this node. Even then, decommissioning is specified for live nodes.
 Can anyone let me know if I am doing something wrong, or if this is a bug on Cassandra's part?
 Cassandra Version Used : 2.0.3



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Reopened] (CASSANDRA-7830) Decommissioning fails on a live node

2014-11-17 Thread Philip Thompson (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-7830?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Philip Thompson reopened CASSANDRA-7830:





--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-7830) Decommissioning fails on a live node

2014-11-17 Thread Philip Thompson (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-7830?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Philip Thompson updated CASSANDRA-7830:
---
Reproduced In: 2.0.11, 2.0.3  (was: 2.0.3)
  Description: 
{code}
Exception in thread "main" java.lang.UnsupportedOperationException: data is currently moving to this node; unable to leave the ring
    at org.apache.cassandra.service.StorageService.decommission(StorageService.java:2629)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:601)
    at com.sun.jmx.mbeanserver.StandardMBeanIntrospector.invokeM2(StandardMBeanIntrospector.java:111)
    at com.sun.jmx.mbeanserver.StandardMBeanIntrospector.invokeM2(StandardMBeanIntrospector.java:45)
    at com.sun.jmx.mbeanserver.MBeanIntrospector.invokeM(MBeanIntrospector.java:235)
    at com.sun.jmx.mbeanserver.PerInterface.invoke(PerInterface.java:138)
    at com.sun.jmx.mbeanserver.MBeanSupport.invoke(MBeanSupport.java:250)
    at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.invoke(DefaultMBeanServerInterceptor.java:819)
    at com.sun.jmx.mbeanserver.JmxMBeanServer.invoke(JmxMBeanServer.java:791)
    at javax.management.remote.rmi.RMIConnectionImpl.doOperation(RMIConnectionImpl.java:1486)
    at javax.management.remote.rmi.RMIConnectionImpl.access$300(RMIConnectionImpl.java:96)
    at javax.management.remote.rmi.RMIConnectionImpl$PrivilegedOperation.run(RMIConnectionImpl.java:1327)
    at javax.management.remote.rmi.RMIConnectionImpl.doPrivilegedOperation(RMIConnectionImpl.java:1419)
    at javax.management.remote.rmi.RMIConnectionImpl.invoke(RMIConnectionImpl.java:847)
    at sun.reflect.GeneratedMethodAccessor28.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:601)
    at sun.rmi.server.UnicastServerRef.dispatch(UnicastServerRef.java:322)
    at sun.rmi.transport.Transport$1.run(Transport.java:177)
    at sun.rmi.transport.Transport$1.run(Transport.java:174)
    at java.security.AccessController.doPrivileged(Native Method)
    at sun.rmi.transport.Transport.serviceCall(Transport.java:173)
    at sun.rmi.transport.tcp.TCPTransport.handleMessages(TCPTransport.java:556)
    at sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run0(TCPTransport.java:811)
    at sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run(TCPTransport.java:670)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
    at java.lang.Thread.run(Thread.java:722)
{code}

I got the following exception when I was trying to decommission a live node. There is no reference in the manual saying that I need to stop the data coming into this node. Even then, decommissioning is specified for live nodes.

Can anyone let me know if I am doing something wrong, or if this is a bug on Cassandra's part?

Cassandra Version Used : 2.0.3


[jira] [Commented] (CASSANDRA-7830) Decommissioning fails on a live node

2014-11-17 Thread Donald Smith (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7830?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14215101#comment-14215101
 ] 

Donald Smith commented on CASSANDRA-7830:
-

Stopping and restarting the cassandra process did not help.

 Decommissioning fails on a live node
 

 Key: CASSANDRA-7830
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7830
 Project: Cassandra
  Issue Type: Bug
Reporter: Ananthkumar K S

 {code}
 Exception in thread "main" java.lang.UnsupportedOperationException: data is currently moving to this node; unable to leave the ring
 	at org.apache.cassandra.service.StorageService.decommission(StorageService.java:2629)
 	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
 	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 	at java.lang.reflect.Method.invoke(Method.java:601)
 	at com.sun.jmx.mbeanserver.StandardMBeanIntrospector.invokeM2(StandardMBeanIntrospector.java:111)
 	at com.sun.jmx.mbeanserver.StandardMBeanIntrospector.invokeM2(StandardMBeanIntrospector.java:45)
 	at com.sun.jmx.mbeanserver.MBeanIntrospector.invokeM(MBeanIntrospector.java:235)
 	at com.sun.jmx.mbeanserver.PerInterface.invoke(PerInterface.java:138)
 	at com.sun.jmx.mbeanserver.MBeanSupport.invoke(MBeanSupport.java:250)
 	at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.invoke(DefaultMBeanServerInterceptor.java:819)
 	at com.sun.jmx.mbeanserver.JmxMBeanServer.invoke(JmxMBeanServer.java:791)
 	at javax.management.remote.rmi.RMIConnectionImpl.doOperation(RMIConnectionImpl.java:1486)
 	at javax.management.remote.rmi.RMIConnectionImpl.access$300(RMIConnectionImpl.java:96)
 	at javax.management.remote.rmi.RMIConnectionImpl$PrivilegedOperation.run(RMIConnectionImpl.java:1327)
 	at javax.management.remote.rmi.RMIConnectionImpl.doPrivilegedOperation(RMIConnectionImpl.java:1419)
 	at javax.management.remote.rmi.RMIConnectionImpl.invoke(RMIConnectionImpl.java:847)
 	at sun.reflect.GeneratedMethodAccessor28.invoke(Unknown Source)
 	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 	at java.lang.reflect.Method.invoke(Method.java:601)
 	at sun.rmi.server.UnicastServerRef.dispatch(UnicastServerRef.java:322)
 	at sun.rmi.transport.Transport$1.run(Transport.java:177)
 	at sun.rmi.transport.Transport$1.run(Transport.java:174)
 	at java.security.AccessController.doPrivileged(Native Method)
 	at sun.rmi.transport.Transport.serviceCall(Transport.java:173)
 	at sun.rmi.transport.tcp.TCPTransport.handleMessages(TCPTransport.java:556)
 	at sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run0(TCPTransport.java:811)
 	at sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run(TCPTransport.java:670)
 	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
 	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
 	at java.lang.Thread.run(Thread.java:722)
 {code}
 I got the following exception when I was trying to decommission a live node.
 There is no reference in the manual saying that I need to stop the data
 coming into this node. In any case, decommissioning is documented as an operation on live nodes.
 Can anyone let me know if I am doing something wrong, or if this is a bug on
 Cassandra's part?
 Cassandra version used: 2.0.3
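
A minimal sketch of the pre-checks one can run before retrying the decommission — the nodetool subcommands are assumed to be available on the node, and this is not taken from the ticket itself:
{noformat}
# The "data is currently moving to this node" guard means the node still sees
# pending range movements. Check for in-flight streams/movements first:
nodetool netstats   # Mode should be NORMAL, with no active streams
nodetool status     # no node should show a Joining/Moving (UJ/UM) state

# only then retry:
nodetool decommission
{noformat}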



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8061) tmplink files are not removed

2014-11-17 Thread Michael Shuler (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8061?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14215107#comment-14215107
 ] 

Michael Shuler commented on CASSANDRA-8061:
---

Linking 7803, and I also found 8157 when searching JIRA for recent tmplink
tickets - 8248 looks similar, too.

 tmplink files are not removed
 -

 Key: CASSANDRA-8061
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8061
 Project: Cassandra
  Issue Type: Bug
  Components: Core
 Environment: Linux
Reporter: Gianluca Borello
Assignee: Shawn Kumar
 Fix For: 2.1.3


 After installing 2.1.0, I'm experiencing a bunch of tmplink files that are
 filling my disk. I found https://issues.apache.org/jira/browse/CASSANDRA-7803,
 which is very similar, and I confirm it happens both on 2.1.0 and on the latest
 commit on the cassandra-2.1 branch
 (https://github.com/apache/cassandra/commit/aca80da38c3d86a40cc63d9a122f7d45258e4685).
 Even starting with a clean keyspace, after a few hours I get:
 {noformat}
 $ sudo find /raid0 | grep tmplink | xargs du -hs
 2.7G  
 /raid0/cassandra/data/draios/protobuf1-ccc6dce04beb11e4abf997b38fbf920b/draios-protobuf1-tmplink-ka-4515-Data.db
 13M   
 /raid0/cassandra/data/draios/protobuf1-ccc6dce04beb11e4abf997b38fbf920b/draios-protobuf1-tmplink-ka-4515-Index.db
 1.8G  
 /raid0/cassandra/data/draios/protobuf_by_agent1-cd071a304beb11e4abf997b38fbf920b/draios-protobuf_by_agent1-tmplink-ka-1788-Data.db
 12M   
 /raid0/cassandra/data/draios/protobuf_by_agent1-cd071a304beb11e4abf997b38fbf920b/draios-protobuf_by_agent1-tmplink-ka-1788-Index.db
 5.2M  
 /raid0/cassandra/data/draios/protobuf_by_agent1-cd071a304beb11e4abf997b38fbf920b/draios-protobuf_by_agent1-tmplink-ka-2678-Index.db
 822M  
 /raid0/cassandra/data/draios/protobuf_by_agent1-cd071a304beb11e4abf997b38fbf920b/draios-protobuf_by_agent1-tmplink-ka-2678-Data.db
 7.3M  
 /raid0/cassandra/data/draios/protobuf_by_agent1-cd071a304beb11e4abf997b38fbf920b/draios-protobuf_by_agent1-tmplink-ka-3283-Index.db
 1.2G  
 /raid0/cassandra/data/draios/protobuf_by_agent1-cd071a304beb11e4abf997b38fbf920b/draios-protobuf_by_agent1-tmplink-ka-3283-Data.db
 6.7M  
 /raid0/cassandra/data/draios/protobuf_by_agent1-cd071a304beb11e4abf997b38fbf920b/draios-protobuf_by_agent1-tmplink-ka-3951-Index.db
 1.1G  
 /raid0/cassandra/data/draios/protobuf_by_agent1-cd071a304beb11e4abf997b38fbf920b/draios-protobuf_by_agent1-tmplink-ka-3951-Data.db
 11M   
 /raid0/cassandra/data/draios/protobuf_by_agent1-cd071a304beb11e4abf997b38fbf920b/draios-protobuf_by_agent1-tmplink-ka-4799-Index.db
 1.7G  
 /raid0/cassandra/data/draios/protobuf_by_agent1-cd071a304beb11e4abf997b38fbf920b/draios-protobuf_by_agent1-tmplink-ka-4799-Data.db
 812K  
 /raid0/cassandra/data/draios/mounted_fs_by_agent1-d7bf3e304beb11e4abf997b38fbf920b/draios-mounted_fs_by_agent1-tmplink-ka-234-Index.db
 122M  
 /raid0/cassandra/data/draios/mounted_fs_by_agent1-d7bf3e304beb11e4abf997b38fbf920b/draios-mounted_fs_by_agent1-tmplink-ka-208-Data.db
 744K  
 /raid0/cassandra/data/draios/mounted_fs_by_agent1-d7bf3e304beb11e4abf997b38fbf920b/draios-mounted_fs_by_agent1-tmplink-ka-739-Index.db
 660K  
 /raid0/cassandra/data/draios/mounted_fs_by_agent1-d7bf3e304beb11e4abf997b38fbf920b/draios-mounted_fs_by_agent1-tmplink-ka-193-Index.db
 796K  
 /raid0/cassandra/data/draios/mounted_fs_by_agent1-d7bf3e304beb11e4abf997b38fbf920b/draios-mounted_fs_by_agent1-tmplink-ka-230-Index.db
 137M  
 /raid0/cassandra/data/draios/mounted_fs_by_agent1-d7bf3e304beb11e4abf997b38fbf920b/draios-mounted_fs_by_agent1-tmplink-ka-230-Data.db
 161M  
 /raid0/cassandra/data/draios/mounted_fs_by_agent1-d7bf3e304beb11e4abf997b38fbf920b/draios-mounted_fs_by_agent1-tmplink-ka-269-Data.db
 139M  
 /raid0/cassandra/data/draios/mounted_fs_by_agent1-d7bf3e304beb11e4abf997b38fbf920b/draios-mounted_fs_by_agent1-tmplink-ka-234-Data.db
 940K  
 /raid0/cassandra/data/draios/mounted_fs_by_agent1-d7bf3e304beb11e4abf997b38fbf920b/draios-mounted_fs_by_agent1-tmplink-ka-786-Index.db
 936K  
 /raid0/cassandra/data/draios/mounted_fs_by_agent1-d7bf3e304beb11e4abf997b38fbf920b/draios-mounted_fs_by_agent1-tmplink-ka-269-Index.db
 161M  
 /raid0/cassandra/data/draios/mounted_fs_by_agent1-d7bf3e304beb11e4abf997b38fbf920b/draios-mounted_fs_by_agent1-tmplink-ka-786-Data.db
 672K  
 /raid0/cassandra/data/draios/mounted_fs_by_agent1-d7bf3e304beb11e4abf997b38fbf920b/draios-mounted_fs_by_agent1-tmplink-ka-197-Index.db
 113M  
 /raid0/cassandra/data/draios/mounted_fs_by_agent1-d7bf3e304beb11e4abf997b38fbf920b/draios-mounted_fs_by_agent1-tmplink-ka-193-Data.db
 116M  
 /raid0/cassandra/data/draios/mounted_fs_by_agent1-d7bf3e304beb11e4abf997b38fbf920b/draios-mounted_fs_by_agent1-tmplink-ka-197-Data.db
 712K  
 

[jira] [Updated] (CASSANDRA-8061) tmplink files are not removed

2014-11-17 Thread Michael Shuler (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8061?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Shuler updated CASSANDRA-8061:
--
Priority: Critical  (was: Major)

 tmplink files are not removed
 -

 Key: CASSANDRA-8061
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8061
 Project: Cassandra
  Issue Type: Bug
  Components: Core
 Environment: Linux
Reporter: Gianluca Borello
Assignee: Shawn Kumar
Priority: Critical
 Fix For: 2.1.3



[jira] [Comment Edited] (CASSANDRA-7830) Decommissioning fails on a live node

2014-11-17 Thread Donald Smith (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7830?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14215101#comment-14215101
 ] 

Donald Smith edited comment on CASSANDRA-7830 at 11/17/14 8:29 PM:
---

Stopping and restarting the cassandra process did not help.

Also, I tried it on two other nodes and it didn't work there either, even when 
I first stopped the process.


was (Author: thinkerfeeler):
Stopping and restarting the cassandra process did not help.

 Decommissioning fails on a live node
 

 Key: CASSANDRA-7830
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7830
 Project: Cassandra
  Issue Type: Bug
Reporter: Ananthkumar K S




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (CASSANDRA-7830) Decommissioning fails on a live node

2014-11-17 Thread Donald Smith (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7830?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14215101#comment-14215101
 ] 

Donald Smith edited comment on CASSANDRA-7830 at 11/17/14 8:30 PM:
---

Stopping and restarting the cassandra process did not help.

Also, I tried it on two other nodes and it didn't work there either, even when 
I first stopped and restarted the process.


was (Author: thinkerfeeler):
Stopping and restarting the cassandra process did not help.

Also, I tried it on two other nodes and it didn't work there either, even when 
I first stopped the process.

 Decommissioning fails on a live node
 

 Key: CASSANDRA-7830
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7830
 Project: Cassandra
  Issue Type: Bug
Reporter: Ananthkumar K S




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-8061) tmplink files are not removed

2014-11-17 Thread Jonathan Ellis (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8061?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis updated CASSANDRA-8061:
--
Assignee: Marcus Eriksson  (was: Shawn Kumar)

 tmplink files are not removed
 -

 Key: CASSANDRA-8061
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8061
 Project: Cassandra
  Issue Type: Bug
  Components: Core
 Environment: Linux
Reporter: Gianluca Borello
Assignee: Marcus Eriksson
Priority: Critical
 Fix For: 2.1.3



[jira] [Commented] (CASSANDRA-7830) Decommissioning fails on a live node

2014-11-17 Thread Donald Smith (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7830?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14215136#comment-14215136
 ] 

Donald Smith commented on CASSANDRA-7830:
-

Following the advice in 
http://comments.gmane.org/gmane.comp.db.cassandra.user/5554, I stopped all 
nodes and restarted. Now the decommission is working. So this is a workaround.
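
A hedged sketch of that workaround; the service commands are assumptions, since the init system varies by installation:
{noformat}
# On every node in the cluster, stop Cassandra:
sudo service cassandra stop
# Once all nodes are down, start them back up:
sudo service cassandra start

# Then retry on the node being removed:
nodetool decommission
{noformat}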

 Decommissioning fails on a live node
 

 Key: CASSANDRA-7830
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7830
 Project: Cassandra
  Issue Type: Bug
Reporter: Ananthkumar K S




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8061) tmplink files are not removed

2014-11-17 Thread Michael Shuler (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8061?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14215144#comment-14215144
 ] 

Michael Shuler commented on CASSANDRA-8061:
---

The unit test SSTableRewriterTest.testNumberOfFilesAndSizes checks that we've
deleted tmplink and tmp files, but since it does not keep C* running, the DEL
files from lsof are never tested (nor am I sure how a unit test could look for
open deleted files).

To see what's going on, I bumped the sleep in that test and looked at lsof in
another terminal, and I do see DEL-state tmplink files.
{noformat}
diff --git a/build.xml b/build.xml
index 43fa531..a5fd82a 100644
--- a/build.xml
+++ b/build.xml
@@ -90,7 +90,7 @@
     <property name="maven-repository-url" value="https://repository.apache.org/content/repositories/snapshots/"/>
     <property name="maven-repository-id" value="apache.snapshots.https"/>
 
-    <property name="test.timeout" value="6" />
+    <property name="test.timeout" value="60" />
     <property name="test.long.timeout" value="60" />
 
     <!-- default for cql tests. Can be override by -Dcassandra.test.use_prepared=false -->
diff --git a/test/unit/org/apache/cassandra/io/sstable/SSTableRewriterTest.java b/test/unit/org/apache/cassandra/io/sstable/SSTableRewriterTest.java
index 8a494a6..e2cce41 100644
--- a/test/unit/org/apache/cassandra/io/sstable/SSTableRewriterTest.java
+++ b/test/unit/org/apache/cassandra/io/sstable/SSTableRewriterTest.java
@@ -187,7 +187,7 @@ public class SSTableRewriterTest extends SchemaLoader
         assertEquals(startStorageMetricsLoad - s.bytesOnDisk() + sum, StorageMetrics.load.count());
         assertEquals(files, sstables.size());
         assertEquals(files, cfs.getSSTables().size());
-        Thread.sleep(1000);
+        Thread.sleep(10);
         // tmplink and tmp files should be gone:
         assertEquals(sum, cfs.metric.totalDiskSpaceUsed.count());
         assertFileCounts(s.descriptor.directory.list(), 0, 0);
{noformat}
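
For reference, a rough sketch of the manual inspection described above — looking for deleted-but-still-open tmplink files on a running node (the pgrep pattern is an assumption; adjust it to your process):
{noformat}
PID=$(pgrep -f CassandraDaemon)
# Deleted files the JVM still holds open show up as DEL entries in lsof
# and as "(deleted)" entries in the process maps:
lsof -p "$PID" | grep tmplink
grep -c '(deleted)' /proc/"$PID"/maps
{noformat}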

 tmplink files are not removed
 -

 Key: CASSANDRA-8061
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8061
 Project: Cassandra
  Issue Type: Bug
  Components: Core
 Environment: Linux
Reporter: Gianluca Borello
Assignee: Marcus Eriksson
Priority: Critical
 Fix For: 2.1.3



[jira] [Resolved] (CASSANDRA-8248) Possible memory leak

2014-11-17 Thread Michael Shuler (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8248?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Shuler resolved CASSANDRA-8248.
---
Resolution: Duplicate

Just so we don't have two conversations on the same topic, please add 
comments/notes to CASSANDRA-8061 - these tickets have been linked, and that 
ticket has a bit more info.  I'm going to add the thread dump from here to 8061 
as well.  Thanks!
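
The maps listing quoted below shows the same tmplink Index.db mapped many times after deletion. As a hypothetical way to summarize such a listing per file (the PID is taken from the report):
{noformat}
# Count the surviving mappings of each deleted file for PID 15724;
# the pathname is the sixth field of /proc/<pid>/maps:
grep '(deleted)' /proc/15724/maps | awk '{print $6}' | sort | uniq -c | sort -rn | head
{noformat}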

 Possible memory leak 
 -

 Key: CASSANDRA-8248
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8248
 Project: Cassandra
  Issue Type: Bug
Reporter: Alexander Sterligov
Assignee: Shawn Kumar
 Attachments: thread_dump


 Sometimes during repair, Cassandra starts to consume more memory than expected.
 The total amount of data on the node is about 20GB.
 The size of the data directory is 66GB because of snapshots.
 Top reports: 
 {noformat}
   PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
 15724 loadbase  20   0  493g  55g  44g S   28 44.2   4043:24 java
 {noformat}
 At the /proc/15724/maps there are a lot of deleted file maps
 {quote}
 7f63a6102000-7f63a6332000 r--s  08:21 9442763
 /ssd/cassandra/data/iss/feedback_history-d32bc7e048c011e49b989bc3e8a5a440/iss-feedback_history-tmplink-ka-328671-Index.db
  (deleted)
 7f63a6332000-7f63a6562000 r--s  08:21 9442763
 /ssd/cassandra/data/iss/feedback_history-d32bc7e048c011e49b989bc3e8a5a440/iss-feedback_history-tmplink-ka-328671-Index.db
  (deleted)
 7f63a6562000-7f63a6792000 r--s  08:21 9442763
 /ssd/cassandra/data/iss/feedback_history-d32bc7e048c011e49b989bc3e8a5a440/iss-feedback_history-tmplink-ka-328671-Index.db
  (deleted)
 7f63a6792000-7f63a69c2000 r--s  08:21 9442763
 /ssd/cassandra/data/iss/feedback_history-d32bc7e048c011e49b989bc3e8a5a440/iss-feedback_history-tmplink-ka-328671-Index.db
  (deleted)
 7f63a69c2000-7f63a6bf2000 r--s  08:21 9442763
 /ssd/cassandra/data/iss/feedback_history-d32bc7e048c011e49b989bc3e8a5a440/iss-feedback_history-tmplink-ka-328671-Index.db
  (deleted)
 7f63a6bf2000-7f63a6e22000 r--s  08:21 9442763
 /ssd/cassandra/data/iss/feedback_history-d32bc7e048c011e49b989bc3e8a5a440/iss-feedback_history-tmplink-ka-328671-Index.db
  (deleted)
 7f63a6e22000-7f63a7052000 r--s  08:21 9442763
 /ssd/cassandra/data/iss/feedback_history-d32bc7e048c011e49b989bc3e8a5a440/iss-feedback_history-tmplink-ka-328671-Index.db
  (deleted)
 7f63a7052000-7f63a7282000 r--s  08:21 9442763
 /ssd/cassandra/data/iss/feedback_history-d32bc7e048c011e49b989bc3e8a5a440/iss-feedback_history-tmplink-ka-328671-Index.db
  (deleted)
 7f63a7282000-7f63a74b2000 r--s  08:21 9442763
 /ssd/cassandra/data/iss/feedback_history-d32bc7e048c011e49b989bc3e8a5a440/iss-feedback_history-tmplink-ka-328671-Index.db
  (deleted)
 7f63a74b2000-7f63a76e2000 r--s  08:21 9442763
 /ssd/cassandra/data/iss/feedback_history-d32bc7e048c011e49b989bc3e8a5a440/iss-feedback_history-tmplink-ka-328671-Index.db
  (deleted)
 7f63a76e2000-7f63a7912000 r--s  08:21 9442763
 /ssd/cassandra/data/iss/feedback_history-d32bc7e048c011e49b989bc3e8a5a440/iss-feedback_history-tmplink-ka-328671-Index.db
  (deleted)
 7f63a7912000-7f63a7b42000 r--s  08:21 9442763
 /ssd/cassandra/data/iss/feedback_history-d32bc7e048c011e49b989bc3e8a5a440/iss-feedback_history-tmplink-ka-328671-Index.db
  (deleted)
 7f63a7b42000-7f63a7d72000 r--s  08:21 9442763
 /ssd/cassandra/data/iss/feedback_history-d32bc7e048c011e49b989bc3e8a5a440/iss-feedback_history-tmplink-ka-328671-Index.db
  (deleted)
 7f63a7d72000-7f63a7fa2000 r--s  08:21 9442763
 /ssd/cassandra/data/iss/feedback_history-d32bc7e048c011e49b989bc3e8a5a440/iss-feedback_history-tmplink-ka-328671-Index.db
  (deleted)
 7f63a7fa2000-7f63a81d2000 r--s  08:21 9442763
 /ssd/cassandra/data/iss/feedback_history-d32bc7e048c011e49b989bc3e8a5a440/iss-feedback_history-tmplink-ka-328671-Index.db
  (deleted)
 7f63a81d2000-7f63a8402000 r--s  08:21 9442763
 /ssd/cassandra/data/iss/feedback_history-d32bc7e048c011e49b989bc3e8a5a440/iss-feedback_history-tmplink-ka-328671-Index.db
  (deleted)
 7f63a8402000-7f63a8622000 r--s  08:21 9442763
 /ssd/cassandra/data/iss/feedback_history-d32bc7e048c011e49b989bc3e8a5a440/iss-feedback_history-tmplink-ka-328671-Index.db
  (deleted)
 7f63a8622000-7f63a8842000 r--s  08:21 9442763
 

[jira] [Updated] (CASSANDRA-8061) tmplink files are not removed

2014-11-17 Thread Michael Shuler (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8061?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Shuler updated CASSANDRA-8061:
--
Attachment: 8248-thread_dump.txt

(Adding the thread_dump from 8248 here)

 tmplink files are not removed
 -

 Key: CASSANDRA-8061
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8061
 Project: Cassandra
  Issue Type: Bug
  Components: Core
 Environment: Linux
Reporter: Gianluca Borello
Assignee: Marcus Eriksson
Priority: Critical
 Fix For: 2.1.3

 Attachments: 8248-thread_dump.txt



[jira] [Created] (CASSANDRA-8329) LeveledCompactionStrategy should split large files across data directories when compacting

2014-11-17 Thread J.B. Langston (JIRA)
J.B. Langston created CASSANDRA-8329:


 Summary: LeveledCompactionStrategy should split large files across 
data directories when compacting
 Key: CASSANDRA-8329
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8329
 Project: Cassandra
  Issue Type: Bug
Reporter: J.B. Langston


Because we fall back to STCS for L0 when LCS gets behind, the sstables in L0 
can get quite large during sustained periods of heavy writes.  This can result 
in large imbalances between data volumes when using JBOD support.  

Eventually these large files get broken up as L0 sstables are moved up into 
higher levels; however, because LCS only chooses a single volume on which to 
write all of the sstables created during a single compaction, the imbalance is 
persisted.
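
As a quick, hypothetical way to observe the imbalance described above across JBOD data directories (the paths and names are assumptions, not from this ticket):
{noformat}
# Compare on-disk usage of each configured data_file_directory:
du -sh /data1/cassandra/data /data2/cassandra/data /data3/cassandra/data
# Oversized L0 sstables stand out when sorted by size:
ls -lhS /data1/cassandra/data/<keyspace>/<table>/*-Data.db | head
{noformat}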



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8279) Geo-Red : Streaming is working fine on two nodes but failing on one node repeatedly.

2014-11-17 Thread Michael Shuler (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8279?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14215273#comment-14215273
 ] 

Michael Shuler commented on CASSANDRA-8279:
---

Thanks for the update, [~Akhtar_ecil] - I'm glad that worked out for you.

 Geo-Red  : Streaming is working fine on two nodes but failing on one node 
 repeatedly.
 -

 Key: CASSANDRA-8279
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8279
 Project: Cassandra
  Issue Type: Bug
  Components: API
 Environment: LINUX
Reporter: Akhtar Hussain

 Exception in thread "main" java.lang.RuntimeException: Error while rebuilding node: Stream failed
 	at org.apache.cassandra.service.StorageService.rebuild(StorageService.java:896)
 	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
 	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 	at java.lang.reflect.Method.invoke(Method.java:606)
 	at sun.reflect.misc.Trampoline.invoke(MethodUtil.java:75)
 	at sun.reflect.GeneratedMethodAccessor7.invoke(Unknown Source)
 	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 	at java.lang.reflect.Method.invoke(Method.java:606)
 	at sun.reflect.misc.MethodUtil.invoke(MethodUtil.java:279)
 	at com.sun.jmx.mbeanserver.StandardMBeanIntrospector.invokeM2(StandardMBeanIntrospector.java:112)
 	at com.sun.jmx.mbeanserver.StandardMBeanIntrospector.invokeM2(StandardMBeanIntrospector.java:46)
 	at com.sun.jmx.mbeanserver.MBeanIntrospector.invokeM(MBeanIntrospector.java:237)
 	at com.sun.jmx.mbeanserver.PerInterface.invoke(PerInterface.java:138)
 	at com.sun.jmx.mbeanserver.MBeanSupport.invoke(MBeanSupport.java:252)
 	at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.invoke(DefaultMBeanServerInterceptor.java:819)
 	at com.sun.jmx.mbeanserver.JmxMBeanServer.invoke(JmxMBeanServer.java:801)
 	at com.sun.jmx.remote.security.MBeanServerAccessController.invoke(MBeanServerAccessController.java:468)
 	at javax.management.remote.rmi.RMIConnectionImpl.doOperation(RMIConnectionImpl.java:1487)
 	at javax.management.remote.rmi.RMIConnectionImpl.access$300(RMIConnectionImpl.java:97)
 	at javax.management.remote.rmi.RMIConnectionImpl$PrivilegedOperation.run(RMIConnectionImpl.java:1328)
 	at java.security.AccessController.doPrivileged(Native Method)
 	at javax.management.remote.rmi.RMIConnectionImpl.doPrivilegedOperation(RMIConnectionImpl.java:1427)
 	at javax.management.remote.rmi.RMIConnectionImpl.invoke(RMIConnectionImpl.java:848)
 	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
 	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 	at java.lang.reflect.Method.invoke(Method.java:606)
 	at sun.rmi.server.UnicastServerRef.dispatch(UnicastServerRef.java:322)
 	at sun.rmi.transport.Transport$1.run(Transport.java:177)
 	at sun.rmi.transport.Transport$1.run(Transport.java:174)
 	at java.security.AccessController.doPrivileged(Native Method)
 	at sun.rmi.transport.Transport.serviceCall(Transport.java:173)
 	at sun.rmi.transport.tcp.TCPTransport.handleMessages(TCPTransport.java:556)
 	at sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run0(TCPTransport.java:811)
 	at sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run(TCPTransport.java:670)
 	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
 	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
 	at java.lang.Thread.run(Thread.java:744)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (CASSANDRA-8306) exception in nodetool enablebinary

2014-11-17 Thread Michael Shuler (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8306?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Shuler resolved CASSANDRA-8306.
---
Resolution: Incomplete

So it sounds like bootstrapping previously failed for some reason, and starting
a fresh bootstrap completed successfully. This can happen for various reasons,
and when it does, the standard procedure is to start the bootstrap over. Without
the failed bootstrap's logs to look at, it's difficult to guess what went wrong.

I'm glad that you were able to work through the problem!
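
A minimal sketch of the "start the bootstrap over" procedure referenced above, assuming a package install with default paths and a node that holds no data you need to keep:
{noformat}
# On the node whose bootstrap failed:
sudo service cassandra stop
sudo rm -rf /var/lib/cassandra/data/* /var/lib/cassandra/commitlog/* /var/lib/cassandra/saved_caches/*
sudo service cassandra start   # the node re-bootstraps from the ring
{noformat}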

 exception in nodetool enablebinary
 --

 Key: CASSANDRA-8306
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8306
 Project: Cassandra
  Issue Type: Bug
Reporter: Rafał Furmański
 Attachments: system.log.zip


 I was trying to add a new node (db4) to an existing cluster - with no luck. I
 can't see any errors in system.log. nodetool status shows that the node has
 been joining the cluster for many hours. Attaching the error and cluster info:
 {code}
 root@db4:~# nodetool enablebinary
 error: Error starting native transport: null
 -- StackTrace --
 java.lang.RuntimeException: Error starting native transport: null
   at org.apache.cassandra.service.StorageService.startNativeTransport(StorageService.java:350)
   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
   at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
   at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
   at java.lang.reflect.Method.invoke(Method.java:606)
   at sun.reflect.misc.Trampoline.invoke(MethodUtil.java:75)
   at sun.reflect.GeneratedMethodAccessor2.invoke(Unknown Source)
   at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
   at java.lang.reflect.Method.invoke(Method.java:606)
   at sun.reflect.misc.MethodUtil.invoke(MethodUtil.java:279)
   at com.sun.jmx.mbeanserver.StandardMBeanIntrospector.invokeM2(StandardMBeanIntrospector.java:112)
   at com.sun.jmx.mbeanserver.StandardMBeanIntrospector.invokeM2(StandardMBeanIntrospector.java:46)
   at com.sun.jmx.mbeanserver.MBeanIntrospector.invokeM(MBeanIntrospector.java:237)
   at com.sun.jmx.mbeanserver.PerInterface.invoke(PerInterface.java:138)
   at com.sun.jmx.mbeanserver.MBeanSupport.invoke(MBeanSupport.java:252)
   at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.invoke(DefaultMBeanServerInterceptor.java:819)
   at com.sun.jmx.mbeanserver.JmxMBeanServer.invoke(JmxMBeanServer.java:801)
   at javax.management.remote.rmi.RMIConnectionImpl.doOperation(RMIConnectionImpl.java:1487)
   at javax.management.remote.rmi.RMIConnectionImpl.access$300(RMIConnectionImpl.java:97)
   at javax.management.remote.rmi.RMIConnectionImpl$PrivilegedOperation.run(RMIConnectionImpl.java:1328)
   at javax.management.remote.rmi.RMIConnectionImpl.doPrivilegedOperation(RMIConnectionImpl.java:1420)
   at javax.management.remote.rmi.RMIConnectionImpl.invoke(RMIConnectionImpl.java:848)
   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
   at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
   at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
   at java.lang.reflect.Method.invoke(Method.java:606)
   at sun.rmi.server.UnicastServerRef.dispatch(UnicastServerRef.java:322)
   at sun.rmi.transport.Transport$1.run(Transport.java:177)
   at sun.rmi.transport.Transport$1.run(Transport.java:174)
   at java.security.AccessController.doPrivileged(Native Method)
   at sun.rmi.transport.Transport.serviceCall(Transport.java:173)
   at sun.rmi.transport.tcp.TCPTransport.handleMessages(TCPTransport.java:556)
   at sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run0(TCPTransport.java:811)
   at sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run(TCPTransport.java:670)
   at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
   at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
   at java.lang.Thread.run(Thread.java:745)
 {code}
 {code}
 root@db4:~# nodetool describecluster
 Cluster Information:
   Name: Production Cluster
   Snitch: org.apache.cassandra.locator.DynamicEndpointSnitch
   Partitioner: org.apache.cassandra.dht.Murmur3Partitioner
   Schema versions:
   b7e98bb9-717f-3f59-bac4-84bc19544e90: [10.195.15.163, 
 10.195.15.162, 10.195.15.167, 10.195.15.166]
 {code}
 {code}
 root@db4:~# nodetool status
 Datacenter: Ashburn
 ===
 Status=Up/Down
 |/ State=Normal/Leaving/Joining/Moving
 --  Address    Load   Tokens  Owns    Host

[jira] [Commented] (CASSANDRA-8325) Cassandra 2.1.x fails to start on FreeBSD (JVM crash)

2014-11-17 Thread Michael Shuler (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8325?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14215298#comment-14215298
 ] 

Michael Shuler commented on CASSANDRA-8325:
---

There are a multitude of places where Unsafe is used. While Cassandra on FreeBSD
is interesting to me personally, it's not a "supported" OS (quoted intentionally,
because it should just work on Unix-like systems).

Are you able to trigger this in a repeatable manner just by starting Cassandra as
above, or was this a random occurrence? Does the same happen on another JDK
(does Oracle release one)?

Thanks for the bug report!

 Cassandra 2.1.x fails to start on FreeBSD (JVM crash)
 -

 Key: CASSANDRA-8325
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8325
 Project: Cassandra
  Issue Type: Bug
 Environment: FreeBSD 10.0 with openjdk version 1.7.0_71, 64-Bit 
 Server VM
Reporter: Leonid Shalupov
 Attachments: hs_err_pid1856.log, system.log


 See the attached error file after the JVM crash.
 {quote}
 FreeBSD xxx.intellij.net 10.0-RELEASE FreeBSD 10.0-RELEASE #0 r260789: Thu Jan 16 22:34:59 UTC 2014 r...@snap.freebsd.org:/usr/obj/usr/src/sys/GENERIC amd64
 {quote}
 {quote}
  % java -version
 openjdk version 1.7.0_71
 OpenJDK Runtime Environment (build 1.7.0_71-b14)
 OpenJDK 64-Bit Server VM (build 24.71-b01, mixed mode)
 {quote}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8325) Cassandra 2.1.x fails to start on FreeBSD (JVM crash)

2014-11-17 Thread Michael Shuler (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8325?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14215300#comment-14215300
 ] 

Michael Shuler commented on CASSANDRA-8325:
---

Answered my own question - it runs under Linux compat -
https://www.freebsd.org/java/install.html

 Cassandra 2.1.x fails to start on FreeBSD (JVM crash)
 -

 Key: CASSANDRA-8325
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8325
 Project: Cassandra
  Issue Type: Bug
 Environment: FreeBSD 10.0 with openjdk version 1.7.0_71, 64-Bit 
 Server VM
Reporter: Leonid Shalupov
 Attachments: hs_err_pid1856.log, system.log





--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8325) Cassandra 2.1.x fails to start on FreeBSD (JVM crash)

2014-11-17 Thread Michael Shuler (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8325?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14215310#comment-14215310
 ] 

Michael Shuler commented on CASSANDRA-8325:
---

From the mailing list post, it looks like openjdk8 was also attempted.

 Cassandra 2.1.x fails to start on FreeBSD (JVM crash)
 -

 Key: CASSANDRA-8325
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8325
 Project: Cassandra
  Issue Type: Bug
 Environment: FreeBSD 10.0 with openjdk version 1.7.0_71, 64-Bit 
 Server VM
Reporter: Leonid Shalupov
 Attachments: hs_err_pid1856.log, system.log





--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-8329) LeveledCompactionStrategy should split large files across data directories when compacting

2014-11-17 Thread Jonathan Ellis (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8329?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis updated CASSANDRA-8329:
--
  Component/s: Core
Fix Version/s: 2.0.12
 Assignee: Marcus Eriksson
   Issue Type: Improvement  (was: Bug)

 LeveledCompactionStrategy should split large files across data directories 
 when compacting
 --

 Key: CASSANDRA-8329
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8329
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: J.B. Langston
Assignee: Marcus Eriksson
 Fix For: 2.0.12


 Because we fall back to STCS for L0 when LCS gets behind, the sstables in L0 
 can get quite large during sustained periods of heavy writes.  This can 
 result in large imbalances between data volumes when using JBOD support.  
 Eventually these large files get broken up as L0 sstables are moved up into 
 higher levels; however, because LCS only chooses a single volume on which to 
 write all of the sstables created during a single compaction, the imbalance 
 is persisted.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8192) AssertionError in Memory.java

2014-11-17 Thread Jonathan Ellis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8192?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14215336#comment-14215336
 ] 

Jonathan Ellis commented on CASSANDRA-8192:
---

We'll be happy to review any patches you propose, but we're not going to invest
more time into 32-bit Windows otherwise.  Sorry.  As Josh says, if 2.0 works for
you, that's probably your best bet.

 AssertionError in Memory.java
 -

 Key: CASSANDRA-8192
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8192
 Project: Cassandra
  Issue Type: Bug
  Components: Core
 Environment: Windows-7-32 bit, 3GB RAM, Java 1.7.0_67
Reporter: Andreas Schnitzerling
Assignee: Joshua McKenzie
 Attachments: cassandra.bat, cassandra.yaml, system.log


 Since updating 1 of 12 nodes from 2.1.0-rel to 2.1.1-rel, an exception occurs
 during start-up.
 {panel:title=system.log}
 ERROR [SSTableBatchOpen:1] 2014-10-27 09:44:00,079 CassandraDaemon.java:153 - 
 Exception in thread Thread[SSTableBatchOpen:1,5,main]
 java.lang.AssertionError: null
   at org.apache.cassandra.io.util.Memory.size(Memory.java:307) 
 ~[apache-cassandra-2.1.1.jar:2.1.1]
   at 
 org.apache.cassandra.io.compress.CompressionMetadata.<init>(CompressionMetadata.java:135)
  ~[apache-cassandra-2.1.1.jar:2.1.1]
   at 
 org.apache.cassandra.io.compress.CompressionMetadata.create(CompressionMetadata.java:83)
  ~[apache-cassandra-2.1.1.jar:2.1.1]
   at 
 org.apache.cassandra.io.util.CompressedSegmentedFile$Builder.metadata(CompressedSegmentedFile.java:50)
  ~[apache-cassandra-2.1.1.jar:2.1.1]
   at 
 org.apache.cassandra.io.util.CompressedPoolingSegmentedFile$Builder.complete(CompressedPoolingSegmentedFile.java:48)
  ~[apache-cassandra-2.1.1.jar:2.1.1]
   at 
 org.apache.cassandra.io.sstable.SSTableReader.load(SSTableReader.java:766) 
 ~[apache-cassandra-2.1.1.jar:2.1.1]
   at 
 org.apache.cassandra.io.sstable.SSTableReader.load(SSTableReader.java:725) 
 ~[apache-cassandra-2.1.1.jar:2.1.1]
   at 
 org.apache.cassandra.io.sstable.SSTableReader.open(SSTableReader.java:402) 
 ~[apache-cassandra-2.1.1.jar:2.1.1]
   at 
 org.apache.cassandra.io.sstable.SSTableReader.open(SSTableReader.java:302) 
 ~[apache-cassandra-2.1.1.jar:2.1.1]
   at 
 org.apache.cassandra.io.sstable.SSTableReader$4.run(SSTableReader.java:438) 
 ~[apache-cassandra-2.1.1.jar:2.1.1]
   at java.util.concurrent.Executors$RunnableAdapter.call(Unknown Source) 
 ~[na:1.7.0_55]
   at java.util.concurrent.FutureTask.run(Unknown Source) ~[na:1.7.0_55]
   at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source) 
 [na:1.7.0_55]
   at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source) 
 [na:1.7.0_55]
   at java.lang.Thread.run(Unknown Source) [na:1.7.0_55]
 {panel}
 In the attached log you can still see as well CASSANDRA-8069 and 
 CASSANDRA-6283.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8295) Cassandra runs OOM @ java.util.concurrent.ConcurrentSkipListMap$HeadIndex

2014-11-17 Thread Jonathan Ellis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8295?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14215370#comment-14215370
 ] 

Jonathan Ellis commented on CASSANDRA-8295:
---

Unthrottling compaction will make it worse, not better.

Fundamentally you can only throw data at the disks so fast.  Cassandra will 
start sending back timeout exceptions if you exceed that and it has to load 
shed (MUTATION messages dropped).  At that point you can either respect the 
load shed and back off, or add capacity to meet your desired ingest rate.  Right 
now you are doing neither and suffering for it.
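For the client side, a minimal sketch of respecting the load shed, assuming the DataStax Java driver; the retry count and delays are illustrative only:

{code}
import com.datastax.driver.core.Session;
import com.datastax.driver.core.Statement;
import com.datastax.driver.core.exceptions.WriteTimeoutException;

// Illustrative sketch: back off exponentially on write timeouts instead
// of immediately retrying and adding to the pressure on the cluster.
public class BackoffWriter
{
    public static void writeWithBackoff(Session session, Statement stmt) throws InterruptedException
    {
        long delayMs = 100;
        for (int attempt = 0; attempt < 5; attempt++)
        {
            try
            {
                session.execute(stmt);
                return;
            }
            catch (WriteTimeoutException e)
            {
                Thread.sleep(delayMs); // respect the load shed
                delayMs *= 2;          // exponential backoff
            }
        }
        throw new RuntimeException("cluster still shedding load after retries");
    }
}
{code}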

 Cassandra runs OOM @ java.util.concurrent.ConcurrentSkipListMap$HeadIndex
 -

 Key: CASSANDRA-8295
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8295
 Project: Cassandra
  Issue Type: Bug
  Components: Core
 Environment: DSE 4.5.3 Cassandra 2.0.11.82
Reporter: Jose Martinez Poblete
 Attachments: alln01-ats-cas3.cassandra.yaml, output.tgz, system.tgz, 
 system.tgz.1, system.tgz.2, system.tgz.3


 Customer runs a 3 node cluster. 
 Their dataset is less than 1 TB and, during data load, one of the nodes enters a 
 GC death spiral:
 {noformat}
  INFO [ScheduledTasks:1] 2014-11-07 23:31:08,094 GCInspector.java (line 116) 
 GC for ConcurrentMarkSweep: 3348 ms for 2 collections, 1658268944 used; max 
 is 8375238656
  INFO [ScheduledTasks:1] 2014-11-07 23:40:58,486 GCInspector.java (line 116) 
 GC for ParNew: 442 ms for 2 collections, 6079570032 used; max is 8375238656
  INFO [ScheduledTasks:1] 2014-11-07 23:40:58,487 GCInspector.java (line 116) 
 GC for ConcurrentMarkSweep: 7351 ms for 2 collections, 6084678280 used; max 
 is 8375238656
  INFO [ScheduledTasks:1] 2014-11-07 23:41:01,836 GCInspector.java (line 116) 
 GC for ConcurrentMarkSweep: 603 ms for 1 collections, 7132546096 used; max is 
 8375238656
  INFO [ScheduledTasks:1] 2014-11-07 23:41:09,626 GCInspector.java (line 116) 
 GC for ConcurrentMarkSweep: 761 ms for 1 collections, 7286946984 used; max is 
 8375238656
  INFO [ScheduledTasks:1] 2014-11-07 23:41:15,265 GCInspector.java (line 116) 
 GC for ConcurrentMarkSweep: 703 ms for 1 collections, 7251213520 used; max is 
 8375238656
  INFO [ScheduledTasks:1] 2014-11-07 23:41:25,027 GCInspector.java (line 116) 
 GC for ConcurrentMarkSweep: 1205 ms for 1 collections, 6507586104 used; max 
 is 8375238656
  INFO [ScheduledTasks:1] 2014-11-07 23:41:41,374 GCInspector.java (line 116) 
 GC for ConcurrentMarkSweep: 13835 ms for 3 collections, 6514187192 used; max 
 is 8375238656
  INFO [ScheduledTasks:1] 2014-11-07 23:41:54,137 GCInspector.java (line 116) 
 GC for ConcurrentMarkSweep: 6834 ms for 2 collections, 6521656200 used; max 
 is 8375238656
 ...
  INFO [ScheduledTasks:1] 2014-11-08 12:13:11,086 GCInspector.java (line 116) 
 GC for ConcurrentMarkSweep: 43967 ms for 2 collections, 8368777672 used; max 
 is 8375238656
  INFO [ScheduledTasks:1] 2014-11-08 12:14:14,151 GCInspector.java (line 116) 
 GC for ConcurrentMarkSweep: 63968 ms for 3 collections, 8369623824 used; max 
 is 8375238656
  INFO [ScheduledTasks:1] 2014-11-08 12:14:55,643 GCInspector.java (line 116) 
 GC for ConcurrentMarkSweep: 41307 ms for 2 collections, 8370115376 used; max 
 is 8375238656
  INFO [ScheduledTasks:1] 2014-11-08 12:20:06,197 GCInspector.java (line 116) 
 GC for ConcurrentMarkSweep: 309634 ms for 15 collections, 8374994928 used; 
 max is 8375238656
  INFO [ScheduledTasks:1] 2014-11-08 13:07:33,617 GCInspector.java (line 116) 
 GC for ConcurrentMarkSweep: 2681100 ms for 143 collections, 8347631560 used; 
 max is 8375238656
 {noformat} 
 Their application waits 1 minute before a retry when a timeout is returned.
 This is what we find in their heapdumps:
 {noformat}
 Class Name                                      | Shallow Heap | Retained Heap | Percentage
 --------------------------------------------------------------------------------------------
 org.apache.cassandra.db.Memtable @ 0x773f52f80  |           72 | 8,086,073,504 | 96.66%
 |- 

[jira] [Updated] (CASSANDRA-8053) Support for user defined aggregate functions

2014-11-17 Thread Jonathan Ellis (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8053?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis updated CASSANDRA-8053:
--
Reviewer: Tyler Hobbs

[~thobbs] to review

 Support for user defined aggregate functions
 

 Key: CASSANDRA-8053
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8053
 Project: Cassandra
  Issue Type: New Feature
Reporter: Robert Stupp
Assignee: Robert Stupp
  Labels: cql, udf
 Fix For: 3.0

 Attachments: 8053v1.txt


 CASSANDRA-4914 introduces aggregate functions.
 This ticket is about deciding how we can support user defined aggregate 
 functions. UD aggregate functions should be supported for all UDF flavors 
 (class, java, jsr223).
 Things to consider:
 * Special implementations for each scripting language should be omitted
 * No exposure of internal APIs (e.g. {{AggregateFunction}} interface)
 * No need for users to deal with serializers / codecs



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-8308) Windows: Commitlog access violations on unit tests

2014-11-17 Thread Joshua McKenzie (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8308?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joshua McKenzie updated CASSANDRA-8308:
---
Attachment: 8308_v1.txt

# Swapped to FileChannel on CommitLogSegment
# Use reflection to grab FD out of it to use in trySkipCache (new method in 
CLibrary)
# Added clibrary method for strerror
# Added logger warning on non-0 result from trySkipCache w/error message
# Tweaked logic on SchemaLoader to recursively delete contents of commitlog 
folder rather than the folder itself

This fixes 3 of the 4 unit tests above; SSTableRewriterTest assumes that when an 
sstable is successfully deleted it won't be present in the filesystem. However, 
on Windows, when you delete a file with FILE_SHARE_DELETE and something else 
still has a handle to it, the file stays in its original location on the drive 
rather than being removed with a link preserved in the /proc filesystem.  I'll 
open another ticket to make the SSTableRewriter tests more multi-platform 
friendly.

The warning on trySkipCache has uncovered that we have invalid fd references on 
our trySkipCache calls in SSTR.cloneWithNewStart - I haven't dug into it yet 
but it might be an ordering issue where a file's deleted before we 
posix_fadvise it.
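For reference, a minimal sketch of the reflection trick in steps 1-2, assuming the JDK-internal field names ("fd" on the FileChannel implementation and on FileDescriptor), which are not a public API:

{code}
import java.io.FileDescriptor;
import java.lang.reflect.Field;
import java.nio.channels.FileChannel;

public class FdExtractor
{
    // Sketch only: pull the int fd out of a FileChannel via reflection so
    // it can be handed to a native call such as posix_fadvise.  The field
    // names are JDK internals and may differ between JVMs.
    public static int getFd(FileChannel channel)
    {
        try
        {
            Field fdField = channel.getClass().getDeclaredField("fd");
            fdField.setAccessible(true);
            FileDescriptor descriptor = (FileDescriptor) fdField.get(channel);
            Field intField = FileDescriptor.class.getDeclaredField("fd");
            intField.setAccessible(true);
            return intField.getInt(descriptor);
        }
        catch (Exception e)
        {
            return -1; // caller should skip the fadvise hint on failure
        }
    }
}
{code}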

 Windows: Commitlog access violations on unit tests
 --

 Key: CASSANDRA-8308
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8308
 Project: Cassandra
  Issue Type: Bug
Reporter: Joshua McKenzie
Assignee: Joshua McKenzie
Priority: Minor
  Labels: Windows
 Fix For: 3.0

 Attachments: 8308_v1.txt


 We have four unit tests failing on trunk on Windows, all with 
 FileSystemException's related to the SchemaLoader:
 {noformat}
 [junit] Test 
 org.apache.cassandra.db.compaction.DateTieredCompactionStrategyTest FAILED
 [junit] Test org.apache.cassandra.cql3.ThriftCompatibilityTest FAILED
 [junit] Test org.apache.cassandra.io.sstable.SSTableRewriterTest FAILED
 [junit] Test org.apache.cassandra.repair.LocalSyncTaskTest FAILED
 {noformat}
 Example error:
 {noformat}
 [junit] Caused by: java.nio.file.FileSystemException: 
 build\test\cassandra\commitlog;0\CommitLog-5-1415908745965.log: The process 
 cannot access the file because it is being used by another process.
 [junit]
 [junit] at 
 sun.nio.fs.WindowsException.translateToIOException(WindowsException.java:86)
 [junit] at 
 sun.nio.fs.WindowsException.rethrowAsIOException(WindowsException.java:97)
 [junit] at 
 sun.nio.fs.WindowsException.rethrowAsIOException(WindowsException.java:102)
 [junit] at 
 sun.nio.fs.WindowsFileSystemProvider.implDelete(WindowsFileSystemProvider.java:269)
 [junit] at 
 sun.nio.fs.AbstractFileSystemProvider.delete(AbstractFileSystemProvider.java:103)
 [junit] at java.nio.file.Files.delete(Files.java:1079)
 [junit] at 
 org.apache.cassandra.io.util.FileUtils.deleteWithConfirm(FileUtils.java:125)
 {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-8328) Expose Thread Pool maximum pool size in metrics

2014-11-17 Thread Jonathan Ellis (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8328?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis updated CASSANDRA-8328:
--
Reviewer: Joshua McKenzie
Assignee: Chris Lohfink

[~JoshuaMcKenzie] to review

 Expose Thread Pool maximum pool size in metrics
 ---

 Key: CASSANDRA-8328
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8328
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: Chris Lohfink
Assignee: Chris Lohfink
Priority: Trivial
  Labels: metrics
 Attachments: 0001-add-max-pool-size-metrics-for-CASSANDRA-8328.patch


 The max pool size is exposed in the original 
 (o.a.c.internal/request/transport) metrics from CASSANDRA-5044, but it is not 
 available in the o.a.c.metrics beans.  It is a pretty minor change to also 
 expose it there, which gives context to the maximum number of active tasks.
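For illustration, a sketch of the kind of change described, assuming the Dropwizard Metrics API that the o.a.c.metrics beans are built on; the metric names here are made up:

{code}
import java.util.concurrent.ThreadPoolExecutor;

import com.codahale.metrics.Gauge;
import com.codahale.metrics.MetricRegistry;

public class MaxPoolSizeMetric
{
    // Illustrative sketch: publish the executor's maximum pool size as a
    // gauge next to the existing active/pending task metrics.
    public static void register(MetricRegistry registry, String poolName, final ThreadPoolExecutor executor)
    {
        registry.register(MetricRegistry.name("ThreadPools", poolName, "MaxPoolSize"),
                          new Gauge<Integer>()
                          {
                              public Integer getValue()
                              {
                                  return executor.getMaximumPoolSize();
                              }
                          });
    }
}
{code}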



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-8228) Log malfunctioning host on prepareForRepair

2014-11-17 Thread Michael Shuler (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8228?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Shuler updated CASSANDRA-8228:
--
Reviewer: Yuki Morishita

 Log malfunctioning host on prepareForRepair
 ---

 Key: CASSANDRA-8228
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8228
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: Juho Mäkinen
Assignee: Rajanarayanan Thottuvaikkatumana
Priority: Trivial
  Labels: lhf
 Attachments: cassandra-trunk-8228.txt


 Repair startup goes through ActiveRepairService.prepareForRepair(), which might 
 result in a "Repair failed with error Did not get positive replies from all 
 endpoints." error, but there's no other logging regarding this error.
 It seems that it would be trivial to modify prepareForRepair() to log the 
 host address which caused the error, thus easing the debugging effort.
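A minimal sketch of what that logging could look like, independent of the attached patch; the class and method names here are hypothetical:

{code}
import java.net.InetAddress;
import java.util.Collections;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;

// Hypothetical sketch: track which endpoints have not yet acknowledged
// the prepare message, and name them when the wait fails instead of the
// generic "Did not get positive replies from all endpoints".
public class PrepareTracker
{
    private final Set<InetAddress> pending =
            Collections.newSetFromMap(new ConcurrentHashMap<InetAddress, Boolean>());
    private final CountDownLatch latch;

    public PrepareTracker(Set<InetAddress> endpoints)
    {
        pending.addAll(endpoints);
        latch = new CountDownLatch(endpoints.size());
    }

    public void onReply(InetAddress from) // invoked from the response callback
    {
        if (pending.remove(from))
            latch.countDown();
    }

    public void await(long timeout, TimeUnit unit) throws InterruptedException
    {
        if (!latch.await(timeout, unit))
            throw new RuntimeException("Did not get positive replies from all endpoints; no reply from: " + pending);
    }
}
{code}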



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-8327) snapshots taken before repair are not cleared if snapshot fails

2014-11-17 Thread Michael Shuler (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8327?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Shuler updated CASSANDRA-8327:
--
Assignee: Yuki Morishita

 snapshots taken before repair are not cleared if snapshot fails
 ---

 Key: CASSANDRA-8327
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8327
 Project: Cassandra
  Issue Type: Bug
  Components: Core
 Environment: cassandra 2.0.10.71
Reporter: MASSIMO CELLI
Assignee: Yuki Morishita
Priority: Minor

 While running the repair service, the following directory was created for the 
 snapshots:
 drwxr-xr-x 2 cassandra cassandra 36864 Nov 5 07:47 
 073d16e0-64c0-11e4-8e9a-7b3d4674c508 
 but the system.log reports the following error, which suggests the snapshot 
 failed:
 ERROR [RMI TCP Connection(3251)-10.150.27.78] 2014-11-05 07:47:55,734 
 StorageService.java (line 2599) Repair session 
 073d16e0-64c0-11e4-8e9a-7b3d4674c508 for range 
 (7530018576963469312,7566047373982433280] failed with error 
 java.io.IOException: Failed during snapshot creation. 
 java.util.concurrent.ExecutionException: java.lang.RuntimeException: 
 java.io.IOException: Failed during snapshot creation.  ERROR 
 [AntiEntropySessions:3312] 2014-11-05 07:47:55,731 RepairSession.java (line 
 288) [repair #073d16e0-64c0-11e4-8e9a-7b3d4674c508] session completed with 
 the following error java.io.IOException: Failed during snapshot creation.
 The problem is that the directories for snapshots that fail are just left 
 on the disk and don't get cleaned up. They must be removed manually, which is 
 not ideal.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8326) Cqlsh cannot connect in trunk

2014-11-17 Thread Aleksey Yeschenko (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8326?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14215468#comment-14215468
 ] 

Aleksey Yeschenko commented on CASSANDRA-8326:
--

FWIW the python driver shouldn't have needed the alias columns since 2.0, nor 
index_interval since 2.1. And *all* of these tables will be gone after 
CASSANDRA-6717.

 Cqlsh cannot connect in trunk
 -

 Key: CASSANDRA-8326
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8326
 Project: Cassandra
  Issue Type: Bug
Reporter: Philip Thompson
Assignee: Tyler Hobbs
 Fix For: 3.0


 Cqlsh errors with {code}('Unable to connect to any servers', {'127.0.0.1': 
 KeyError('column_aliases',)}){code}. To fix, we need to pull in a newer 
 version of the python driver that doesn't assume certain metadata exists. 
 This was broken by the cassandra commit 
 {{cbbc1191ce1656a92354a4fa3859626cb10083e5}}. A fix should be in version 
 2.1.3 of the driver.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (CASSANDRA-7830) Decommissioning fails on a live node

2014-11-17 Thread Michael Shuler (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-7830?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Shuler resolved CASSANDRA-7830.
---
   Resolution: Not a Problem
Reproduced In: 2.0.11, 2.0.3  (was: 2.0.3, 2.0.11)

In this case, the error is clear: "data is currently moving to this node; 
unable to leave the ring"

Without knowing clearly what ring changes were made and the statuses, etc., 
it's difficult to tell why that occurred. In addition, at this point, the node 
is gone, so it would be difficult to troubleshoot any longer  :)

 Decommissioning fails on a live node
 

 Key: CASSANDRA-7830
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7830
 Project: Cassandra
  Issue Type: Bug
Reporter: Ananthkumar K S

 {code}Exception in thread "main" java.lang.UnsupportedOperationException: data 
 is currently moving to this node; unable to leave the ring
     at org.apache.cassandra.service.StorageService.decommission(StorageService.java:2629)
     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
     at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
     at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
     at java.lang.reflect.Method.invoke(Method.java:601)
     at com.sun.jmx.mbeanserver.StandardMBeanIntrospector.invokeM2(StandardMBeanIntrospector.java:111)
     at com.sun.jmx.mbeanserver.StandardMBeanIntrospector.invokeM2(StandardMBeanIntrospector.java:45)
     at com.sun.jmx.mbeanserver.MBeanIntrospector.invokeM(MBeanIntrospector.java:235)
     at com.sun.jmx.mbeanserver.PerInterface.invoke(PerInterface.java:138)
     at com.sun.jmx.mbeanserver.MBeanSupport.invoke(MBeanSupport.java:250)
     at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.invoke(DefaultMBeanServerInterceptor.java:819)
     at com.sun.jmx.mbeanserver.JmxMBeanServer.invoke(JmxMBeanServer.java:791)
     at javax.management.remote.rmi.RMIConnectionImpl.doOperation(RMIConnectionImpl.java:1486)
     at javax.management.remote.rmi.RMIConnectionImpl.access$300(RMIConnectionImpl.java:96)
     at javax.management.remote.rmi.RMIConnectionImpl$PrivilegedOperation.run(RMIConnectionImpl.java:1327)
     at javax.management.remote.rmi.RMIConnectionImpl.doPrivilegedOperation(RMIConnectionImpl.java:1419)
     at javax.management.remote.rmi.RMIConnectionImpl.invoke(RMIConnectionImpl.java:847)
     at sun.reflect.GeneratedMethodAccessor28.invoke(Unknown Source)
     at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
     at java.lang.reflect.Method.invoke(Method.java:601)
     at sun.rmi.server.UnicastServerRef.dispatch(UnicastServerRef.java:322)
     at sun.rmi.transport.Transport$1.run(Transport.java:177)
     at sun.rmi.transport.Transport$1.run(Transport.java:174)
     at java.security.AccessController.doPrivileged(Native Method)
     at sun.rmi.transport.Transport.serviceCall(Transport.java:173)
     at sun.rmi.transport.tcp.TCPTransport.handleMessages(TCPTransport.java:556)
     at sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run0(TCPTransport.java:811)
     at sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run(TCPTransport.java:670)
     at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
     at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
     at java.lang.Thread.run(Thread.java:722){code}
 I got the following exception when I was trying to decommission a live node. 
 There is no reference in the manual saying that I need to stop the data 
 coming into this node. Even then, decommissioning is specified for live nodes.
 Can anyone let me know if I am doing something wrong, or if this is a bug on 
 Cassandra's part?
 Cassandra Version Used : 2.0.3



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-7281) SELECT on tuple relations are broken for mixed ASC/DESC clustering order

2014-11-17 Thread Marcin Szymaniuk (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-7281?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marcin Szymaniuk updated CASSANDRA-7281:

Attachment: 0001-CASSANDRA-7281-SELECT-on-tuple-relations-are-broken-.patch

This patch makes the tuple query work.

There are two concerns though:
1. I don't get why we have the last-line assert in 
testMultipleClusteringReversedComponents. The fact that the result set is not 
empty now seems correct to me.

2. Not sure about validateSlices. The way slices are represented makes it fail 
sometimes. Not sure if it's a bug in my code or if the validation is not correct 
anymore. Here is an example of the validation failing:

create table foo (a int, b int, c int, d int, e int, PRIMARY KEY (a, b, c, d, e))
WITH CLUSTERING ORDER BY (b DESC, c ASC, d ASC, e DESC);

SELECT * FROM foo WHERE a=0 AND (b, c, d, e) > (0, 1, 1, 0);

I transform this restriction into 4 independent restrictions:
b>0
b=0 c=1 d=1 e>0
b=0 c=1 d>1
b=0 c>1
When we create slices out of it, it ends up as:
slices = [[, 0004ff], [000404000104000100, 
000404000104000104ff], 
[000404000104000101, 000404000101], 
[000404000101, 000401]]

so slices[2][1]==slices[3][0], which seems fine to me (maybe not optimal?)

Then this condition in validateSlices makes it fail:
if (i > 0 && comparator.compare(slices[i - 1].finish, start) >= 0)

In the patch I commented out the problematic condition in the validation and the 
query works. If the validation is still valid I will have to fix the code, but I 
need some suggestions. 
Also I have a bunch of dtests. Do I submit them independently?
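For clarity, a standalone sketch of the lexicographic decomposition described above (a hypothetical helper, not part of the patch; the per-column clustering order only affects how each restriction maps to a slice, not the split itself):

{code}
import java.util.ArrayList;
import java.util.List;

public class TupleDecomposition
{
    // Sketch: (c0, ..., cn-1) > (v0, ..., vn-1) becomes n disjoint
    // restrictions, each equating a prefix and comparing the next
    // column strictly.
    public static List<String> decompose(String[] cols, int[] vals)
    {
        List<String> restrictions = new ArrayList<String>();
        for (int i = 0; i < cols.length; i++)
        {
            StringBuilder sb = new StringBuilder();
            for (int j = 0; j < i; j++)
                sb.append(cols[j]).append('=').append(vals[j]).append(' ');
            sb.append(cols[i]).append('>').append(vals[i]);
            restrictions.add(sb.toString());
        }
        return restrictions;
    }
}
// decompose(new String[]{ "b", "c", "d", "e" }, new int[]{ 0, 1, 1, 0 }) yields:
//   b>0
//   b=0 c>1
//   b=0 c=1 d>1
//   b=0 c=1 d=1 e>0
{code}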






 SELECT on tuple relations are broken for mixed ASC/DESC clustering order
 

 Key: CASSANDRA-7281
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7281
 Project: Cassandra
  Issue Type: Bug
Reporter: Sylvain Lebresne
 Fix For: 2.0.12

 Attachments: 
 0001-CASSANDRA-7281-SELECT-on-tuple-relations-are-broken-.patch


 As noted on 
 [CASSANDRA-6875|https://issues.apache.org/jira/browse/CASSANDRA-6875?focusedCommentId=13992153page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-13992153],
  the tuple notation is broken when the clustering order mixes ASC and DESC 
 directives because the range of data they describe doesn't correspond to a 
 single continuous slice internally. To copy the example from CASSANDRA-6875:
 {noformat}
 cqlsh:ks> create table foo (a int, b int, c int, PRIMARY KEY (a, b, c)) WITH 
 CLUSTERING ORDER BY (b DESC, c ASC);
 cqlsh:ks> INSERT INTO foo (a, b, c) VALUES (0, 2, 0);
 cqlsh:ks> INSERT INTO foo (a, b, c) VALUES (0, 1, 0);
 cqlsh:ks> INSERT INTO foo (a, b, c) VALUES (0, 1, 1);
 cqlsh:ks> INSERT INTO foo (a, b, c) VALUES (0, 0, 0);
 cqlsh:ks> SELECT * FROM foo WHERE a=0;
  a | b | c
 ---+---+---
  0 | 2 | 0
  0 | 1 | 0
  0 | 1 | 1
  0 | 0 | 0
 (4 rows)
 cqlsh:ks> SELECT * FROM foo WHERE a=0 AND (b, c) > (1, 0);
  a | b | c
 ---+---+---
  0 | 2 | 0
 (1 rows)
 {noformat}
 The last query should really return {{(0, 2, 0)}} and {{(0, 1, 1)}}.
 For that specific example we should generate 2 internal slices, but I believe 
 that with more clustering columns we may have more slices.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-7281) SELECT on tuple relations are broken for mixed ASC/DESC clustering order

2014-11-17 Thread Michael Shuler (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7281?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14215488#comment-14215488
 ] 

Michael Shuler commented on CASSANDRA-7281:
---

Nit:  if you're going to remove lines in 
{{src/java/org/apache/cassandra/db/filter/ColumnSlice.java}}, remove them, 
please; don't comment them out.

 SELECT on tuple relations are broken for mixed ASC/DESC clustering order
 

 Key: CASSANDRA-7281
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7281
 Project: Cassandra
  Issue Type: Bug
Reporter: Sylvain Lebresne
 Fix For: 2.0.12

 Attachments: 
 0001-CASSANDRA-7281-SELECT-on-tuple-relations-are-broken-.patch


 As noted on 
 [CASSANDRA-6875|https://issues.apache.org/jira/browse/CASSANDRA-6875?focusedCommentId=13992153page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-13992153],
  the tuple notation is broken when the clustering order mixes ASC and DESC 
 directives because the range of data they describe doesn't correspond to a 
 single continuous slice internally. To copy the example from CASSANDRA-6875:
 {noformat}
 cqlsh:ks> create table foo (a int, b int, c int, PRIMARY KEY (a, b, c)) WITH 
 CLUSTERING ORDER BY (b DESC, c ASC);
 cqlsh:ks> INSERT INTO foo (a, b, c) VALUES (0, 2, 0);
 cqlsh:ks> INSERT INTO foo (a, b, c) VALUES (0, 1, 0);
 cqlsh:ks> INSERT INTO foo (a, b, c) VALUES (0, 1, 1);
 cqlsh:ks> INSERT INTO foo (a, b, c) VALUES (0, 0, 0);
 cqlsh:ks> SELECT * FROM foo WHERE a=0;
  a | b | c
 ---+---+---
  0 | 2 | 0
  0 | 1 | 0
  0 | 1 | 1
  0 | 0 | 0
 (4 rows)
 cqlsh:ks> SELECT * FROM foo WHERE a=0 AND (b, c) > (1, 0);
  a | b | c
 ---+---+---
  0 | 2 | 0
 (1 rows)
 {noformat}
 The last query should really return {{(0, 2, 0)}} and {{(0, 1, 1)}}.
 For that specific example we should generate 2 internal slices, but I believe 
 that with more clustering columns we may have more slices.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-7281) SELECT on tuple relations are broken for mixed ASC/DESC clustering order

2014-11-17 Thread Michael Shuler (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7281?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14215496#comment-14215496
 ] 

Michael Shuler commented on CASSANDRA-7281:
---

Missed the dtest comment - send those as a pull request on 
https://github.com/riptano/cassandra-dtest - thanks!

 SELECT on tuple relations are broken for mixed ASC/DESC clustering order
 

 Key: CASSANDRA-7281
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7281
 Project: Cassandra
  Issue Type: Bug
Reporter: Sylvain Lebresne
 Fix For: 2.0.12

 Attachments: 
 0001-CASSANDRA-7281-SELECT-on-tuple-relations-are-broken-.patch


 As noted on 
 [CASSANDRA-6875|https://issues.apache.org/jira/browse/CASSANDRA-6875?focusedCommentId=13992153page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-13992153],
  the tuple notation is broken when the clustering order mixes ASC and DESC 
 directives because the range of data they describe doesn't correspond to a 
 single continuous slice internally. To copy the example from CASSANDRA-6875:
 {noformat}
 cqlsh:ks> create table foo (a int, b int, c int, PRIMARY KEY (a, b, c)) WITH 
 CLUSTERING ORDER BY (b DESC, c ASC);
 cqlsh:ks> INSERT INTO foo (a, b, c) VALUES (0, 2, 0);
 cqlsh:ks> INSERT INTO foo (a, b, c) VALUES (0, 1, 0);
 cqlsh:ks> INSERT INTO foo (a, b, c) VALUES (0, 1, 1);
 cqlsh:ks> INSERT INTO foo (a, b, c) VALUES (0, 0, 0);
 cqlsh:ks> SELECT * FROM foo WHERE a=0;
  a | b | c
 ---+---+---
  0 | 2 | 0
  0 | 1 | 0
  0 | 1 | 1
  0 | 0 | 0
 (4 rows)
 cqlsh:ks> SELECT * FROM foo WHERE a=0 AND (b, c) > (1, 0);
  a | b | c
 ---+---+---
  0 | 2 | 0
 (1 rows)
 {noformat}
 The last query should really return {{(0, 2, 0)}} and {{(0, 1, 1)}}.
 For that specific example we should generate 2 internal slices, but I believe 
 that with more clustering columns we may have more slices.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-7281) SELECT on tuple relations are broken for mixed ASC/DESC clustering order

2014-11-17 Thread Michael Shuler (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-7281?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Shuler updated CASSANDRA-7281:
--
Reviewer: Sylvain Lebresne

 SELECT on tuple relations are broken for mixed ASC/DESC clustering order
 

 Key: CASSANDRA-7281
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7281
 Project: Cassandra
  Issue Type: Bug
Reporter: Sylvain Lebresne
 Fix For: 2.0.12

 Attachments: 
 0001-CASSANDRA-7281-SELECT-on-tuple-relations-are-broken-.patch


 As noted on 
 [CASSANDRA-6875|https://issues.apache.org/jira/browse/CASSANDRA-6875?focusedCommentId=13992153page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-13992153],
  the tuple notation is broken when the clustering order mixes ASC and DESC 
 directives because the range of data they describe doesn't correspond to a 
 single continuous slice internally. To copy the example from CASSANDRA-6875:
 {noformat}
 cqlsh:ks> create table foo (a int, b int, c int, PRIMARY KEY (a, b, c)) WITH 
 CLUSTERING ORDER BY (b DESC, c ASC);
 cqlsh:ks> INSERT INTO foo (a, b, c) VALUES (0, 2, 0);
 cqlsh:ks> INSERT INTO foo (a, b, c) VALUES (0, 1, 0);
 cqlsh:ks> INSERT INTO foo (a, b, c) VALUES (0, 1, 1);
 cqlsh:ks> INSERT INTO foo (a, b, c) VALUES (0, 0, 0);
 cqlsh:ks> SELECT * FROM foo WHERE a=0;
  a | b | c
 ---+---+---
  0 | 2 | 0
  0 | 1 | 0
  0 | 1 | 1
  0 | 0 | 0
 (4 rows)
 cqlsh:ks> SELECT * FROM foo WHERE a=0 AND (b, c) > (1, 0);
  a | b | c
 ---+---+---
  0 | 2 | 0
 (1 rows)
 {noformat}
 The last query should really return {{(0, 2, 0)}} and {{(0, 1, 1)}}.
 For that specific example we should generate 2 internal slices, but I believe 
 that with more clustering columns we may have more slices.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (CASSANDRA-8330) Confusing Message: ConfigurationException: Found system keyspace files, but they couldn't be loaded!

2014-11-17 Thread Karl Mueller (JIRA)
Karl Mueller created CASSANDRA-8330:
---

 Summary: Confusing Message: ConfigurationException: Found system 
keyspace files, but they couldn't be loaded!
 Key: CASSANDRA-8330
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8330
 Project: Cassandra
  Issue Type: Bug
 Environment: cassandra 2.0.10
Reporter: Karl Mueller
Priority: Minor


I restarted a node which was not responding to cqlsh. It produced this error:

 INFO [SSTableBatchOpen:3] 2014-11-17 16:36:50,388 SSTableReader.java (line 
223) Opening /data2/data-cassandra/system/local/system-local-jb-304 (133 bytes)
 INFO [SSTableBatchOpen:2] 2014-11-17 16:36:50,388 SSTableReader.java (line 
223) Opening /data2/data-cassandra/system/local/system-local-jb-305 (80 bytes)
 INFO [main] 2014-11-17 16:36:50,393 AutoSavingCache.java (line 114) reading 
saved cache /data2/cache-cassandra/system-local-KeyCache-b.db
ERROR [main] 2014-11-17 16:36:50,543 CassandraDaemon.java (line 265) Fatal 
exception during initialization
org.apache.cassandra.exceptions.ConfigurationException: Found system keyspace 
files, but they couldn't be loaded!
at 
org.apache.cassandra.db.SystemKeyspace.checkHealth(SystemKeyspace.java:554)
at 
org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:261)
at 
org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:496)
at 
org.apache.cassandra.service.CassandraDaemon.main(CassandraDaemon.java:585)

After deleting the cache, I still got this error:

 INFO 16:41:43,718 Opening 
/data2/data-cassandra/system/local/system-local-jb-304 (133 bytes)
 INFO 16:41:43,718 Opening 
/data2/data-cassandra/system/local/system-local-jb-305 (80 bytes)
ERROR 16:41:43,877 Fatal exception during initialization
org.apache.cassandra.exceptions.ConfigurationException: Found system keyspace 
files, but they couldn't be loaded!
at 
org.apache.cassandra.db.SystemKeyspace.checkHealth(SystemKeyspace.java:554)
at 
org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:261)
at 
org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:496)
at 
org.apache.cassandra.service.CassandraDaemon.main(CassandraDaemon.java:585)



I think possibly the node had corrupted one of the files due to it being in a 
bad state. This would be impossible to replicate, so I don't think the actual 
bug is that helpful.

What I did find very confusing was the error message. There's nothing to 
indicate what the problem is! Is it a corrupt file? A valid file with bad 
information in it? Referencing something that doesn't exist?! 

I fixed it by deleting the system keyspace and starting it with its token, but 
many people wouldn't know to do that at all.




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8295) Cassandra runs OOM @ java.util.concurrent.ConcurrentSkipListMap$HeadIndex

2014-11-17 Thread Jose Martinez Poblete (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8295?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14215590#comment-14215590
 ] 

Jose Martinez Poblete commented on CASSANDRA-8295:
--

Thanks [~jbellis]

 Cassandra runs OOM @ java.util.concurrent.ConcurrentSkipListMap$HeadIndex
 -

 Key: CASSANDRA-8295
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8295
 Project: Cassandra
  Issue Type: Bug
  Components: Core
 Environment: DSE 4.5.3 Cassandra 2.0.11.82
Reporter: Jose Martinez Poblete
 Attachments: alln01-ats-cas3.cassandra.yaml, output.tgz, system.tgz, 
 system.tgz.1, system.tgz.2, system.tgz.3


 Customer runs a 3 node cluster. 
 Their dataset is less than 1 TB and, during data load, one of the nodes enters a 
 GC death spiral:
 {noformat}
  INFO [ScheduledTasks:1] 2014-11-07 23:31:08,094 GCInspector.java (line 116) 
 GC for ConcurrentMarkSweep: 3348 ms for 2 collections, 1658268944 used; max 
 is 8375238656
  INFO [ScheduledTasks:1] 2014-11-07 23:40:58,486 GCInspector.java (line 116) 
 GC for ParNew: 442 ms for 2 collections, 6079570032 used; max is 8375238656
  INFO [ScheduledTasks:1] 2014-11-07 23:40:58,487 GCInspector.java (line 116) 
 GC for ConcurrentMarkSweep: 7351 ms for 2 collections, 6084678280 used; max 
 is 8375238656
  INFO [ScheduledTasks:1] 2014-11-07 23:41:01,836 GCInspector.java (line 116) 
 GC for ConcurrentMarkSweep: 603 ms for 1 collections, 7132546096 used; max is 
 8375238656
  INFO [ScheduledTasks:1] 2014-11-07 23:41:09,626 GCInspector.java (line 116) 
 GC for ConcurrentMarkSweep: 761 ms for 1 collections, 7286946984 used; max is 
 8375238656
  INFO [ScheduledTasks:1] 2014-11-07 23:41:15,265 GCInspector.java (line 116) 
 GC for ConcurrentMarkSweep: 703 ms for 1 collections, 7251213520 used; max is 
 8375238656
  INFO [ScheduledTasks:1] 2014-11-07 23:41:25,027 GCInspector.java (line 116) 
 GC for ConcurrentMarkSweep: 1205 ms for 1 collections, 6507586104 used; max 
 is 8375238656
  INFO [ScheduledTasks:1] 2014-11-07 23:41:41,374 GCInspector.java (line 116) 
 GC for ConcurrentMarkSweep: 13835 ms for 3 collections, 6514187192 used; max 
 is 8375238656
  INFO [ScheduledTasks:1] 2014-11-07 23:41:54,137 GCInspector.java (line 116) 
 GC for ConcurrentMarkSweep: 6834 ms for 2 collections, 6521656200 used; max 
 is 8375238656
 ...
  INFO [ScheduledTasks:1] 2014-11-08 12:13:11,086 GCInspector.java (line 116) 
 GC for ConcurrentMarkSweep: 43967 ms for 2 collections, 8368777672 used; max 
 is 8375238656
  INFO [ScheduledTasks:1] 2014-11-08 12:14:14,151 GCInspector.java (line 116) 
 GC for ConcurrentMarkSweep: 63968 ms for 3 collections, 8369623824 used; max 
 is 8375238656
  INFO [ScheduledTasks:1] 2014-11-08 12:14:55,643 GCInspector.java (line 116) 
 GC for ConcurrentMarkSweep: 41307 ms for 2 collections, 8370115376 used; max 
 is 8375238656
  INFO [ScheduledTasks:1] 2014-11-08 12:20:06,197 GCInspector.java (line 116) 
 GC for ConcurrentMarkSweep: 309634 ms for 15 collections, 8374994928 used; 
 max is 8375238656
  INFO [ScheduledTasks:1] 2014-11-08 13:07:33,617 GCInspector.java (line 116) 
 GC for ConcurrentMarkSweep: 2681100 ms for 143 collections, 8347631560 used; 
 max is 8375238656
 {noformat} 
 Their application waits 1 minute before a retry when a timeout is returned.
 This is what we find in their heapdumps:
 {noformat}
 Class Name                                                  | Shallow Heap | Retained Heap | Percentage
 ---------------------------------------------------------------------------------------------------------
 org.apache.cassandra.db.Memtable @ 0x773f52f80              |           72 | 8,086,073,504 | 96.66%
 |- java.util.concurrent.ConcurrentSkipListMap @ 0x724508fe8 |           48 | 8,086,073,320 | 96.66%
 |  |- 

[jira] [Created] (CASSANDRA-8331) Expose rate of change of values in histograms

2014-11-17 Thread Chris Lohfink (JIRA)
Chris Lohfink created CASSANDRA-8331:


 Summary: Expose rate of change of values in histograms
 Key: CASSANDRA-8331
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8331
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: Chris Lohfink
Priority: Minor


In the Metrics JMX Histograms, a Sum would have value for systems that want to 
take derivatives of the data to find rates of change, particularly if the timing 
of collection is not perfect.  The meters give some perspective on the rate at 
which things occur, but not of the values themselves.  This could be opened up 
in metrics, but it's been shot down:
https://github.com/dropwizard/metrics/pull/304
I think there's still value in this. He does have a point about the distribution 
of data within the period, but in small windows with not very extreme values 
(e.g. sstables per read per second) I think it's moot.  

Alternatively, we can provide a Meter for each of the histograms and use its 
implementation of an exponentially weighted moving average to get this value.
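A minimal sketch of the Meter alternative, assuming the Dropwizard Metrics API; ValueRateHistogram is a made-up name:

{code}
import com.codahale.metrics.Histogram;
import com.codahale.metrics.Meter;
import com.codahale.metrics.MetricRegistry;

// Sketch only: pair each histogram with a Meter that is marked by the
// observed value, so the meter's EWMA tracks the rate of the summed
// values rather than just the count of updates.
public class ValueRateHistogram
{
    private final Histogram histogram;
    private final Meter valueMeter;

    public ValueRateHistogram(MetricRegistry registry, String name)
    {
        histogram = registry.histogram(name);
        valueMeter = registry.meter(name + ".valueRate");
    }

    public void update(long value)
    {
        histogram.update(value);  // distribution, as today
        valueMeter.mark(value);   // EWMA of the values themselves
    }
}
{code}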



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8331) Expose rate of change of values in histograms

2014-11-17 Thread Chris Lohfink (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8331?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14215636#comment-14215636
 ] 

Chris Lohfink commented on CASSANDRA-8331:
--

I'm willing to take a shot at a patch for exposing Sum, or adding a meter to a 
few choice histograms, if it's a good idea.

 Expose rate of change of values in histograms
 -

 Key: CASSANDRA-8331
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8331
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: Chris Lohfink
Priority: Minor
  Labels: metrics

 In the Metrics JMX Histograms, a Sum would have value for systems that want to 
 take derivatives of the data to find rates of change, particularly if the 
 timing of collection is not perfect.  The meters give some perspective on the 
 rate at which things occur, but not of the values themselves.  This could be 
 opened up in metrics, but it's been shot down:
 https://github.com/dropwizard/metrics/pull/304
 I think there's still value in this. He does have a point about the 
 distribution of data within the period, but in small windows with not very 
 extreme values (e.g. sstables per read per second) I think it's moot.  
 Alternatively, we can provide a Meter for each of the histograms and use its 
 implementation of an exponentially weighted moving average to get this value.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-7386) JBOD threshold to prevent unbalanced disk utilization

2014-11-17 Thread Alan Boudreault (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7386?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14215660#comment-14215660
 ] 

Alan Boudreault commented on CASSANDRA-7386:


[~lyubent] I am currently testing the patch. What metrics did you use to 
generate your graphs (per disk)? Thanks

 JBOD threshold to prevent unbalanced disk utilization
 -

 Key: CASSANDRA-7386
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7386
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: Chris Lohfink
Assignee: Robert Stupp
Priority: Minor
 Fix For: 2.1.3

 Attachments: 7386-2.0-v3.txt, 7386-2.1-v3.txt, 7386-v1.patch, 
 7386v2.diff, Mappe1.ods, mean-writevalue-7disks.png, 
 patch_2_1_branch_proto.diff, sstable-count-second-run.png


 Currently the disks are picked first by number of current tasks, 
 then by free space.  This helps with performance but can lead to large 
 differences in utilization in some (unlikely but possible) scenarios.  I've 
 seen 55% to 10% and heard reports of 90% to 10% on IRC, with both LCS and 
 STCS (although my suspicion is that STCS makes it worse since it is harder to 
 keep balanced).
 I propose the algorithm change a little to have some maximum range of 
 utilization beyond which it will pick by free space over load (acknowledging 
 it can be slower).  So if disk A is 30% full and disk B is 5% full it will 
 never pick A over B until they balance out.
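For illustration, a sketch of the proposed pick under stated assumptions: Disk is a stand-in for the real directory abstraction, and maxSpread is the proposed utilization threshold.

{code}
import java.util.List;

public class BalancedDiskPicker
{
    static class Disk
    {
        int activeTasks;
        long used;
        long capacity;

        double utilization()
        {
            return (double) used / capacity;
        }
    }

    // Keep the current tasks-first ordering, but when the least-busy disk
    // is more than maxSpread fuller than the emptiest disk, pick by free
    // space instead so utilization can balance out.
    static Disk pick(List<Disk> disks, double maxSpread)
    {
        Disk leastBusy = disks.get(0), leastUsed = disks.get(0);
        for (Disk d : disks)
        {
            if (d.activeTasks < leastBusy.activeTasks)
                leastBusy = d;
            if (d.utilization() < leastUsed.utilization())
                leastUsed = d;
        }
        return leastBusy.utilization() - leastUsed.utilization() > maxSpread ? leastUsed : leastBusy;
    }
}
{code}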



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-7386) JBOD threshold to prevent unbalanced disk utilization

2014-11-17 Thread Lyuben Todorov (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7386?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14215665#comment-14215665
 ] 

Lyuben Todorov commented on CASSANDRA-7386:
---

[~aboudreault] It's *WriteValueMean* (wvm) that I used. The graph shows the wvm 
drop for the five disks that were un-saturated (they have a higher wvm to begin 
with during the second run as they are chosen more frequently over the two 
drives that were previously saturated).  

 JBOD threshold to prevent unbalanced disk utilization
 -

 Key: CASSANDRA-7386
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7386
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: Chris Lohfink
Assignee: Robert Stupp
Priority: Minor
 Fix For: 2.1.3

 Attachments: 7386-2.0-v3.txt, 7386-2.1-v3.txt, 7386-v1.patch, 
 7386v2.diff, Mappe1.ods, mean-writevalue-7disks.png, 
 patch_2_1_branch_proto.diff, sstable-count-second-run.png


 Currently the pick the disks are picked first by number of current tasks, 
 then by free space.  This helps with performance but can lead to large 
 differences in utilization in some (unlikely but possible) scenarios.  Ive 
 seen 55% to 10% and heard reports of 90% to 10% on IRC.  With both LCS and 
 STCS (although my suspicion is that STCS makes it worse since harder to be 
 balanced).
 I purpose the algorithm change a little to have some maximum range of 
 utilization where it will pick by free space over load (acknowledging it can 
 be slower).  So if a disk A is 30% full and disk B is 5% full it will never 
 pick A over B until it balances out.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (CASSANDRA-8332) Null pointer after dropping keyspace

2014-11-17 Thread Chris Lohfink (JIRA)
Chris Lohfink created CASSANDRA-8332:


 Summary: Null pointer after dropping keyspace
 Key: CASSANDRA-8332
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8332
 Project: Cassandra
  Issue Type: Bug
Reporter: Chris Lohfink
Priority: Minor


After dropping a keyspace, sometimes I see this in the logs:
{code}
ERROR 03:40:29 Exception in thread Thread[CompactionExecutor:2,1,main]
java.lang.AssertionError: null
at 
org.apache.cassandra.io.compress.CompressionParameters.setLiveMetadata(CompressionParameters.java:108)
 ~[main/:na]
at 
org.apache.cassandra.io.sstable.SSTableReader.getCompressionMetadata(SSTableReader.java:1142)
 ~[main/:na]
at 
org.apache.cassandra.io.sstable.SSTableReader.openDataReader(SSTableReader.java:1896)
 ~[main/:na]
at 
org.apache.cassandra.io.sstable.SSTableScanner.<init>(SSTableScanner.java:68) 
~[main/:na]
at 
org.apache.cassandra.io.sstable.SSTableReader.getScanner(SSTableReader.java:1681)
 ~[main/:na]
at 
org.apache.cassandra.io.sstable.SSTableReader.getScanner(SSTableReader.java:1693)
 ~[main/:na]
at 
org.apache.cassandra.db.compaction.LeveledCompactionStrategy.getScanners(LeveledCompactionStrategy.java:181)
 ~[main/:na]
at 
org.apache.cassandra.db.compaction.WrappingCompactionStrategy.getScanners(WrappingCompactionStrategy.java:320)
 ~[main/:na]
at 
org.apache.cassandra.db.compaction.AbstractCompactionStrategy.getScanners(AbstractCompactionStrategy.java:340)
 ~[main/:na]
at 
org.apache.cassandra.db.compaction.CompactionTask.runWith(CompactionTask.java:151)
 ~[main/:na]
at 
org.apache.cassandra.io.util.DiskAwareRunnable.runMayThrow(DiskAwareRunnable.java:48)
 ~[main/:na]
at 
org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28) 
~[main/:na]
at 
org.apache.cassandra.db.compaction.CompactionTask.executeInternal(CompactionTask.java:75)
 ~[main/:na]
at 
org.apache.cassandra.db.compaction.AbstractCompactionTask.execute(AbstractCompactionTask.java:59)
 ~[main/:na]
at 
org.apache.cassandra.db.compaction.CompactionManager$BackgroundCompactionTask.run(CompactionManager.java:233)
 ~[main/:na]
at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471) 
~[na:1.7.0_71]
at java.util.concurrent.FutureTask.run(FutureTask.java:262) 
~[na:1.7.0_71]
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) 
~[na:1.7.0_71]
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) 
[na:1.7.0_71]
at java.lang.Thread.run(Thread.java:745) [na:1.7.0_71]
{code}
Minor issue since it doesn't really affect anything, but the error makes it look 
like something's wrong.  Seen on the 2.1 branch 
(1b21aef8152d96a180e75f2fcc5afad9ded6c595); not sure how far back it goes (may 
be post-2.1.2).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)