[jira] [Comment Edited] (CASSANDRA-14410) tablehistograms with non-existent table gives an exception

2018-04-24 Thread Chris Lohfink (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-14410?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16451676#comment-16451676
 ] 

Chris Lohfink edited comment on CASSANDRA-14410 at 4/25/18 5:38 AM:


You are missing the case where {{args.size() == 1}} and {{keyspace.table}} is 
passed instead of "keyspace table".

nitpick: {{Collections.singletonList(table)}} instead of {{new 
ArrayList(Arrays.asList(table))}} (I realize this is pre-existing code, not 
added by your patch)


was (Author: cnlwsu):
nitpick: {{Collections.singletonList(table)}} instead of {{new 
ArrayList(Arrays.asList(table))}} (I realize this is from previous not 
added by your patch)

+1 from me with that

> tablehistograms with non-existent table gives an exception
> --
>
> Key: CASSANDRA-14410
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14410
> Project: Cassandra
>  Issue Type: Bug
>  Components: Tools
>Reporter: Hannu Kröger
>Assignee: Hannu Kröger
>Priority: Major
>  Labels: lhf
> Fix For: 3.11.x
>
>
> nodetool tablehistograms with a non-existent table throws a confusing 
> exception. It should instead print a clear error message such as "Table 
> acdc.abba doesn't exist".
>  
> Example:
> {code:java}
> $ nodetool tablehistograms acdc.abba
> error: 
> org.apache.cassandra.metrics:type=Table,keyspace=acdc,scope=abba,name=EstimatedPartitionSizeHistogram
> -- StackTrace --
> javax.management.InstanceNotFoundException: 
> org.apache.cassandra.metrics:type=Table,keyspace=acdc,scope=abba,name=EstimatedPartitionSizeHistogram
>     at 
> com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.getMBean(DefaultMBeanServerInterceptor.java:1095)
>     at 
> com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.getAttribute(DefaultMBeanServerInterceptor.java:643)
>     at 
> com.sun.jmx.mbeanserver.JmxMBeanServer.getAttribute(JmxMBeanServer.java:678)
>     at 
> javax.management.remote.rmi.RMIConnectionImpl.doOperation(RMIConnectionImpl.java:1445)
>     at 
> javax.management.remote.rmi.RMIConnectionImpl.access$300(RMIConnectionImpl.java:76)
>     at 
> javax.management.remote.rmi.RMIConnectionImpl$PrivilegedOperation.run(RMIConnectionImpl.java:1309)
>     at 
> javax.management.remote.rmi.RMIConnectionImpl.doPrivilegedOperation(RMIConnectionImpl.java:1401)
>     at 
> javax.management.remote.rmi.RMIConnectionImpl.getAttribute(RMIConnectionImpl.java:639)
>     at sun.reflect.GeneratedMethodAccessor297.invoke(Unknown Source)
>     at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>     at java.lang.reflect.Method.invoke(Method.java:498)
>     at sun.rmi.server.UnicastServerRef.dispatch(UnicastServerRef.java:324)
>     at sun.rmi.transport.Transport$1.run(Transport.java:200)
>     at sun.rmi.transport.Transport$1.run(Transport.java:197)
>     at java.security.AccessController.doPrivileged(Native Method)
>     at sun.rmi.transport.Transport.serviceCall(Transport.java:196)
>     at 
> sun.rmi.transport.tcp.TCPTransport.handleMessages(TCPTransport.java:568)
>     at 
> sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run0(TCPTransport.java:826)
>     at 
> sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.lambda$run$0(TCPTransport.java:683)
>     at java.security.AccessController.doPrivileged(Native Method)
>     at 
> sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run(TCPTransport.java:682)
>     at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>     at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>     at java.lang.Thread.run(Thread.java:745)
>     at 
> sun.rmi.transport.StreamRemoteCall.exceptionReceivedFromServer(StreamRemoteCall.java:283)
>     at 
> sun.rmi.transport.StreamRemoteCall.executeCall(StreamRemoteCall.java:260)
>     at sun.rmi.server.UnicastRef.invoke(UnicastRef.java:161)
>     at com.sun.jmx.remote.internal.PRef.invoke(Unknown Source)
>     at 
> javax.management.remote.rmi.RMIConnectionImpl_Stub.getAttribute(Unknown 
> Source)
>     at 
> javax.management.remote.rmi.RMIConnector$RemoteMBeanServerConnection.getAttribute(RMIConnector.java:903)
>     at 
> javax.management.MBeanServerInvocationHandler.invoke(MBeanServerInvocationHandler.java:273)
>     at com.sun.proxy.$Proxy20.getValue(Unknown Source)
>     at 
> org.apache.cassandra.tools.NodeProbe.getColumnFamilyMetric(NodeProbe.java:1334)
>     at 
> org.apache.cassandra.tools.nodetool.TableHistograms.execute(TableHistograms.java:62)
>     at org.apache.cassandra.tools.NodeTool$NodeToolCmd.run(NodeTool.java:254)
>     at org.apache.cassandra.tools.NodeTool.main(NodeTool.java:168){code}
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Comment Edited] (CASSANDRA-14410) tablehistograms with non-existent table gives an exception

2018-04-24 Thread Chris Lohfink (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-14410?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16451676#comment-16451676
 ] 

Chris Lohfink edited comment on CASSANDRA-14410 at 4/25/18 5:34 AM:


nitpick: {{Collections.singletonList(table)}} instead of {{new 
ArrayList(Arrays.asList(table))}} (I realize this is from previous not 
added by your patch)

+1 from me with that


was (Author: cnlwsu):
nitpick: {{Collections.singletonList(table)}} instead of {{new 
ArrayList(Arrays.asList(input[1]))}} (I realize this is from previous 
not added by your patch)

+1 from me with that


[jira] [Commented] (CASSANDRA-14410) tablehistograms with non-existent table gives an exception

2018-04-24 Thread Chris Lohfink (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-14410?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16451676#comment-16451676
 ] 

Chris Lohfink commented on CASSANDRA-14410:
---

nitpick: {{Collections.singletonList(table)}} instead of {{new 
ArrayList(Arrays.asList(input[1]))}} (I realize this is from previous 
not added by your patch)

+1 from me with that


-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-14410) tablehistograms with non-existent table gives an exception

2018-04-24 Thread JIRA

[ 
https://issues.apache.org/jira/browse/CASSANDRA-14410?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16451663#comment-16451663
 ] 

Hannu Kröger commented on CASSANDRA-14410:
--

From what I see in trunk, you can now also run tablehistograms without 
arguments, in which case it displays histograms for all tables. What I didn't 
see is an option to limit the output to a single keyspace.

Anyway, here is the same fix on top of trunk: 
[https://github.com/hkroger/cassandra/tree/CASSANDRA-14410-trunk]

 




[jira] [Commented] (CASSANDRA-14410) tablehistograms with non-existent table gives an exception

2018-04-24 Thread Chris Lohfink (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-14410?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16451655#comment-16451655
 ] 

Chris Lohfink commented on CASSANDRA-14410:
---

Can you make a trunk version? The logic has changed a little since your 
branch: it now also allows printing histograms for an entire keyspace or for 
all tables.




[jira] [Comment Edited] (CASSANDRA-10699) Make schema alterations strongly consistent

2018-04-24 Thread Brian O'Neill (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10699?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16451647#comment-16451647
 ] 

Brian O'Neill edited comment on CASSANDRA-10699 at 4/25/18 4:23 AM:


With this change, will schema alterations be versioned? I think this would be 
helpful when plugging in other storage engines. 
[CASSANDRA-13474|https://issues.apache.org/jira/browse/CASSANDRA-13474]


was (Author: bronee):
With this change, will schema alterations be versioned? I think this would be 
helpful when plugging in other storage engines. 
https://issues.apache.org/jira/browse/CASSANDRA-13474

> Make schema alterations strongly consistent
> ---
>
> Key: CASSANDRA-10699
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10699
> Project: Cassandra
>  Issue Type: Sub-task
>Reporter: Aleksey Yeschenko
>Assignee: Aleksey Yeschenko
>Priority: Major
> Fix For: 4.0
>
>
> Schema changes do not necessarily commute. This has been the case since 
> before CASSANDRA-5202, but it is now particularly problematic.
> We should employ a strongly consistent protocol instead of relying on 
> marshalling {{Mutation}} objects with schema changes.






[jira] [Comment Edited] (CASSANDRA-10699) Make schema alterations strongly consistent

2018-04-24 Thread Brian O'Neill (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10699?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16451647#comment-16451647
 ] 

Brian O'Neill edited comment on CASSANDRA-10699 at 4/25/18 4:22 AM:


With this change, will schema alterations be versioned? I think this would be 
helpful when plugging in other storage engines. 
https://issues.apache.org/jira/browse/CASSANDRA-13474


was (Author: bronee):
With this change, will schema alterations be versioned? I think this would be 
helpful when plugging in other storage engines. [#CASSANDRA-13474]




[jira] [Commented] (CASSANDRA-10699) Make schema alterations strongly consistent

2018-04-24 Thread Brian O'Neill (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10699?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16451647#comment-16451647
 ] 

Brian O'Neill commented on CASSANDRA-10699:
---

With this change, will schema alterations be versioned? I think this would be 
helpful when plugging in other storage engines. [#CASSANDRA-13474]




[jira] [Updated] (CASSANDRA-14281) Improve LatencyMetrics performance by reducing write path processing

2018-04-24 Thread Jeff Jirsa (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-14281?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeff Jirsa updated CASSANDRA-14281:
---
Labels: perfomance  (was: )

> Improve LatencyMetrics performance by reducing write path processing
> 
>
> Key: CASSANDRA-14281
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14281
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Core
>Reporter: Michael Burman
>Assignee: Michael Burman
>Priority: Major
>  Labels: perfomance
> Fix For: 4.0
>
> Attachments: bench.png, bench2.png, benchmark.html, benchmark2.png
>
>
> Currently, for each write/read/range query/CAS touching the CFS we write a 
> latency metric, which takes a lot of processing time (up to 66% of the total 
> processing time if the update was empty).
> Latencies are recorded using both a Dropwizard "Timer" and a "Counter". The 
> latter is used for totalLatency, and the former is a decaying metric for 
> rates and certain percentile metrics. We then replicate all of these CFS 
> writes to the KeyspaceMetrics and globalWriteLatencies.
> Instead of doing this on the write path, we should merge the metrics when 
> they're read. Reads are much less common, so we save a lot of CPU time in 
> total. This also speeds up the write path.
> Currently, the DecayingEstimatedHistogramReservoir acquires a lock for each 
> update operation, which causes contention when more than one thread updates 
> the histogram. This hurts scalability on larger machines. We should make it 
> as lock-free as possible and avoid a single CAS update blocking all 
> concurrent threads from making an update.
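The lock-free direction described in the last paragraph can be sketched roughly as follows (illustrative only, not Cassandra's actual DecayingEstimatedHistogramReservoir): each recorded latency touches exactly one histogram bucket plus a running total, so a per-bucket atomic increment can replace a global lock, and the contended total can rely on {{LongAdder}}'s internal striping:

```java
import java.util.concurrent.atomic.AtomicLongArray;
import java.util.concurrent.atomic.LongAdder;

public class LockFreeLatencyHistogram
{
    private final long[] bucketOffsets;   // ascending bucket upper bounds
    private final AtomicLongArray buckets;
    private final LongAdder totalLatency = new LongAdder();

    public LockFreeLatencyHistogram(long[] bucketOffsets)
    {
        this.bucketOffsets = bucketOffsets.clone();
        // one extra bucket for values above the last offset
        this.buckets = new AtomicLongArray(bucketOffsets.length + 1);
    }

    public void update(long latencyMicros)
    {
        int i = 0;
        while (i < bucketOffsets.length && latencyMicros > bucketOffsets[i])
            i++;
        buckets.incrementAndGet(i);      // single atomic add, no global lock
        totalLatency.add(latencyMicros); // striped counter, low write contention
    }

    public long count(int bucket) { return buckets.get(bucket); }
    public long total()           { return totalLatency.sum(); }
}
```

A linear scan is shown for brevity; a real reservoir would binary-search the offsets, and percentile reads would snapshot the bucket array rather than locking writers out.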






[jira] [Updated] (CASSANDRA-14281) Improve LatencyMetrics performance by reducing write path processing

2018-04-24 Thread Jeff Jirsa (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-14281?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeff Jirsa updated CASSANDRA-14281:
---
Resolution: Fixed
Status: Resolved  (was: Ready to Commit)

Committed as {{4e744e7688e01d35a6acac1cf8a7a3ff2573836f}} (fixed some style 
issues on commit).




[jira] [Updated] (CASSANDRA-14410) tablehistograms with non-existent table gives an exception

2018-04-24 Thread Hannu Kröger (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-14410?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hannu Kröger updated CASSANDRA-14410:
-
Fix Version/s: 3.11.x
   Status: Patch Available  (was: Open)

> tablehistograms with non-existent table gives an exception
> --
>
> Key: CASSANDRA-14410
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14410
> Project: Cassandra
>  Issue Type: Bug
>  Components: Tools
>Reporter: Hannu Kröger
>Assignee: Hannu Kröger
>Priority: Major
>  Labels: lhf
> Fix For: 3.11.x
>
>
> nodetool tablehistograms with non-existent table gives a crazy exception. It 
> should give a nice error message like "Table acdc.abba doesn't exist" or 
> something like that.
>  
> Example:
> {code:java}
> $ nodetool tablehistograms acdc.abba
> error: 
> org.apache.cassandra.metrics:type=Table,keyspace=acdc,scope=abba,name=EstimatedPartitionSizeHistogram
> -- StackTrace --
> javax.management.InstanceNotFoundException: 
> org.apache.cassandra.metrics:type=Table,keyspace=acdc,scope=abba,name=EstimatedPartitionSizeHistogram
>     at 
> com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.getMBean(DefaultMBeanServerInterceptor.java:1095)
>     at 
> com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.getAttribute(DefaultMBeanServerInterceptor.java:643)
>     at 
> com.sun.jmx.mbeanserver.JmxMBeanServer.getAttribute(JmxMBeanServer.java:678)
>     at 
> javax.management.remote.rmi.RMIConnectionImpl.doOperation(RMIConnectionImpl.java:1445)
>     at 
> javax.management.remote.rmi.RMIConnectionImpl.access$300(RMIConnectionImpl.java:76)
>     at 
> javax.management.remote.rmi.RMIConnectionImpl$PrivilegedOperation.run(RMIConnectionImpl.java:1309)
>     at 
> javax.management.remote.rmi.RMIConnectionImpl.doPrivilegedOperation(RMIConnectionImpl.java:1401)
>     at 
> javax.management.remote.rmi.RMIConnectionImpl.getAttribute(RMIConnectionImpl.java:639)
>     at sun.reflect.GeneratedMethodAccessor297.invoke(Unknown Source)
>     at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>     at java.lang.reflect.Method.invoke(Method.java:498)
>     at sun.rmi.server.UnicastServerRef.dispatch(UnicastServerRef.java:324)
>     at sun.rmi.transport.Transport$1.run(Transport.java:200)
>     at sun.rmi.transport.Transport$1.run(Transport.java:197)
>     at java.security.AccessController.doPrivileged(Native Method)
>     at sun.rmi.transport.Transport.serviceCall(Transport.java:196)
>     at 
> sun.rmi.transport.tcp.TCPTransport.handleMessages(TCPTransport.java:568)
>     at 
> sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run0(TCPTransport.java:826)
>     at 
> sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.lambda$run$0(TCPTransport.java:683)
>     at java.security.AccessController.doPrivileged(Native Method)
>     at 
> sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run(TCPTransport.java:682)
>     at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>     at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>     at java.lang.Thread.run(Thread.java:745)
>     at 
> sun.rmi.transport.StreamRemoteCall.exceptionReceivedFromServer(StreamRemoteCall.java:283)
>     at 
> sun.rmi.transport.StreamRemoteCall.executeCall(StreamRemoteCall.java:260)
>     at sun.rmi.server.UnicastRef.invoke(UnicastRef.java:161)
>     at com.sun.jmx.remote.internal.PRef.invoke(Unknown Source)
>     at 
> javax.management.remote.rmi.RMIConnectionImpl_Stub.getAttribute(Unknown 
> Source)
>     at 
> javax.management.remote.rmi.RMIConnector$RemoteMBeanServerConnection.getAttribute(RMIConnector.java:903)
>     at 
> javax.management.MBeanServerInvocationHandler.invoke(MBeanServerInvocationHandler.java:273)
>     at com.sun.proxy.$Proxy20.getValue(Unknown Source)
>     at 
> org.apache.cassandra.tools.NodeProbe.getColumnFamilyMetric(NodeProbe.java:1334)
>     at 
> org.apache.cassandra.tools.nodetool.TableHistograms.execute(TableHistograms.java:62)
>     at org.apache.cassandra.tools.NodeTool$NodeToolCmd.run(NodeTool.java:254)
>     at org.apache.cassandra.tools.NodeTool.main(NodeTool.java:168){code}
>  
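A rough sketch of the kind of argument validation discussed for this ticket (a hypothetical helper, not the actual patch): accept either "keyspace table" or "keyspace.table", and fail with a readable message instead of letting a JMX InstanceNotFoundException escape.

```java
import java.util.Arrays;
import java.util.Collections;
import java.util.List;

// Hypothetical argument parsing for a nodetool-style command.
public class TableArgSketch {
    static List<String> parse(List<String> args) {
        if (args.size() == 2)
            return args;                       // "keyspace table" form
        if (args.size() == 1 && args.get(0).contains("."))
        {
            // "keyspace.table" form: split on the first dot only
            String[] parts = args.get(0).split("\\.", 2);
            return Arrays.asList(parts[0], parts[1]);
        }
        throw new IllegalArgumentException(
            "tablehistograms requires <keyspace> <table> or <keyspace.table>");
    }

    public static void main(String[] args) {
        System.out.println(parse(Collections.singletonList("acdc.abba")));
    }
}
```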






cassandra git commit: Improve LatencyMetrics performance by reducing write path processing

2018-04-24 Thread jjirsa
Repository: cassandra
Updated Branches:
  refs/heads/trunk 1d387f5e7 -> 4e744e768


Improve LatencyMetrics performance by reducing write path processing

Patch by Michael Burman; Reviewed by Chris Lohfink for CASSANDRA-14281


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/4e744e76
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/4e744e76
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/4e744e76

Branch: refs/heads/trunk
Commit: 4e744e7688e01d35a6acac1cf8a7a3ff2573836f
Parents: 1d387f5
Author: Michael Burman 
Authored: Thu Mar 1 14:59:53 2018 +0200
Committer: Jeff Jirsa 
Committed: Tue Apr 24 20:58:09 2018 -0700

--
 CHANGES.txt |   1 +
 .../DecayingEstimatedHistogramReservoir.java| 230 +++
 .../cassandra/metrics/LatencyMetrics.java   | 158 +++--
 .../test/microbench/LatencyTrackingBench.java   | 118 ++
 ...DecayingEstimatedHistogramReservoirTest.java |  35 +++
 .../cassandra/metrics/LatencyMetricsTest.java   |  70 +-
 6 files changed, 502 insertions(+), 110 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/4e744e76/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 784fa2b..fe03ae1 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 4.0
+ * Improve LatencyMetrics performance by reducing write path processing 
(CASSANDRA-14281)
  * Add network authz (CASSANDRA-13985)
  * Use the correct IP/Port for Streaming when localAddress is left unbound 
(CASSANDAR-14389)
  * nodetool listsnapshots is missing local system keyspace snapshots 
(CASSANDRA-14381)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/4e744e76/src/java/org/apache/cassandra/metrics/DecayingEstimatedHistogramReservoir.java
--
diff --git 
a/src/java/org/apache/cassandra/metrics/DecayingEstimatedHistogramReservoir.java
 
b/src/java/org/apache/cassandra/metrics/DecayingEstimatedHistogramReservoir.java
index 118f062..f17de78 100644
--- 
a/src/java/org/apache/cassandra/metrics/DecayingEstimatedHistogramReservoir.java
+++ 
b/src/java/org/apache/cassandra/metrics/DecayingEstimatedHistogramReservoir.java
@@ -24,8 +24,7 @@ import java.io.PrintWriter;
 import java.nio.charset.StandardCharsets;
 import java.util.Arrays;
 import java.util.concurrent.atomic.AtomicBoolean;
-import java.util.concurrent.atomic.AtomicLongArray;
-import java.util.concurrent.locks.ReentrantReadWriteLock;
+import java.util.concurrent.atomic.LongAdder;
 
 import com.google.common.annotations.VisibleForTesting;
 
@@ -85,8 +84,8 @@ public class DecayingEstimatedHistogramReservoir implements 
Reservoir
 private final long[] bucketOffsets;
 
 // decayingBuckets and buckets are one element longer than bucketOffsets 
-- the last element is values greater than the last offset
-private final AtomicLongArray decayingBuckets;
-private final AtomicLongArray buckets;
+private final LongAdder[] decayingBuckets;
+private final LongAdder[] buckets;
 
 public static final long HALF_TIME_IN_S = 60L;
 public static final double MEAN_LIFETIME_IN_S = HALF_TIME_IN_S / 
Math.log(2.0);
@@ -95,8 +94,6 @@ public class DecayingEstimatedHistogramReservoir implements 
Reservoir
 private final AtomicBoolean rescaling = new AtomicBoolean(false);
 private volatile long decayLandmark;
 
-private final ReentrantReadWriteLock lock = new ReentrantReadWriteLock();
-
 // Wrapper around System.nanoTime() to simplify unit testing.
 private final Clock clock;
 
@@ -150,8 +147,15 @@ public class DecayingEstimatedHistogramReservoir 
implements Reservoir
 {
 bucketOffsets = EstimatedHistogram.newOffsets(bucketCount, 
considerZeroes);
 }
-decayingBuckets = new AtomicLongArray(bucketOffsets.length + 1);
-buckets = new AtomicLongArray(bucketOffsets.length + 1);
+decayingBuckets = new LongAdder[bucketOffsets.length + 1];
+buckets = new LongAdder[bucketOffsets.length + 1];
+
+for(int i = 0; i < buckets.length; i++) 
+{
+decayingBuckets[i] = new LongAdder();
+buckets[i] = new LongAdder();
+}
+
 this.clock = clock;
 decayLandmark = clock.getTime();
 }
@@ -174,18 +178,8 @@ public class DecayingEstimatedHistogramReservoir 
implements Reservoir
 }
 // else exact match; we're good
 
-lockForRegularUsage();
-
-try
-{
-decayingBuckets.getAndAdd(index, 
Math.round(forwardDecayWeight(now)));
-}
-finally
-{
-unlockForRegularUsage();
-}
-
-
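The diff above replaces a lock-guarded AtomicLongArray with an array of LongAdders. A standalone sketch of that pattern (illustrative only; the real bucket indexing and forward-decay weighting are omitted):

```java
import java.util.concurrent.atomic.LongAdder;

// Minimal sketch (not the actual Cassandra class) of lock-free bucket
// updates: one LongAdder per histogram bucket, so concurrent writers
// avoid contending on a single shared lock or CAS loop.
public class BucketSketch {
    private final LongAdder[] buckets;

    BucketSketch(int bucketCount) {
        buckets = new LongAdder[bucketCount];
        for (int i = 0; i < bucketCount; i++)
            buckets[i] = new LongAdder();
    }

    // write path: no lock taken
    void update(int index) { buckets[index].increment(); }

    // read path: sum the striped cells
    long count(int index) { return buckets[index].sum(); }

    public static void main(String[] args) throws InterruptedException {
        BucketSketch h = new BucketSketch(4);
        Thread[] ts = new Thread[8];
        for (int t = 0; t < ts.length; t++) {
            ts[t] = new Thread(() -> {
                for (int i = 0; i < 10_000; i++) h.update(i % 4);
            });
            ts[t].start();
        }
        for (Thread t : ts) t.join();
        // 8 threads * 10_000 updates spread evenly over 4 buckets
        System.out.println(h.count(0));
    }
}
```

LongAdder stripes its counter across internal cells under contention, so concurrent writers rarely collide on the same CAS; that is the scalability gain the patch targets.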

[jira] [Commented] (CASSANDRA-14410) tablehistograms with non-existent table gives an exception

2018-04-24 Thread Hannu Kröger (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-14410?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16451608#comment-16451608
 ] 

Hannu Kröger commented on CASSANDRA-14410:
--

Branch on top of cassandra-3.11 with a fix: 
[https://github.com/hkroger/cassandra/tree/CASSANDRA-14410]

 

> tablehistograms with non-existent table gives an exception
> --





[jira] [Assigned] (CASSANDRA-14410) tablehistograms with non-existent table gives an exception

2018-04-24 Thread Hannu Kröger (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-14410?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hannu Kröger reassigned CASSANDRA-14410:


Assignee: Hannu Kröger

> tablehistograms with non-existent table gives an exception
> --





[jira] [Updated] (CASSANDRA-14281) Improve LatencyMetrics performance by reducing write path processing

2018-04-24 Thread Jeff Jirsa (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-14281?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeff Jirsa updated CASSANDRA-14281:
---
 Reviewer: Chris Lohfink
Fix Version/s: 4.0

> Improve LatencyMetrics performance by reducing write path processing
> 





[jira] [Comment Edited] (CASSANDRA-14281) Improve LatencyMetrics performance by reducing write path processing

2018-04-24 Thread Chris Lohfink (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-14281?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16451594#comment-16451594
 ] 

Chris Lohfink edited comment on CASSANDRA-14281 at 4/25/18 3:43 AM:


Yeah, I +1'd the code on the 18th; the benchmark was just a follow-up where I 
tried it out to see the same improvement he saw, which I didn't see on my 
laptop. The main feedback (given in IRC, so not in the comments; adding it here 
for posterity), which he fixed, was children metrics tracking the parent, to 
prevent negative values after table drops.


was (Author: cnlwsu):
Yeah, I +1'd the code on the 18th; the benchmark was just a follow-up where I 
tried it out to see the same improvement he saw, which I didn't see on my 
laptop. The main feedback (given in IRC, so not in the comments; adding it here 
for posterity), which he fixed, was parent metrics tracking children, to 
prevent negative values after table drops.

> Improve LatencyMetrics performance by reducing write path processing
> 





[jira] [Commented] (CASSANDRA-14281) Improve LatencyMetrics performance by reducing write path processing

2018-04-24 Thread Chris Lohfink (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-14281?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16451594#comment-16451594
 ] 

Chris Lohfink commented on CASSANDRA-14281:
---

Yeah, I +1'd the code on the 18th; the benchmark was just a follow-up where I 
tried it out to see the same improvement he saw, which I didn't see on my 
laptop. The main feedback (given in IRC, so not in the comments; adding it here 
for posterity), which he fixed, was parent metrics tracking children, to 
prevent negative values after table drops.

> Improve LatencyMetrics performance by reducing write path processing
> 





[jira] [Commented] (CASSANDRA-14281) Improve LatencyMetrics performance by reducing write path processing

2018-04-24 Thread Jeff Jirsa (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-14281?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16451572#comment-16451572
 ] 

Jeff Jirsa commented on CASSANDRA-14281:


Is https://github.com/apache/cassandra/pull/217 the final patch? Chris, are you 
+1 (full review, not just the benchmark)?



> Improve LatencyMetrics performance by reducing write path processing
> 





[jira] [Commented] (CASSANDRA-14414) Errors in Supercolumn support in 2.0 upgrade

2018-04-24 Thread Jeff Jirsa (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-14414?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16451564#comment-16451564
 ] 

Jeff Jirsa commented on CASSANDRA-14414:


2.0 is EOL, and we weren't expecting to cut a new release. What happens if you 
finish the rolling upgrade to 2.0? Does it complete (with reads failing while 
the cluster is mixed)? 




> Errors in Supercolumn support in 2.0 upgrade
> 
>
> Key: CASSANDRA-14414
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14414
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Ken Hancock
>Priority: Major
>
> In upgrading from 1.2.18 to 2.0.17, the following exceptions started showing 
> in cassandra log files when the 2.0.17 node is chosen as the coordinator.  
> CL=ALL reads will fail as a result.
> The following ccm script will create a 3-node cassandra cluster and upgrade 
> the 3rd node to cassandra 2.0.17
> {code}
> ccm create -n3 -v1.2.17 test
> ccm start
> ccm node1 cli -v -x "create keyspace test with 
> placement_strategy='org.apache.cassandra.locator.SimpleStrategy' and 
> strategy_options={replication_factor:3}"
> ccm node1 cli -v -x "use test;
>   create column family super with column_type = 'Super' and 
> key_validation_class='IntegerType' and comparator = 'IntegerType' and 
> subcomparator = 'IntegerType' and default_validation_class = 'AsciiType'"
> ccm node1 cli -v -x "use test;
>   create column family shadow with column_type = 'Super' and 
> key_validation_class='IntegerType' and comparator = 'IntegerType' and 
> subcomparator = 'IntegerType' and default_validation_class = 'AsciiType'"
> ccm node1 cli -v -x "use test;
>   set super[1][1][1]='1-1-1';
>   set super[1][1][2]='1-1-2';
>   set super[1][2][1]='1-2-1';
>   set super[1][2][2]='1-2-2';
>   set super[2][1][1]='2-1-1';
>   set super[2][1][2]='2-1-2';
>   set super[2][2][1]='2-2-1';
>   set super[2][2][2]='2-2-2';
>   set super[3][1][1]='3-1-1';
>   set super[3][1][2]='3-1-2';
>   "
> ccm flush
> ccm node3 stop
> ccm node3 setdir -v2.0.17
> ccm node3 start
> ccm node3 nodetool upgradesstables
> {code}
> The following python uses pycassa to exercise the range_slice Thrift API:
> {code}
> import pycassa
> from pycassa.pool import ConnectionPool
> from pycassa.columnfamily import ColumnFamily
> from pycassa import ConsistencyLevel
> pool = ConnectionPool('test', server_list=['127.0.0.3:9160'], max_retries=0)
> super = ColumnFamily(pool, 'super')
> print "fails with ClassCastException"
> super.get(1, columns=[1,2], read_consistency_level=ConsistencyLevel.ONE)
> print "fails with RuntimeException: Cannot convert filter to old super column 
> format...""
> super.get(1, column_start=2, column_finish=3, 
> read_consistency_level=ConsistencyLevel.ONE)
> {code}





[jira] [Commented] (CASSANDRA-14381) nodetool listsnapshots is missing local system keyspace snapshots

2018-04-24 Thread Cyril Scetbon (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-14381?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16451501#comment-16451501
 ] 

Cyril Scetbon commented on CASSANDRA-14381:
---

I'd say the patch is simple enough that we should push it to 2.1 and 3.0 as 
well (I still have 2.1 nodes running).

> nodetool listsnapshots is missing local system keyspace snapshots
> -
>
> Key: CASSANDRA-14381
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14381
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
> Environment: MacOs 10.12.5
> Java 1.8.0_144
> Cassandra 3.11.2 (brew install)
>Reporter: Cyril Scetbon
>Assignee: Ariel Weisberg
>Priority: Major
> Fix For: 4.0
>
>
> The output of *nodetool listsnapshots* is inconsistent with the snapshots 
> created :
> {code:java}
> $ nodetool listsnapshots
> Snapshot Details:
> There are no snapshots
> $ nodetool snapshot -t tag1 --table local system
> Requested creating snapshot(s) for [system] with snapshot name [tag1] and 
> options {skipFlush=false}
> Snapshot directory: tag1
> $ nodetool snapshot -t tag2 --table local system
> Requested creating snapshot(s) for [system] with snapshot name [tag2] and 
> options {skipFlush=false}
> Snapshot directory: tag2
> $ nodetool listsnapshots
> Snapshot Details:
> There are no snapshots
> $ ls 
> /usr/local/var/lib/cassandra/data/system/local-7ad54392bcdd35a684174e047860b377/snapshots/
> tag1 tag2{code}
>  
>  






[jira] [Commented] (CASSANDRA-12743) Assertion error while running compaction

2018-04-24 Thread Jay Zhuang (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12743?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16451469#comment-16451469
 ] 

Jay Zhuang commented on CASSANDRA-12743:


The problem is that 
[{{dataSyncPosition}}|https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/io/sstable/IndexSummaryBuilder.java#L64]
 is the compressed file size (set here: 
[{{BigTableWriter.java:445}}|https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/io/sstable/format/big/BigTableWriter.java#L445]),
 whereas 
[{{lastReadableByData}}|https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/io/sstable/IndexSummaryBuilder.java#L58]
 holds the [uncompressed data 
size|https://github.com/apache/cassandra/blob/5dc55e715eba6667c388da9f8f1eb7a46489b35c/src/java/org/apache/cassandra/io/sstable/format/big/BigTableWriter.java#L185]:
 
[{{IndexSummaryBuilder.java:222}}|https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/io/sstable/IndexSummaryBuilder.java#L222].
So if the compression ratio is around or above {{1.0}} and [the index file is 
synced faster than the data 
file|https://github.com/apache/cassandra/blob/5dc55e715eba6667c388da9f8f1eb7a46489b35c/src/java/org/apache/cassandra/io/sstable/IndexSummaryBuilder.java#L174],
 
[{{openEarly()}}|https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/io/sstable/format/big/BigTableWriter.java#L287]
 may open data that hasn't been synced yet.
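A toy way to see the mismatch (plain Python, purely illustrative and not Cassandra code; the names below mirror the fields discussed above, but the arithmetic is simplified):

```python
# Toy numeric model of the early-open boundary check (illustrative only).
def can_open_early(data_sync_position, last_readable_by_data, compression_ratio):
    # data_sync_position: compressed bytes actually synced to the data file.
    # last_readable_by_data: the *uncompressed* offset believed safe to read.
    # Compressed bytes required on disk to cover that uncompressed offset:
    compressed_needed = last_readable_by_data * compression_ratio
    return data_sync_position >= compressed_needed

# With a ratio well below 1.0, the synced compressed bytes cover the boundary:
assert can_open_early(data_sync_position=500, last_readable_by_data=800, compression_ratio=0.5)
# With a ratio at or above 1.0, the boundary can point past what is synced,
# which is how the early open ends up reading unsynced data:
assert not can_open_early(data_sync_position=800, last_readable_by_data=800, compression_ratio=1.1)
```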

Here is the patch, please review:
| Branch | uTest | dTest |
| [12743-2.2|https://github.com/cooldoger/cassandra/tree/12743-2.2] | 
[!https://circleci.com/gh/cooldoger/cassandra/tree/12743-2.2.svg?style=svg!|https://circleci.com/gh/cooldoger/cassandra/tree/12743-2.2]
 | 
[#526|https://builds.apache.org/view/A-D/view/Cassandra/job/Cassandra-devbranch-dtest/522]
| [12743-3.0|https://github.com/cooldoger/cassandra/tree/12743-3.0] | 
[!https://circleci.com/gh/cooldoger/cassandra/tree/12743-3.0.svg?style=svg!|https://circleci.com/gh/cooldoger/cassandra/tree/12743-3.0]
 | 
[#527|https://builds.apache.org/view/A-D/view/Cassandra/job/Cassandra-devbranch-dtest/522]
| [12743-3.11|https://github.com/cooldoger/cassandra/tree/12743-3.11] | 
[!https://circleci.com/gh/cooldoger/cassandra/tree/12743-3.11.svg?style=svg!|https://circleci.com/gh/cooldoger/cassandra/tree/12743-3.11]
 | 
[#528|https://builds.apache.org/view/A-D/view/Cassandra/job/Cassandra-devbranch-dtest/523]
| [12743-trunk|https://github.com/cooldoger/cassandra/tree/12743-trunk] | 
[!https://circleci.com/gh/cooldoger/cassandra/tree/12743-trunk.svg?style=svg!|https://circleci.com/gh/cooldoger/cassandra/tree/12743-trunk]
 | 
[#529|https://builds.apache.org/view/A-D/view/Cassandra/job/Cassandra-devbranch-dtest/524]

> Assertion error while running compaction 
> -
>
> Key: CASSANDRA-12743
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12743
> Project: Cassandra
>  Issue Type: Bug
>  Components: Compaction
> Environment: unix
>Reporter: Jean-Baptiste Le Duigou
>Assignee: Jay Zhuang
>Priority: Major
>
> While running compaction I run into an error sometimes :
> {noformat}
> nodetool compact
> error: null
> -- StackTrace --
> java.lang.AssertionError
> at 
> org.apache.cassandra.io.compress.CompressionMetadata$Chunk.&lt;init&gt;(CompressionMetadata.java:463)
> at 
> org.apache.cassandra.io.compress.CompressionMetadata.chunkFor(CompressionMetadata.java:228)
> at 
> org.apache.cassandra.io.util.CompressedSegmentedFile.createMappedSegments(CompressedSegmentedFile.java:80)
> at 
> org.apache.cassandra.io.util.CompressedPoolingSegmentedFile.&lt;init&gt;(CompressedPoolingSegmentedFile.java:38)
> at 
> org.apache.cassandra.io.util.CompressedPoolingSegmentedFile$Builder.complete(CompressedPoolingSegmentedFile.java:101)
> at 
> org.apache.cassandra.io.util.SegmentedFile$Builder.complete(SegmentedFile.java:198)
> at 
> org.apache.cassandra.io.sstable.format.big.BigTableWriter.openEarly(BigTableWriter.java:315)
> at 
> org.apache.cassandra.io.sstable.SSTableRewriter.maybeReopenEarly(SSTableRewriter.java:171)
> at 
> org.apache.cassandra.io.sstable.SSTableRewriter.append(SSTableRewriter.java:116)
> at 
> org.apache.cassandra.db.compaction.writers.DefaultCompactionWriter.append(DefaultCompactionWriter.java:64)
> at 
> org.apache.cassandra.db.compaction.CompactionTask.runMayThrow(CompactionTask.java:184)
> at 
> org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28)
> at 
> org.apache.cassandra.db.compaction.CompactionTask.executeInternal(CompactionTask.java:74)
> at 
> org.apache.cassandra.db.compaction.AbstractCompactionTask.execute(AbstractCompactionTask.java:59)
> at 
> 

[jira] [Updated] (CASSANDRA-12743) Assertion error while running compaction

2018-04-24 Thread Jay Zhuang (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12743?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jay Zhuang updated CASSANDRA-12743:
---
Reproduced In: 3.0.14, 2.2.7  (was: 3.0.14)
   Status: Patch Available  (was: Reopened)

> Assertion error while running compaction 
> -
>
> Key: CASSANDRA-12743
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12743
> Project: Cassandra
>  Issue Type: Bug
>  Components: Compaction
> Environment: unix
>Reporter: Jean-Baptiste Le Duigou
>Assignee: Jay Zhuang
>Priority: Major
>
> While running compaction I run into an error sometimes :
> {noformat}
> nodetool compact
> error: null
> -- StackTrace --
> java.lang.AssertionError
> at 
> org.apache.cassandra.io.compress.CompressionMetadata$Chunk.&lt;init&gt;(CompressionMetadata.java:463)
> at 
> org.apache.cassandra.io.compress.CompressionMetadata.chunkFor(CompressionMetadata.java:228)
> at 
> org.apache.cassandra.io.util.CompressedSegmentedFile.createMappedSegments(CompressedSegmentedFile.java:80)
> at 
> org.apache.cassandra.io.util.CompressedPoolingSegmentedFile.&lt;init&gt;(CompressedPoolingSegmentedFile.java:38)
> at 
> org.apache.cassandra.io.util.CompressedPoolingSegmentedFile$Builder.complete(CompressedPoolingSegmentedFile.java:101)
> at 
> org.apache.cassandra.io.util.SegmentedFile$Builder.complete(SegmentedFile.java:198)
> at 
> org.apache.cassandra.io.sstable.format.big.BigTableWriter.openEarly(BigTableWriter.java:315)
> at 
> org.apache.cassandra.io.sstable.SSTableRewriter.maybeReopenEarly(SSTableRewriter.java:171)
> at 
> org.apache.cassandra.io.sstable.SSTableRewriter.append(SSTableRewriter.java:116)
> at 
> org.apache.cassandra.db.compaction.writers.DefaultCompactionWriter.append(DefaultCompactionWriter.java:64)
> at 
> org.apache.cassandra.db.compaction.CompactionTask.runMayThrow(CompactionTask.java:184)
> at 
> org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28)
> at 
> org.apache.cassandra.db.compaction.CompactionTask.executeInternal(CompactionTask.java:74)
> at 
> org.apache.cassandra.db.compaction.AbstractCompactionTask.execute(AbstractCompactionTask.java:59)
> at 
> org.apache.cassandra.db.compaction.CompactionManager$8.runMayThrow(CompactionManager.java:599)
> at 
> org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28)
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at java.lang.Thread.run(Thread.java:745)
> {noformat}
> Why is that happening?
> Is there any way to provide more details (e.g. which SSTable cannot be 
> compacted)?
> We are using Cassandra 2.2.7






[jira] [Assigned] (CASSANDRA-12743) Assertion error while running compaction

2018-04-24 Thread Jay Zhuang (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12743?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jay Zhuang reassigned CASSANDRA-12743:
--

Assignee: Jay Zhuang

> Assertion error while running compaction 
> -
>
> Key: CASSANDRA-12743
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12743
> Project: Cassandra
>  Issue Type: Bug
>  Components: Compaction
> Environment: unix
>Reporter: Jean-Baptiste Le Duigou
>Assignee: Jay Zhuang
>Priority: Major
>
> While running compaction I run into an error sometimes :
> {noformat}
> nodetool compact
> error: null
> -- StackTrace --
> java.lang.AssertionError
> at 
> org.apache.cassandra.io.compress.CompressionMetadata$Chunk.&lt;init&gt;(CompressionMetadata.java:463)
> at 
> org.apache.cassandra.io.compress.CompressionMetadata.chunkFor(CompressionMetadata.java:228)
> at 
> org.apache.cassandra.io.util.CompressedSegmentedFile.createMappedSegments(CompressedSegmentedFile.java:80)
> at 
> org.apache.cassandra.io.util.CompressedPoolingSegmentedFile.&lt;init&gt;(CompressedPoolingSegmentedFile.java:38)
> at 
> org.apache.cassandra.io.util.CompressedPoolingSegmentedFile$Builder.complete(CompressedPoolingSegmentedFile.java:101)
> at 
> org.apache.cassandra.io.util.SegmentedFile$Builder.complete(SegmentedFile.java:198)
> at 
> org.apache.cassandra.io.sstable.format.big.BigTableWriter.openEarly(BigTableWriter.java:315)
> at 
> org.apache.cassandra.io.sstable.SSTableRewriter.maybeReopenEarly(SSTableRewriter.java:171)
> at 
> org.apache.cassandra.io.sstable.SSTableRewriter.append(SSTableRewriter.java:116)
> at 
> org.apache.cassandra.db.compaction.writers.DefaultCompactionWriter.append(DefaultCompactionWriter.java:64)
> at 
> org.apache.cassandra.db.compaction.CompactionTask.runMayThrow(CompactionTask.java:184)
> at 
> org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28)
> at 
> org.apache.cassandra.db.compaction.CompactionTask.executeInternal(CompactionTask.java:74)
> at 
> org.apache.cassandra.db.compaction.AbstractCompactionTask.execute(AbstractCompactionTask.java:59)
> at 
> org.apache.cassandra.db.compaction.CompactionManager$8.runMayThrow(CompactionManager.java:599)
> at 
> org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28)
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at java.lang.Thread.run(Thread.java:745)
> {noformat}
> Why is that happening?
> Is there any way to provide more details (e.g. which SSTable cannot be 
> compacted)?
> We are using Cassandra 2.2.7






[jira] [Commented] (CASSANDRA-14298) cqlshlib tests broken on b.a.o

2018-04-24 Thread Patrick Bannister (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-14298?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16451256#comment-16451256
 ] 

Patrick Bannister commented on CASSANDRA-14298:
---

If you'd like to set aside the cqlsh copy tests that depend on cqlshlib for 
now, I can regroup and focus on getting everything else working. I can include 
a modification to pytest_collection_modifyitems in conftest.py to deselect the 
impacted tests, similar to what we're doing with the upgrade tests.

I'll investigate the possibility of porting cqlshlib for Python 2/3 cross 
compatibility. I personally prefer a complete transition to Python 3, but I 
suspect the community would be more comfortable with a cross-compatible 
implementation.

> cqlshlib tests broken on b.a.o
> --
>
> Key: CASSANDRA-14298
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14298
> Project: Cassandra
>  Issue Type: Bug
>  Components: Build, Testing
>Reporter: Stefan Podkowinski
>Assignee: Patrick Bannister
>Priority: Major
> Attachments: cqlsh_tests_notes.md
>
>
> It appears that cqlsh-tests on builds.apache.org on all branches stopped 
> working since we removed nosetests from the system environment. See e.g. 
> [here|https://builds.apache.org/view/A-D/view/Cassandra/job/Cassandra-trunk-cqlsh-tests/458/cython=no,jdk=JDK%201.8%20(latest),label=cassandra/console].
>  Looks like we either have to make nosetests available again or migrate to 
> pytest as we did with dtests. Giving pytest a quick try resulted in many 
> errors locally, but I haven't inspected them in detail yet. 






[jira] [Comment Edited] (CASSANDRA-11551) Incorrect counting of pending messages in OutboundTcpConnection

2018-04-24 Thread Jaydeepkumar Chovatia (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11551?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16450341#comment-16450341
 ] 

Jaydeepkumar Chovatia edited comment on CASSANDRA-11551 at 4/24/18 8:57 PM:


I think I've found the root cause for this issue. Once we break at [this 
point|https://github.com/apache/cassandra/blob/cassandra-3.11/src/java/org/apache/cassandra/net/OutboundTcpConnection.java#L270],
 {{currentMsgBufferCount}} doesn't get reset; at that moment 
{{backlog.size()}} is 0 but {{currentMsgBufferCount}} can still be > 0, so a 
call to {{getPendingMessages}} reports > 0 pending messages even though in 
reality there are none.

This problem most likely will not happen on {{trunk}}, because trunk doesn't 
use {{currentMsgBufferCount}}; it deals directly with {{backlog.size()}} in 
[getPendingMessages|https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/net/async/OutboundMessagingConnection.java#L659]
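The stale-counter effect can be modeled with a toy sketch (plain Python, not the actual {{OutboundTcpConnection}} Java code; the class and method names are illustrative):

```python
# Toy model of the drain loop: an early break leaves the buffer counter stale.
from collections import deque

class Conn:
    def __init__(self):
        self.backlog = deque()
        self.current_msg_buffer_count = 0  # mirrors currentMsgBufferCount

    def drain_and_send(self, interrupt_after=None):
        drained = [self.backlog.popleft() for _ in range(len(self.backlog))]
        self.current_msg_buffer_count = len(drained)
        for i, _msg in enumerate(drained):
            if interrupt_after is not None and i == interrupt_after:
                return  # early break: the counter is never reset (the bug)
            # ... send _msg over the socket ...
        self.current_msg_buffer_count = 0  # only reached on a clean pass

    def pending_messages(self):
        # Mirrors getPendingMessages(): backlog plus the in-flight buffer count.
        return len(self.backlog) + self.current_msg_buffer_count

conn = Conn()
conn.backlog.extend(["m1", "m2"])
conn.drain_and_send(interrupt_after=1)
assert len(conn.backlog) == 0        # nothing is actually queued any more...
assert conn.pending_messages() == 2  # ...yet pending is still reported as 2
```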

I've fixed this for older branches, please find fix here:
||3.11||3.0||2.2||
|[diff 
|https://github.com/apache/cassandra/compare/cassandra-3.11...jaydeepkumar1984:11551-3.11?expand=1]|[diff
 
|https://github.com/apache/cassandra/compare/trunk...jaydeepkumar1984:11551-3.0?expand=1]|[diff|https://github.com/apache/cassandra/compare/trunk...jaydeepkumar1984:11551-2.2?expand=1]|
|[!https://circleci.com/gh/jaydeepkumar1984/cassandra/tree/11551-3.11.svg?style=svg!
 
|https://circleci.com/gh/jaydeepkumar1984/cassandra/63]|[!https://circleci.com/gh/jaydeepkumar1984/cassandra/tree/11551-3.0.svg?style=svg!
 
|https://circleci.com/gh/jaydeepkumar1984/cassandra/65]|[!https://circleci.com/gh/jaydeepkumar1984/cassandra/tree/11551-2.2.svg?style=svg!
  |https://circleci.com/gh/jaydeepkumar1984/cassandra/69]|

Jaydeep


was (Author: chovatia.jayd...@gmail.com):
I think I've found the root cause for this issue, one possible reason is once 
we break at [this 
point|https://github.com/apache/cassandra/blob/cassandra-3.11/src/java/org/apache/cassandra/net/OutboundTcpConnection.java#L270]
 then {{currentMsgBufferCount}} doesn't get reset and at that time 
{{backlog.size()}} is 0 but {{currentMsgBufferCount}} could be > 0 and call to 
{{getPendingMessages}} at this time will give > 0 pending message even though 
in reality it is none.

This problem will not happen most likely on {{trunk}} because it doesn't use 
{{currentMsgBufferCount}} instead it directly deals with {{backlog.size()}} in 
[getPendingMessages|https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/net/async/OutboundMessagingConnection.java#L659]

I've fixed this for older branches, please find fix here:
||3.11||3.0||2.2||
|[diff 
|https://github.com/apache/cassandra/compare/cassandra-3.11...jaydeepkumar1984:11551-3.11?expand=1]|[diff
 
|https://github.com/apache/cassandra/compare/trunk...jaydeepkumar1984:11551-3.0?expand=1]|[diff|https://github.com/apache/cassandra/compare/trunk...jaydeepkumar1984:11551-2.2?expand=1]|
|[!https://circleci.com/gh/jaydeepkumar1984/cassandra/tree/11551-3.11.svg?style=svg!
 
|https://circleci.com/gh/jaydeepkumar1984/cassandra/63]|[!https://circleci.com/gh/jaydeepkumar1984/cassandra/tree/11551-3.0.svg?style=svg!
 
|https://circleci.com/gh/jaydeepkumar1984/cassandra/65]|[!https://circleci.com/gh/jaydeepkumar1984/cassandra/tree/11551-2.2.svg?style=svg!
  |https://circleci.com/gh/jaydeepkumar1984/cassandra/66]|

Jaydeep

> Incorrect counting of pending messages in OutboundTcpConnection
> ---
>
> Key: CASSANDRA-11551
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11551
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Robert Stupp
>Assignee: Jaydeepkumar Chovatia
>Priority: Minor
> Fix For: 2.2.x, 3.0.x, 3.11.x
>
>
> Somehow {{OutboundTcpConnection.getPendingMessages()}} seems to return a 
> wrong number.
> {code}
> nodetool netstats
> Mode: NORMAL
> Not sending any streams.
> Read Repair Statistics:
> Attempted: 1655
> Mismatch (Blocking): 0
> Mismatch (Background): 2
> Pool NameActive   Pending  Completed
> Large messages  n/a 5  0
> Small messages  n/a 0   31534100
> Gossip messages n/a 0 520393
> {code}
> Inspection of the heap dump of that node unveiled that all instances of 
> {{OutboundTcpConnection.backlog}} are empty but {{currentMsgBufferCount}} is 
> {{1}} for 5 instances of {{OutboundTcpConnection}}.
> Maybe the cause is in {{OutboundTcpConnection.run()}} where 
> {{drainedMessages.size()}} is called twice but assumed that these are equal.
> /cc [~aweisberg]





[jira] [Updated] (CASSANDRA-14415) Performance regression in queries for distinct keys

2018-04-24 Thread Jeff Jirsa (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-14415?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeff Jirsa updated CASSANDRA-14415:
---
Component/s: Core

> Performance regression in queries for distinct keys
> ---
>
> Key: CASSANDRA-14415
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14415
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Core
>Reporter: Samuel Klock
>Assignee: Samuel Klock
>Priority: Major
>  Labels: performance
> Fix For: 3.0.x, 3.11.x, 4.x
>
>
> Running Cassandra 3.0.16, we observed a major performance regression 
> affecting \{{SELECT DISTINCT keys}}-style queries against certain tables.  
> Based on some investigation (guided by some helpful feedback from Benjamin on 
> the dev list), we tracked the regression down to two problems.
> * One is that Cassandra was reading more data from disk than was necessary to 
> satisfy the query.  This was fixed under CASSANDRA-10657 in a later 3.x 
> release.
> * If the fix for CASSANDRA-10657 is incorporated, the other is this code 
> snippet in \{{RebufferingInputStream}}:
> {code:java}
>     @Override
>     public int skipBytes(int n) throws IOException
>     {
>     if (n < 0)
>     return 0;
>     int requested = n;
>     int position = buffer.position(), limit = buffer.limit(), remaining;
>     while ((remaining = limit - position) < n)
>     {
>     n -= remaining;
>     buffer.position(limit);
>     reBuffer();
>     position = buffer.position();
>     limit = buffer.limit();
>     if (position == limit)
>     return requested - n;
>     }
>     buffer.position(position + n);
>     return requested;
>     }
> {code}
> The gist of it is that to skip bytes, the stream needs to read those bytes 
> into memory then throw them away.  In our tests, we were spending a lot of 
> time in this method, so it looked like the chief drag on performance.
> We noticed that the subclass of \{{RebufferingInputStream}} in use for our 
> queries, \{{RandomAccessReader}} (over compressed sstables), implements a 
> \{{seek()}} method.  Overriding \{{skipBytes()}} in it to use \{{seek()}} 
> instead was sufficient to fix the performance regression.
> The performance difference is significant for tables with large values.  It's 
> straightforward to evaluate with very simple key-value tables, e.g.:
> {\{CREATE TABLE testtable (key TEXT PRIMARY KEY, value BLOB);}}
> We did some basic experimentation with the following variations (all in a 
> single-node 3.11.2 cluster with off-the-shelf settings running on a dev 
> workstation):
> * small values (1 KB, 100,000 entries), somewhat larger values (25 KB, 10,000 
> entries), and much larger values (1 MB, 10,000 entries);
> * compressible data (a single byte repeated) and uncompressible data (output 
> from \{{openssl rand $bytes}}); and
> * with and without sstable compression.  (With compression, we use 
> Cassandra's defaults.)
> The difference is most conspicuous for tables with large, uncompressible data 
> and sstable decompression (which happens to describe the use case that 
> triggered our investigation).  It is smaller but still readily apparent for 
> tables with effective compression.  For uncompressible data without 
> compression enabled, there is no appreciable difference.
> Here's what the performance looks like without our patch for the 1-MB entries 
> (times in seconds, five consecutive runs for each data set, all exhausting 
> the results from a \{{SELECT DISTINCT key FROM ...}} query with a page size 
> of 24):
> {noformat}
> working on compressible
> 5.21180510521
> 5.10270500183
> 5.22311806679
> 4.6732840538
> 4.84219098091
> working on uncompressible_uncompressed
> 55.0423607826
> 0.769015073776
> 0.850513935089
> 0.713396072388
> 0.62596988678
> working on uncompressible
> 413.292617083
> 231.345913887
> 449.524993896
> 425.135111094
> 243.469946861
> {noformat}
> and with the fix:
> {noformat}
> working on compressible
> 2.86733293533
> 1.24895811081
> 1.108907938
> 1.12742400169
> 1.04647302628
> working on uncompressible_uncompressed
> 56.4146180153
> 0.895509958267
> 0.922824144363
> 0.772884130478
> 0.731923818588
> working on uncompressible
> 64.4587619305
> 1.81325793266
> 1.52577018738
> 1.41769099236
> 1.60442209244
> {noformat}
> The long initial runs for the uncompressible data presumably come from 
> repeatedly hitting the disk.  In contrast to the runs without the fix, the 
> initial runs seem to be effective at warming the page cache (as lots of data 
> is skipped, so the data that's read can fit in memory), so subsequent runs 
> are faster.
> For smaller data sets, \{{RandomAccessReader.seek()}} and 
> 
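The two skipping strategies contrasted in this report can be modeled with a toy sketch (plain Python, not Cassandra's Java; function names and buffer sizes are illustrative):

```python
# Toy comparison: consuming bytes through the buffer vs. seeking past them.
import io
import os

def skip_by_reading(f, n):
    # Analogue of RebufferingInputStream.skipBytes(): read bytes and discard them.
    remaining = n
    while remaining > 0:
        chunk = f.read(min(remaining, 64 * 1024))
        if not chunk:  # hit EOF before skipping n bytes
            break
        remaining -= len(chunk)
    return n - remaining

def skip_by_seeking(f, n):
    # Analogue of the seek()-based override: no bytes are copied into memory.
    start = f.tell()
    f.seek(n, os.SEEK_CUR)
    return f.tell() - start

data = b"x" * 1_000_000
a, b = io.BytesIO(data), io.BytesIO(data)
assert skip_by_reading(a, 999_000) == 999_000
assert skip_by_seeking(b, 999_000) == 999_000
assert a.tell() == b.tell() == 999_000  # same end position, far less copying
```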

[jira] [Updated] (CASSANDRA-14415) Performance regression in queries for distinct keys

2018-04-24 Thread Jeff Jirsa (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-14415?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeff Jirsa updated CASSANDRA-14415:
---
Labels: performance  (was: )

> Performance regression in queries for distinct keys
> ---
>
> Key: CASSANDRA-14415
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14415
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Core
>Reporter: Samuel Klock
>Assignee: Samuel Klock
>Priority: Major
>  Labels: performance
> Fix For: 3.0.x, 3.11.x, 4.x
>
>
> Running Cassandra 3.0.16, we observed a major performance regression 
> affecting \{{SELECT DISTINCT keys}}-style queries against certain tables.  
> Based on some investigation (guided by some helpful feedback from Benjamin on 
> the dev list), we tracked the regression down to two problems.
> * One is that Cassandra was reading more data from disk than was necessary to 
> satisfy the query.  This was fixed under CASSANDRA-10657 in a later 3.x 
> release.
> * If the fix for CASSANDRA-10657 is incorporated, the other is this code 
> snippet in \{{RebufferingInputStream}}:
> {code:java}
>     @Override
>     public int skipBytes(int n) throws IOException
>     {
>     if (n < 0)
>     return 0;
>     int requested = n;
>     int position = buffer.position(), limit = buffer.limit(), remaining;
>     while ((remaining = limit - position) < n)
>     {
>     n -= remaining;
>     buffer.position(limit);
>     reBuffer();
>     position = buffer.position();
>     limit = buffer.limit();
>     if (position == limit)
>     return requested - n;
>     }
>     buffer.position(position + n);
>     return requested;
>     }
> {code}
> The gist of it is that to skip bytes, the stream needs to read those bytes 
> into memory then throw them away.  In our tests, we were spending a lot of 
> time in this method, so it looked like the chief drag on performance.
> We noticed that the subclass of \{{RebufferingInputStream}} in use for our 
> queries, \{{RandomAccessReader}} (over compressed sstables), implements a 
> \{{seek()}} method.  Overriding \{{skipBytes()}} in it to use \{{seek()}} 
> instead was sufficient to fix the performance regression.
> The performance difference is significant for tables with large values.  It's 
> straightforward to evaluate with very simple key-value tables, e.g.:
> {\{CREATE TABLE testtable (key TEXT PRIMARY KEY, value BLOB);}}
> We did some basic experimentation with the following variations (all in a 
> single-node 3.11.2 cluster with off-the-shelf settings running on a dev 
> workstation):
> * small values (1 KB, 100,000 entries), somewhat larger values (25 KB, 10,000 
> entries), and much larger values (1 MB, 10,000 entries);
> * compressible data (a single byte repeated) and uncompressible data (output 
> from \{{openssl rand $bytes}}); and
> * with and without sstable compression.  (With compression, we use 
> Cassandra's defaults.)
> The difference is most conspicuous for tables with large, uncompressible data 
> and sstable decompression (which happens to describe the use case that 
> triggered our investigation).  It is smaller but still readily apparent for 
> tables with effective compression.  For uncompressible data without 
> compression enabled, there is no appreciable difference.
> Here's what the performance looks like without our patch for the 1-MB entries 
> (times in seconds, five consecutive runs for each data set, all exhausting 
> the results from a \{{SELECT DISTINCT key FROM ...}} query with a page size 
> of 24):
> {noformat}
> working on compressible
> 5.21180510521
> 5.10270500183
> 5.22311806679
> 4.6732840538
> 4.84219098091
> working on uncompressible_uncompressed
> 55.0423607826
> 0.769015073776
> 0.850513935089
> 0.713396072388
> 0.62596988678
> working on uncompressible
> 413.292617083
> 231.345913887
> 449.524993896
> 425.135111094
> 243.469946861
> {noformat}
> and with the fix:
> {noformat}
> working on compressible
> 2.86733293533
> 1.24895811081
> 1.108907938
> 1.12742400169
> 1.04647302628
> working on uncompressible_uncompressed
> 56.4146180153
> 0.895509958267
> 0.922824144363
> 0.772884130478
> 0.731923818588
> working on uncompressible
> 64.4587619305
> 1.81325793266
> 1.52577018738
> 1.41769099236
> 1.60442209244
> {noformat}
> The long initial runs for the uncompressible data presumably come from 
> repeatedly hitting the disk.  In contrast to the runs without the fix, the 
> initial runs seem to be effective at warming the page cache (as lots of data 
> is skipped, so the data that's read can fit in memory), so subsequent runs 
> are faster.
> For smaller data sets, \{{RandomAccessReader.seek()}} and 
> 

[jira] [Updated] (CASSANDRA-14415) Performance regression in queries for distinct keys

2018-04-24 Thread Jeff Jirsa (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-14415?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeff Jirsa updated CASSANDRA-14415:
---
Fix Version/s: 4.x
   3.11.x
   3.0.x

> Performance regression in queries for distinct keys
> ---
>
> Key: CASSANDRA-14415
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14415
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Core
>Reporter: Samuel Klock
>Assignee: Samuel Klock
>Priority: Major
>  Labels: performance
> Fix For: 3.0.x, 3.11.x, 4.x
>
>
> Running Cassandra 3.0.16, we observed a major performance regression 
> affecting \{{SELECT DISTINCT keys}}-style queries against certain tables.  
> Based on some investigation (guided by some helpful feedback from Benjamin on 
> the dev list), we tracked the regression down to two problems.
> * One is that Cassandra was reading more data from disk than was necessary to 
> satisfy the query.  This was fixed under CASSANDRA-10657 in a later 3.x 
> release.
> * If the fix for CASSANDRA-10657 is incorporated, the other is this code 
> snippet in \{{RebufferingInputStream}}:
> {code:java}
>     @Override
>     public int skipBytes(int n) throws IOException
>     {
>     if (n < 0)
>     return 0;
>     int requested = n;
>     int position = buffer.position(), limit = buffer.limit(), remaining;
>     while ((remaining = limit - position) < n)
>     {
>     n -= remaining;
>     buffer.position(limit);
>     reBuffer();
>     position = buffer.position();
>     limit = buffer.limit();
>     if (position == limit)
>     return requested - n;
>     }
>     buffer.position(position + n);
>     return requested;
>     }
> {code}
> The gist of it is that to skip bytes, the stream needs to read those bytes 
> into memory then throw them away.  In our tests, we were spending a lot of 
> time in this method, so it looked like the chief drag on performance.
> We noticed that the subclass of \{{RebufferingInputStream}} in use for our 
> queries, \{{RandomAccessReader}} (over compressed sstables), implements a 
> \{{seek()}} method.  Overriding \{{skipBytes()}} in it to use \{{seek()}} 
> instead was sufficient to fix the performance regression.
> The performance difference is significant for tables with large values.  It's 
> straightforward to evaluate with very simple key-value tables, e.g.:
> {\{CREATE TABLE testtable (key TEXT PRIMARY KEY, value BLOB);}}
> We did some basic experimentation with the following variations (all in a 
> single-node 3.11.2 cluster with off-the-shelf settings running on a dev 
> workstation):
> * small values (1 KB, 100,000 entries), somewhat larger values (25 KB, 10,000 
> entries), and much larger values (1 MB, 10,000 entries);
> * compressible data (a single byte repeated) and uncompressible data (output 
> from \{{openssl rand $bytes}}); and
> * with and without sstable compression.  (With compression, we use 
> Cassandra's defaults.)
> The difference is most conspicuous for tables with large, uncompressible data 
> and sstable decompression (which happens to describe the use case that 
> triggered our investigation).  It is smaller but still readily apparent for 
> tables with effective compression.  For uncompressible data without 
> compression enabled, there is no appreciable difference.
> Here's what the performance looks like without our patch for the 1-MB entries 
> (times in seconds, five consecutive runs for each data set, all exhausting 
> the results from a \{{SELECT DISTINCT key FROM ...}} query with a page size 
> of 24):
> {noformat}
> working on compressible
> 5.21180510521
> 5.10270500183
> 5.22311806679
> 4.6732840538
> 4.84219098091
> working on uncompressible_uncompressed
> 55.0423607826
> 0.769015073776
> 0.850513935089
> 0.713396072388
> 0.62596988678
> working on uncompressible
> 413.292617083
> 231.345913887
> 449.524993896
> 425.135111094
> 243.469946861
> {noformat}
> and with the fix:
> {noformat}
> working on compressible
> 2.86733293533
> 1.24895811081
> 1.108907938
> 1.12742400169
> 1.04647302628
> working on uncompressible_uncompressed
> 56.4146180153
> 0.895509958267
> 0.922824144363
> 0.772884130478
> 0.731923818588
> working on uncompressible
> 64.4587619305
> 1.81325793266
> 1.52577018738
> 1.41769099236
> 1.60442209244
> {noformat}
> The long initial runs for the uncompressible data presumably come from 
> repeatedly hitting the disk.  In contrast to the runs without the fix, the 
> initial runs seem to be effective at warming the page cache (as lots of data 
> is skipped, so the data that's read can fit in memory), so subsequent runs 
> are faster.
> For smaller data sets, \{{RandomAccessReader.seek()}} and 
> \{{RebufferingInputStream.skipBytes()}} are approximately equivalent in their 
> behavior (reducing to changing the position pointer of an in-memory buffer 
> most of the time), so there isn't much difference.

[jira] [Updated] (CASSANDRA-11551) Incorrect counting of pending messages in OutboundTcpConnection

2018-04-24 Thread Jaydeepkumar Chovatia (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11551?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jaydeepkumar Chovatia updated CASSANDRA-11551:
--
Fix Version/s: 3.11.x
   3.0.x

> Incorrect counting of pending messages in OutboundTcpConnection
> ---
>
> Key: CASSANDRA-11551
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11551
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Robert Stupp
>Assignee: Jaydeepkumar Chovatia
>Priority: Minor
> Fix For: 2.2.x, 3.0.x, 3.11.x
>
>
> Somehow {{OutboundTcpConnection.getPendingMessages()}} seems to return a 
> wrong number.
> {code}
> nodetool netstats
> Mode: NORMAL
> Not sending any streams.
> Read Repair Statistics:
> Attempted: 1655
> Mismatch (Blocking): 0
> Mismatch (Background): 2
> Pool NameActive   Pending  Completed
> Large messages  n/a 5  0
> Small messages  n/a 0   31534100
> Gossip messages n/a 0 520393
> {code}
> Inspection of the heap dump of that node unveiled that all instances of 
> {{OutboundTcpConnection.backlog}} are empty but {{currentMsgBufferCount}} is 
> {{1}} for 5 instances of {{OutboundTcpConnection}}.
> Maybe the cause is in {{OutboundTcpConnection.run()}} where 
> {{drainedMessages.size()}} is called twice but assumed that these are equal.
> /cc [~aweisberg]
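
The stale-counter behavior described in the quoted report can be reproduced in miniature. Below is a hedged, self-contained model of the accounting pattern being discussed; the class and field names only mirror the discussion, and this is not Cassandra's actual code:

```java
import java.util.ArrayDeque;
import java.util.Queue;

// Toy model of the bug: the drain loop captures a buffered-message count up
// front and only resets it when the pass completes normally. An early break
// leaves the counter stale, so getPendingMessages() reports phantom pending
// messages while the backlog is actually empty.
class PendingCountModel
{
    private final Queue<String> backlog = new ArrayDeque<>();
    private int currentMsgBufferCount = 0;

    void enqueue(String msg)
    {
        backlog.add(msg);
    }

    void drain(boolean breakEarly)
    {
        Queue<String> drained = new ArrayDeque<>(backlog);
        backlog.clear();
        currentMsgBufferCount = drained.size();
        for (String msg : drained)
        {
            if (breakEarly)
                return; // simulates the early break: the counter is never reset
        }
        currentMsgBufferCount = 0; // only reached on a clean pass
    }

    // 3.11-style accounting: backlog plus the in-flight buffer count.
    int getPendingMessages()
    {
        return backlog.size() + currentMsgBufferCount;
    }
}
```

After an early break, {{getPendingMessages()}} returns 1 even though the backlog is empty, which matches the heap-dump observation above (empty backlog, non-zero buffer count).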



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-11551) Incorrect counting of pending messages in OutboundTcpConnection

2018-04-24 Thread Jaydeepkumar Chovatia (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11551?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jaydeepkumar Chovatia updated CASSANDRA-11551:
--
Status: Patch Available  (was: Open)

> Incorrect counting of pending messages in OutboundTcpConnection
> ---
>
> Key: CASSANDRA-11551
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11551
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Robert Stupp
>Assignee: Jaydeepkumar Chovatia
>Priority: Minor
> Fix For: 2.2.x, 3.0.x, 3.11.x
>
>
> Somehow {{OutboundTcpConnection.getPendingMessages()}} seems to return a 
> wrong number.
> {code}
> nodetool netstats
> Mode: NORMAL
> Not sending any streams.
> Read Repair Statistics:
> Attempted: 1655
> Mismatch (Blocking): 0
> Mismatch (Background): 2
> Pool NameActive   Pending  Completed
> Large messages  n/a 5  0
> Small messages  n/a 0   31534100
> Gossip messages n/a 0 520393
> {code}
> Inspection of the heap dump of that node unveiled that all instances of 
> {{OutboundTcpConnection.backlog}} are empty but {{currentMsgBufferCount}} is 
> {{1}} for 5 instances of {{OutboundTcpConnection}}.
> Maybe the cause is in {{OutboundTcpConnection.run()}} where 
> {{drainedMessages.size()}} is called twice but assumed that these are equal.
> /cc [~aweisberg]






[jira] [Commented] (CASSANDRA-11551) Incorrect counting of pending messages in OutboundTcpConnection

2018-04-24 Thread Jaydeepkumar Chovatia (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11551?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16450341#comment-16450341
 ] 

Jaydeepkumar Chovatia commented on CASSANDRA-11551:
---

I think I've found the root cause of this issue. One possible culprit: once we 
break at [this 
point|https://github.com/apache/cassandra/blob/cassandra-3.11/src/java/org/apache/cassandra/net/OutboundTcpConnection.java#L270],
 {{currentMsgBufferCount}} doesn't get reset. At that moment {{backlog.size()}} 
is 0 but {{currentMsgBufferCount}} can still be > 0, so a call to 
{{getPendingMessages}} reports pending messages even though in reality there 
are none.

This problem most likely will not occur on {{trunk}}, because trunk doesn't use 
{{currentMsgBufferCount}}; it deals directly with {{backlog.size()}} in 
[getPendingMessages|https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/net/async/OutboundMessagingConnection.java#L659]
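
One minimal way to repair the older branches' accounting is to reset the in-flight counter on every exit path of the drain loop; the patch linked in this thread is authoritative and may differ. A hedged, self-contained sketch (illustrative names, not Cassandra's code):

```java
import java.util.ArrayDeque;
import java.util.Queue;

// Sketch of the repair: clear the in-flight counter in a finally block so an
// early break can no longer strand it, and getPendingMessages() can no longer
// report phantom messages against an empty backlog.
class FixedPendingCount
{
    private final Queue<String> backlog = new ArrayDeque<>();
    private int currentMsgBufferCount = 0;

    void enqueue(String msg)
    {
        backlog.add(msg);
    }

    void drain(boolean breakEarly)
    {
        Queue<String> drained = new ArrayDeque<>(backlog);
        backlog.clear();
        try
        {
            currentMsgBufferCount = drained.size();
            for (String msg : drained)
            {
                if (breakEarly)
                    return; // early exit no longer strands the counter
            }
        }
        finally
        {
            currentMsgBufferCount = 0; // runs on every exit path
        }
    }

    int getPendingMessages()
    {
        return backlog.size() + currentMsgBufferCount;
    }
}
```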

I've fixed this for the older branches; please find the fix here:
||3.11||3.0||2.2||
|[diff 
|https://github.com/apache/cassandra/compare/cassandra-3.11...jaydeepkumar1984:11551-3.11?expand=1]|[diff
 
|https://github.com/apache/cassandra/compare/trunk...jaydeepkumar1984:11551-3.0?expand=1]|[diff|https://github.com/apache/cassandra/compare/trunk...jaydeepkumar1984:11551-2.2?expand=1]|
|[!https://circleci.com/gh/jaydeepkumar1984/cassandra/tree/11551-3.11.svg?style=svg!
 
|https://circleci.com/gh/jaydeepkumar1984/cassandra/63]|[!https://circleci.com/gh/jaydeepkumar1984/cassandra/tree/11551-3.0.svg?style=svg!
 
|https://circleci.com/gh/jaydeepkumar1984/cassandra/65]|[!https://circleci.com/gh/jaydeepkumar1984/cassandra/tree/11551-2.2.svg?style=svg!
  |https://circleci.com/gh/jaydeepkumar1984/cassandra/66]|

Jaydeep

> Incorrect counting of pending messages in OutboundTcpConnection
> ---
>
> Key: CASSANDRA-11551
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11551
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Robert Stupp
>Assignee: Jaydeepkumar Chovatia
>Priority: Minor
> Fix For: 2.2.x
>
>
> Somehow {{OutboundTcpConnection.getPendingMessages()}} seems to return a 
> wrong number.
> {code}
> nodetool netstats
> Mode: NORMAL
> Not sending any streams.
> Read Repair Statistics:
> Attempted: 1655
> Mismatch (Blocking): 0
> Mismatch (Background): 2
> Pool NameActive   Pending  Completed
> Large messages  n/a 5  0
> Small messages  n/a 0   31534100
> Gossip messages n/a 0 520393
> {code}
> Inspection of the heap dump of that node unveiled that all instances of 
> {{OutboundTcpConnection.backlog}} are empty but {{currentMsgBufferCount}} is 
> {{1}} for 5 instances of {{OutboundTcpConnection}}.
> Maybe the cause is in {{OutboundTcpConnection.run()}} where 
> {{drainedMessages.size()}} is called twice but assumed that these are equal.
> /cc [~aweisberg]






[jira] [Updated] (CASSANDRA-14415) Performance regression in queries for distinct keys

2018-04-24 Thread Samuel Klock (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-14415?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Samuel Klock updated CASSANDRA-14415:
-
Reproduced In: 3.11.2, 3.0.16  (was: 3.0.16, 3.11.2)
   Status: Patch Available  (was: Open)

Patch available 
[here|https://github.com/akasklock/cassandra/tree/CASSANDRA-14415-Use-seek-for-skipBytes-3.11.2].

> Performance regression in queries for distinct keys
> ---
>
> Key: CASSANDRA-14415
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14415
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Samuel Klock
>Assignee: Samuel Klock
>Priority: Major
>
> Running Cassandra 3.0.16, we observed a major performance regression 
> affecting \{{SELECT DISTINCT keys}}-style queries against certain tables.  
> Based on some investigation (guided by some helpful feedback from Benjamin on 
> the dev list), we tracked the regression down to two problems.
> * One is that Cassandra was reading more data from disk than was necessary to 
> satisfy the query.  This was fixed under CASSANDRA-10657 in a later 3.x 
> release.
> * If the fix for CASSANDRA-10657 is incorporated, the other is this code 
> snippet in \{{RebufferingInputStream}}:
> {code:java}
>     @Override
>     public int skipBytes(int n) throws IOException
>     {
>     if (n < 0)
>     return 0;
>     int requested = n;
>     int position = buffer.position(), limit = buffer.limit(), remaining;
>     while ((remaining = limit - position) < n)
>     {
>     n -= remaining;
>     buffer.position(limit);
>     reBuffer();
>     position = buffer.position();
>     limit = buffer.limit();
>     if (position == limit)
>     return requested - n;
>     }
>     buffer.position(position + n);
>     return requested;
>     }
> {code}
> The gist of it is that to skip bytes, the stream needs to read those bytes 
> into memory then throw them away.  In our tests, we were spending a lot of 
> time in this method, so it looked like the chief drag on performance.
> We noticed that the subclass of \{{RebufferingInputStream}} in use for our 
> queries, \{{RandomAccessReader}} (over compressed sstables), implements a 
> \{{seek()}} method.  Overriding \{{skipBytes()}} in it to use \{{seek()}} 
> instead was sufficient to fix the performance regression.
> The performance difference is significant for tables with large values.  It's 
> straightforward to evaluate with very simple key-value tables, e.g.:
> {\{CREATE TABLE testtable (key TEXT PRIMARY KEY, value BLOB);}}
> We did some basic experimentation with the following variations (all in a 
> single-node 3.11.2 cluster with off-the-shelf settings running on a dev 
> workstation):
> * small values (1 KB, 100,000 entries), somewhat larger values (25 KB, 10,000 
> entries), and much larger values (1 MB, 10,000 entries);
> * compressible data (a single byte repeated) and uncompressible data (output 
> from \{{openssl rand $bytes}}); and
> * with and without sstable compression.  (With compression, we use 
> Cassandra's defaults.)
> The difference is most conspicuous for tables with large, uncompressible data 
> and sstable decompression (which happens to describe the use case that 
> triggered our investigation).  It is smaller but still readily apparent for 
> tables with effective compression.  For uncompressible data without 
> compression enabled, there is no appreciable difference.
> Here's what the performance looks like without our patch for the 1-MB entries 
> (times in seconds, five consecutive runs for each data set, all exhausting 
> the results from a \{{SELECT DISTINCT key FROM ...}} query with a page size 
> of 24):
> {noformat}
> working on compressible
> 5.21180510521
> 5.10270500183
> 5.22311806679
> 4.6732840538
> 4.84219098091
> working on uncompressible_uncompressed
> 55.0423607826
> 0.769015073776
> 0.850513935089
> 0.713396072388
> 0.62596988678
> working on uncompressible
> 413.292617083
> 231.345913887
> 449.524993896
> 425.135111094
> 243.469946861
> {noformat}
> and with the fix:
> {noformat}
> working on compressible
> 2.86733293533
> 1.24895811081
> 1.108907938
> 1.12742400169
> 1.04647302628
> working on uncompressible_uncompressed
> 56.4146180153
> 0.895509958267
> 0.922824144363
> 0.772884130478
> 0.731923818588
> working on uncompressible
> 64.4587619305
> 1.81325793266
> 1.52577018738
> 1.41769099236
> 1.60442209244
> {noformat}
> The long initial runs for the uncompressible data presumably come from 
> repeatedly hitting the disk.  In contrast to the runs without the fix, the 
> initial runs seem to be effective at warming the page cache (as lots of data 
> is skipped, so the data that's read can fit in memory), so subsequent runs 
> are faster.

[jira] [Created] (CASSANDRA-14415) Performance regression in queries for distinct keys

2018-04-24 Thread Samuel Klock (JIRA)
Samuel Klock created CASSANDRA-14415:


 Summary: Performance regression in queries for distinct keys
 Key: CASSANDRA-14415
 URL: https://issues.apache.org/jira/browse/CASSANDRA-14415
 Project: Cassandra
  Issue Type: Improvement
Reporter: Samuel Klock
Assignee: Samuel Klock


Running Cassandra 3.0.16, we observed a major performance regression affecting 
\{{SELECT DISTINCT keys}}-style queries against certain tables.  Based on some 
investigation (guided by some helpful feedback from Benjamin on the dev list), 
we tracked the regression down to two problems.

* One is that Cassandra was reading more data from disk than was necessary to 
satisfy the query.  This was fixed under CASSANDRA-10657 in a later 3.x release.
* If the fix for CASSANDRA-10657 is incorporated, the other is this code 
snippet in \{{RebufferingInputStream}}:
{code:java}
    @Override
    public int skipBytes(int n) throws IOException
    {
    if (n < 0)
    return 0;
    int requested = n;
    int position = buffer.position(), limit = buffer.limit(), remaining;
    while ((remaining = limit - position) < n)
    {
    n -= remaining;
    buffer.position(limit);
    reBuffer();
    position = buffer.position();
    limit = buffer.limit();
    if (position == limit)
    return requested - n;
    }
    buffer.position(position + n);
    return requested;
    }
{code}
The gist of it is that to skip bytes, the stream needs to read those bytes into 
memory then throw them away.  In our tests, we were spending a lot of time in 
this method, so it looked like the chief drag on performance.

We noticed that the subclass of \{{RebufferingInputStream}} in use for our 
queries, \{{RandomAccessReader}} (over compressed sstables), implements a 
\{{seek()}} method.  Overriding \{{skipBytes()}} in it to use \{{seek()}} 
instead was sufficient to fix the performance regression.
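
Concretely, the override amounts to translating a relative skip into an absolute position change, so no bytes are read and discarded. A hedged, self-contained illustration of the idea (in Cassandra the same logic would live in {{RandomAccessReader}}; the method names and exception handling here are illustrative, and the real patch is authoritative):

```java
// A reader whose skipBytes() is a cheap position update (a seek) rather than
// a read-and-discard loop. A real reader would surface IOException; this toy
// uses an in-memory array and runtime exceptions to stay self-contained.
class SeekingReader
{
    private final byte[] data;
    private long position;

    SeekingReader(byte[] data)
    {
        this.data = data;
    }

    long getPosition()
    {
        return position;
    }

    long length()
    {
        return data.length;
    }

    void seek(long newPosition)
    {
        if (newPosition < 0 || newPosition > length())
            throw new IllegalArgumentException("seek out of bounds: " + newPosition);
        position = newPosition;
    }

    // The override discussed above: clamp the target to EOF, then jump
    // directly to it. No bytes are read into memory along the way.
    int skipBytes(int n)
    {
        if (n <= 0)
            return 0;
        long current = getPosition();
        long target = Math.min(current + n, length());
        seek(target);
        return (int) (target - current);
    }
}
```

Skipping past the end of the data is clamped, matching the contract of returning the number of bytes actually skipped.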

The performance difference is significant for tables with large values.  It's 
straightforward to evaluate with very simple key-value tables, e.g.:

{\{CREATE TABLE testtable (key TEXT PRIMARY KEY, value BLOB);}}

We did some basic experimentation with the following variations (all in a 
single-node 3.11.2 cluster with off-the-shelf settings running on a dev 
workstation):

* small values (1 KB, 100,000 entries), somewhat larger values (25 KB, 10,000 
entries), and much larger values (1 MB, 10,000 entries);
* compressible data (a single byte repeated) and uncompressible data (output 
from \{{openssl rand $bytes}}); and
* with and without sstable compression.  (With compression, we use Cassandra's 
defaults.)

The difference is most conspicuous for tables with large, uncompressible data 
and sstable decompression (which happens to describe the use case that 
triggered our investigation).  It is smaller but still readily apparent for 
tables with effective compression.  For uncompressible data without compression 
enabled, there is no appreciable difference.

Here's what the performance looks like without our patch for the 1-MB entries 
(times in seconds, five consecutive runs for each data set, all exhausting the 
results from a \{{SELECT DISTINCT key FROM ...}} query with a page size of 24):

{noformat}
working on compressible
5.21180510521
5.10270500183
5.22311806679
4.6732840538
4.84219098091
working on uncompressible_uncompressed
55.0423607826
0.769015073776
0.850513935089
0.713396072388
0.62596988678
working on uncompressible
413.292617083
231.345913887
449.524993896
425.135111094
243.469946861
{noformat}

and with the fix:

{noformat}
working on compressible
2.86733293533
1.24895811081
1.108907938
1.12742400169
1.04647302628
working on uncompressible_uncompressed
56.4146180153
0.895509958267
0.922824144363
0.772884130478
0.731923818588
working on uncompressible
64.4587619305
1.81325793266
1.52577018738
1.41769099236
1.60442209244
{noformat}

The long initial runs for the uncompressible data presumably come from 
repeatedly hitting the disk.  In contrast to the runs without the fix, the 
initial runs seem to be effective at warming the page cache (as lots of data is 
skipped, so the data that's read can fit in memory), so subsequent runs are 
faster.

For smaller data sets, \{{RandomAccessReader.seek()}} and 
\{{RebufferingInputStream.skipBytes()}} are approximately equivalent in their 
behavior (reducing to changing the position pointer of an in-memory buffer most 
of the time), so there isn't much difference.  Here's before the fix for the 
1-KB entries:

{noformat}
working on small_compressible
8.34115099907
8.57280993462
8.3534219265
8.55130696297
8.17362189293
working on small_uncompressible_uncompressed
7.85155582428
7.54075288773
7.50106596947
7.39202189445
7.95735621452
working on small_uncompressible
7.89256501198
7.88875198364

[jira] [Assigned] (CASSANDRA-11551) Incorrect counting of pending messages in OutboundTcpConnection

2018-04-24 Thread Jaydeepkumar Chovatia (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11551?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jaydeepkumar Chovatia reassigned CASSANDRA-11551:
-

Assignee: Jaydeepkumar Chovatia

> Incorrect counting of pending messages in OutboundTcpConnection
> ---
>
> Key: CASSANDRA-11551
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11551
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Robert Stupp
>Assignee: Jaydeepkumar Chovatia
>Priority: Minor
> Fix For: 2.2.x
>
>
> Somehow {{OutboundTcpConnection.getPendingMessages()}} seems to return a 
> wrong number.
> {code}
> nodetool netstats
> Mode: NORMAL
> Not sending any streams.
> Read Repair Statistics:
> Attempted: 1655
> Mismatch (Blocking): 0
> Mismatch (Background): 2
> Pool NameActive   Pending  Completed
> Large messages  n/a 5  0
> Small messages  n/a 0   31534100
> Gossip messages n/a 0 520393
> {code}
> Inspection of the heap dump of that node unveiled that all instances of 
> {{OutboundTcpConnection.backlog}} are empty but {{currentMsgBufferCount}} is 
> {{1}} for 5 instances of {{OutboundTcpConnection}}.
> Maybe the cause is in {{OutboundTcpConnection.run()}} where 
> {{drainedMessages.size()}} is called twice but assumed that these are equal.
> /cc [~aweisberg]






[jira] [Commented] (CASSANDRA-13404) Hostname verification for client-to-node encryption

2018-04-24 Thread Jason Brown (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13404?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16450252#comment-16450252
 ] 

Jason Brown commented on CASSANDRA-13404:
-

I'll review this with a fresh perspective, as well, [~eperott]. If anything, 
thanks for the persistence ;)

> Hostname verification for client-to-node encryption
> ---
>
> Key: CASSANDRA-13404
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13404
> Project: Cassandra
>  Issue Type: New Feature
>Reporter: Jan Karlsson
>Assignee: Per Otterström
>Priority: Major
> Fix For: 4.x
>
> Attachments: 13404-trunk-v2.patch, 13404-trunk.txt
>
>
> Similarily to CASSANDRA-9220, Cassandra should support hostname verification 
> for client-node connections.






[jira] [Updated] (CASSANDRA-14413) minor network auth improvements

2018-04-24 Thread Jeremy Hanna (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-14413?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeremy Hanna updated CASSANDRA-14413:
-
Labels: security  (was: )

> minor network auth improvements
> ---
>
> Key: CASSANDRA-14413
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14413
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Blake Eggleston
>Assignee: Blake Eggleston
>Priority: Minor
>  Labels: security
> Fix For: 4.0
>
>
> CASSANDRA-13985 has a few minor things that could be improved






[jira] [Commented] (CASSANDRA-14197) SSTable upgrade should be automatic

2018-04-24 Thread Ariel Weisberg (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-14197?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16450228#comment-16450228
 ] 

Ariel Weisberg commented on CASSANDRA-14197:


That is a good point. I didn't consider that people will bring up the cluster 
with the new version pretty rapidly.

It seems like a single thread is probably fine because there is no rush to get 
the tables upgraded, right?

> SSTable upgrade should be automatic
> ---
>
> Key: CASSANDRA-14197
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14197
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Marcus Eriksson
>Assignee: Marcus Eriksson
>Priority: Major
> Fix For: 4.x
>
>
> Upgradesstables should run automatically on node upgrade






[jira] [Updated] (CASSANDRA-14410) tablehistograms with non-existent table gives an exception

2018-04-24 Thread Chris Lohfink (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-14410?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Lohfink updated CASSANDRA-14410:
--
Labels: lhf  (was: )

> tablehistograms with non-existent table gives an exception
> --
>
> Key: CASSANDRA-14410
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14410
> Project: Cassandra
>  Issue Type: Bug
>  Components: Tools
>Reporter: Hannu Kröger
>Priority: Major
>  Labels: lhf
>
> nodetool tablehistograms with non-existent table gives a crazy exception. It 
> should give a nice error message like "Table acdc.abba doesn't exist" or 
> something like that.
>  
> Example:
> {code:java}
> $ nodetool tablehistograms acdc.abba
> error: 
> org.apache.cassandra.metrics:type=Table,keyspace=acdc,scope=abba,name=EstimatedPartitionSizeHistogram
> -- StackTrace --
> javax.management.InstanceNotFoundException: 
> org.apache.cassandra.metrics:type=Table,keyspace=acdc,scope=abba,name=EstimatedPartitionSizeHistogram
>     at 
> com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.getMBean(DefaultMBeanServerInterceptor.java:1095)
>     at 
> com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.getAttribute(DefaultMBeanServerInterceptor.java:643)
>     at 
> com.sun.jmx.mbeanserver.JmxMBeanServer.getAttribute(JmxMBeanServer.java:678)
>     at 
> javax.management.remote.rmi.RMIConnectionImpl.doOperation(RMIConnectionImpl.java:1445)
>     at 
> javax.management.remote.rmi.RMIConnectionImpl.access$300(RMIConnectionImpl.java:76)
>     at 
> javax.management.remote.rmi.RMIConnectionImpl$PrivilegedOperation.run(RMIConnectionImpl.java:1309)
>     at 
> javax.management.remote.rmi.RMIConnectionImpl.doPrivilegedOperation(RMIConnectionImpl.java:1401)
>     at 
> javax.management.remote.rmi.RMIConnectionImpl.getAttribute(RMIConnectionImpl.java:639)
>     at sun.reflect.GeneratedMethodAccessor297.invoke(Unknown Source)
>     at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>     at java.lang.reflect.Method.invoke(Method.java:498)
>     at sun.rmi.server.UnicastServerRef.dispatch(UnicastServerRef.java:324)
>     at sun.rmi.transport.Transport$1.run(Transport.java:200)
>     at sun.rmi.transport.Transport$1.run(Transport.java:197)
>     at java.security.AccessController.doPrivileged(Native Method)
>     at sun.rmi.transport.Transport.serviceCall(Transport.java:196)
>     at 
> sun.rmi.transport.tcp.TCPTransport.handleMessages(TCPTransport.java:568)
>     at 
> sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run0(TCPTransport.java:826)
>     at 
> sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.lambda$run$0(TCPTransport.java:683)
>     at java.security.AccessController.doPrivileged(Native Method)
>     at 
> sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run(TCPTransport.java:682)
>     at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>     at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>     at java.lang.Thread.run(Thread.java:745)
>     at 
> sun.rmi.transport.StreamRemoteCall.exceptionReceivedFromServer(StreamRemoteCall.java:283)
>     at 
> sun.rmi.transport.StreamRemoteCall.executeCall(StreamRemoteCall.java:260)
>     at sun.rmi.server.UnicastRef.invoke(UnicastRef.java:161)
>     at com.sun.jmx.remote.internal.PRef.invoke(Unknown Source)
>     at 
> javax.management.remote.rmi.RMIConnectionImpl_Stub.getAttribute(Unknown 
> Source)
>     at 
> javax.management.remote.rmi.RMIConnector$RemoteMBeanServerConnection.getAttribute(RMIConnector.java:903)
>     at 
> javax.management.MBeanServerInvocationHandler.invoke(MBeanServerInvocationHandler.java:273)
>     at com.sun.proxy.$Proxy20.getValue(Unknown Source)
>     at 
> org.apache.cassandra.tools.NodeProbe.getColumnFamilyMetric(NodeProbe.java:1334)
>     at 
> org.apache.cassandra.tools.nodetool.TableHistograms.execute(TableHistograms.java:62)
>     at org.apache.cassandra.tools.NodeTool$NodeToolCmd.run(NodeTool.java:254)
>     at org.apache.cassandra.tools.NodeTool.main(NodeTool.java:168){code}
>  
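
The fix being discussed is mostly input validation before the JMX lookup: parse the arguments, verify the table exists, and print a friendly message. A hedged sketch of the argument handling, covering both the two-argument form and the single-argument dotted {{keyspace.table}} form (names are illustrative, not the actual NodeTool code):

```java
import java.util.Arrays;
import java.util.List;

// Accept both "keyspace table" and "keyspace.table" invocations, and fail
// with a clear message instead of letting a later JMX metric lookup blow up
// with InstanceNotFoundException.
final class TableHistogramsArgs
{
    static List<String> parse(List<String> args)
    {
        if (args.size() == 2)
            return args;
        // Single argument: allow the dotted "keyspace.table" shorthand.
        if (args.size() == 1 && args.get(0).contains("."))
            return Arrays.asList(args.get(0).split("\\.", 2));
        throw new IllegalArgumentException(
            "tablehistograms requires <keyspace> <table> or <keyspace.table>");
    }
}
```

After parsing, the command would check that the keyspace and table actually exist and report something like "Table acdc.abba doesn't exist" rather than the stack trace above.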






[jira] [Updated] (CASSANDRA-14342) Refactor getColumnFamilyStore() to getTable()

2018-04-24 Thread Jon Haddad (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-14342?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jon Haddad updated CASSANDRA-14342:
---
Summary: Refactor getColumnFamilyStore() to getTable()  (was: Refactor 
getColumnFamilyStore() to getTableStore())

> Refactor getColumnFamilyStore() to getTable()
> -
>
> Key: CASSANDRA-14342
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14342
> Project: Cassandra
>  Issue Type: Sub-task
>Reporter: Jon Haddad
>Assignee: Jon Haddad
>Priority: Major
>







[jira] [Commented] (CASSANDRA-7622) Implement virtual tables

2018-04-24 Thread Chris Lohfink (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7622?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16450123#comment-16450123
 ] 

Chris Lohfink commented on CASSANDRA-7622:
--

How do you create new tables? Do you have a snippet or something? e.g. 
https://github.com/clohfink/cassandra/blob/945309719a447c863bb60d3976d6a73d7b2f6863/src/java/org/apache/cassandra/db/virtual/CompactionStats.java

Two weeks isn't long to wait, and I'm all for dropping this patch if the other 
implementation is better, but I am not really convinced it's any better, just 
different, and in some ways it sounds worse to me (I am not convinced a custom 
DDL for querying is a good idea). I do have to say, though, that this ticket 
has seen four years of bikeshedding; it's unfortunate timing to wait until 
there's an implementation in review to start from scratch instead of giving 
feedback.

> Implement virtual tables
> 
>
> Key: CASSANDRA-7622
> URL: https://issues.apache.org/jira/browse/CASSANDRA-7622
> Project: Cassandra
>  Issue Type: Improvement
>  Components: CQL
>Reporter: Tupshin Harper
>Assignee: Chris Lohfink
>Priority: Major
> Fix For: 4.x
>
>
> There are a variety of reasons to want virtual tables, which would be any 
> table that would be backed by an API, rather than data explicitly managed and 
> stored as sstables.
> One possible use case would be to expose JMX data through CQL as a 
> resurrection of CASSANDRA-3527.
> Another is a more general framework to implement the ability to expose yaml 
> configuration information. So it would be an alternate approach to 
> CASSANDRA-7370.
> A possible implementation would be in terms of CASSANDRA-7443, but I am not 
> presupposing.






[jira] [Comment Edited] (CASSANDRA-7622) Implement virtual tables

2018-04-24 Thread Chris Lohfink (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7622?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16450066#comment-16450066
 ] 

Chris Lohfink edited comment on CASSANDRA-7622 at 4/24/18 3:36 PM:
---

I am fine with renaming virtualtable to systemview; it's irrelevant at this 
point and not exposed externally currently anyway. I'll do the rename in a 
followup if it's important.

If you want it to display "SYSTEM VIEW" in cqlsh when doing a describe, that's 
purely a client-side/driver change (if the virtual or systemview flag is set, 
use that instead of CREATE TABLE); you don't need to change CQL to do that. 
There is a DDL change with this, but it's locked down internally and mostly 
just for ease; I can take that out and require using the tablemetadata builder, 
which might even be a good idea. Changing the protocol won't make that 
difference unless you're planning on a whole new place to store the schema and 
share that information with the driver/cqlsh. I think that can be captured in 
a followup jira or sub-task to improve UX with cqlsh/drivers.

It's supposed to be able to do range queries and just about any slicing and 
dicing, but I did miss short-cutting the IN restriction; I'll fix that.

As for the schema... you have one cell per attribute of each metric? There are 
currently 63 metrics with between 1 and 15 values each, literally hundreds of 
cells _per row_, which isn't viewable on a sane display. You can just as 
easily grab a single value efficiently with this impl:

{code}
SELECT one_min_rate FROM system_info.table_stats WHERE keyspace_name = 'system' 
AND table_name = 'local' AND metric = 'writeLatency';
{code}

or see all parts of the metric:

{code}
SELECT * FROM system_info.table_stats WHERE keyspace_name = 'system' AND 
table_name = 'local' AND metric = 'writeLatency';
{code}

If we want to fine-tune the table_stats schema, we can just remove it from 
this patch and discuss that particular table's schema in another patch. That's 
not really what this ticket is about; there are a lot of tables to create, so 
let's just get the capability to create them in. If your implementation is 
very similar with a ReadQuery replacement, it's at a high level the same 
thing. In the next patch I refactored it to look a bit cleaner and to reuse 
the existing AbstractQueryPager logic rather than doing its own from-scratch 
QueryPager.



was (Author: cnlwsu):
I am fine with renaming virtualtable to systemview, its irrelevant at this 
point and not exposed externally currently anyway. Ill do the rename in 
followup if its important.

If you want it to display "SYSTEM VIEW" in cqlsh when do a describe, thats 
purely a client side/driver change (if virtual or systemview whatever flag set, 
use that instead of CREATE TABLE), you dont need to change CQL to do that, and 
changing protocol wont make that difference unless your planning on a whole new 
place to store the schema and share that information with driver/cqlsh. I think 
that can be captured in followup jira or sub task to improve UX with 
cqlsh/drivers.

Its supposed range queries and about any slice/dicing, but I did miss short 
cutting the IN restriction, I'll fix that.

As for the schema... You have 1 cell per attribute of each metric? theres 
currently 63 metrics with between 1 and 15 values - literally hundreds of cells 
_per row_ which isnt viewable on a sane display. you can just as easily grab a 
single value efficiently with impl:

{code}
SELECT one_min_rate FROM system_info.table_stats WHERE keyspace_name = 'system' 
AND table_name = 'local' AND metric = 'writeLatency';
{code}

or see all parts of the metric:

{code}
SELECT * FROM system_info.table_stats WHERE keyspace_name = 'system' AND 
table_name = 'local' AND metric = 'writeLatency';
{code}

If we want to fine-tune the table_stats schema, we can just remove it from 
this patch and discuss that particular table's schema in another patch. That's 
not really what this ticket is about. There are a lot of tables to create; 
let's just get the capability to create them in. If your implementation is 
very similar, with a ReadQuery replacement, it's at a high level the same 
thing. In the next patch I refactored it to look a bit cleaner and to reuse 
the existing AbstractQueryPager logic rather than doing its own from-scratch 
QueryPager.


> Implement virtual tables
> 
>
> Key: CASSANDRA-7622
> URL: https://issues.apache.org/jira/browse/CASSANDRA-7622
> Project: Cassandra
>  Issue Type: Improvement
>  Components: CQL
>Reporter: Tupshin Harper
>Assignee: Chris Lohfink
>Priority: Major
> Fix For: 4.x
>
>
> There are a variety of reasons to want virtual tables, which would be any 
> table that would be backed by an API, rather than data explicitly managed and 
> stored as sstables.
> One possible use case would be to expose JMX data through CQL as 

[jira] [Commented] (CASSANDRA-7622) Implement virtual tables

2018-04-24 Thread Chris Lohfink (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7622?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16450066#comment-16450066
 ] 

Chris Lohfink commented on CASSANDRA-7622:
--

I am fine with renaming virtualtable to systemview; it's irrelevant at this 
point and not exposed externally at the moment anyway. I'll do the rename in a 
followup if it's important.

If you want it to display "SYSTEM VIEW" in cqlsh when doing a describe, that's 
purely a client-side/driver change (if the virtual or systemview flag is set, 
use that instead of CREATE TABLE). You don't need to change CQL to do that, 
and changing the protocol won't make a difference unless you're planning a 
whole new place to store the schema and share that information with the 
driver/cqlsh. I think that can be captured in a followup jira or sub-task to 
improve the UX of cqlsh/drivers.

It supports range queries and just about any slicing/dicing, but I did miss 
short-cutting the IN restriction; I'll fix that.

As for the schema... you have one cell per attribute of each metric? There are 
currently 63 metrics with between 1 and 15 values each - literally hundreds of 
cells _per row_, which isn't viewable on a sane display. You can just as 
easily grab a single value efficiently with this impl:

{code}
SELECT one_min_rate FROM system_info.table_stats WHERE keyspace_name = 'system' 
AND table_name = 'local' AND metric = 'writeLatency';
{code}

or see all parts of the metric:

{code}
SELECT * FROM system_info.table_stats WHERE keyspace_name = 'system' AND 
table_name = 'local' AND metric = 'writeLatency';
{code}
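To put rough numbers on the row-width tradeoff, here is a back-of-the-envelope 
sketch. The 63-metric and 1-to-15-value figures come from this comment; the 
average of 8 values per metric is an assumption for illustration only:

```python
# Rough sketch of row width under the two proposed schemas, using the
# figures from this thread (63 metrics, 1-15 values each); the average
# of 8 values per metric is made up for illustration.
metrics = 63
avg_values_per_metric = 8  # somewhere between 1 and 15

# Schema A: one column per metric attribute, one row per table.
cells_per_row_wide = metrics * avg_values_per_metric

# Schema B (table_stats above): the metric name is a clustering column,
# so each row carries only the attributes of a single metric.
cells_per_row_clustered = avg_values_per_metric

print(cells_per_row_wide, cells_per_row_clustered)  # 504 8
```

Under these assumptions the wide schema puts roughly 500 cells in each row, 
while the clustered schema keeps rows to a handful of cells.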

If we want to fine-tune the table_stats schema, we can just remove it from 
this patch and discuss that particular table's schema in another patch. That's 
not really what this ticket is about. There are a lot of tables to create; 
let's just get the capability to create them in. If your implementation is 
very similar, with a ReadQuery replacement, it's at a high level the same 
thing. In the next patch I refactored it to look a bit cleaner and to reuse 
the existing AbstractQueryPager logic rather than doing its own from-scratch 
QueryPager.


> Implement virtual tables
> 
>
> Key: CASSANDRA-7622
> URL: https://issues.apache.org/jira/browse/CASSANDRA-7622
> Project: Cassandra
>  Issue Type: Improvement
>  Components: CQL
>Reporter: Tupshin Harper
>Assignee: Chris Lohfink
>Priority: Major
> Fix For: 4.x
>
>
> There are a variety of reasons to want virtual tables, which would be any 
> table that would be backed by an API, rather than data explicitly managed and 
> stored as sstables.
> One possible use case would be to expose JMX data through CQL as a 
> resurrection of CASSANDRA-3527.
> Another is a more general framework to implement the ability to expose yaml 
> configuration information. So it would be an alternate approach to 
> CASSANDRA-7370.
> A possible implementation would be in terms of CASSANDRA-7443, but I am not 
> presupposing.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-14298) cqlshlib tests broken on b.a.o

2018-04-24 Thread Stefan Podkowinski (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-14298?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16450051#comment-16450051
 ] 

Stefan Podkowinski commented on CASSANDRA-14298:


{quote}For example, do we want to go all the way to Python 3, or would we 
prefer it to be 2/3 cross-compatible?
{quote}
Being cross-compatible would be the most convenient option for users, I guess, 
but I don't really have any experience converting Python 2 code in such a way. 
Would you be up for giving this a try, Patrick?
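For context, 2/3 cross-compatible Python tends to look like the following 
minimal sketch (purely illustrative; none of this is from the actual cqlshlib 
code or patch):

```python
# Minimal sketch of Python 2/3 cross-compatible code; a cross-compatible
# cqlshlib would need this style throughout. Names are illustrative.
from __future__ import print_function, unicode_literals

import sys

PY2 = sys.version_info[0] == 2

def to_text(value, encoding='utf-8'):
    """Return a text (unicode) string on both Python 2 and 3."""
    if isinstance(value, bytes):
        return value.decode(encoding)
    return value

print(to_text(b'system.local'))  # prints system.local on both 2.7 and 3.x
```

Libraries like `six` package up these idioms, at the cost of an extra 
dependency.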
{quote}I agree with this, only casually observing this ticket. Is it worth 
bringing up on the dev@ ML?
{quote}
Sure. Maybe we can bring it up with an actual proposal after checking our 
options.
{quote}However, for the impacted copy tests, we can only completely avoid these 
ugly workarounds if we port cqlshlib to Python 3 not only in trunk, but also in 
all other supported versions.
{quote}
Depending on how invasive those changes turn out to be, we can discuss 
patching 3.11, but I don't think that's going to happen for older branches. 
Anyone running Cassandra 2.x/3.0 should be able to keep using Python 2 until 
its EOL in 2020, which should be well past the Cassandra EOL date.
{quote}Any branch of C* left behind on Python 2.7 will either have to be 
skipped for the copy tests, or else tested through some kind of alternate 
approach such as the awful hack I'm working on right now.
{quote}
We currently run 0% of the tests on b.a.o. If we can get all but the copy 
tests working in a straightforward way, that would be a good start.

> cqlshlib tests broken on b.a.o
> --
>
> Key: CASSANDRA-14298
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14298
> Project: Cassandra
>  Issue Type: Bug
>  Components: Build, Testing
>Reporter: Stefan Podkowinski
>Assignee: Patrick Bannister
>Priority: Major
> Attachments: cqlsh_tests_notes.md
>
>
> It appears that cqlsh-tests on builds.apache.org on all branches stopped 
> working since we removed nosetests from the system environment. See e.g. 
> [here|https://builds.apache.org/view/A-D/view/Cassandra/job/Cassandra-trunk-cqlsh-tests/458/cython=no,jdk=JDK%201.8%20(latest),label=cassandra/console].
>  Looks like we either have to make nosetests available again or migrate to 
> pytest as we did with dtests. Giving pytest a quick try resulted in many 
> errors locally, but I haven't inspected them in detail yet. 






[jira] [Commented] (CASSANDRA-7622) Implement virtual tables

2018-04-24 Thread Aleksey Yeschenko (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7622?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16450043#comment-16450043
 ] 

Aleksey Yeschenko commented on CASSANDRA-7622:
--

[~blerer]
1. Do these tables live in a special keyspace?
2. What's the DDL for it? Is it at all exposed to users or hard-coded 
internally?
3. How's it represented in schema tables? Does it highjack system_schema.tables 
with shim props, or is there a separate schema table for system views?

> Implement virtual tables
> 
>
> Key: CASSANDRA-7622
> URL: https://issues.apache.org/jira/browse/CASSANDRA-7622
> Project: Cassandra
>  Issue Type: Improvement
>  Components: CQL
>Reporter: Tupshin Harper
>Assignee: Chris Lohfink
>Priority: Major
> Fix For: 4.x
>
>
> There are a variety of reasons to want virtual tables, which would be any 
> table that would be backed by an API, rather than data explicitly managed and 
> stored as sstables.
> One possible use case would be to expose JMX data through CQL as a 
> resurrection of CASSANDRA-3527.
> Another is a more general framework to implement the ability to expose yaml 
> configuration information. So it would be an alternate approach to 
> CASSANDRA-7370.
> A possible implementation would be in terms of CASSANDRA-7443, but I am not 
> presupposing.






[jira] [Commented] (CASSANDRA-13404) Hostname verification for client-to-node encryption

2018-04-24 Thread JIRA

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13404?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16450025#comment-16450025
 ] 

Per Otterström commented on CASSANDRA-13404:


Taking another stab at this ticket. Attaching an updated patch set and some 
dtests to go with that.

Short recap:
* I want to add hostname validation on the server side to verify that the 
client IP matches the SAN field in the client certificate.
* Several concerns were raised on the initial patch: "does it add value", 
"setting the incoming IP on the SSLHandler", "added complexity for users".
* A second patch based on a plug-in approach was created. While this approach 
has some interesting benefits, it is a bit overkill for this.

Some comments on the updated patch:
* SslHandler will get the client host info only when endpoint verification is 
enabled, very similar to the setup for server-server communication. When the 
require_endpoint_verification option is not enabled, behavior remains 
unchanged.
* The require_endpoint_verification option is already accepted in the 
client-server configuration, but is currently unused and silently discarded. 
Adding this property to the client_encryption_options section should be 
manageable for our users in terms of complexity.
* The fact that this patch set gives the desired effect is verified by the 
provided dtests.
* IMO the value is well argued in previous comments. When tickets like 
CASSANDRA-13971 get merged, a growing number of users will have access to an 
infrastructure that manages keys and certificates. Hostname validation will 
then be a common task.

Patch for trunk: https://github.com/eperott/cassandra/tree/13404-trunk
Dtests: https://github.com/eperott/cassandra-dtest/tree/13404-trunk
CircleCI (unit tests only): 
https://circleci.com/workflow-run/c29a6caf-1eeb-408d-a424-1ffbcaf9477d
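For readers following along, the check the patch enables boils down to 
comparing the peer's IP with the certificate's SAN entries. A hedged Python 
sketch of that logic (the real implementation is Java in the Netty SslHandler 
setup; the `peercert` shape below follows what Python's 
`ssl.SSLSocket.getpeercert()` returns):

```python
# Illustrative sketch only - NOT the patch's Java code, just the policy
# it implements: accept the client only if its IP appears as an
# "IP Address" subjectAltName entry in the presented certificate.
def client_ip_matches_san(peercert, client_ip):
    for kind, value in peercert.get('subjectAltName', ()):
        if kind == 'IP Address' and value == client_ip:
            return True
    return False

cert = {'subjectAltName': (('DNS', 'client.example.org'),
                           ('IP Address', '10.0.0.5'))}
assert client_ip_matches_san(cert, '10.0.0.5')
assert not client_ip_matches_san(cert, '10.0.0.6')
```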





> Hostname verification for client-to-node encryption
> ---
>
> Key: CASSANDRA-13404
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13404
> Project: Cassandra
>  Issue Type: New Feature
>Reporter: Jan Karlsson
>Assignee: Per Otterström
>Priority: Major
> Fix For: 4.x
>
> Attachments: 13404-trunk-v2.patch, 13404-trunk.txt
>
>
> Similarily to CASSANDRA-9220, Cassandra should support hostname verification 
> for client-node connections.






[jira] [Commented] (CASSANDRA-7622) Implement virtual tables

2018-04-24 Thread Benjamin Lerer (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7622?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16450017#comment-16450017
 ] 

Benjamin Lerer commented on CASSANDRA-7622:
---

{quote}you can do arithmetic operations, searches and aggregations on them but 
the type was wrong{quote}
Sorry, my comment was misleading. I just wanted to mention the fact that 
aggregates and arithmetic operations do not work on {{TEXT}} values.

{quote}What would you think the table schema should look like?{quote}
I asked myself that question a lot. Due to the CQL limitations, I do not think 
there is a perfect solution. In the end, my preferred schema is:
{code}
SYSTEM VIEW sv_table_metrics (keyspace TEXT,
  table TEXT,
  memtable_on_heap_size BIGINT,
  memtable_off_heap_size BIGINT,
  [...]
  PRIMARY KEY (keyspace, table));
{code}

In the case of the {{table}} and {{keyspace}} metrics, that approach results 
in tables with a large number of columns (even if we mitigate that by using 
user-defined types for histograms, meters and timers), but it allows you to 
easily select different subsets of the data. You can query based on 
{{keyspaces}}, {{tables}}, {{metrics}} and {{metric fields}}. At the same 
time, you can easily select a specific metric value for a given table in an 
efficient way.

{quote}I am not fussy about naming. However, using the same terminology does 
confuse users as they may expect the same feature set from Cassandra as they 
got in their relational database. I would personally avoid it.{quote}

Based on my experience working on CQL tickets and my interactions with users 
and discussions with evangelists, I came to two conclusions.
# If the feature is similar to one that they know from the relational world, 
people prefer that you use the same name. It is easier for them to recognize 
it and to understand how it should be used.
# If the feature behaves differently from what is used in the relational 
world, you should be careful and use a different name, or it will backfire.

In this case, there is no real difference between us and the relational world. 
Because of that, I think it would be a mistake not to reuse the name.
The {{Virtual Table}} name is, in my opinion, the really confusing one. It 
just makes me think of some form of pluggable storage. Coming from the SQL 
world, it is not the name I would use in Google to figure out how to access 
system information in Cassandra.

{quote}do you have a design or code that you can share? It would be great if 
you can post it. Is there a timeline around when you'll post it?{quote}

At a high level there are some similarities between [~cnlwsu]'s patch and 
ours. We have introduced some {{ReadQuery}} subclasses that delegate calls to 
{{SystemViews}} and slightly refactored the CQL layer to allow it to work on 
top of all {{ReadQuery}} implementations. The advantage of that approach is 
that the existing CQL functionality is automatically supported on top of the 
{{SystemViews}}, and the conditional logic required for adding support for 
{{SystemViews}} is much smaller.

[~cnlwsu]'s current patch does not support some range queries or 
multi-partition queries, for example. It will fire an {{[Invalid query] 
message="IN restrictions are not supported on indexed columns"}} for 
multi-partition queries. We avoided that kind of risk/problem with our 
approach.

That reduces the logic of our {{SystemView}} implementations to just fetching 
the requested data or updating it.
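The delegation described above can be sketched like this (Python pseudocode 
for the shape of the design only; the real classes are Java and these names 
are approximations, not the actual DSE/C* API):

```python
# Sketch of the described design: ReadQuery subclasses delegate to a
# SystemView, so the CQL layer works unchanged over either data source.
class ReadQuery:
    def execute(self):
        raise NotImplementedError

class SSTableReadQuery(ReadQuery):
    """Normal storage-engine path (placeholder)."""
    def __init__(self, table):
        self.table = table
    def execute(self):
        return self.table.read()

class SystemViewReadQuery(ReadQuery):
    """Delegates to a system view instead of the storage engine."""
    def __init__(self, view, restrictions):
        self.view = view
        self.restrictions = restrictions
    def execute(self):
        # A SystemView only has to fetch the requested data; paging,
        # filtering, aggregation etc. remain in the shared CQL layer.
        return self.view.fetch(self.restrictions)
```

The point of the pattern is that everything above `ReadQuery` is oblivious to 
where the rows come from.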

Our current code has been designed for DSE, so I need to modify it to make it 
work on top of C*. As I am also quite busy with some other tasks, it will 
probably take two weeks before I finish the port.

  




> Implement virtual tables
> 
>
> Key: CASSANDRA-7622
> URL: https://issues.apache.org/jira/browse/CASSANDRA-7622
> Project: Cassandra
>  Issue Type: Improvement
>  Components: CQL
>Reporter: Tupshin Harper
>Assignee: Chris Lohfink
>Priority: Major
> Fix For: 4.x
>
>
> There are a variety of reasons to want virtual tables, which would be any 
> table that would be backed by an API, rather than data explicitly managed and 
> stored as sstables.
> One possible use case would be to expose JMX data through CQL as a 
> resurrection of CASSANDRA-3527.
> Another is a more general framework to implement the ability to expose yaml 
> configuration information. So it would be an alternate approach to 
> CASSANDRA-7370.
> A possible implementation would be in terms of CASSANDRA-7443, but I am not 
> presupposing.




[jira] [Updated] (CASSANDRA-14311) Allow Token-Aware drivers for range scans

2018-04-24 Thread Adam Holmberg (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-14311?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adam Holmberg updated CASSANDRA-14311:
--
Labels: client-impacting  (was: )

> Allow Token-Aware drivers for range scans
> -
>
> Key: CASSANDRA-14311
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14311
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Coordination
>Reporter: Avi Kivity
>Priority: Major
>  Labels: client-impacting
>
> Currently, range scans are not token aware. This means that an extra hop is 
> needed for most requests. Since range scans are usually data intensive, this 
> causes significant extra traffic.
>  
> Token awareness could be enabled by having the coordinator return the token 
> for the next (still unread) row in the response, so the driver can select a 
> next coordinator that owns this row.
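As a rough illustration of the driver-side half (assumed names; a real driver 
would use its token-map metadata and replication settings rather than a plain 
sorted list):

```python
import bisect

# Sketch of token-aware routing for a range scan: given the next unread
# token returned by the coordinator, pick the node owning it. `ring` is
# a sorted list of (token, node) pairs; purely illustrative.
def owner_of(ring, token):
    tokens = [t for t, _ in ring]
    i = bisect.bisect_left(tokens, token)
    return ring[i % len(ring)][1]   # wrap around past the last token

ring = [(100, 'node-a'), (200, 'node-b'), (300, 'node-c')]
print(owner_of(ring, 150))  # node-b owns tokens in (100, 200]
```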






[jira] [Updated] (CASSANDRA-2848) Make the Client API support passing down timeouts

2018-04-24 Thread Adam Holmberg (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-2848?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adam Holmberg updated CASSANDRA-2848:
-
Labels: client-impacting  (was: )

> Make the Client API support passing down timeouts
> -
>
> Key: CASSANDRA-2848
> URL: https://issues.apache.org/jira/browse/CASSANDRA-2848
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Chris Goffinet
>Assignee: Dinesh Joshi
>Priority: Minor
>  Labels: client-impacting
> Fix For: 3.11.x
>
> Attachments: 2848-trunk-v2.txt, 2848-trunk.txt
>
>
> Having a max server RPC timeout is good for worst case, but many applications 
> that have middleware in front of Cassandra, might have higher timeout 
> requirements. In a fail fast environment, if my application starting at say 
> the front-end, only has 20ms to process a request, and it must connect to X 
> services down the stack, by the time it hits Cassandra, we might only have 
> 10ms. I propose we provide the ability to specify the timeout on each call we 
> do optionally.
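The request amounts to deadline propagation. A minimal sketch of the 
client-side arithmetic (illustrative only, not an actual driver API):

```python
import time

# Sketch of deadline propagation: the caller carries an absolute
# deadline, and each downstream call receives only the remaining budget
# as its per-call timeout.
def remaining_ms(deadline):
    return max(0.0, (deadline - time.monotonic()) * 1000.0)

deadline = time.monotonic() + 0.020   # front-end budget: 20 ms total
# ... middleware does its work, consuming part of the budget ...
budget = remaining_ms(deadline)       # pass this down as the call timeout
assert 0.0 <= budget <= 20.1          # small float slack
```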






[jira] [Commented] (CASSANDRA-14381) nodetool listsnapshots is missing local system keyspace snapshots

2018-04-24 Thread Benjamin Lerer (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-14381?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16449519#comment-16449519
 ] 

Benjamin Lerer commented on CASSANDRA-14381:


It seems to me that this is a bug and that we should also have made the change 
in the other branches, not only in trunk.

> nodetool listsnapshots is missing local system keyspace snapshots
> -
>
> Key: CASSANDRA-14381
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14381
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
> Environment: MacOs 10.12.5
> Java 1.8.0_144
> Cassandra 3.11.2 (brew install)
>Reporter: Cyril Scetbon
>Assignee: Ariel Weisberg
>Priority: Major
> Fix For: 4.0
>
>
> The output of *nodetool listsnapshots* is inconsistent with the snapshots 
> created :
> {code:java}
> $ nodetool listsnapshots
> Snapshot Details:
> There are no snapshots
> $ nodetool snapshot -t tag1 --table local system
> Requested creating snapshot(s) for [system] with snapshot name [tag1] and 
> options {skipFlush=false}
> Snapshot directory: tag1
> $ nodetool snapshot -t tag2 --table local system
> Requested creating snapshot(s) for [system] with snapshot name [tag2] and 
> options {skipFlush=false}
> Snapshot directory: tag2
> $ nodetool listsnapshots
> Snapshot Details:
> There are no snapshots
> $ ls 
> /usr/local/var/lib/cassandra/data/system/local-7ad54392bcdd35a684174e047860b377/snapshots/
> tag1 tag2{code}
>  
>  






[jira] [Commented] (CASSANDRA-14359) CREATE TABLE fails if there is a column called "default" with Cassandra 3.11.2

2018-04-24 Thread JIRA

[ 
https://issues.apache.org/jira/browse/CASSANDRA-14359?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16449509#comment-16449509
 ] 

Andrés de la Peña commented on CASSANDRA-14359:
---

[~krummas] it seems correct to me, thanks for fixing the merge

> CREATE TABLE fails if there is a column called "default" with Cassandra 3.11.2
> --
>
> Key: CASSANDRA-14359
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14359
> Project: Cassandra
>  Issue Type: Bug
>  Components: CQL, Documentation and Website
> Environment: This is using Cassandra 3.11.2. This syntax was accepted 
> in 2.1.20.
>Reporter: Andy Klages
>Assignee: Andrés de la Peña
>Priority: Minor
> Fix For: 4.0, 3.0.17, 3.11.3
>
>
> My project is upgrading from Cassandra 2.1 to 3.11. We have a table whose 
> column name is "default". The Cassandra 3.11.2 is rejecting it. I don't see 
> "default" as a keyword in the CQL spec. 
> To reproduce, try adding the following:
> {code:java}
> CREATE TABLE simple (
> simplekey text PRIMARY KEY,
> default text // THIS IS REJECTED
> );
> {code}
> I get this error:
> {code:java}
> SyntaxException: line 3:4 mismatched input 'default' expecting ')' (...
> simplekey text PRIMARY KEY,[default]...)
> {code}






[jira] [Commented] (CASSANDRA-14383) If fsync fails it's always an issue and continuing execution is suspect

2018-04-24 Thread Michael Burman (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-14383?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16449505#comment-16449505
 ] 

Michael Burman commented on CASSANDRA-14383:


For reference, here is the mentioned Postgres mailing-list thread: 
[https://www.postgresql.org/message-id/CAMsr%2BYHh%2B5Oq4xziwwoEfhoTZgr07vdGG%2Bhu%3D1adXx59aTeaoQ%40mail.gmail.com]

 

> If fsync fails it's always an issue and continuing execution is suspect
> ---
>
> Key: CASSANDRA-14383
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14383
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
>Reporter: Ariel Weisberg
>Assignee: Ariel Weisberg
>Priority: Major
> Fix For: 2.1.x, 3.0.x, 3.11.x, 4.0.x
>
>
> We can't catch fsync errors and continue so we shouldn't have code that does 
> that in C*. There was a Postgres bug where fsync returned an error and the FS 
> lost data, but subsequent fsyncs succeeded.
> The [LastErrorException code in 
> NativeLibrary.trySync|https://github.com/apache/cassandra/commit/be313935e54be450d9aaabda7965a2f266e922c9#diff-4258621cdf765f0fea6770db5d40038fR307]
>  looks a little janky. What's up with that? When would trySync be something 
> we would merely try? If try is good enough why do it at all considering try 
> is the default behavior of a series of unsynced filesystem operations.
> -Also when we fsync in FD it's not just fsyncing that file the FS is 
> potentially fsyncing other data and the error code we get could be related to 
> that other data so we can't safely ignore it. The filesystem could be 
> internally inconsistent as well. This happens because the FS journaling may 
> force the FS to flush other data as well to preserve the ordering 
> requirements of journaled metadata.- I'm actually not 100% sure when/if this 
> is the case.
> If we ignore fsync errors it needs to be for whitelisted reasons such as a 
> bad FD.
> I know we have FSErrorHandler and it makes sense for reads, but I'm not sold 
> on it being the right answer for writes. We don't retry flushing a memtable 
> or writing to the commit log to my knowledge. We could go read only and I 
> need to check if that is what we do in practice.
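The fail-hard behavior being argued for can be sketched as follows (Python for 
brevity; this illustrates the policy, not Cassandra's code):

```python
import os

# Sketch of "fsync failure is fatal": if fsync reports an error, the
# kernel may already have dropped the dirty pages, so retrying or
# ignoring the error can silently lose data. The safe move is to stop
# (or go read-only), never to continue as if the write stuck.
def durable_write(path, data):
    fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_TRUNC, 0o644)
    try:
        os.write(fd, data)
        try:
            os.fsync(fd)
        except OSError as e:
            raise SystemExit("fsync failed, refusing to continue: %s" % e)
    finally:
        os.close(fd)
```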






[jira] [Commented] (CASSANDRA-14197) SSTable upgrade should be automatic

2018-04-24 Thread Marcus Eriksson (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-14197?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16449415#comment-16449415
 ] 

Marcus Eriksson commented on CASSANDRA-14197:
-

[~aweisberg] [~KurtG] I'm a little worried about the number of compactions we 
would do on upgrade - people usually don't run {{nodetool upgradesstables}} on 
all nodes at the same time - should we perhaps limit these upgrades to run on 
a single thread or something?
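One way to cap that impact, as suggested, is to funnel automatic upgrade 
compactions through a single-threaded executor. A sketch of the idea 
(illustrative, not the actual patch):

```python
from concurrent.futures import ThreadPoolExecutor

# Sketch of throttling automatic sstable upgrades: however many sstables
# need rewriting, at most one upgrade compaction runs at a time.
upgrade_executor = ThreadPoolExecutor(max_workers=1)

def upgrade(sstable):
    return 'upgraded:' + sstable    # placeholder for the real rewrite

futures = [upgrade_executor.submit(upgrade, s) for s in ['a-1', 'b-2']]
print([f.result() for f in futures])
```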

> SSTable upgrade should be automatic
> ---
>
> Key: CASSANDRA-14197
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14197
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Marcus Eriksson
>Assignee: Marcus Eriksson
>Priority: Major
> Fix For: 4.x
>
>
> Upgradesstables should run automatically on node upgrade






[jira] [Updated] (CASSANDRA-14411) Use Bounds instead of Range to represent sstable first/last token when checking how to anticompact sstables

2018-04-24 Thread Marcus Eriksson (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-14411?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marcus Eriksson updated CASSANDRA-14411:

   Resolution: Fixed
Fix Version/s: 3.11.3
   3.0.17
   2.2.13
   4.0
   Status: Resolved  (was: Ready to Commit)

committed as {{334dca9aa825e6d353aa04fd97016ac1077ff132}}, thanks!

> Use Bounds instead of Range to represent sstable first/last token when 
> checking how to anticompact sstables
> ---
>
> Key: CASSANDRA-14411
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14411
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Marcus Eriksson
>Assignee: Marcus Eriksson
>Priority: Major
> Fix For: 4.0, 2.2.13, 3.0.17, 3.11.3
>
>
> There is currently a chance of missing marking a token as repaired due to the 
> fact that we use Range which are (a, b] to represent first/last token in 
> sstables instead of Bounds which are [a, b].
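A simplified model of the off-by-one (wraparound ignored, and Range.contains 
semantics approximated): an sstable whose first token equals a repaired 
range's left bound holds a token outside the half-open (a, b] range, which a 
Range-vs-Range containment check misses.

```python
# Simplified model of CASSANDRA-14411 (wraparound ignored).
def in_half_open(a, b, t):
    """Token t is inside the repaired range (a, b]."""
    return a < t <= b

a, b = 10, 100            # repaired range (10, 100]
first, last = 10, 50      # sstable covers tokens [10, 50] inclusive

# Old check: represent the sstable as Range (first, last] and test
# containment, which passes when a <= first and last <= b...
old_contained = a <= first and last <= b
# ...even though token 10 is in the sstable but NOT in (10, 100].

# New check: treat first/last as inclusive Bounds, test both endpoints.
new_contained = in_half_open(a, b, first) and in_half_open(a, b, last)

print(old_contained, new_contained)  # True False
```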






[10/10] cassandra git commit: Merge branch 'cassandra-3.11' into trunk

2018-04-24 Thread marcuse
Merge branch 'cassandra-3.11' into trunk


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/1d387f5e
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/1d387f5e
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/1d387f5e

Branch: refs/heads/trunk
Commit: 1d387f5e7f688150c09b7eb14a036d153017ec02
Parents: 1a2eb5e 684120d
Author: Marcus Eriksson 
Authored: Tue Apr 24 08:56:34 2018 +0200
Committer: Marcus Eriksson 
Committed: Tue Apr 24 08:56:34 2018 +0200

--
 CHANGES.txt |   1 +
 .../db/compaction/CompactionManager.java|  80 +++---
 .../db/compaction/AntiCompactionTest.java   | 105 ++-
 .../org/apache/cassandra/schema/MockSchema.java |  16 ++-
 4 files changed, 161 insertions(+), 41 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/1d387f5e/CHANGES.txt
--
diff --cc CHANGES.txt
index 6976c7f,5450322..784fa2b
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -261,10 -30,12 +261,11 @@@ Merged from 3.0
   * Fully utilise specified compaction threads (CASSANDRA-14210)
   * Pre-create deletion log records to finish compactions quicker (CASSANDRA-12763)
  Merged from 2.2:
+  * Use Bounds instead of Range for sstables in anticompaction (CASSANDRA-14411)
   * Fix JSON queries with IN restrictions and ORDER BY clause (CASSANDRA-14286)
 - * Backport circleci yaml (CASSANDRA-14240)
 + * CQL fromJson(null) throws NullPointerException (CASSANDRA-13891)
  Merged from 2.1:
   * Check checksum before decompressing data (CASSANDRA-14284)
 - * CVE-2017-5929 Security vulnerability in Logback warning in NEWS.txt (CASSANDRA-14183)
 
 
 3.11.2

http://git-wip-us.apache.org/repos/asf/cassandra/blob/1d387f5e/src/java/org/apache/cassandra/db/compaction/CompactionManager.java
--
diff --cc src/java/org/apache/cassandra/db/compaction/CompactionManager.java
index 5672dfe,f0a4de5..831d8ca
--- a/src/java/org/apache/cassandra/db/compaction/CompactionManager.java
+++ b/src/java/org/apache/cassandra/db/compaction/CompactionManager.java
@@@ -648,63 -645,67 +648,30 @@@ public class CompactionManager implemen
Refs validatedForRepair,
LifecycleTransaction txn,
long repairedAt,
 +  UUID pendingRepair,
UUID parentRepairSession) throws 
InterruptedException, IOException
  {
 -logger.info("[repair #{}] Starting anticompaction for {}.{} on {}/{} 
sstables", parentRepairSession, cfs.keyspace.getName(), cfs.getTableName(), 
validatedForRepair.size(), cfs.getLiveSSTables());
 -logger.trace("[repair #{}] Starting anticompaction for ranges {}", 
parentRepairSession, ranges);
 -Set sstables = new HashSet<>(validatedForRepair);
 -Set mutatedRepairStatuses = new HashSet<>();
 -// we should only notify that repair status changed if it actually 
did:
 -Set mutatedRepairStatusToNotify = new HashSet<>();
 -Map wasRepairedBefore = new HashMap<>();
 -for (SSTableReader sstable : sstables)
 -wasRepairedBefore.put(sstable, sstable.isRepaired());
 -
 -Set nonAnticompacting = new HashSet<>();
 -
 -Iterator sstableIterator = sstables.iterator();
  try
  {
 -List normalizedRanges = Range.normalize(ranges);
 -
 -while (sstableIterator.hasNext())
 -{
 -SSTableReader sstable = sstableIterator.next();
 +ActiveRepairService.ParentRepairSession prs = 
ActiveRepairService.instance.getParentRepairSession(parentRepairSession);
 +Preconditions.checkArgument(!prs.isPreview(), "Cannot anticompact 
for previews");
  
 -Bounds sstableBounds = new 
Bounds<>(sstable.first.getToken(), sstable.last.getToken());
 +logger.info("{} Starting anticompaction for {}.{} on {}/{} 
sstables", PreviewKind.NONE.logPrefix(parentRepairSession), 
cfs.keyspace.getName(), cfs.getTableName(), validatedForRepair.size(), 
cfs.getLiveSSTables().size());
 +logger.trace("{} Starting anticompaction for ranges {}", 
PreviewKind.NONE.logPrefix(parentRepairSession), ranges);
 +Set sstables = new HashSet<>(validatedForRepair);
  
- Set nonAnticompacting = new HashSet<>();
- 
 -boolean shouldAnticompact = false;
 +Iterator sstableIterator = sstables.iterator();
 +List normalizedRanges = Range.normalize(ranges);

[02/10] cassandra git commit: Use Bounds instead of Range to represent sstables when deciding how to anticompact

2018-04-24 Thread marcuse
Use Bounds instead of Range to represent sstables when deciding how to 
anticompact

Patch by marcuse; reviewed by Blake Eggleston for CASSANDRA-14411


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/334dca9a
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/334dca9a
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/334dca9a

Branch: refs/heads/cassandra-3.0
Commit: 334dca9aa825e6d353aa04fd97016ac1077ff132
Parents: 594cde7
Author: Marcus Eriksson 
Authored: Mon Apr 23 09:13:52 2018 +0200
Committer: Marcus Eriksson 
Committed: Tue Apr 24 08:21:43 2018 +0200

--
 CHANGES.txt   |  1 +
 .../apache/cassandra/db/compaction/CompactionManager.java | 10 +-
 .../cassandra/db/compaction/AntiCompactionTest.java   |  2 +-
 3 files changed, 7 insertions(+), 6 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/334dca9a/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 967ee05..5f6189f 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 2.2.13
+ * Use Bounds instead of Range for sstables in anticompaction (CASSANDRA-14411)
  * Fix JSON queries with IN restrictions and ORDER BY clause (CASSANDRA-14286)
  * CQL fromJson(null) throws NullPointerException (CASSANDRA-13891)
 * Fix query pager DEBUG log leak causing hit in paged reads throughput (CASSANDRA-14318)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/334dca9a/src/java/org/apache/cassandra/db/compaction/CompactionManager.java
--
diff --git a/src/java/org/apache/cassandra/db/compaction/CompactionManager.java b/src/java/org/apache/cassandra/db/compaction/CompactionManager.java
index d90abe9..419f66e 100644
--- a/src/java/org/apache/cassandra/db/compaction/CompactionManager.java
+++ b/src/java/org/apache/cassandra/db/compaction/CompactionManager.java
@@ -537,13 +537,13 @@ public class CompactionManager implements CompactionManagerMBean
 {
 SSTableReader sstable = sstableIterator.next();
 
-Range<Token> sstableRange = new Range<>(sstable.first.getToken(), sstable.last.getToken());
+Bounds<Token> sstableBounds = new Bounds<>(sstable.first.getToken(), sstable.last.getToken());
 
 boolean shouldAnticompact = false;
 
 for (Range<Token> r : normalizedRanges)
 {
-if (r.contains(sstableRange))
+if (r.contains(sstableBounds.left) && r.contains(sstableBounds.right))
 {
 logger.info("SSTable {} fully contained in range {}, mutating repairedAt instead of anticompacting", sstable, r);
 sstable.descriptor.getMetadataSerializer().mutateRepairedAt(sstable.descriptor, repairedAt);
@@ -555,16 +555,16 @@ public class CompactionManager implements CompactionManagerMBean
 shouldAnticompact = true;
 break;
 }
-else if (sstableRange.intersects(r))
+else if (r.intersects(sstableBounds))
 {
-logger.info("SSTable {} ({}) will be anticompacted on range {}", sstable, sstableRange, r);
+logger.info("SSTable {} ({}) will be anticompacted on range {}", sstable, sstableBounds, r);
 shouldAnticompact = true;
 }
 }
 
 if (!shouldAnticompact)
 {
-logger.info("SSTable {} ({}) does not intersect repaired ranges {}, not touching repairedAt.", sstable, sstableRange, normalizedRanges);
+logger.info("SSTable {} ({}) does not intersect repaired ranges {}, not touching repairedAt.", sstable, sstableBounds, normalizedRanges);
 nonAnticompacting.add(sstable);
 sstableIterator.remove();
 }

http://git-wip-us.apache.org/repos/asf/cassandra/blob/334dca9a/test/unit/org/apache/cassandra/db/compaction/AntiCompactionTest.java
--
diff --git a/test/unit/org/apache/cassandra/db/compaction/AntiCompactionTest.java b/test/unit/org/apache/cassandra/db/compaction/AntiCompactionTest.java
index 7c3fbc2..c451516 100644
--- a/test/unit/org/apache/cassandra/db/compaction/AntiCompactionTest.java
+++ b/test/unit/org/apache/cassandra/db/compaction/AntiCompactionTest.java
@@ -262,7 +262,7 @@ public class AntiCompactionTest
 ColumnFamilyStore store = prepareColumnFamilyStore();
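The point behind the diff above can be sketched outside of Cassandra. The model below uses plain `long` tokens and hypothetical helper methods, not Cassandra's real `Range`/`Bounds` classes: Cassandra's `Range<Token>` is start-exclusive/end-inclusive, so an sstable whose first and last tokens are equal would be modelled as `Range(t, t)`, which denotes the entire ring and can never be contained in a repair range; testing the two inclusive `Bounds` endpoints directly avoids that corner case.

```java
public class BoundsVsRangeDemo
{
    // A repair range (start, end] contains token t iff start < t <= end
    // (start-exclusive, end-inclusive, ignoring ring wraparound for simplicity).
    public static boolean rangeContainsToken(long start, long end, long t)
    {
        return t > start && t <= end;
    }

    // The patched containment test: an sstable covering inclusive bounds
    // [first, last] is fully contained iff both endpoints lie in the range.
    public static boolean fullyContained(long start, long end, long first, long last)
    {
        return rangeContainsToken(start, end, first) && rangeContainsToken(start, end, last);
    }

    public static void main(String[] args)
    {
        // Single-token sstable (first == last == 50) inside repair range (0, 100].
        // Modelled as Range(50, 50) it would mean "the whole ring" and fail the
        // old r.contains(sstableRange) check; the endpoint test succeeds.
        System.out.println(fullyContained(0, 100, 50, 50));   // true
        System.out.println(fullyContained(0, 100, 50, 150));  // false: last token outside
    }
}
```

With the old `Range`-based check, such a single-token sstable was needlessly anticompacted even though it lay entirely inside the repaired range.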
 

[07/10] cassandra git commit: Merge branch 'cassandra-2.2' into cassandra-3.0

2018-04-24 Thread marcuse
Merge branch 'cassandra-2.2' into cassandra-3.0


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/bc1f8412
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/bc1f8412
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/bc1f8412

Branch: refs/heads/trunk
Commit: bc1f8412954c494aa20a6a123d15b4a269c1ad4b
Parents: e42d9e7 334dca9
Author: Marcus Eriksson 
Authored: Tue Apr 24 08:51:19 2018 +0200
Committer: Marcus Eriksson 
Committed: Tue Apr 24 08:51:19 2018 +0200

--
 CHANGES.txt   |  1 +
 .../apache/cassandra/db/compaction/CompactionManager.java | 10 +-
 .../cassandra/db/compaction/AntiCompactionTest.java   |  2 +-
 3 files changed, 7 insertions(+), 6 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/bc1f8412/CHANGES.txt
--
diff --cc CHANGES.txt
index b0fd579,5f6189f..cf470d6
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -1,26 -1,8 +1,27 @@@
 -2.2.13
 +3.0.17
 + * Deprecate background repair and probablistic read_repair_chance table options (CASSANDRA-13910)
 + * Add missed CQL keywords to documentation (CASSANDRA-14359)
 + * Fix unbounded validation compactions on repair / revert CASSANDRA-13797 (CASSANDRA-14332)
 + * Avoid deadlock when running nodetool refresh before node is fully up (CASSANDRA-14310)
 + * Handle all exceptions when opening sstables (CASSANDRA-14202)
 + * Handle incompletely written hint descriptors during startup (CASSANDRA-14080)
 + * Handle repeat open bound from SRP in read repair (CASSANDRA-14330)
 + * Use zero as default score in DynamicEndpointSnitch (CASSANDRA-14252)
 + * Respect max hint window when hinting for LWT (CASSANDRA-14215)
 + * Adding missing WriteType enum values to v3, v4, and v5 spec (CASSANDRA-13697)
 + * Don't regenerate bloomfilter and summaries on startup (CASSANDRA-11163)
 + * Fix NPE when performing comparison against a null frozen in LWT (CASSANDRA-14087)
 + * Log when SSTables are deleted (CASSANDRA-14302)
 + * Fix batch commitlog sync regression (CASSANDRA-14292)
 + * Write to pending endpoint when view replica is also base replica (CASSANDRA-14251)
 + * Chain commit log marker potential performance regression in batch commit mode (CASSANDRA-14194)
 + * Fully utilise specified compaction threads (CASSANDRA-14210)
 + * Pre-create deletion log records to finish compactions quicker (CASSANDRA-12763)
 +Merged from 2.2:
+  * Use Bounds instead of Range for sstables in anticompaction (CASSANDRA-14411)
   * Fix JSON queries with IN restrictions and ORDER BY clause (CASSANDRA-14286)
   * CQL fromJson(null) throws NullPointerException (CASSANDRA-13891)
 - * Fix query pager DEBUG log leak causing hit in paged reads throughput (CASSANDRA-14318)
   * Backport circleci yaml (CASSANDRA-14240)
  Merged from 2.1:
   * Check checksum before decompressing data (CASSANDRA-14284)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/bc1f8412/src/java/org/apache/cassandra/db/compaction/CompactionManager.java
--
diff --cc src/java/org/apache/cassandra/db/compaction/CompactionManager.java
index a8e6931,419f66e..ab363e0
--- a/src/java/org/apache/cassandra/db/compaction/CompactionManager.java
+++ b/src/java/org/apache/cassandra/db/compaction/CompactionManager.java
@@@ -560,9 -543,9 +560,9 @@@ public class CompactionManager implemen
 
  for (Range<Token> r : normalizedRanges)
  {
- if (r.contains(sstableRange))
+ if (r.contains(sstableBounds.left) && r.contains(sstableBounds.right))
  {
 -logger.info("SSTable {} fully contained in range {}, mutating repairedAt instead of anticompacting", sstable, r);
 +logger.info("[repair #{}] SSTable {} fully contained in range {}, mutating repairedAt instead of anticompacting", parentRepairSession, sstable, r);
 sstable.descriptor.getMetadataSerializer().mutateRepairedAt(sstable.descriptor, repairedAt);
  sstable.reloadSSTableMetadata();
  mutatedRepairStatuses.add(sstable);
@@@ -572,16 -555,16 +572,16 @@@
  shouldAnticompact = true;
  break;
  }
- else if (sstableRange.intersects(r))
+ else if (r.intersects(sstableBounds))
  {
- logger.info("[repair #{}] SSTable {} ({}) will be anticompacted on range {}", parentRepairSession, sstable, sstableRange, r);
 -logger.info("SSTable

[01/10] cassandra git commit: Use Bounds instead of Range to represent sstables when deciding how to anticompact

2018-04-24 Thread marcuse
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-2.2 594cde793 -> 334dca9aa
  refs/heads/cassandra-3.0 e42d9e7d6 -> bc1f84129
  refs/heads/cassandra-3.11 22bb2cf67 -> 684120deb
  refs/heads/trunk 1a2eb5ecb -> 1d387f5e7


Use Bounds instead of Range to represent sstables when deciding how to anticompact

Patch by marcuse; reviewed by Blake Eggleston for CASSANDRA-14411


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/334dca9a
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/334dca9a
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/334dca9a

Branch: refs/heads/cassandra-2.2
Commit: 334dca9aa825e6d353aa04fd97016ac1077ff132
Parents: 594cde7
Author: Marcus Eriksson 
Authored: Mon Apr 23 09:13:52 2018 +0200
Committer: Marcus Eriksson 
Committed: Tue Apr 24 08:21:43 2018 +0200

--
 CHANGES.txt   |  1 +
 .../apache/cassandra/db/compaction/CompactionManager.java | 10 +-
 .../cassandra/db/compaction/AntiCompactionTest.java   |  2 +-
 3 files changed, 7 insertions(+), 6 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/334dca9a/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 967ee05..5f6189f 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 2.2.13
+ * Use Bounds instead of Range for sstables in anticompaction (CASSANDRA-14411)
  * Fix JSON queries with IN restrictions and ORDER BY clause (CASSANDRA-14286)
  * CQL fromJson(null) throws NullPointerException (CASSANDRA-13891)
 * Fix query pager DEBUG log leak causing hit in paged reads throughput (CASSANDRA-14318)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/334dca9a/src/java/org/apache/cassandra/db/compaction/CompactionManager.java
--
diff --git a/src/java/org/apache/cassandra/db/compaction/CompactionManager.java b/src/java/org/apache/cassandra/db/compaction/CompactionManager.java
index d90abe9..419f66e 100644
--- a/src/java/org/apache/cassandra/db/compaction/CompactionManager.java
+++ b/src/java/org/apache/cassandra/db/compaction/CompactionManager.java
@@ -537,13 +537,13 @@ public class CompactionManager implements CompactionManagerMBean
 {
 SSTableReader sstable = sstableIterator.next();
 
-Range<Token> sstableRange = new Range<>(sstable.first.getToken(), sstable.last.getToken());
+Bounds<Token> sstableBounds = new Bounds<>(sstable.first.getToken(), sstable.last.getToken());
 
 boolean shouldAnticompact = false;
 
 for (Range<Token> r : normalizedRanges)
 {
-if (r.contains(sstableRange))
+if (r.contains(sstableBounds.left) && r.contains(sstableBounds.right))
 {
 logger.info("SSTable {} fully contained in range {}, mutating repairedAt instead of anticompacting", sstable, r);
 sstable.descriptor.getMetadataSerializer().mutateRepairedAt(sstable.descriptor, repairedAt);
@@ -555,16 +555,16 @@ public class CompactionManager implements CompactionManagerMBean
 shouldAnticompact = true;
 break;
 }
-else if (sstableRange.intersects(r))
+else if (r.intersects(sstableBounds))
 {
-logger.info("SSTable {} ({}) will be anticompacted on range {}", sstable, sstableRange, r);
+logger.info("SSTable {} ({}) will be anticompacted on range {}", sstable, sstableBounds, r);
 shouldAnticompact = true;
 }
 }
 
 if (!shouldAnticompact)
 {
-logger.info("SSTable {} ({}) does not intersect repaired ranges {}, not touching repairedAt.", sstable, sstableRange, normalizedRanges);
+logger.info("SSTable {} ({}) does not intersect repaired ranges {}, not touching repairedAt.", sstable, sstableBounds, normalizedRanges);
 nonAnticompacting.add(sstable);
 sstableIterator.remove();
 }

http://git-wip-us.apache.org/repos/asf/cassandra/blob/334dca9a/test/unit/org/apache/cassandra/db/compaction/AntiCompactionTest.java
--
diff --git a/test/unit/org/apache/cassandra/db/compaction/AntiCompactionTest.java b/test/unit/org/apache/cassandra/db/compaction/AntiCompactionTest.java
index 7c3fbc2..c451516 100644
--- 

[08/10] cassandra git commit: Merge branch 'cassandra-3.0' into cassandra-3.11

2018-04-24 Thread marcuse
Merge branch 'cassandra-3.0' into cassandra-3.11


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/684120de
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/684120de
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/684120de

Branch: refs/heads/cassandra-3.11
Commit: 684120deb8fa5a982fb75523314ccd2181419e31
Parents: 22bb2cf bc1f841
Author: Marcus Eriksson 
Authored: Tue Apr 24 08:51:32 2018 +0200
Committer: Marcus Eriksson 
Committed: Tue Apr 24 08:51:32 2018 +0200

--
 CHANGES.txt   |  1 +
 .../apache/cassandra/db/compaction/CompactionManager.java | 10 +-
 .../cassandra/db/compaction/AntiCompactionTest.java   |  2 +-
 3 files changed, 7 insertions(+), 6 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/684120de/CHANGES.txt
--
diff --cc CHANGES.txt
index 990c5db,cf470d6..5450322
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -30,7 -19,9 +30,8 @@@ Merged from 3.0
   * Fully utilise specified compaction threads (CASSANDRA-14210)
   * Pre-create deletion log records to finish compactions quicker (CASSANDRA-12763)
  Merged from 2.2:
+  * Use Bounds instead of Range for sstables in anticompaction (CASSANDRA-14411)
   * Fix JSON queries with IN restrictions and ORDER BY clause (CASSANDRA-14286)
 - * CQL fromJson(null) throws NullPointerException (CASSANDRA-13891)
   * Backport circleci yaml (CASSANDRA-14240)
  Merged from 2.1:
   * Check checksum before decompressing data (CASSANDRA-14284)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/684120de/src/java/org/apache/cassandra/db/compaction/CompactionManager.java
--


-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[09/10] cassandra git commit: Merge branch 'cassandra-3.0' into cassandra-3.11

2018-04-24 Thread marcuse
Merge branch 'cassandra-3.0' into cassandra-3.11


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/684120de
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/684120de
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/684120de

Branch: refs/heads/trunk
Commit: 684120deb8fa5a982fb75523314ccd2181419e31
Parents: 22bb2cf bc1f841
Author: Marcus Eriksson 
Authored: Tue Apr 24 08:51:32 2018 +0200
Committer: Marcus Eriksson 
Committed: Tue Apr 24 08:51:32 2018 +0200

--
 CHANGES.txt   |  1 +
 .../apache/cassandra/db/compaction/CompactionManager.java | 10 +-
 .../cassandra/db/compaction/AntiCompactionTest.java   |  2 +-
 3 files changed, 7 insertions(+), 6 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/684120de/CHANGES.txt
--
diff --cc CHANGES.txt
index 990c5db,cf470d6..5450322
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -30,7 -19,9 +30,8 @@@ Merged from 3.0
   * Fully utilise specified compaction threads (CASSANDRA-14210)
   * Pre-create deletion log records to finish compactions quicker 
(CASSANDRA-12763)
  Merged from 2.2:
+  * Use Bounds instead of Range for sstables in anticompaction 
(CASSANDRA-14411)
   * Fix JSON queries with IN restrictions and ORDER BY clause (CASSANDRA-14286)
 - * CQL fromJson(null) throws NullPointerException (CASSANDRA-13891)
   * Backport circleci yaml (CASSANDRA-14240)
  Merged from 2.1:
   * Check checksum before decompressing data (CASSANDRA-14284)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/684120de/src/java/org/apache/cassandra/db/compaction/CompactionManager.java
--





[jira] [Commented] (CASSANDRA-14359) CREATE TABLE fails if there is a column called "default" with Cassandra 3.11.2

2018-04-24 Thread Marcus Eriksson (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-14359?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16449391#comment-16449391
 ] 

Marcus Eriksson commented on CASSANDRA-14359:
-

Seems this wasn't merged properly from 3.0 -> 3.11; there were conflicts in 
docs/cql3/CQL.textile. I merged 3.0 into 3.11 with {{-s ours}}; could you 
verify that that is correct?

> CREATE TABLE fails if there is a column called "default" with Cassandra 3.11.2
> --
>
> Key: CASSANDRA-14359
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14359
> Project: Cassandra
>  Issue Type: Bug
>  Components: CQL, Documentation and Website
> Environment: This is using Cassandra 3.11.2. This syntax was accepted 
> in 2.1.20.
>Reporter: Andy Klages
>Assignee: Andrés de la Peña
>Priority: Minor
> Fix For: 4.0, 3.0.17, 3.11.3
>
>
> My project is upgrading from Cassandra 2.1 to 3.11. We have a table with a 
> column named "default", which Cassandra 3.11.2 is rejecting. I don't see 
> "default" as a keyword in the CQL spec. 
> To reproduce, try adding the following:
> {code:java}
> CREATE TABLE simple (
> simplekey text PRIMARY KEY,
> default text // THIS IS REJECTED
> );
> {code}
> I get this error:
> {code:java}
> SyntaxException: line 3:4 mismatched input 'default' expecting ')' (...
> simplekey text PRIMARY KEY,[default]...)
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)




[5/5] cassandra git commit: Merge branch 'cassandra-3.11' into trunk

2018-04-24 Thread marcuse
Merge branch 'cassandra-3.11' into trunk


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/1a2eb5ec
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/1a2eb5ec
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/1a2eb5ec

Branch: refs/heads/trunk
Commit: 1a2eb5ecb0760d8c7f1d3dd4da1a7ad84fbccf95
Parents: 6970ac2 22bb2cf
Author: Marcus Eriksson 
Authored: Tue Apr 24 08:38:13 2018 +0200
Committer: Marcus Eriksson 
Committed: Tue Apr 24 08:38:13 2018 +0200

--

--






[1/5] cassandra git commit: Add missed CQL keywords to documentation

2018-04-24 Thread marcuse
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-3.11 35823fcef -> 22bb2cf67
  refs/heads/trunk 6970ac215 -> 1a2eb5ecb


Add missed CQL keywords to documentation

patch by Andres de la Peña; reviewed by Benjamin Lerer for CASSANDRA-14359


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/e42d9e7d
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/e42d9e7d
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/e42d9e7d

Branch: refs/heads/cassandra-3.11
Commit: e42d9e7d696baa8d7b81c058cfbe2f1091671c15
Parents: eaf9bf1
Author: Andrés de la Peña 
Authored: Fri Apr 6 14:03:04 2018 +0100
Committer: Andrés de la Peña 
Committed: Wed Apr 18 12:01:06 2018 +0100

--
 CHANGES.txt  | 1 +
 doc/cql3/CQL.textile | 3 +++
 2 files changed, 4 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/e42d9e7d/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 847f347..b0fd579 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,6 +1,7 @@
 3.0.17
  * Deprecate background repair and probablistic read_repair_chance table 
options
(CASSANDRA-13910)
+ * Add missed CQL keywords to documentation (CASSANDRA-14359)
  * Fix unbounded validation compactions on repair / revert CASSANDRA-13797 
(CASSANDRA-14332)
  * Avoid deadlock when running nodetool refresh before node is fully up 
(CASSANDRA-14310)
  * Handle all exceptions when opening sstables (CASSANDRA-14202)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/e42d9e7d/doc/cql3/CQL.textile
--
diff --git a/doc/cql3/CQL.textile b/doc/cql3/CQL.textile
index 54888b8..cc2b9aa 100644
--- a/doc/cql3/CQL.textile
+++ b/doc/cql3/CQL.textile
@@ -2193,6 +2193,7 @@ CQL distinguishes between _reserved_ and _non-reserved_ 
keywords. Reserved keywo
 | @INSERT@   | yes |
 | @INT@  | no  |
 | @INTO@ | yes |
+| @IS@   | yes |
 | @JSON@ | no  |
 | @KEY@  | no  |
 | @KEYS@ | no  |
@@ -2203,6 +2204,7 @@ CQL distinguishes between _reserved_ and _non-reserved_ 
keywords. Reserved keywo
 | @LIST@ | no  |
 | @LOGIN@| no  |
 | @MAP@  | no  |
+| @MATERIALIZED@ | yes |
 | @MODIFY@   | yes |
 | @NAN@  | yes |
 | @NOLOGIN@  | no  |
@@ -2257,6 +2259,7 @@ CQL distinguishes between _reserved_ and _non-reserved_ 
keywords. Reserved keywo
 | @VALUES@   | no  |
 | @VARCHAR@  | no  |
 | @VARINT@   | no  |
+| @VIEW@ | yes |
 | @WHERE@| yes |
 | @WITH@ | yes |
 | @WRITETIME@| no  |





[4/5] cassandra git commit: Merge branch 'cassandra-3.0' into cassandra-3.11

2018-04-24 Thread marcuse
Merge branch 'cassandra-3.0' into cassandra-3.11


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/22bb2cf6
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/22bb2cf6
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/22bb2cf6

Branch: refs/heads/cassandra-3.11
Commit: 22bb2cf671966a342adffb59c56a60f520974ac8
Parents: 35823fc e42d9e7
Author: Marcus Eriksson 
Authored: Tue Apr 24 08:38:03 2018 +0200
Committer: Marcus Eriksson 
Committed: Tue Apr 24 08:38:03 2018 +0200

--

--






[2/5] cassandra git commit: Add missed CQL keywords to documentation

2018-04-24 Thread marcuse
Add missed CQL keywords to documentation

patch by Andres de la Peña; reviewed by Benjamin Lerer for CASSANDRA-14359


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/e42d9e7d
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/e42d9e7d
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/e42d9e7d

Branch: refs/heads/trunk
Commit: e42d9e7d696baa8d7b81c058cfbe2f1091671c15
Parents: eaf9bf1
Author: Andrés de la Peña 
Authored: Fri Apr 6 14:03:04 2018 +0100
Committer: Andrés de la Peña 
Committed: Wed Apr 18 12:01:06 2018 +0100

--
 CHANGES.txt  | 1 +
 doc/cql3/CQL.textile | 3 +++
 2 files changed, 4 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/e42d9e7d/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 847f347..b0fd579 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,6 +1,7 @@
 3.0.17
  * Deprecate background repair and probablistic read_repair_chance table 
options
(CASSANDRA-13910)
+ * Add missed CQL keywords to documentation (CASSANDRA-14359)
  * Fix unbounded validation compactions on repair / revert CASSANDRA-13797 
(CASSANDRA-14332)
  * Avoid deadlock when running nodetool refresh before node is fully up 
(CASSANDRA-14310)
  * Handle all exceptions when opening sstables (CASSANDRA-14202)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/e42d9e7d/doc/cql3/CQL.textile
--
diff --git a/doc/cql3/CQL.textile b/doc/cql3/CQL.textile
index 54888b8..cc2b9aa 100644
--- a/doc/cql3/CQL.textile
+++ b/doc/cql3/CQL.textile
@@ -2193,6 +2193,7 @@ CQL distinguishes between _reserved_ and _non-reserved_ 
keywords. Reserved keywo
 | @INSERT@   | yes |
 | @INT@  | no  |
 | @INTO@ | yes |
+| @IS@   | yes |
 | @JSON@ | no  |
 | @KEY@  | no  |
 | @KEYS@ | no  |
@@ -2203,6 +2204,7 @@ CQL distinguishes between _reserved_ and _non-reserved_ 
keywords. Reserved keywo
 | @LIST@ | no  |
 | @LOGIN@| no  |
 | @MAP@  | no  |
+| @MATERIALIZED@ | yes |
 | @MODIFY@   | yes |
 | @NAN@  | yes |
 | @NOLOGIN@  | no  |
@@ -2257,6 +2259,7 @@ CQL distinguishes between _reserved_ and _non-reserved_ 
keywords. Reserved keywo
 | @VALUES@   | no  |
 | @VARCHAR@  | no  |
 | @VARINT@   | no  |
+| @VIEW@ | yes |
 | @WHERE@| yes |
 | @WITH@ | yes |
 | @WRITETIME@| no  |





[3/5] cassandra git commit: Merge branch 'cassandra-3.0' into cassandra-3.11

2018-04-24 Thread marcuse
Merge branch 'cassandra-3.0' into cassandra-3.11


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/22bb2cf6
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/22bb2cf6
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/22bb2cf6

Branch: refs/heads/trunk
Commit: 22bb2cf671966a342adffb59c56a60f520974ac8
Parents: 35823fc e42d9e7
Author: Marcus Eriksson 
Authored: Tue Apr 24 08:38:03 2018 +0200
Committer: Marcus Eriksson 
Committed: Tue Apr 24 08:38:03 2018 +0200

--

--






[jira] [Commented] (CASSANDRA-14400) Subrange repair doesn't always mark as repaired

2018-04-24 Thread Kurt Greaves (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-14400?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16449385#comment-16449385
 ] 

Kurt Greaves commented on CASSANDRA-14400:
--

Interesting... I can reproduce this, and after some research I understand it a 
bit better now.

bq. it might stay in pending until a compaction has run
bq. Nope, it's exclusively compaction. They were probably just compacted away 
on startup.

This is only partially true. The sstable will actually switch from pending to 
repaired as soon as 
{{org.apache.cassandra.db.compaction.CompactionStrategyManager#getNextBackgroundTask}}
 is called, since pending repairs are the first thing it checks for. It doesn't 
actually require a "compaction" in the traditional sense; it just updates the 
metadata (which is why the generation number doesn't change). A side effect of 
restarting is that we enable compaction on all keyspaces during startup and 
subsequently call {{getNextBackgroundTask()}}, which finds the SSTable pending 
repair and marks it as repaired.

So the caveat is that the sstable will stay pending until we _attempt_ to 
trigger a compaction, not until a compaction has actually run or the specific 
SSTable has been included in one.
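A minimal sketch of the promotion step described above, under stated assumptions: the real logic lives in Cassandra's {{CompactionStrategyManager}} / pending-repair handling, and the class and method names here ({{SSTableMeta}}, {{PendingRepairSketch.maybePromote}}) are illustrative, not Cassandra APIs:

```java
import java.util.Set;
import java.util.UUID;

// Hypothetical model of an sstable's repair metadata; illustrative only.
class SSTableMeta {
    UUID pendingRepair;   // non-null while a repair session is still pending
    long repairedAt;      // 0 means unrepaired

    SSTableMeta(UUID pendingRepair) {
        this.pendingRepair = pendingRepair;
        this.repairedAt = 0L;
    }
}

class PendingRepairSketch {
    // Called from the background-compaction path: if the sstable's repair
    // session has finished, only the metadata is rewritten. No data files
    // are recompacted, which is why the generation number stays the same.
    static boolean maybePromote(SSTableMeta meta, Set<UUID> finishedSessions, long now) {
        if (meta.pendingRepair != null && finishedSessions.contains(meta.pendingRepair)) {
            meta.repairedAt = now;
            meta.pendingRepair = null;
            return true;
        }
        return false;   // still pending, or not part of any repair session
    }
}
```

This corresponds to why {{sstablemetadata}} flips from "Pending repair: &lt;id&gt;" to "Repaired at: &lt;timestamp&gt;" without the sstable being rewritten.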

This seems perfectly fine to me, just documenting findings here in case someone 
else gets a little confused like I did.

The same behaviour as shown below can be achieved by {{nodetool 
disableautocompaction; nodetool enableautocompaction}} rather than 
stopping/starting the node. 

 After repair: 
{code:java}
CASSANDRA_INCLUDE=~/.ccm/kgreav-3nodes/node1/bin/cassandra.in.sh 
~/werk/cstar/kgreav-cassandra/tools/bin/sstablemetadata na-39-big-Data.db
SSTable: 
/home/kurt/.ccm/kgreav-3nodes/node1/data0/aoeu/aoeu-c2c45b00439011e8bfc8737d74e3e5df/na-39-big
First token: -8223339496150845696 (derphead5731287)
Last token: -8023360031800191250 (derphead3351464)
Repaired at: 0
Pending repair: 825565d0-4784-11e8-b1b1-8f56691c789f

ccm node1 stop

CASSANDRA_INCLUDE=~/.ccm/kgreav-3nodes/node1/bin/cassandra.in.sh 
~/werk/cstar/kgreav-cassandra/tools/bin/sstablemetadata na-39-big-Data.db
SSTable: 
/home/kurt/.ccm/kgreav-3nodes/node1/data0/aoeu/aoeu-c2c45b00439011e8bfc8737d74e3e5df/na-39-big
First token: -8223339496150845696 (derphead5731287)
Last token: -8023360031800191250 (derphead3351464)
SSTable Level: 0
Repaired at: 0
Pending repair: 825565d0-4784-11e8-b1b1-8f56691c789f

ccm node1 start

CASSANDRA_INCLUDE=~/.ccm/kgreav-3nodes/node1/bin/cassandra.in.sh 
~/werk/cstar/kgreav-cassandra/tools/bin/sstablemetadata na-39-big-Data.db
SSTable: 
/home/kurt/.ccm/kgreav-3nodes/node1/data0/aoeu/aoeu-c2c45b00439011e8bfc8737d74e3e5df/na-39-big
First token: -8223339496150845696 (derphead5731287)
Last token: -8023360031800191250 (derphead3351464)
Repaired at: 1524549508277 (04/24/2018 05:58:28)
Pending repair: --
{code}



> Subrange repair doesn't always mark as repaired
> ---
>
> Key: CASSANDRA-14400
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14400
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Kurt Greaves
>Priority: Major
>
> So was just messing around with subrange repair on trunk and found that if I 
> generated an SSTable with a single token and then tried to repair that 
> SSTable using subrange repairs it wouldn't get marked as repaired.
>  
>  Before repair:
> {code:java}
> First token: -9223362383595311662 (derphead4471291)
> Last token: -9223362383595311662 (derphead4471291)
> Repaired at: 0
> Pending repair: 862395e0-4394-11e8-8f20-3b8ee110d005
> {code}
> Repair command:
> {code}
> ccm node1 nodetool "repair -st -9223362383595311663 -et -9223362383595311661 
> aoeu"
> [2018-04-19 05:44:42,806] Starting repair command #7 
> (c23f76c0-4394-11e8-8f20-3b8ee110d005), repairing keyspace aoeu with repair 
> options (parallelism: parallel, primary range: false, incremental: true, job 
> threads: 1, ColumnFamilies: [], dataCenters: [], hosts: [], previewKind: 
> NONE, # of ranges: 1, pull repair: false, force repair: false, optimise 
> streams: false)
> [2018-04-19 05:44:42,843] Repair session c242d220-4394-11e8-8f20-3b8ee110d005 
> for range [(-9223362383595311663,-9223362383595311661]] finished (progress: 
> 20%)
> [2018-04-19 05:44:43,139] Repair completed successfully
> [2018-04-19 05:44:43,140] Repair command #7 finished in 0 seconds
> {code}
> After repair SSTable hasn't changed and sstablemetadata outputs:
> {code}
> First token: -9223362383595311662 (derphead4471291)
> Last token: -9223362383595311662 (derphead4471291)
> Repaired at: 0
> Pending repair: 862395e0-4394-11e8-8f20-3b8ee110d005
> {code}
> And parent_repair_history states that the repair is complete/range was 
> successful:
> {code}
> select * from system_distributed.parent_repair_history where 
> parent_id=862395e0-4394-11e8-8f20-3b8ee110d005 ;
>  parent_id