[jira] [Commented] (CASSANDRA-7438) Serializing Row cache alternative (Fully off heap)

2014-10-15 Thread Robert Stupp (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7438?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14172078#comment-14172078
 ] 

Robert Stupp commented on CASSANDRA-7438:
-

Will take a look at this this week.

 Serializing Row cache alternative (Fully off heap)
 --

 Key: CASSANDRA-7438
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7438
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
 Environment: Linux
Reporter: Vijay
Assignee: Vijay
  Labels: performance
 Fix For: 3.0

 Attachments: 0001-CASSANDRA-7438.patch


 Currently SerializingCache is only partially off heap; keys are still stored in 
 the JVM heap as BB (ByteBuffer). 
 * There are higher GC costs for a reasonably big cache.
 * Some users have used the row cache efficiently in production for better 
 results, but this requires careful tuning.
 * Overhead in memory for the cache entries is relatively high.
 So the proposal for this ticket is to move the LRU cache logic completely off 
 heap and use JNI to interact with the cache. We might want to ensure that the 
 new implementation matches the existing APIs (ICache), and the implementation 
 needs to have safe memory access, low memory overhead and as few memcpys 
 as possible.
 We might also want to make this cache configurable.
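The shape of the proposal is easiest to see against a plain on-heap LRU cache. Below is a minimal, hypothetical sketch of the access pattern (get/put with eviction) that an off-heap implementation would need to preserve behind the ICache API; the class and method names here are illustrative, not Cassandra's actual interface.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Minimal on-heap LRU cache sketch. The ticket proposes keeping this access
// pattern but moving both keys and values off heap (via JNI), so none of the
// entries are visible to the JVM garbage collector.
public class LruCacheSketch<K, V> {
    private final Map<K, V> map;

    public LruCacheSketch(final int capacity) {
        // accessOrder=true makes iteration order least-recently-used first;
        // removeEldestEntry evicts one entry once we exceed capacity.
        this.map = new LinkedHashMap<K, V>(16, 0.75f, true) {
            @Override
            protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
                return size() > capacity;
            }
        };
    }

    public V get(K key) { return map.get(key); }

    public void put(K key, V value) { map.put(key, value); }

    public int size() { return map.size(); }
}
```

An off-heap version replaces the map with native allocations and must add explicit memory management (e.g. reference counting) for safe access, which is exactly the hard part the ticket calls out.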



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-8122) Undeclare throwable exception while executing 'nodetool netstats localhost'

2014-10-15 Thread Marcus Olsson (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8122?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marcus Olsson updated CASSANDRA-8122:
-
Attachment: CASSANDRA-8122.patch

Added patch that checks if the server is starting before using MBeans.
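The defensive pattern described can be sketched as follows. This is a hypothetical illustration, not the actual patch: it probes for the StorageProxy MBean before reading an attribute, since that MBean is not yet registered while the node is STARTING, which is what surfaces as the UndeclaredThrowableException in the stack trace below.

```java
import java.lang.management.ManagementFactory;
import javax.management.MBeanServer;
import javax.management.ObjectName;

// Sketch: only read a JMX attribute if the MBean is actually registered,
// falling back to the "end of output" message otherwise.
public class SafeMBeanRead {
    static String readOrFallback(MBeanServer server, ObjectName name, String attr)
            throws Exception {
        if (server.isRegistered(name))
            return String.valueOf(server.getAttribute(name, attr));
        // MBean not registered yet (node still starting): stop cleanly.
        return "Not sending any streams.";
    }

    public static void main(String[] args) throws Exception {
        MBeanServer server = ManagementFactory.getPlatformMBeanServer();
        ObjectName storageProxy =
                new ObjectName("org.apache.cassandra.db:type=StorageProxy");
        // In a plain JVM (or a starting node) this MBean does not exist.
        System.out.println(readOrFallback(server, storageProxy, "ReadRepairAttempted"));
    }
}
```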

 Undeclare throwable exception while executing 'nodetool netstats localhost'
 ---

 Key: CASSANDRA-8122
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8122
 Project: Cassandra
  Issue Type: Bug
  Components: Tools
 Environment: Cassandra: 2.0.9
Reporter: Vishal Mehta
Priority: Minor
 Attachments: CASSANDRA-8122.patch


 *Steps*
 # Stop cassandra service
 # Check netstats of nodetool using 'nodetool netstats localhost'
 # Start cassandra service
 # Again check netstats of nodetool using 'nodetool netstats localhost'
 *Expected output*
 Mode: STARTING
 Not sending any streams. (End of output - no further exceptions)
 *Observed output*
 {noformat}
  nodetool netstats localhost
 Mode: STARTING
 Not sending any streams.
 Exception in thread "main" java.lang.reflect.UndeclaredThrowableException
   at com.sun.proxy.$Proxy6.getReadRepairAttempted(Unknown Source)
   at 
 org.apache.cassandra.tools.NodeProbe.getReadRepairAttempted(NodeProbe.java:897)
   at 
 org.apache.cassandra.tools.NodeCmd.printNetworkStats(NodeCmd.java:726)
   at org.apache.cassandra.tools.NodeCmd.main(NodeCmd.java:1281)
 Caused by: javax.management.InstanceNotFoundException: 
 org.apache.cassandra.db:type=StorageProxy
   at 
 com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.getMBean(DefaultMBeanServerInterceptor.java:1095)
   at 
 com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.getAttribute(DefaultMBeanServerInterceptor.java:643)
   at 
 com.sun.jmx.mbeanserver.JmxMBeanServer.getAttribute(JmxMBeanServer.java:678)
   at 
 javax.management.remote.rmi.RMIConnectionImpl.doOperation(RMIConnectionImpl.java:1464)
   at 
 javax.management.remote.rmi.RMIConnectionImpl.access$300(RMIConnectionImpl.java:97)
   at 
 javax.management.remote.rmi.RMIConnectionImpl$PrivilegedOperation.run(RMIConnectionImpl.java:1328)
   at 
 javax.management.remote.rmi.RMIConnectionImpl.doPrivilegedOperation(RMIConnectionImpl.java:1420)
   at 
 javax.management.remote.rmi.RMIConnectionImpl.getAttribute(RMIConnectionImpl.java:657)
   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
   at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
   at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
   at java.lang.reflect.Method.invoke(Method.java:606)
   at sun.rmi.server.UnicastServerRef.dispatch(UnicastServerRef.java:322)
   at sun.rmi.transport.Transport$1.run(Transport.java:177)
   at sun.rmi.transport.Transport$1.run(Transport.java:174)
   at java.security.AccessController.doPrivileged(Native Method)
   at sun.rmi.transport.Transport.serviceCall(Transport.java:173)
   at 
 sun.rmi.transport.tcp.TCPTransport.handleMessages(TCPTransport.java:553)
   at 
 sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run0(TCPTransport.java:808)
   at 
 sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run(TCPTransport.java:667)
   at 
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
   at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
   at java.lang.Thread.run(Thread.java:724)
   at 
 sun.rmi.transport.StreamRemoteCall.exceptionReceivedFromServer(StreamRemoteCall.java:273)
   at 
 sun.rmi.transport.StreamRemoteCall.executeCall(StreamRemoteCall.java:251)
   at sun.rmi.server.UnicastRef.invoke(UnicastRef.java:160)
   at com.sun.jmx.remote.internal.PRef.invoke(Unknown Source)
   at 
 javax.management.remote.rmi.RMIConnectionImpl_Stub.getAttribute(Unknown 
 Source)
   at 
 javax.management.remote.rmi.RMIConnector$RemoteMBeanServerConnection.getAttribute(RMIConnector.java:902)
   at 
 javax.management.MBeanServerInvocationHandler.invoke(MBeanServerInvocationHandler.java:267)
   ... 4 more
 {noformat}





[jira] [Commented] (CASSANDRA-8109) Avoid constant boxing in ColumnStats.{Min/Max}Tracker

2014-10-15 Thread Rajanarayanan Thottuvaikkatumana (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8109?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14172209#comment-14172209
 ] 

Rajanarayanan Thottuvaikkatumana commented on CASSANDRA-8109:
-

Sylvain,
Currently these two classes are generic. When I searched the code base, they 
are instantiated with the Long and Integer classes. I understand the boxing 
issue described in the ticket. Of the cases below, can you please identify 
which change you want implemented?

1) Are you suggesting that these two classes should not be generic at all? If 
so, did you mean to change the two classes to have the properties 
defaultValue and value declared as long? In that case, specifying whether the 
MaxTracker/MinTracker get method should return int or long will be tricky 
depending on the situation, and client code will be affected.

2) Keep the current design with generic classes as is, but change the 
properties defaultValue and value to long. In this approach the get 
method will also have trouble returning the correct data type, and even in 
this case the client code will be affected.

Or did you mean something else? Please clarify. 

Thanks
-Raj

 Avoid constant boxing in ColumnStats.{Min/Max}Tracker
 -

 Key: CASSANDRA-8109
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8109
 Project: Cassandra
  Issue Type: Improvement
Reporter: Sylvain Lebresne
Assignee: Rajanarayanan Thottuvaikkatumana
Priority: Minor
  Labels: lhf
 Fix For: 3.0


 We use the {{ColumnStats.MinTracker}} and {{ColumnStats.MaxTracker}} to track 
 timestamps and deletion times in sstables. Those classes are generic, but we 
 only ever use them for longs and integers. The consequence is that every 
 call to their {{update}} method (called for every cell during an sstable write) 
 boxes its argument (since we don't store the cell timestamps and deletion 
 times boxed). That feels like a waste that is easy to fix: we could just make 
 those trackers work on longs only, for instance, and convert back to int at 
 the end when that's what we need.
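The fix the description suggests can be sketched as a tracker specialized to primitive long, so that update() takes no boxed argument. The names below are illustrative, not the actual patch:

```java
// Sketch of a min/max tracker working on primitive longs only: update() takes
// a primitive parameter, so calling it per cell allocates nothing. Callers
// that track ints cast the result back with (int) at the end.
public class LongMinMaxTracker {
    private final long defaultValue;
    private long min = Long.MAX_VALUE;
    private long max = Long.MIN_VALUE;
    private boolean updated;

    public LongMinMaxTracker(long defaultValue) {
        this.defaultValue = defaultValue;
    }

    public void update(long value) {   // primitive parameter: no autoboxing
        if (value < min) min = value;
        if (value > max) max = value;
        updated = true;
    }

    public long min() { return updated ? min : defaultValue; }

    public long max() { return updated ? max : defaultValue; }
}
```

With the generic version, each `update(cellTimestamp)` call implicitly performs `Long.valueOf(cellTimestamp)`; the primitive version avoids that per-cell allocation entirely.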





[jira] [Commented] (CASSANDRA-8090) NullPointerException when using prepared statements

2014-10-15 Thread Benjamin Lerer (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8090?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14172218#comment-14172218
 ] 

Benjamin Lerer commented on CASSANDRA-8090:
---

The patch is on the 
[branch|https://github.com/blerer/cassandra/compare/CASSANDRA-8090].
The problem came from the fact that when fixing CASSANDRA-4914 I introduced 
some state within the selectors (not realising that the selectors are called 
by multiple threads) for the needs of aggregations.
To solve that problem I had to introduce unshared state. This could be done 
either by having some unshared selector state outside of the selectors or by 
instantiating a new set of selectors for each request.
I tried both approaches but finally settled on the second one, as it made the 
code easier to understand.
To do that, the patch introduces some new classes extending Selector.Factory. 
Those classes are generated when the statement is prepared, and they are then 
used to generate the Selector instances at execution time.
As the Selection class was becoming too big and difficult to understand, I 
extracted the Selector classes into separate files.
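The shape of that fix can be sketched as follows. This is a hypothetical illustration of the design, not the patch itself: preparation builds a factory once, and each execution asks the factory for fresh, unshared selector instances, so concurrent executions of the same prepared statement no longer race on shared aggregation state.

```java
import java.util.function.Supplier;

// Sketch: stateful selectors are created per request from a factory built at
// prepare time, instead of being shared across threads.
public class PerRequestSelectors {
    // The stateful part: unsafe to share between concurrent executions.
    static class SumSelector {
        private long sum;
        void addInput(long v) { sum += v; }
        long getOutput() { return sum; }
    }

    public static void main(String[] args) {
        // Built once at prepare time, reused for every execution.
        Supplier<SumSelector> factory = SumSelector::new;

        // Each request gets its own instance; no cross-thread interference.
        SumSelector a = factory.get();
        SumSelector b = factory.get();
        a.addInput(1);
        a.addInput(2);
        b.addInput(10);
        System.out.println(a.getOutput() + " " + b.getOutput()); // prints 3 10
    }
}
```

The alternative the comment mentions, keeping shared stateless selectors and moving the mutable state into a per-request holder, would also work; per-request instantiation was chosen for readability.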
  

 NullPointerException when using prepared statements
 ---

 Key: CASSANDRA-8090
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8090
 Project: Cassandra
  Issue Type: Bug
Reporter: Carl Yeksigian
Assignee: Benjamin Lerer
 Fix For: 3.0


 Due to the changes in CASSANDRA-4914, using a prepared statement from 
 multiple threads leads to a race condition where the simple selection may be 
 reset from a different thread, causing the following NPE:
 {noformat}
 java.lang.NullPointerException: null
   at org.apache.cassandra.cql3.ResultSet.addRow(ResultSet.java:63) 
 ~[main/:na]
   at 
 org.apache.cassandra.cql3.statements.Selection$ResultSetBuilder.build(Selection.java:372)
  ~[main/:na]
   at 
 org.apache.cassandra.cql3.statements.SelectStatement.process(SelectStatement.java:1120)
  ~[main/:na]
   at 
 org.apache.cassandra.cql3.statements.SelectStatement.processResults(SelectStatement.java:283)
  ~[main/:na]
   at 
 org.apache.cassandra.cql3.statements.SelectStatement.execute(SelectStatement.java:260)
  ~[main/:na]
   at 
 org.apache.cassandra.cql3.statements.SelectStatement.execute(SelectStatement.java:213)
  ~[main/:na]
   at 
 org.apache.cassandra.cql3.statements.SelectStatement.execute(SelectStatement.java:63)
  ~[main/:na]
   at 
 org.apache.cassandra.cql3.QueryProcessor.processStatement(QueryProcessor.java:226)
  ~[main/:na]
   at 
 org.apache.cassandra.cql3.QueryProcessor.processPrepared(QueryProcessor.java:481)
  ~[main/:na]
   at 
 org.apache.cassandra.transport.messages.ExecuteMessage.execute(ExecuteMessage.java:133)
  ~[main/:na]
   at 
 org.apache.cassandra.transport.Message$Dispatcher.channelRead0(Message.java:438)
  [main/:na]
   at 
 org.apache.cassandra.transport.Message$Dispatcher.channelRead0(Message.java:334)
  [main/:na]
   at 
 io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105)
  [netty-all-4.0.23.Final.jar:4.0.23.Final]
   at 
 io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:333)
  [netty-all-4.0.23.Final.jar:4.0.23.Final]
   at 
 io.netty.channel.AbstractChannelHandlerContext.access$700(AbstractChannelHandlerContext.java:32)
  [netty-all-4.0.23.Final.jar:4.0.23.Final]
   at 
 io.netty.channel.AbstractChannelHandlerContext$8.run(AbstractChannelHandlerContext.java:324)
  [netty-all-4.0.23.Final.jar:4.0.23.Final]
   at 
 java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471) 
 [na:1.7.0_67]
   at 
 org.apache.cassandra.concurrent.AbstractTracingAwareExecutorService$FutureTask.run(AbstractTracingAwareExecutorService.java:163)
  [main/:na]
   at org.apache.cassandra.concurrent.SEPWorker.run(SEPWorker.java:103) 
 [main/:na]
   at java.lang.Thread.run(Thread.java:745) [na:1.7.0_67]
 {noformat}
 Reproduced this using the stress tool:
 {noformat}
  ./tools/bin/cassandra-stress user profile=tools/cqlstress-example.yaml 
 ops\(insert=1,simple1=1\)
 {noformat}
 You'll need to change the {noformat}select:{noformat} line to be /1000 to 
 prevent the illegal query exceptions.





[jira] [Commented] (CASSANDRA-8090) NullPointerException when using prepared statements

2014-10-15 Thread Benjamin Lerer (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8090?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=1417#comment-1417
 ] 

Benjamin Lerer commented on CASSANDRA-8090:
---

[~slebresne] can you review?

 NullPointerException when using prepared statements
 ---

 Key: CASSANDRA-8090
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8090
 Project: Cassandra
  Issue Type: Bug
Reporter: Carl Yeksigian
Assignee: Benjamin Lerer
 Fix For: 3.0




[jira] [Commented] (CASSANDRA-6568) sstables incorrectly getting marked as not live

2014-10-15 Thread Chris Burroughs (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6568?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14172311#comment-14172311
 ] 

Chris Burroughs commented on CASSANDRA-6568:


1.2.x is too stable ;-) We have not updated to the end of the series with this 
patch yet.

 sstables incorrectly getting marked as not live
 -

 Key: CASSANDRA-6568
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6568
 Project: Cassandra
  Issue Type: Bug
  Components: Core
 Environment: 1.2.12 with several 1.2.13 patches
Reporter: Chris Burroughs
Assignee: Marcus Eriksson
 Fix For: 2.0.11

 Attachments: 0001-add-jmx-method-to-get-non-active-sstables.patch


 {noformat}
 -rw-rw-r-- 14 cassandra cassandra 1.4G Nov 25 19:46 
 /data/sstables/data/ks/cf/ks-cf-ic-402383-Data.db
 -rw-rw-r-- 14 cassandra cassandra  13G Nov 26 00:04 
 /data/sstables/data/ks/cf/ks-cf-ic-402430-Data.db
 -rw-rw-r-- 14 cassandra cassandra  13G Nov 26 05:03 
 /data/sstables/data/ks/cf/ks-cf-ic-405231-Data.db
 -rw-rw-r-- 31 cassandra cassandra  21G Nov 26 08:38 
 /data/sstables/data/ks/cf/ks-cf-ic-405232-Data.db
 -rw-rw-r--  2 cassandra cassandra 2.6G Dec  3 13:44 
 /data/sstables/data/ks/cf/ks-cf-ic-434662-Data.db
 -rw-rw-r-- 14 cassandra cassandra 1.5G Dec  5 09:05 
 /data/sstables/data/ks/cf/ks-cf-ic-438698-Data.db
 -rw-rw-r--  2 cassandra cassandra 3.1G Dec  6 12:10 
 /data/sstables/data/ks/cf/ks-cf-ic-440983-Data.db
 -rw-rw-r--  2 cassandra cassandra  96M Dec  8 01:52 
 /data/sstables/data/ks/cf/ks-cf-ic-444041-Data.db
 -rw-rw-r--  2 cassandra cassandra 3.3G Dec  9 16:37 
 /data/sstables/data/ks/cf/ks-cf-ic-451116-Data.db
 -rw-rw-r--  2 cassandra cassandra 876M Dec 10 11:23 
 /data/sstables/data/ks/cf/ks-cf-ic-453552-Data.db
 -rw-rw-r--  2 cassandra cassandra 891M Dec 11 03:21 
 /data/sstables/data/ks/cf/ks-cf-ic-454518-Data.db
 -rw-rw-r--  2 cassandra cassandra 102M Dec 11 12:27 
 /data/sstables/data/ks/cf/ks-cf-ic-455429-Data.db
 -rw-rw-r--  2 cassandra cassandra 906M Dec 11 23:54 
 /data/sstables/data/ks/cf/ks-cf-ic-455533-Data.db
 -rw-rw-r--  1 cassandra cassandra 214M Dec 12 05:02 
 /data/sstables/data/ks/cf/ks-cf-ic-456426-Data.db
 -rw-rw-r--  1 cassandra cassandra 203M Dec 12 10:49 
 /data/sstables/data/ks/cf/ks-cf-ic-456879-Data.db
 -rw-rw-r--  1 cassandra cassandra  49M Dec 12 12:03 
 /data/sstables/data/ks/cf/ks-cf-ic-456963-Data.db
 -rw-rw-r-- 18 cassandra cassandra  20G Dec 25 01:09 
 /data/sstables/data/ks/cf/ks-cf-ic-507770-Data.db
 -rw-rw-r--  3 cassandra cassandra  12G Jan  8 04:22 
 /data/sstables/data/ks/cf/ks-cf-ic-567100-Data.db
 -rw-rw-r--  3 cassandra cassandra 957M Jan  8 22:51 
 /data/sstables/data/ks/cf/ks-cf-ic-569015-Data.db
 -rw-rw-r--  2 cassandra cassandra 923M Jan  9 17:04 
 /data/sstables/data/ks/cf/ks-cf-ic-571303-Data.db
 -rw-rw-r--  1 cassandra cassandra 821M Jan 10 08:20 
 /data/sstables/data/ks/cf/ks-cf-ic-574642-Data.db
 -rw-rw-r--  1 cassandra cassandra  18M Jan 10 08:48 
 /data/sstables/data/ks/cf/ks-cf-ic-574723-Data.db
 {noformat}
 I tried to do a user defined compaction on sstables from November and got "it 
 is not an active sstable".  Live sstable count from JMX was about 7, while on 
 disk there were over 20.  Live vs total size showed about a ~50 GiB 
 difference.
 Forcing a GC from jconsole had no effect.  However, restarting the node 
 resulted in live sstables/bytes *increasing* to match what was on disk.  User 
 compaction could now compact the November sstables.  This cluster was last 
 restarted in mid December.
 I'm not sure what effect "not live" had on other operations of the cluster.  
 From the logs it seems that the files were sent at least at some point as 
 part of repair, but I don't know if they were being used for read 
 requests or not.  Because the problem that got me looking in the first place 
 was poor performance, I suspect they were used for reads (and the reads were 
 slow because so many sstables were being read).  I presume, based on their 
 age, that at the least they were being excluded from compaction.
 I'm not aware of any isLive() or getRefCount() to programmatically confirm 
 which nodes have this problem.  In this cluster almost all columns have a 14 
 day TTL; based on the number of nodes with November sstables, it appears to 
 be occurring on a significant fraction of the nodes.





[jira] [Commented] (CASSANDRA-8035) 2.0.x repair causes large increase in client latency even for small datasets

2014-10-15 Thread Chris Burroughs (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8035?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14172315#comment-14172315
 ] 

Chris Burroughs commented on CASSANDRA-8035:


This particular cluster has triggered GCInspector only a handful of times in 
the past two weeks, and none during the relevant repair period.  I think that 
makes GC an unlikely culprit.

 2.0.x repair causes large increase in client latency even for small datasets
 ---

 Key: CASSANDRA-8035
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8035
 Project: Cassandra
  Issue Type: Bug
  Components: Core
 Environment: c-2.0.10, 3 nodes per @ DCs.  Load < 50 MB
Reporter: Chris Burroughs
 Attachments: cl-latency.png, cpu-idle.png, keyspace-99p.png, 
 row-cache-hit-rate.png


 Running repair causes a significant increase in client latency even when the 
 total amount of data per node is very small.
 Each node serves 900 req/s, and during normal operations the 99p Client 
 Request Latency is less than 4 ms and usually less than 1 ms.  During repair 
 the latency increases to within 4-10 ms on all nodes.  I am unable to find 
 any resource based explanation for this.  Several graphs are attached to 
 summarize.  Repair started at about 10:10 and finished around 10:25.
  * Client Request Latency goes up significantly.
  * Local keyspace read latency is flat.  I interpret this to mean that it's 
 purely coordinator overhead that's causing the slowdown.
  * Row cache hit rate is unaffected (and is very high).  Between these two 
 metrics I don't think there is any doubt that virtually all reads are being 
 satisfied in memory.
  * There is plenty of available CPU.  Aggregate CPU used (mostly nic) did go 
 up during this.
 Having more/larger keyspaces seems to make it worse.  Having two keyspaces on 
 this cluster (still with total size < RAM) caused larger increases in 
 latency, which would have made for better graphs, but it pushed the cluster 
 well outside of SLAs and we needed to move the second keyspace.





[jira] [Created] (CASSANDRA-8123) List appends when inserting in the same value

2014-10-15 Thread Jorge Bay (JIRA)
Jorge Bay created CASSANDRA-8123:


 Summary: List appends when inserting in the same value
 Key: CASSANDRA-8123
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8123
 Project: Cassandra
  Issue Type: Bug
 Environment: C* 2.1.0 and C* 2.0.10
Reporter: Jorge Bay
Priority: Minor


List append when inserting in the same value

I'm getting list appends when executing an INSERT (or UPDATE) of a list 
column value on the same partition multiple times concurrently:

INSERT INTO cf1 (id, list_sample) VALUES (id1, ['one', 'two']);

This can result in a list with the values appended: ['one', 'two', 'one', 
'two', ...]





[jira] [Commented] (CASSANDRA-8123) List appends when inserting in the same value

2014-10-15 Thread Jorge Bay (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8123?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14172326#comment-14172326
 ] 

Jorge Bay commented on CASSANDRA-8123:
--

I understand that if I'm going for unique values I should use a set, but I 
think the behavior is unexpected.



[jira] [Updated] (CASSANDRA-7237) Optimize batchlog manager to avoid full scans

2014-10-15 Thread Aleksey Yeschenko (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-7237?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksey Yeschenko updated CASSANDRA-7237:
-
Reviewer: Aleksey Yeschenko

 Optimize batchlog manager to avoid full scans
 -

 Key: CASSANDRA-7237
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7237
 Project: Cassandra
  Issue Type: Improvement
Reporter: Aleksey Yeschenko
Assignee: Branimir Lambov
Priority: Minor
 Fix For: 2.1.1


 Now that we use time-UUIDs for batchlog ids, and given that with the local 
 strategy the partitions are ordered in time order here, we can optimize the 
 scanning by limiting the range to replay, taking the last replayed batch's id 
 as the beginning of the range and uuid(now + timeout) as its end.
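The upper bound works because a version-1 (time-based) UUID embeds a 60-bit timestamp in a fixed field layout, so a UUID can be synthesized from any wall-clock instant and compared against stored batch ids. A self-contained sketch of building such a bound (this is not Cassandra's internal UUIDGen; the field packing follows the standard version-1 layout):

```java
import java.util.UUID;

// Sketch: build the smallest version-1 UUID for a given Unix-epoch instant,
// usable as a scan bound. The batchlog replay would then scan only
// (lastReplayedId, minTimeUuid(now + timeout)] instead of the whole table.
public class BatchlogRange {
    // 100-ns intervals between the UUID epoch (1582-10-15) and the Unix epoch.
    private static final long UUID_EPOCH_OFFSET = 0x01B21DD213814000L;

    static UUID minTimeUuid(long unixMillis) {
        // 60-bit timestamp in 100-ns units since the UUID epoch.
        long ts = unixMillis * 10000 + UUID_EPOCH_OFFSET;
        long msb = (ts << 32)                    // time_low  (bits 63-32)
                 | ((ts >>> 16) & 0xFFFF0000L)   // time_mid  (bits 31-16)
                 | 0x1000L                       // version 1 (bits 15-12)
                 | ((ts >>> 48) & 0x0FFFL);      // time_hi   (bits 11-0)
        long lsb = 0x8000000000000000L;          // variant bits, rest zeroed
        return new UUID(msb, lsb);
    }

    public static void main(String[] args) {
        long timeoutMs = 10_000;
        UUID end = minTimeUuid(System.currentTimeMillis() + timeoutMs);
        System.out.println("replay range upper bound: " + end);
    }
}
```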





[jira] [Updated] (CASSANDRA-7237) Optimize batchlog manager to avoid full scans

2014-10-15 Thread Aleksey Yeschenko (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-7237?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksey Yeschenko updated CASSANDRA-7237:
-
Assignee: Branimir Lambov  (was: Aleksey Yeschenko)



[jira] [Commented] (CASSANDRA-8123) List appends when inserting in the same value

2014-10-15 Thread Aleksey Yeschenko (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8123?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14172417#comment-14172417
 ] 

Aleksey Yeschenko commented on CASSANDRA-8123:
--

It's supposed to cover the previous values with a range tombstone, so you 
shouldn't see appends. However, if you execute the inserts truly concurrently, 
there isn't much we can do: the two concurrent inserts will write the same 
range tombstone with the same timestamp, so they won't overwrite each other.

How concurrent are we talking here?



[jira] [Comment Edited] (CASSANDRA-8123) List appends when inserting in the same value

2014-10-15 Thread Jorge Bay (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8123?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14172423#comment-14172423
 ] 

Jorge Bay edited comment on CASSANDRA-8123 at 10/15/14 2:43 PM:


Just 1 node, 2 connections, issuing the insert request 100 times: 

2 different threads, executing the requests 50 times per connection in 
parallel.


was (Author: jorgebg):
Just 1 node, 2 connections and issuing the insert request 100. Different 
threads and executing the requests 50 times per connection in parallel.



[jira] [Commented] (CASSANDRA-8123) List appends when inserting in the same value

2014-10-15 Thread Jorge Bay (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8123?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14172423#comment-14172423
 ] 

Jorge Bay commented on CASSANDRA-8123:
--

Just 1 node, 2 connections and issuing the insert request 100. Different 
threads and executing the requests 50 times per connection in parallel.



[jira] [Resolved] (CASSANDRA-8123) List appends when inserting in the same value

2014-10-15 Thread Aleksey Yeschenko (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8123?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksey Yeschenko resolved CASSANDRA-8123.
--
Resolution: Won't Fix

Then I'm afraid there isn't much that can be done for lists. You could 
(and should, really) specify client timestamps for requests, and if you can do 
it with better than millisecond granularity, you can minimize this effect a lot 
(server-side timestamps use System.currentTimeMillis() * 1000).
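The suggested workaround can be sketched as a client-side monotonic microsecond clock attached via USING TIMESTAMP. The table and statement below are illustrative, not part of the ticket:

```java
// Sketch: generate strictly increasing client-side timestamps at microsecond
// granularity, so two "concurrent" writes from this client never collide on
// the same timestamp. Server-side assignment is only millisecond-grained
// (System.currentTimeMillis() * 1000), which is why collisions happen there.
public class ClientTimestamp {
    private static long lastMicros = 0;

    // Monotonic microsecond timestamp: never returns the same value twice.
    static synchronized long nextMicros() {
        long micros = System.currentTimeMillis() * 1000;
        if (micros <= lastMicros)
            micros = lastMicros + 1;
        lastMicros = micros;
        return micros;
    }

    public static void main(String[] args) {
        long ts = nextMicros();
        // Attach the timestamp explicitly instead of letting the server pick:
        String cql = "INSERT INTO ks.cf1 (id, list_sample) VALUES (?, ?)"
                   + " USING TIMESTAMP " + ts;
        System.out.println(cql);
    }
}
```

This only de-duplicates writes from a single client process; truly simultaneous writes from different clients can still pick the same timestamp, which is the fundamental limit the resolution describes.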



[jira] [Commented] (CASSANDRA-8123) List appends when inserting in the same value

2014-10-15 Thread Jorge Bay (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8123?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14172434#comment-14172434
 ] 

Jorge Bay commented on CASSANDRA-8123:
--

I understand what is causing it, but I think the behavior is unexpected: from 
an API perspective, the operation sets the column to a new value. 
If the values are different but have the same timestamp, one of the values should 
win, not be appended (appending is like summing the two values of a 
single-value column in the same situation).
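The single-value behavior being contrasted here can be shown with a simplified model (not Cassandra's actual internals): regular cells reconcile by timestamp with a deterministic value tie-break, so exactly one write survives; list elements instead live in separate cells keyed by unique UUIDs, which is why concurrent inserts append rather than overwrite.

```java
// Simplified model of last-write-wins reconciliation for a regular
// (single-value) column; not Cassandra's actual internals.
final class Cell {
    final long timestamp;
    final String value;

    Cell(long timestamp, String value) {
        this.timestamp = timestamp;
        this.value = value;
    }

    // Higher timestamp wins; on a tie, compare values so that every replica
    // deterministically picks the same winner. Exactly one value survives;
    // nothing is ever merged or appended.
    static Cell reconcile(Cell a, Cell b) {
        if (a.timestamp != b.timestamp)
            return a.timestamp > b.timestamp ? a : b;
        return a.value.compareTo(b.value) >= 0 ? a : b;
    }
}
```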

 List appends when inserting in the same value
 -

 Key: CASSANDRA-8123
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8123
 Project: Cassandra
  Issue Type: Bug
 Environment: C* 2.1.0 and C* 2.0.10
Reporter: Jorge Bay
Priority: Minor

 List append when inserting in the same value
 I'm getting list appends when concurrently executing an INSERT 
 (or UPDATE) of a list column value on the same partition multiple times:
 INSERT INTO cf1 (id, list_sample) VALUES (id1, ['one', 'two']);
 It can result in a list with the values appended: ['one', 'two', 'one', 
 'two', ...]





[jira] [Commented] (CASSANDRA-8123) List appends when inserting in the same value

2014-10-15 Thread Sylvain Lebresne (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8123?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14172449#comment-14172449
 ] 

Sylvain Lebresne commented on CASSANDRA-8123:
-

bq. I understand what is causing it, but I think the behavior is unexpected

We're not disagreeing with you, but given how things work internally, we 
don't currently know how to fix it. Once we've fixed CASSANDRA-6123, we might 
be able to fix this too, but until then it will have to be a known gotcha. 
Unless you have an actual solution (preferably one that doesn't involve 
entirely redesigning the internal list implementation)?

 List appends when inserting in the same value
 -

 Key: CASSANDRA-8123
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8123
 Project: Cassandra
  Issue Type: Bug
 Environment: C* 2.1.0 and C* 2.0.10
Reporter: Jorge Bay
Priority: Minor

 List append when inserting in the same value
 I'm getting list appends when concurrently executing an INSERT 
 (or UPDATE) of a list column value on the same partition multiple times:
 INSERT INTO cf1 (id, list_sample) VALUES (id1, ['one', 'two']);
 It can result in a list with the values appended: ['one', 'two', 'one', 
 'two', ...]





[jira] [Created] (CASSANDRA-8124) Stopping a node during compaction can make already written files stay around

2014-10-15 Thread Marcus Eriksson (JIRA)
Marcus Eriksson created CASSANDRA-8124:
--

 Summary: Stopping a node during compaction can make already 
written files stay around
 Key: CASSANDRA-8124
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8124
 Project: Cassandra
  Issue Type: Bug
Reporter: Marcus Eriksson
 Fix For: 2.1.1


In leveled compaction we generally create many files during compaction. In 2.0 
we left the ones we had written as -tmp- files; in 2.1 we close and reopen the 
readers, removing the -tmp- markers.

This means that any ongoing compaction will leave the resulting files around 
if we restart. Note that stopping the compaction causes an exception, and that 
makes us call abort() on the SSTableRewriter, which removes the files.

A fix could be to keep the -tmp- marker and use -tmplink- files until we 
are actually done with the compaction.





[jira] [Commented] (CASSANDRA-6998) HintedHandoff - expired hints may block future hints deliveries

2014-10-15 Thread Jonathan Ellis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6998?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14172514#comment-14172514
 ] 

Jonathan Ellis commented on CASSANDRA-6998:
---

LGTM, +1

 HintedHandoff - expired hints may block future hints deliveries
 ---

 Key: CASSANDRA-6998
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6998
 Project: Cassandra
  Issue Type: Bug
  Components: Core
 Environment: - cluster of two DCs: DC1, DC2
 - keyspace using NetworkTopologyStrategy (replication factors for both DCs)
 - heavy load (write:read, 100:1) with LOCAL_QUORUM using Java driver setup 
 with DC awareness, writing to DC1
Reporter: Scooletz
Assignee: Aleksey Yeschenko
  Labels: HintedHandoff, TTL
 Fix For: 2.0.11, 2.1.1

 Attachments: 6998, 6998-v2.txt


 For test purposes, DC2 was shut down for 1 day. The _hints_ table was filled 
 with millions of rows. Now, when _HintedHandOffManager_ tries to 
 _doDeliverHintsToEndpoint_ it queries the store with 
 QueryFilter.getSliceFilter, which counts deleted (TTLed) cells and throws 
 org.apache.cassandra.db.filter.TombstoneOverwhelmingException. 
 Throwing this exception stops the manager from running compaction, as compaction is 
 run only after a successful handoff. This leaves HH practically disabled 
 until an administrator runs truncateAllHints. 
 Wouldn't it be nicer to run compaction on 
 org.apache.cassandra.db.filter.TombstoneOverwhelmingException? 
 That would remove TTLed hints, leaving the whole HH mechanism in a healthy state.
 The stacktrace is:
 {quote}
 org.apache.cassandra.db.filter.TombstoneOverwhelmingException
   at 
 org.apache.cassandra.db.filter.SliceQueryFilter.collectReducedColumns(SliceQueryFilter.java:201)
   at 
 org.apache.cassandra.db.filter.QueryFilter.collateColumns(QueryFilter.java:122)
   at 
 org.apache.cassandra.db.filter.QueryFilter.collateOnDiskAtom(QueryFilter.java:80)
   at 
 org.apache.cassandra.db.filter.QueryFilter.collateOnDiskAtom(QueryFilter.java:72)
   at 
 org.apache.cassandra.db.CollationController.collectAllData(CollationController.java:297)
   at 
 org.apache.cassandra.db.CollationController.getTopLevelColumns(CollationController.java:53)
   at 
 org.apache.cassandra.db.ColumnFamilyStore.getTopLevelColumns(ColumnFamilyStore.java:1487)
   at 
 org.apache.cassandra.db.ColumnFamilyStore.getColumnFamily(ColumnFamilyStore.java:1306)
   at 
 org.apache.cassandra.db.HintedHandOffManager.doDeliverHintsToEndpoint(HintedHandOffManager.java:351)
   at 
 org.apache.cassandra.db.HintedHandOffManager.deliverHintsToEndpoint(HintedHandOffManager.java:309)
   at 
 org.apache.cassandra.db.HintedHandOffManager.access$300(HintedHandOffManager.java:92)
   at 
 org.apache.cassandra.db.HintedHandOffManager$4.run(HintedHandOffManager.java:530)
   at 
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
   at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
   at java.lang.Thread.run(Thread.java:722)
 {quote}
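The recovery path proposed above can be sketched as follows; the types here are hypothetical stand-ins, not Cassandra's actual HintedHandOffManager API: catch the tombstone overflow, compact the hints store so TTL-expired cells are purged, and retry delivery instead of leaving handoff disabled.

```java
// Hypothetical sketch of the proposed recovery path; these types are
// simplified stand-ins, not Cassandra's real API.
class TombstoneOverwhelmingException extends RuntimeException {}

interface HintStore {
    void deliverHints() throws TombstoneOverwhelmingException;
    void compact();  // purges TTL-expired (tombstoned) hints
}

final class HintDelivery {
    // Returns true if delivery eventually succeeded.
    static boolean deliverWithRecovery(HintStore store) {
        try {
            store.deliverHints();
            return true;
        } catch (TombstoneOverwhelmingException e) {
            // Instead of aborting handoff until an operator intervenes,
            // compact away the expired hints and retry once.
            store.compact();
            try {
                store.deliverHints();
                return true;
            } catch (TombstoneOverwhelmingException again) {
                return false;
            }
        }
    }
}
```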





[jira] [Commented] (CASSANDRA-8115) Windows install scripts fail to set logdir and datadir

2014-10-15 Thread Philip Thompson (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8115?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14172557#comment-14172557
 ] 

Philip Thompson commented on CASSANDRA-8115:


+1. Working fine under cmd and powershell with restricted and unrestricted 
execution policies. My only complaint is that no warning is given 
when a user runs 'cassandra.bat install' when 'cassandra.bat -install' should 
be used.

 Windows install scripts fail to set logdir and datadir
 --

 Key: CASSANDRA-8115
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8115
 Project: Cassandra
  Issue Type: Bug
Reporter: Joshua McKenzie
Assignee: Joshua McKenzie
  Labels: Windows
 Fix For: 2.1.1

 Attachments: 8115_v1.txt


 After CASSANDRA-7136, the install scripts to run Cassandra as a service fail 
 on both the legacy and the powershell paths.  Looks like they need to have
 {code}
 ++JvmOptions=-Dcassandra.logdir=%CASSANDRA_HOME%\logs ^
 ++JvmOptions=-Dcassandra.storagedir=%CASSANDRA_HOME%\data
 {code}
 added to function correctly.
 We should take this opportunity to make sure the source of the java options 
 is uniform for both running and installation to prevent mismatches like this 
 in the future.





[2/2] git commit: Merge branch 'cassandra-2.1' into trunk

2014-10-15 Thread tylerhobbs
Merge branch 'cassandra-2.1' into trunk


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/dfda97cb
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/dfda97cb
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/dfda97cb

Branch: refs/heads/trunk
Commit: dfda97cbfa3330a18fda76a5d96dc70be7c52c0c
Parents: 18886e1 f16507d
Author: Tyler Hobbs tylerho...@apache.org
Authored: Wed Oct 15 11:38:13 2014 -0500
Committer: Tyler Hobbs tylerho...@apache.org
Committed: Wed Oct 15 11:38:13 2014 -0500

--
 CHANGES.txt  |  2 ++
 .../cassandra/transport/messages/ErrorMessage.java   | 15 ++-
 2 files changed, 16 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/dfda97cb/CHANGES.txt
--
diff --cc CHANGES.txt
index be84b11,0ae7af9..4a669c2
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -1,34 -1,6 +1,36 @@@
 +3.0
 + * Keep sstable levels when bootstrapping (CASSANDRA-7460)
 + * Add Sigar library and perform basic OS settings check on startup 
(CASSANDRA-7838)
 + * Support for aggregation functions (CASSANDRA-4914)
 + * Remove cassandra-cli (CASSANDRA-7920)
 + * Accept dollar quoted strings in CQL (CASSANDRA-7769)
 + * Make assassinate a first class command (CASSANDRA-7935)
 + * Support IN clause on any clustering column (CASSANDRA-4762)
 + * Improve compaction logging (CASSANDRA-7818)
 + * Remove YamlFileNetworkTopologySnitch (CASSANDRA-7917)
 + * Do anticompaction in groups (CASSANDRA-6851)
 + * Support pure user-defined functions (CASSANDRA-7395, 7526, 7562, 7740, 
7781, 7929,
 +   7924, 7812, 8063)
 + * Permit configurable timestamps with cassandra-stress (CASSANDRA-7416)
 + * Move sstable RandomAccessReader to nio2, which allows using the
 +   FILE_SHARE_DELETE flag on Windows (CASSANDRA-4050)
 + * Remove CQL2 (CASSANDRA-5918)
 + * Add Thrift get_multi_slice call (CASSANDRA-6757)
 + * Optimize fetching multiple cells by name (CASSANDRA-6933)
 + * Allow compilation in java 8 (CASSANDRA-7028)
 + * Make incremental repair default (CASSANDRA-7250)
 + * Enable code coverage thru JaCoCo (CASSANDRA-7226)
 + * Switch external naming of 'column families' to 'tables' (CASSANDRA-4369) 
 + * Shorten SSTable path (CASSANDRA-6962)
 + * Use unsafe mutations for most unit tests (CASSANDRA-6969)
 + * Fix race condition during calculation of pending ranges (CASSANDRA-7390)
 + * Fail on very large batch sizes (CASSANDRA-8011)
 + * improve concurrency of repair (CASSANDRA-6455)
 +
 +
  2.1.1
+  * Send proper error response when there is an error during native
+protocol message decode (CASSANDRA-8118)
   * Gossip should ignore generation numbers too far in the future 
(CASSANDRA-8113)
   * Fix NPE when creating a table with frozen sets, lists (CASSANDRA-8104)
   * Fix high memory use due to tracking reads on incrementally opened sstable



git commit: Properly handle exceptions during native proto decode

2014-10-15 Thread tylerhobbs
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-2.1 311d1 -> f16507dd1


Properly handle exceptions during native proto decode

Patch by Tyler Hobbs; reviewed by Aleksey Yeschenko for CASSANDRA-8118


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/f16507dd
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/f16507dd
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/f16507dd

Branch: refs/heads/cassandra-2.1
Commit: f16507dd1608456da0b9826b47e21c04699f0393
Parents: 311
Author: Tyler Hobbs tylerho...@apache.org
Authored: Wed Oct 15 11:37:08 2014 -0500
Committer: Tyler Hobbs tylerho...@apache.org
Committed: Wed Oct 15 11:37:08 2014 -0500

--
 CHANGES.txt  |  2 ++
 .../cassandra/transport/messages/ErrorMessage.java   | 15 ++-
 2 files changed, 16 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/f16507dd/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 6d9d221..0ae7af9 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,6 @@
 2.1.1
+ * Send proper error response when there is an error during native
+   protocol message decode (CASSANDRA-8118)
  * Gossip should ignore generation numbers too far in the future 
(CASSANDRA-8113)
  * Fix NPE when creating a table with frozen sets, lists (CASSANDRA-8104)
  * Fix high memory use due to tracking reads on incrementally opened sstable

http://git-wip-us.apache.org/repos/asf/cassandra/blob/f16507dd/src/java/org/apache/cassandra/transport/messages/ErrorMessage.java
--
diff --git a/src/java/org/apache/cassandra/transport/messages/ErrorMessage.java 
b/src/java/org/apache/cassandra/transport/messages/ErrorMessage.java
index 0aa54f1..7e4a3a9 100644
--- a/src/java/org/apache/cassandra/transport/messages/ErrorMessage.java
+++ b/src/java/org/apache/cassandra/transport/messages/ErrorMessage.java
@@ -18,6 +18,7 @@
 package org.apache.cassandra.transport.messages;
 
 import io.netty.buffer.ByteBuf;
+import io.netty.handler.codec.CodecException;
 import com.google.common.base.Predicate;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
@@ -216,7 +217,19 @@ public class ErrorMessage extends Message.Response
     public static ErrorMessage fromException(Throwable e, Predicate<Throwable> unexpectedExceptionHandler)
     {
         int streamId = 0;
-        if (e instanceof WrappedException)
+
+        // Netty will wrap exceptions during decoding in a CodecException. If the cause was one of our
+        // ProtocolExceptions or some other internal exception, extract that and use it.
+        if (e instanceof CodecException)
+        {
+            Throwable cause = e.getCause();
+            if (cause != null && cause instanceof WrappedException)
+            {
+                streamId = ((WrappedException)cause).streamId;
+                e = cause.getCause();
+            }
+        }
+        else if (e instanceof WrappedException)
         {
             streamId = ((WrappedException)e).streamId;
             e = e.getCause();
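The unwrapping logic in the patch can be shown standalone; the classes below are simplified stand-ins for Netty's CodecException and the transport's WrappedException, not the real types: peel off the CodecException layer that Netty adds during decode, then recover the stream id and root cause.

```java
// Simplified stand-ins for the real Netty / Cassandra transport classes.
class CodecException extends RuntimeException {
    CodecException(Throwable cause) { super(cause); }
}

class WrappedException extends RuntimeException {
    final int streamId;
    WrappedException(Throwable cause, int streamId) {
        super(cause);
        this.streamId = streamId;
    }
}

final class ErrorInfo {
    final int streamId;
    final Throwable error;

    ErrorInfo(int streamId, Throwable error) {
        this.streamId = streamId;
        this.error = error;
    }

    // Mirrors the patch: peel off the CodecException wrapper added during
    // decode, then extract the stream id and root cause from WrappedException.
    static ErrorInfo fromException(Throwable e) {
        int streamId = 0;
        if (e instanceof CodecException && e.getCause() instanceof WrappedException)
            e = e.getCause();
        if (e instanceof WrappedException) {
            streamId = ((WrappedException) e).streamId;
            e = e.getCause();
        }
        return new ErrorInfo(streamId, e);
    }
}
```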



[1/2] git commit: Properly handle exceptions during native proto decode

2014-10-15 Thread tylerhobbs
Repository: cassandra
Updated Branches:
  refs/heads/trunk 18886e128 -> dfda97cbf


Properly handle exceptions during native proto decode

Patch by Tyler Hobbs; reviewed by Aleksey Yeschenko for CASSANDRA-8118


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/f16507dd
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/f16507dd
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/f16507dd

Branch: refs/heads/trunk
Commit: f16507dd1608456da0b9826b47e21c04699f0393
Parents: 311
Author: Tyler Hobbs tylerho...@apache.org
Authored: Wed Oct 15 11:37:08 2014 -0500
Committer: Tyler Hobbs tylerho...@apache.org
Committed: Wed Oct 15 11:37:08 2014 -0500

--
 CHANGES.txt  |  2 ++
 .../cassandra/transport/messages/ErrorMessage.java   | 15 ++-
 2 files changed, 16 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/f16507dd/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 6d9d221..0ae7af9 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,6 @@
 2.1.1
+ * Send proper error response when there is an error during native
+   protocol message decode (CASSANDRA-8118)
  * Gossip should ignore generation numbers too far in the future 
(CASSANDRA-8113)
  * Fix NPE when creating a table with frozen sets, lists (CASSANDRA-8104)
  * Fix high memory use due to tracking reads on incrementally opened sstable

http://git-wip-us.apache.org/repos/asf/cassandra/blob/f16507dd/src/java/org/apache/cassandra/transport/messages/ErrorMessage.java
--
diff --git a/src/java/org/apache/cassandra/transport/messages/ErrorMessage.java 
b/src/java/org/apache/cassandra/transport/messages/ErrorMessage.java
index 0aa54f1..7e4a3a9 100644
--- a/src/java/org/apache/cassandra/transport/messages/ErrorMessage.java
+++ b/src/java/org/apache/cassandra/transport/messages/ErrorMessage.java
@@ -18,6 +18,7 @@
 package org.apache.cassandra.transport.messages;
 
 import io.netty.buffer.ByteBuf;
+import io.netty.handler.codec.CodecException;
 import com.google.common.base.Predicate;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
@@ -216,7 +217,19 @@ public class ErrorMessage extends Message.Response
     public static ErrorMessage fromException(Throwable e, Predicate<Throwable> unexpectedExceptionHandler)
     {
         int streamId = 0;
-        if (e instanceof WrappedException)
+
+        // Netty will wrap exceptions during decoding in a CodecException. If the cause was one of our
+        // ProtocolExceptions or some other internal exception, extract that and use it.
+        if (e instanceof CodecException)
+        {
+            Throwable cause = e.getCause();
+            if (cause != null && cause instanceof WrappedException)
+            {
+                streamId = ((WrappedException)cause).streamId;
+                e = cause.getCause();
+            }
+        }
+        else if (e instanceof WrappedException)
         {
             streamId = ((WrappedException)e).streamId;
             e = e.getCause();



[jira] [Updated] (CASSANDRA-8084) GossipFilePropertySnitch and EC2MultiRegionSnitch when used in AWS/GCE clusters doesnt use the PRIVATE IPS for Intra-DC communications - When running nodetool repair

2014-10-15 Thread Yuki Morishita (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8084?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yuki Morishita updated CASSANDRA-8084:
--
Attachment: 8084-2.0-v3.txt

Attaching v3.

v2 has two problems: 1) the remote node always creates the session with the broadcast 
address, and 2) the session cannot be evicted by gossip since it only uses the broadcast address.

In v3, each StreamSession has a 'peer' address used as the node id, the same as before, 
and a 'connecting' address that indicates the actual connecting address.
So nodetool netstats now shows both when the two are not the same. I haven't 
touched the logs, because in those lines the 'peer' address is used as a node 
identifier, but I added one INFO log indicating that the StreamSession is using a 
private IP.

 GossipFilePropertySnitch and EC2MultiRegionSnitch when used in AWS/GCE 
 clusters doesnt use the PRIVATE IPS for Intra-DC communications - When 
 running nodetool repair
 -

 Key: CASSANDRA-8084
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8084
 Project: Cassandra
  Issue Type: Bug
  Components: Config
 Environment: Tested this in GCE and AWS clusters. Created multi 
 region and multi dc cluster once in GCE and once in AWS and ran into the same 
 problem. 
 DISTRIB_ID=Ubuntu
 DISTRIB_RELEASE=12.04
 DISTRIB_CODENAME=precise
 DISTRIB_DESCRIPTION=Ubuntu 12.04.3 LTS
 NAME=Ubuntu
 VERSION=12.04.3 LTS, Precise Pangolin
 ID=ubuntu
 ID_LIKE=debian
 PRETTY_NAME=Ubuntu precise (12.04.3 LTS)
 VERSION_ID=12.04
 Tried to install Apache Cassandra version ReleaseVersion: 2.0.10 and also 
 latest DSE version which is 4.5 and which corresponds to 2.0.8.39.
Reporter: Jana
Assignee: Yuki Morishita
  Labels: features
 Fix For: 2.0.11

 Attachments: 8084-2.0-v2.txt, 8084-2.0-v3.txt, 8084-2.0.txt


 Neither of these snitches (GossipFilePropertySnitch and EC2MultiRegionSnitch) 
 used the PRIVATE IPs for communication between INTRA-DC nodes in my 
 multi-region, multi-DC cluster in the cloud (on both AWS and GCE) when I ran 
 nodetool repair -local. It works fine during regular reads.
  Here are the various cluster flavors I tried that failed: 
 AWS + Multi-REGION + Multi-DC + GossipPropertyFileSnitch + 
 (Prefer_local=true) in rackdc-properties file. 
 AWS + Multi-REGION + Multi-DC + EC2MultiRegionSnitch + (Prefer_local=true) in 
 rackdc-properties file. 
 GCE + Multi-REGION + Multi-DC + GossipPropertyFileSnitch + 
 (Prefer_local=true) in rackdc-properties file. 
 GCE + Multi-REGION + Multi-DC + EC2MultiRegionSnitch + (Prefer_local=true) in 
 rackdc-properties file. 
 With the above setup I am expecting all of my nodes in a given DC to 
 communicate via private IPs, since the cloud providers don't charge us for 
 using the private IPs but do charge for using public IPs.
 They can, however, use PUBLIC IPs for INTER-DC communications, which is working as 
 expected. 
 Here is a snippet from my log files when I ran the nodetool repair -local - 
 Node responding to 'node running repair' 
 INFO [AntiEntropyStage:1] 2014-10-08 14:47:51,628 Validator.java (line 254) 
 [repair #1439f290-4efa-11e4-bf3a-df845ecf54f8] Sending completed merkle tree 
 to /54.172.118.222 for system_traces/sessions
  INFO [AntiEntropyStage:1] 2014-10-08 14:47:51,741 Validator.java (line 254) 
 [repair #1439f290-4efa-11e4-bf3a-df845ecf54f8] Sending completed merkle tree 
 to /54.172.118.222 for system_traces/events
 Node running repair - 
 INFO [AntiEntropyStage:1] 2014-10-08 14:47:51,927 RepairSession.java (line 
 166) [repair #1439f290-4efa-11e4-bf3a-df845ecf54f8] Received merkle tree for 
 events from /54.172.118.222
 Note: The IPs it is communicating with are all PUBLIC IPs; it should have used the 
 PRIVATE IPs starting with 172.x.x.x
 YAML file values : 
 The listen address is set to: PRIVATE IP
 The broadcast address is set to: PUBLIC IP
 The SEEDs address is set to: PUBLIC IPs from both DCs
 The SNITCHES tried: GPFS and EC2MultiRegionSnitch
 RACK-DC: Had prefer_local set to true. 
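A minimal sketch of the configuration being described (dc/rack values are illustrative): with GossipingPropertyFileSnitch, prefer_local is set in cassandra-rackdc.properties, while the listen/broadcast address split lives in cassandra.yaml.

```properties
# cassandra-rackdc.properties (per node; dc/rack values are illustrative)
dc=DC1
rack=RAC1
# Use the node's private (listen) address for intra-DC traffic;
# the public broadcast address is still used across DCs.
prefer_local=true
```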





[jira] [Updated] (CASSANDRA-7901) Implement -f functionality in stop-server.bat

2014-10-15 Thread Philip Thompson (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-7901?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Philip Thompson updated CASSANDRA-7901:
---
Labels: Windows qa-resolved  (was: Windows)

 Implement -f functionality in stop-server.bat
 -

 Key: CASSANDRA-7901
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7901
 Project: Cassandra
  Issue Type: Improvement
 Environment: Windows
Reporter: Philip Thompson
Assignee: Philip Thompson
Priority: Minor
  Labels: Windows, qa-resolved
 Fix For: 2.1.1, 3.0

 Attachments: 7901-v2.txt, 7901.txt


 Stop-server.bat lists -f as an argument but does not handle it inside of 
 stop-server.ps1. 





[jira] [Commented] (CASSANDRA-8087) Multiple non-DISTINCT rows returned when page_size set

2014-10-15 Thread Philip Thompson (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8087?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14172568#comment-14172568
 ] 

Philip Thompson commented on CASSANDRA-8087:


This is causing the dtest cql_tests:TestCQL.static_columns_with_distinct_test 
to fail.

 Multiple non-DISTINCT rows returned when page_size set
 --

 Key: CASSANDRA-8087
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8087
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Reporter: Adam Holmberg
Priority: Minor
 Fix For: 2.0.11


 Using the following statements to reproduce:
 {code}
 CREATE TABLE test (
 k int,
 p int,
 s int static,
 PRIMARY KEY (k, p)
 );
 INSERT INTO test (k, p) VALUES (1, 1);
 INSERT INTO test (k, p) VALUES (1, 2);
 SELECT DISTINCT k, s FROM test ;
 {code}
 Native clients that set result_page_size in the query message receive 
 multiple non-distinct rows back (one per clustered value p in row k).
 This is only reproduced on 2.0.10; it does not appear in 2.1.0.
 It does not appear in cqlsh for 2.0.10 because cqlsh uses Thrift.
 See https://datastax-oss.atlassian.net/browse/PYTHON-164 for background





[jira] [Commented] (CASSANDRA-8115) Windows install scripts fail to set logdir and datadir

2014-10-15 Thread Joshua McKenzie (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8115?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14172611#comment-14172611
 ] 

Joshua McKenzie commented on CASSANDRA-8115:


That annoys me as well.  I'll see if I can find a solution for that and if it's 
trivial enough, commit it w/this.

 Windows install scripts fail to set logdir and datadir
 --

 Key: CASSANDRA-8115
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8115
 Project: Cassandra
  Issue Type: Bug
Reporter: Joshua McKenzie
Assignee: Joshua McKenzie
  Labels: Windows
 Fix For: 2.1.1

 Attachments: 8115_v1.txt


 After CASSANDRA-7136, the install scripts to run Cassandra as a service fail 
 on both the legacy and the powershell paths.  Looks like they need to have
 {code}
 ++JvmOptions=-Dcassandra.logdir=%CASSANDRA_HOME%\logs ^
 ++JvmOptions=-Dcassandra.storagedir=%CASSANDRA_HOME%\data
 {code}
 added to function correctly.
 We should take this opportunity to make sure the source of the java options 
 is uniform for both running and installation to prevent mismatches like this 
 in the future.





git commit: Fix 2i lookup on collection cell names w/ some clustering columns

2014-10-15 Thread tylerhobbs
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-2.1 f16507dd1 -> f54cd98d2


Fix 2i lookup on collection cell names w/ some clustering columns

Patch by Tyler Hobbs; reviewed by Aleksey Yeschenko for CASSANDRA-8073


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/f54cd98d
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/f54cd98d
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/f54cd98d

Branch: refs/heads/cassandra-2.1
Commit: f54cd98d26b3fcc1dc15ef7b5645b5cc5f69d416
Parents: f16507d
Author: Tyler Hobbs tylerho...@apache.org
Authored: Wed Oct 15 12:10:39 2014 -0500
Committer: Tyler Hobbs tylerho...@apache.org
Committed: Wed Oct 15 12:10:39 2014 -0500

--
 CHANGES.txt |  2 ++
 .../cassandra/db/filter/ExtendedFilter.java | 26 +---
 .../CompositesIndexOnCollectionKey.java |  2 +-
 .../cassandra/cql3/ContainsRelationTest.java| 25 +++
 4 files changed, 50 insertions(+), 5 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/f54cd98d/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 0ae7af9..4da1e56 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,6 @@
 2.1.1
+ * Fix exception when querying secondary index on set items or map keys
+   when some clustering columns are specified (CASSANDRA-8073)
  * Send proper error response when there is an error during native
protocol message decode (CASSANDRA-8118)
  * Gossip should ignore generation numbers too far in the future 
(CASSANDRA-8113)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/f54cd98d/src/java/org/apache/cassandra/db/filter/ExtendedFilter.java
--
diff --git a/src/java/org/apache/cassandra/db/filter/ExtendedFilter.java 
b/src/java/org/apache/cassandra/db/filter/ExtendedFilter.java
index 4f27a51..e945d2b 100644
--- a/src/java/org/apache/cassandra/db/filter/ExtendedFilter.java
+++ b/src/java/org/apache/cassandra/db/filter/ExtendedFilter.java
@@ -20,10 +20,7 @@ package org.apache.cassandra.db.filter;
 import java.nio.ByteBuffer;
 import java.util.*;
 
-import com.google.common.base.Predicate;
-import com.google.common.collect.Iterators;
-import org.apache.cassandra.db.marshal.CollectionType;
-import org.apache.cassandra.utils.ByteBufferUtil;
+import com.google.common.base.Objects;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
@@ -33,6 +30,7 @@ import org.apache.cassandra.db.*;
 import org.apache.cassandra.db.composites.CellName;
 import org.apache.cassandra.db.composites.Composite;
 import org.apache.cassandra.db.marshal.AbstractType;
+import org.apache.cassandra.db.marshal.CollectionType;
 import org.apache.cassandra.db.marshal.CompositeType;
 
 /**
@@ -151,6 +149,17 @@ public abstract class ExtendedFilter
 }
 }
 
+    public String toString()
+    {
+        return Objects.toStringHelper(this)
+                      .add("dataRange", dataRange)
+                      .add("maxResults", maxResults)
+                      .add("currentLimit", currentLimit)
+                      .add("timestamp", timestamp)
+                      .add("countCQL3Rows", countCQL3Rows)
+                      .toString();
+    }
+
 public static class WithClauses extends ExtendedFilter
 {
         private final List<IndexExpression> clause;
@@ -395,6 +404,15 @@ public abstract class ExtendedFilter
 }
 throw new AssertionError();
 }
+
+        public String toString()
+        {
+            return Objects.toStringHelper(this)
+                          .add("dataRange", dataRange)
+                          .add("timestamp", timestamp)
+                          .add("clause", clause)
+                          .toString();
+        }
 }
 
 private static class EmptyClauseFilter extends ExtendedFilter

http://git-wip-us.apache.org/repos/asf/cassandra/blob/f54cd98d/src/java/org/apache/cassandra/db/index/composites/CompositesIndexOnCollectionKey.java
--
diff --git 
a/src/java/org/apache/cassandra/db/index/composites/CompositesIndexOnCollectionKey.java
 
b/src/java/org/apache/cassandra/db/index/composites/CompositesIndexOnCollectionKey.java
index 2d25f8e..c252546 100644
--- 
a/src/java/org/apache/cassandra/db/index/composites/CompositesIndexOnCollectionKey.java
+++ 
b/src/java/org/apache/cassandra/db/index/composites/CompositesIndexOnCollectionKey.java
@@ -74,7 +74,7 @@ public class CompositesIndexOnCollectionKey extends CompositesIndex
         int count = 1 + baseCfs.metadata.clusteringColumns().size();
         CBuilder builder = getIndexComparator().builder();

[1/2] git commit: Fix 2i lookup on collection cell names w/ some clustering columns

2014-10-15 Thread tylerhobbs
Repository: cassandra
Updated Branches:
  refs/heads/trunk dfda97cbf -> e2df76e02


Fix 2i lookup on collection cell names w/ some clustering columns

Patch by Tyler Hobbs; reviewed by Aleksey Yeschenko for CASSANDRA-8073


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/f54cd98d
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/f54cd98d
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/f54cd98d

Branch: refs/heads/trunk
Commit: f54cd98d26b3fcc1dc15ef7b5645b5cc5f69d416
Parents: f16507d
Author: Tyler Hobbs tylerho...@apache.org
Authored: Wed Oct 15 12:10:39 2014 -0500
Committer: Tyler Hobbs tylerho...@apache.org
Committed: Wed Oct 15 12:10:39 2014 -0500

--
 CHANGES.txt |  2 ++
 .../cassandra/db/filter/ExtendedFilter.java | 26 +---
 .../CompositesIndexOnCollectionKey.java |  2 +-
 .../cassandra/cql3/ContainsRelationTest.java| 25 +++
 4 files changed, 50 insertions(+), 5 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/f54cd98d/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 0ae7af9..4da1e56 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,6 @@
 2.1.1
+ * Fix exception when querying secondary index on set items or map keys
+   when some clustering columns are specified (CASSANDRA-8073)
  * Send proper error response when there is an error during native
protocol message decode (CASSANDRA-8118)
  * Gossip should ignore generation numbers too far in the future 
(CASSANDRA-8113)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/f54cd98d/src/java/org/apache/cassandra/db/filter/ExtendedFilter.java
--
diff --git a/src/java/org/apache/cassandra/db/filter/ExtendedFilter.java 
b/src/java/org/apache/cassandra/db/filter/ExtendedFilter.java
index 4f27a51..e945d2b 100644
--- a/src/java/org/apache/cassandra/db/filter/ExtendedFilter.java
+++ b/src/java/org/apache/cassandra/db/filter/ExtendedFilter.java
@@ -20,10 +20,7 @@ package org.apache.cassandra.db.filter;
 import java.nio.ByteBuffer;
 import java.util.*;
 
-import com.google.common.base.Predicate;
-import com.google.common.collect.Iterators;
-import org.apache.cassandra.db.marshal.CollectionType;
-import org.apache.cassandra.utils.ByteBufferUtil;
+import com.google.common.base.Objects;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
@@ -33,6 +30,7 @@ import org.apache.cassandra.db.*;
 import org.apache.cassandra.db.composites.CellName;
 import org.apache.cassandra.db.composites.Composite;
 import org.apache.cassandra.db.marshal.AbstractType;
+import org.apache.cassandra.db.marshal.CollectionType;
 import org.apache.cassandra.db.marshal.CompositeType;
 
 /**
@@ -151,6 +149,17 @@ public abstract class ExtendedFilter
 }
 }
 
+public String toString()
+{
+return Objects.toStringHelper(this)
+  .add("dataRange", dataRange)
+  .add("maxResults", maxResults)
+  .add("currentLimit", currentLimit)
+  .add("timestamp", timestamp)
+  .add("countCQL3Rows", countCQL3Rows)
+  .toString();
+}
+
 public static class WithClauses extends ExtendedFilter
 {
 private final List<IndexExpression> clause;
@@ -395,6 +404,15 @@ public abstract class ExtendedFilter
 }
 throw new AssertionError();
 }
+
+public String toString()
+{
+return Objects.toStringHelper(this)
+  .add("dataRange", dataRange)
+  .add("timestamp", timestamp)
+  .add("clause", clause)
+  .toString();
+}
 }
 
 private static class EmptyClauseFilter extends ExtendedFilter

http://git-wip-us.apache.org/repos/asf/cassandra/blob/f54cd98d/src/java/org/apache/cassandra/db/index/composites/CompositesIndexOnCollectionKey.java
--
diff --git 
a/src/java/org/apache/cassandra/db/index/composites/CompositesIndexOnCollectionKey.java
 
b/src/java/org/apache/cassandra/db/index/composites/CompositesIndexOnCollectionKey.java
index 2d25f8e..c252546 100644
--- 
a/src/java/org/apache/cassandra/db/index/composites/CompositesIndexOnCollectionKey.java
+++ 
b/src/java/org/apache/cassandra/db/index/composites/CompositesIndexOnCollectionKey.java
@@ -74,7 +74,7 @@ public class CompositesIndexOnCollectionKey extends 
CompositesIndex
 int count = 1 + baseCfs.metadata.clusteringColumns().size();
 CBuilder builder = getIndexComparator().builder();
  

[2/2] git commit: Merge branch 'cassandra-2.1' into trunk

2014-10-15 Thread tylerhobbs
Merge branch 'cassandra-2.1' into trunk

Conflicts:
src/java/org/apache/cassandra/db/filter/ExtendedFilter.java


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/e2df76e0
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/e2df76e0
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/e2df76e0

Branch: refs/heads/trunk
Commit: e2df76e02454ff0b947b5ebc791f8f81c0696922
Parents: dfda97c f54cd98
Author: Tyler Hobbs tylerho...@apache.org
Authored: Wed Oct 15 12:13:23 2014 -0500
Committer: Tyler Hobbs tylerho...@apache.org
Committed: Wed Oct 15 12:13:23 2014 -0500

--
 CHANGES.txt |  2 ++
 .../cassandra/db/filter/ExtendedFilter.java | 23 +-
 .../CompositesIndexOnCollectionKey.java |  2 +-
 .../cassandra/cql3/ContainsRelationTest.java| 25 
 4 files changed, 50 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/e2df76e0/CHANGES.txt
--
diff --cc CHANGES.txt
index 4a669c2,4da1e56..8248d76
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -1,34 -1,6 +1,36 @@@
 +3.0
 + * Keep sstable levels when bootstrapping (CASSANDRA-7460)
 + * Add Sigar library and perform basic OS settings check on startup 
(CASSANDRA-7838)
 + * Support for aggregation functions (CASSANDRA-4914)
 + * Remove cassandra-cli (CASSANDRA-7920)
 + * Accept dollar quoted strings in CQL (CASSANDRA-7769)
 + * Make assassinate a first class command (CASSANDRA-7935)
 + * Support IN clause on any clustering column (CASSANDRA-4762)
 + * Improve compaction logging (CASSANDRA-7818)
 + * Remove YamlFileNetworkTopologySnitch (CASSANDRA-7917)
 + * Do anticompaction in groups (CASSANDRA-6851)
 + * Support pure user-defined functions (CASSANDRA-7395, 7526, 7562, 7740, 
7781, 7929,
 +   7924, 7812, 8063)
 + * Permit configurable timestamps with cassandra-stress (CASSANDRA-7416)
 + * Move sstable RandomAccessReader to nio2, which allows using the
 +   FILE_SHARE_DELETE flag on Windows (CASSANDRA-4050)
 + * Remove CQL2 (CASSANDRA-5918)
 + * Add Thrift get_multi_slice call (CASSANDRA-6757)
 + * Optimize fetching multiple cells by name (CASSANDRA-6933)
 + * Allow compilation in java 8 (CASSANDRA-7028)
 + * Make incremental repair default (CASSANDRA-7250)
 + * Enable code coverage thru JaCoCo (CASSANDRA-7226)
 + * Switch external naming of 'column families' to 'tables' (CASSANDRA-4369) 
 + * Shorten SSTable path (CASSANDRA-6962)
 + * Use unsafe mutations for most unit tests (CASSANDRA-6969)
 + * Fix race condition during calculation of pending ranges (CASSANDRA-7390)
 + * Fail on very large batch sizes (CASSANDRA-8011)
 + * improve concurrency of repair (CASSANDRA-6455)
 +
 +
  2.1.1
+  * Fix exception when querying secondary index on set items or map keys
+when some clustering columns are specified (CASSANDRA-8073)
   * Send proper error response when there is an error during native
 protocol message decode (CASSANDRA-8118)
   * Gossip should ignore generation numbers too far in the future 
(CASSANDRA-8113)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/e2df76e0/test/unit/org/apache/cassandra/cql3/ContainsRelationTest.java
--



[1/2] git commit: Ninja: add missing @Override to new toString() methods

2014-10-15 Thread tylerhobbs
Repository: cassandra
Updated Branches:
  refs/heads/trunk e2df76e02 -> 2623982e8


Ninja: add missing @Override to new toString() methods


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/ba79107a
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/ba79107a
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/ba79107a

Branch: refs/heads/trunk
Commit: ba79107aefb8a40047c4068b8d3a3f838ddb62fc
Parents: f54cd98
Author: Tyler Hobbs tylerho...@apache.org
Authored: Wed Oct 15 12:15:07 2014 -0500
Committer: Tyler Hobbs tylerho...@apache.org
Committed: Wed Oct 15 12:15:07 2014 -0500

--
 src/java/org/apache/cassandra/db/filter/ExtendedFilter.java | 2 ++
 1 file changed, 2 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/ba79107a/src/java/org/apache/cassandra/db/filter/ExtendedFilter.java
--
diff --git a/src/java/org/apache/cassandra/db/filter/ExtendedFilter.java 
b/src/java/org/apache/cassandra/db/filter/ExtendedFilter.java
index e945d2b..b152472 100644
--- a/src/java/org/apache/cassandra/db/filter/ExtendedFilter.java
+++ b/src/java/org/apache/cassandra/db/filter/ExtendedFilter.java
@@ -149,6 +149,7 @@ public abstract class ExtendedFilter
 }
 }
 
+@Override
 public String toString()
 {
 return Objects.toStringHelper(this)
@@ -405,6 +406,7 @@ public abstract class ExtendedFilter
 throw new AssertionError();
 }
 
+@Override
 public String toString()
 {
 return Objects.toStringHelper(this)



[2/2] git commit: Merge branch 'cassandra-2.1' into trunk

2014-10-15 Thread tylerhobbs
Merge branch 'cassandra-2.1' into trunk


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/2623982e
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/2623982e
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/2623982e

Branch: refs/heads/trunk
Commit: 2623982e8b6f03b703eddaf830b69cc2dcafff5d
Parents: e2df76e ba79107
Author: Tyler Hobbs tylerho...@apache.org
Authored: Wed Oct 15 12:15:41 2014 -0500
Committer: Tyler Hobbs tylerho...@apache.org
Committed: Wed Oct 15 12:15:41 2014 -0500

--
 src/java/org/apache/cassandra/db/filter/ExtendedFilter.java | 2 ++
 1 file changed, 2 insertions(+)
--




git commit: Ninja: add missing @Override to new toString() methods

2014-10-15 Thread tylerhobbs
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-2.1 f54cd98d2 -> ba79107ae


Ninja: add missing @Override to new toString() methods


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/ba79107a
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/ba79107a
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/ba79107a

Branch: refs/heads/cassandra-2.1
Commit: ba79107aefb8a40047c4068b8d3a3f838ddb62fc
Parents: f54cd98
Author: Tyler Hobbs tylerho...@apache.org
Authored: Wed Oct 15 12:15:07 2014 -0500
Committer: Tyler Hobbs tylerho...@apache.org
Committed: Wed Oct 15 12:15:07 2014 -0500

--
 src/java/org/apache/cassandra/db/filter/ExtendedFilter.java | 2 ++
 1 file changed, 2 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/ba79107a/src/java/org/apache/cassandra/db/filter/ExtendedFilter.java
--
diff --git a/src/java/org/apache/cassandra/db/filter/ExtendedFilter.java 
b/src/java/org/apache/cassandra/db/filter/ExtendedFilter.java
index e945d2b..b152472 100644
--- a/src/java/org/apache/cassandra/db/filter/ExtendedFilter.java
+++ b/src/java/org/apache/cassandra/db/filter/ExtendedFilter.java
@@ -149,6 +149,7 @@ public abstract class ExtendedFilter
 }
 }
 
+@Override
 public String toString()
 {
 return Objects.toStringHelper(this)
@@ -405,6 +406,7 @@ public abstract class ExtendedFilter
 throw new AssertionError();
 }
 
+@Override
 public String toString()
 {
 return Objects.toStringHelper(this)



[jira] [Commented] (CASSANDRA-4762) Support IN clause for any clustering column

2014-10-15 Thread Constance Eustace (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-4762?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14172630#comment-14172630
 ] 

Constance Eustace commented on CASSANDRA-4762:
--

Ok, thanks. I can use a different schema, and maybe try out the new one. I'm 
pretty interested in Presto / Shark ad-hoc queries; that's the main motivation 
of the re-engineering... this ticket is for supporting our main lookup use cases...

 Support IN clause for any clustering column
 ---

 Key: CASSANDRA-4762
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4762
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: T Jake Luciani
Assignee: Benjamin Lerer
  Labels: cql, docs
 Fix For: 3.0

 Attachments: 4762-1.txt


 Given CASSANDRA-3885
 It seems it should be possible to store multiple ranges for many predicates, 
 even the inner parts of a composite column.
 They could be expressed as an expanded set of filter queries.
 example:
 {code}
 CREATE TABLE test (
name text,
tdate timestamp,
tdate2 timestamp,
tdate3 timestamp,
num double,
PRIMARY KEY(name,tdate,tdate2,tdate3)
  ) WITH COMPACT STORAGE;
 SELECT * FROM test WHERE 
   name IN ('a','b') and
   tdate IN ('2010-01-01','2011-01-01') and
   tdate2 IN ('2010-01-01','2011-01-01') and
   tdate3 IN ('2010-01-01','2011-01-01') 
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-7341) Emit metrics related to CAS/Paxos

2014-10-15 Thread Brandon Williams (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7341?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14172696#comment-14172696
 ] 

Brandon Williams commented on CASSANDRA-7341:
-

LGTM, want to post a patch for 2.0?

 Emit metrics related to CAS/Paxos
 -

 Key: CASSANDRA-7341
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7341
 Project: Cassandra
  Issue Type: Improvement
Reporter: sankalp kohli
Assignee: sankalp kohli
Priority: Minor
 Attachments: CASClientRequestMetrics.java, trunk-7341-v2.diff, 
 trunk-7341.diff


 We can emit metrics based on Paxos. One of them is when there is contention. 
 I will add more metrics in this JIRA if it is helpful. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[1/2] git commit: Avoid NPE on null nested UDT inside a set

2014-10-15 Thread tylerhobbs
Repository: cassandra
Updated Branches:
  refs/heads/trunk 2623982e8 -> e1c5ebde3


Avoid NPE on null nested UDT inside a set

Patch by Robert Stupp; reviewed by Tyler Hobbs for CASSANDRA-8105


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/f6ea46e9
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/f6ea46e9
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/f6ea46e9

Branch: refs/heads/trunk
Commit: f6ea46e93ccd9d5388a6f0fa37ddef9cf2279997
Parents: ba79107
Author: Robert Stupp sn...@snazy.de
Authored: Wed Oct 15 13:48:14 2014 -0500
Committer: Tyler Hobbs tylerho...@apache.org
Committed: Wed Oct 15 13:48:14 2014 -0500

--
 CHANGES.txt  |  1 +
 .../apache/cassandra/db/marshal/TupleType.java   |  5 +
 .../org/apache/cassandra/cql3/UserTypesTest.java | 19 +++
 3 files changed, 21 insertions(+), 4 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/f6ea46e9/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 4da1e56..0d39416 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 2.1.1
+ * Fix NPE on null nested UDT inside a set (CASSANDRA-8105)
  * Fix exception when querying secondary index on set items or map keys
when some clustering columns are specified (CASSANDRA-8073)
  * Send proper error response when there is an error during native

http://git-wip-us.apache.org/repos/asf/cassandra/blob/f6ea46e9/src/java/org/apache/cassandra/db/marshal/TupleType.java
--
diff --git a/src/java/org/apache/cassandra/db/marshal/TupleType.java 
b/src/java/org/apache/cassandra/db/marshal/TupleType.java
index a7a83ea..42aaba1 100644
--- a/src/java/org/apache/cassandra/db/marshal/TupleType.java
+++ b/src/java/org/apache/cassandra/db/marshal/TupleType.java
@@ -72,8 +72,7 @@ public class TupleType extends AbstractType<ByteBuffer>
 ByteBuffer bb1 = o1.duplicate();
 ByteBuffer bb2 = o2.duplicate();
 
-int i = 0;
-while (bb1.remaining() > 0 && bb2.remaining() > 0)
+for (int i = 0; bb1.remaining() > 0 && bb2.remaining() > 0; i++)
 {
 AbstractType<?> comparator = types.get(i);
 
@@ -95,8 +94,6 @@ public class TupleType extends AbstractType<ByteBuffer>
 int cmp = comparator.compare(value1, value2);
 if (cmp != 0)
 return cmp;
-
-++i;
 }
 
 if (bb1.remaining() == 0)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/f6ea46e9/test/unit/org/apache/cassandra/cql3/UserTypesTest.java
--
diff --git a/test/unit/org/apache/cassandra/cql3/UserTypesTest.java 
b/test/unit/org/apache/cassandra/cql3/UserTypesTest.java
index ca84102..184de19 100644
--- a/test/unit/org/apache/cassandra/cql3/UserTypesTest.java
+++ b/test/unit/org/apache/cassandra/cql3/UserTypesTest.java
@@ -32,6 +32,25 @@ public class UserTypesTest extends CQLTester
 }
 
 @Test
+public void testCassandra8105() throws Throwable
+{
+String ut1 = createType("CREATE TYPE %s (a int, b int)");
+String ut2 = createType("CREATE TYPE %s (j frozen<" + KEYSPACE + "." + ut1 + ">, k int)");
+createTable("CREATE TABLE %s (x int PRIMARY KEY, y set<frozen<" + KEYSPACE + "." + ut2 + ">>)");
+execute("INSERT INTO %s (x, y) VALUES (1, { { k: 1 } })");
+
+String ut3 = createType("CREATE TYPE %s (a int, b int)");
+String ut4 = createType("CREATE TYPE %s (j frozen<" + KEYSPACE + "." + ut3 + ">, k int)");
+createTable("CREATE TABLE %s (x int PRIMARY KEY, y list<frozen<" + KEYSPACE + "." + ut4 + ">>)");
+execute("INSERT INTO %s (x, y) VALUES (1, [ { k: 1 } ])");
+
+String ut5 = createType("CREATE TYPE %s (a int, b int)");
+String ut6 = createType("CREATE TYPE %s (i int, j frozen<" + KEYSPACE + "." + ut5 + ">)");
+createTable("CREATE TABLE %s (x int PRIMARY KEY, y set<frozen<" + KEYSPACE + "." + ut6 + ">>)");
+execute("INSERT INTO %s (x, y) VALUES (1, { { i: 1 } })");
+}
+
+@Test
 public void testFor7684() throws Throwable
 {
 String myType = createType(CREATE TYPE %s (x double));



git commit: Avoid NPE on null nested UDT inside a set

2014-10-15 Thread tylerhobbs
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-2.1 ba79107ae -> f6ea46e93


Avoid NPE on null nested UDT inside a set

Patch by Robert Stupp; reviewed by Tyler Hobbs for CASSANDRA-8105


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/f6ea46e9
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/f6ea46e9
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/f6ea46e9

Branch: refs/heads/cassandra-2.1
Commit: f6ea46e93ccd9d5388a6f0fa37ddef9cf2279997
Parents: ba79107
Author: Robert Stupp sn...@snazy.de
Authored: Wed Oct 15 13:48:14 2014 -0500
Committer: Tyler Hobbs tylerho...@apache.org
Committed: Wed Oct 15 13:48:14 2014 -0500

--
 CHANGES.txt  |  1 +
 .../apache/cassandra/db/marshal/TupleType.java   |  5 +
 .../org/apache/cassandra/cql3/UserTypesTest.java | 19 +++
 3 files changed, 21 insertions(+), 4 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/f6ea46e9/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 4da1e56..0d39416 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 2.1.1
+ * Fix NPE on null nested UDT inside a set (CASSANDRA-8105)
  * Fix exception when querying secondary index on set items or map keys
when some clustering columns are specified (CASSANDRA-8073)
  * Send proper error response when there is an error during native

http://git-wip-us.apache.org/repos/asf/cassandra/blob/f6ea46e9/src/java/org/apache/cassandra/db/marshal/TupleType.java
--
diff --git a/src/java/org/apache/cassandra/db/marshal/TupleType.java 
b/src/java/org/apache/cassandra/db/marshal/TupleType.java
index a7a83ea..42aaba1 100644
--- a/src/java/org/apache/cassandra/db/marshal/TupleType.java
+++ b/src/java/org/apache/cassandra/db/marshal/TupleType.java
@@ -72,8 +72,7 @@ public class TupleType extends AbstractType<ByteBuffer>
 ByteBuffer bb1 = o1.duplicate();
 ByteBuffer bb2 = o2.duplicate();
 
-int i = 0;
-while (bb1.remaining() > 0 && bb2.remaining() > 0)
+for (int i = 0; bb1.remaining() > 0 && bb2.remaining() > 0; i++)
 {
 AbstractType<?> comparator = types.get(i);
 
@@ -95,8 +94,6 @@ public class TupleType extends AbstractType<ByteBuffer>
 int cmp = comparator.compare(value1, value2);
 if (cmp != 0)
 return cmp;
-
-++i;
 }
 
 if (bb1.remaining() == 0)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/f6ea46e9/test/unit/org/apache/cassandra/cql3/UserTypesTest.java
--
diff --git a/test/unit/org/apache/cassandra/cql3/UserTypesTest.java 
b/test/unit/org/apache/cassandra/cql3/UserTypesTest.java
index ca84102..184de19 100644
--- a/test/unit/org/apache/cassandra/cql3/UserTypesTest.java
+++ b/test/unit/org/apache/cassandra/cql3/UserTypesTest.java
@@ -32,6 +32,25 @@ public class UserTypesTest extends CQLTester
 }
 
 @Test
+public void testCassandra8105() throws Throwable
+{
+String ut1 = createType("CREATE TYPE %s (a int, b int)");
+String ut2 = createType("CREATE TYPE %s (j frozen<" + KEYSPACE + "." + ut1 + ">, k int)");
+createTable("CREATE TABLE %s (x int PRIMARY KEY, y set<frozen<" + KEYSPACE + "." + ut2 + ">>)");
+execute("INSERT INTO %s (x, y) VALUES (1, { { k: 1 } })");
+
+String ut3 = createType("CREATE TYPE %s (a int, b int)");
+String ut4 = createType("CREATE TYPE %s (j frozen<" + KEYSPACE + "." + ut3 + ">, k int)");
+createTable("CREATE TABLE %s (x int PRIMARY KEY, y list<frozen<" + KEYSPACE + "." + ut4 + ">>)");
+execute("INSERT INTO %s (x, y) VALUES (1, [ { k: 1 } ])");
+
+String ut5 = createType("CREATE TYPE %s (a int, b int)");
+String ut6 = createType("CREATE TYPE %s (i int, j frozen<" + KEYSPACE + "." + ut5 + ">)");
+createTable("CREATE TABLE %s (x int PRIMARY KEY, y set<frozen<" + KEYSPACE + "." + ut6 + ">>)");
+execute("INSERT INTO %s (x, y) VALUES (1, { { i: 1 } })");
+}
+
+@Test
 public void testFor7684() throws Throwable
 {
 String myType = createType(CREATE TYPE %s (x double));



[2/2] git commit: Merge branch 'cassandra-2.1' into trunk

2014-10-15 Thread tylerhobbs
Merge branch 'cassandra-2.1' into trunk


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/e1c5ebde
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/e1c5ebde
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/e1c5ebde

Branch: refs/heads/trunk
Commit: e1c5ebde324b5eb7b22901020444de9761513840
Parents: 2623982 f6ea46e
Author: Tyler Hobbs tylerho...@apache.org
Authored: Wed Oct 15 13:49:07 2014 -0500
Committer: Tyler Hobbs tylerho...@apache.org
Committed: Wed Oct 15 13:49:07 2014 -0500

--
 CHANGES.txt  |  1 +
 .../apache/cassandra/db/marshal/TupleType.java   |  5 +
 .../org/apache/cassandra/cql3/UserTypesTest.java | 19 +++
 3 files changed, 21 insertions(+), 4 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/e1c5ebde/CHANGES.txt
--
diff --cc CHANGES.txt
index 8248d76,0d39416..4237862
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -1,34 -1,5 +1,35 @@@
 +3.0
 + * Keep sstable levels when bootstrapping (CASSANDRA-7460)
 + * Add Sigar library and perform basic OS settings check on startup 
(CASSANDRA-7838)
 + * Support for aggregation functions (CASSANDRA-4914)
 + * Remove cassandra-cli (CASSANDRA-7920)
 + * Accept dollar quoted strings in CQL (CASSANDRA-7769)
 + * Make assassinate a first class command (CASSANDRA-7935)
 + * Support IN clause on any clustering column (CASSANDRA-4762)
 + * Improve compaction logging (CASSANDRA-7818)
 + * Remove YamlFileNetworkTopologySnitch (CASSANDRA-7917)
 + * Do anticompaction in groups (CASSANDRA-6851)
 + * Support pure user-defined functions (CASSANDRA-7395, 7526, 7562, 7740, 
7781, 7929,
 +   7924, 7812, 8063)
 + * Permit configurable timestamps with cassandra-stress (CASSANDRA-7416)
 + * Move sstable RandomAccessReader to nio2, which allows using the
 +   FILE_SHARE_DELETE flag on Windows (CASSANDRA-4050)
 + * Remove CQL2 (CASSANDRA-5918)
 + * Add Thrift get_multi_slice call (CASSANDRA-6757)
 + * Optimize fetching multiple cells by name (CASSANDRA-6933)
 + * Allow compilation in java 8 (CASSANDRA-7028)
 + * Make incremental repair default (CASSANDRA-7250)
 + * Enable code coverage thru JaCoCo (CASSANDRA-7226)
 + * Switch external naming of 'column families' to 'tables' (CASSANDRA-4369) 
 + * Shorten SSTable path (CASSANDRA-6962)
 + * Use unsafe mutations for most unit tests (CASSANDRA-6969)
 + * Fix race condition during calculation of pending ranges (CASSANDRA-7390)
 + * Fail on very large batch sizes (CASSANDRA-8011)
 + * improve concurrency of repair (CASSANDRA-6455)
 +
 +
  2.1.1
+  * Fix NPE on null nested UDT inside a set (CASSANDRA-8105)
   * Fix exception when querying secondary index on set items or map keys
 when some clustering columns are specified (CASSANDRA-8073)
   * Send proper error response when there is an error during native

http://git-wip-us.apache.org/repos/asf/cassandra/blob/e1c5ebde/src/java/org/apache/cassandra/db/marshal/TupleType.java
--



[jira] [Commented] (CASSANDRA-8076) Expose an mbean method to poll for repair job status

2014-10-15 Thread Yuki Morishita (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8076?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14172778#comment-14172778
 ] 

Yuki Morishita commented on CASSANDRA-8076:
---

Currently, we only return one 'id' per repair command invocation, which may 
consist of several repaired ranges (= repair ids).

I think what we can do for now is to just keep that 'id' while the repair is 
running and discard it when it finishes (successfully or not).
The JMX API would be {{boolean isRepairRunning(int id)}}.

What do you think?
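A minimal sketch of what backing such a check might look like (hypothetical names; this is not Cassandra's actual StorageService code):

```java
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical sketch: keep the repair command id while the repair runs
// and discard it when the repair finishes, successfully or not.
public class RepairTracker
{
    private final Set<Integer> running = ConcurrentHashMap.newKeySet();

    public void repairStarted(int commandId)
    {
        running.add(commandId);
    }

    // Called on both success and failure paths.
    public void repairFinished(int commandId)
    {
        running.remove(commandId);
    }

    // The proposed JMX-style check: boolean isRepairRunning(int id).
    public boolean isRepairRunning(int commandId)
    {
        return running.contains(commandId);
    }
}
```

A client polling over JMX would then see `true` for the id returned by forceRepairAsync until the command completes, after which the id is forgotten.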

 Expose an mbean method to poll for repair job status
 

 Key: CASSANDRA-8076
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8076
 Project: Cassandra
  Issue Type: Improvement
Reporter: Philip S Doctor

 Given the int reply-id from forceRepairAsync, allow a client to request the 
 status of this ID via jmx.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8099) Refactor and modernize the storage engine

2014-10-15 Thread Jason Brown (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8099?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14172818#comment-14172818
 ] 

Jason Brown commented on CASSANDRA-8099:


I'm +1 on the idea. I poked through the code quickly, and it seemed headed in 
the right direction, although I'd have to read more carefully and think more 
with respect to some of my earlier thoughts about (pluggable) storage engines. 
Also, I see that 'Column' has made a comeback :)

 Refactor and modernize the storage engine
 -

 Key: CASSANDRA-8099
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8099
 Project: Cassandra
  Issue Type: Improvement
Reporter: Sylvain Lebresne
Assignee: Sylvain Lebresne
 Fix For: 3.0


 The current storage engine (which for this ticket I'll loosely define as the 
 code implementing the read/write path) is suffering from old age. One of the 
 main problems is that the only structure it deals with is the cell, which 
 completely ignores the more high-level CQL structure that groups cells into 
 (CQL) rows.
 This leads to many inefficiencies, like the fact that during a read we have 
 to group cells multiple times (to count on the replica, then to count on the 
 coordinator, then to produce the CQL result set) because we forget about the 
 grouping right away each time (so lots of useless cell name comparisons in 
 particular). But beyond inefficiencies, having to manually recreate the CQL 
 structure every time we need it for something is hindering new features and 
 makes the code more complex than it should be.
 Said storage engine also has tons of technical debt. To pick an example, the 
 fact that during range queries we update {{SliceQueryFilter.count}} is pretty 
 hacky and error prone. Or the overly complex hoops {{AbstractQueryPager}} has 
 to jump through simply to remove the last query result.
 So I want to bite the bullet and modernize this storage engine. I propose to 
 do 2 main things:
 # Make the storage engine more aware of the CQL structure. In practice, 
 instead of having partitions be a simple iterable map of cells, they should 
 be an iterable list of rows (each being itself composed of per-column cells, 
 though obviously not exactly the same kind of cell we have today).
 # Make the engine more iterative. What I mean here is that in the read path, 
 we end up reading all cells into memory (we put them in a ColumnFamily 
 object), but there is really no reason to. If instead we were working with 
 iterators all the way through, we could get to a point where we're basically 
 transferring data from disk to the network, and we should be able to reduce 
 GC substantially.
 Please note that such a refactor should provide some performance improvements 
 right off the bat, but that's not its primary goal either. Its primary goal 
 is to simplify the storage engine and add abstractions that are better suited 
 to further optimizations.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8036) Add dtest for ipv6 functionality

2014-10-15 Thread Philip Thompson (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8036?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14172831#comment-14172831
 ] 

Philip Thompson commented on CASSANDRA-8036:


Currently blocked on adding a test for this due to a bug in ccm, 
https://github.com/pcmanus/ccm/issues/185

 Add dtest for ipv6 functionality
 

 Key: CASSANDRA-8036
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8036
 Project: Cassandra
  Issue Type: Test
Reporter: Philip Thompson
Assignee: Philip Thompson

 Cassandra can run with ipv6 addresses, and cqlsh should be able to connect 
 via ipv6. We need a dtest to verify this. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (CASSANDRA-8125) nodetool statusgossip doesn't exist

2014-10-15 Thread Connor Warrington (JIRA)
Connor Warrington created CASSANDRA-8125:


 Summary: nodetool statusgossip doesn't exist
 Key: CASSANDRA-8125
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8125
 Project: Cassandra
  Issue Type: Improvement
Reporter: Connor Warrington
Priority: Minor


nodetool supports status checks for thrift and for the binary protocol, but does 
not support a check for gossip. You can get this information from nodetool info.

The ones that exist are:
nodetool statusbinary
nodetool statusthrift

It would be nice if the following existed:
nodetool statusgossip



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


git commit: Allow CassandraDaemon to be run as a managed service

2014-10-15 Thread brandonwilliams
Repository: cassandra
Updated Branches:
  refs/heads/trunk e1c5ebde3 -> 027006dcb


Allow CassandraDaemon to be run as a managed service

Patch by Heiko Braun, reviewed by brandonwilliams for CASSANDRA-7997


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/027006dc
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/027006dc
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/027006dc

Branch: refs/heads/trunk
Commit: 027006dcb0931e5b93f5378494831aadc3baa809
Parents: e1c5ebd
Author: Brandon Williams brandonwilli...@apache.org
Authored: Wed Oct 15 15:15:24 2014 -0500
Committer: Brandon Williams brandonwilli...@apache.org
Committed: Wed Oct 15 15:15:24 2014 -0500

--
 .../cassandra/config/DatabaseDescriptor.java| 25 ++
 .../cassandra/service/CassandraDaemon.java  | 48 +++-
 .../cassandra/service/StorageService.java   |  2 -
 3 files changed, 42 insertions(+), 33 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/027006dc/src/java/org/apache/cassandra/config/DatabaseDescriptor.java
--
diff --git a/src/java/org/apache/cassandra/config/DatabaseDescriptor.java 
b/src/java/org/apache/cassandra/config/DatabaseDescriptor.java
index 319801d..8659c94 100644
--- a/src/java/org/apache/cassandra/config/DatabaseDescriptor.java
+++ b/src/java/org/apache/cassandra/config/DatabaseDescriptor.java
@@ -119,17 +119,9 @@ public class DatabaseDescriptor
 {
 applyConfig(loadConfig());
 }
-catch (ConfigurationException e)
-{
-logger.error("Fatal configuration error", e);
-System.err.println(e.getMessage() + "\nFatal configuration error; unable to start. See log for stacktrace.");
-System.exit(1);
-}
 catch (Exception e)
 {
-logger.error("Fatal error during configuration loading", e);
-System.err.println(e.getMessage() + "\nFatal error during configuration loading; unable to start. See log for stacktrace.");
-System.exit(1);
+throw new ExceptionInInitializerError(e.getMessage() + "\nFatal configuration error; unable to start. See log for stacktrace.");
 }
 }
 
@@ -601,9 +593,7 @@ public class DatabaseDescriptor
 // there are about 5 checked exceptions that could be thrown here.
 catch (Exception e)
 {
-logger.error("Fatal configuration error", e);
-System.err.println(e.getMessage() + "\nFatal configuration error; unable to start server.  See log for stacktrace.");
-System.exit(1);
+throw new ConfigurationException(e.getMessage() + "\nFatal configuration error; unable to start server.  See log for stacktrace.");
 }
 if (seedProvider.getSeeds().size() == 0)
 throw new ConfigurationException("The seed provider lists no seeds.");
@@ -722,15 +712,11 @@ public class DatabaseDescriptor
 }
 catch (ConfigurationException e)
 {
-logger.error("Fatal error: {}", e.getMessage());
-System.err.println("Bad configuration; unable to start server");
-System.exit(1);
+throw new IllegalArgumentException("Bad configuration; unable to start server: " + e.getMessage());
 }
 catch (FSWriteError e)
 {
-logger.error("Fatal error: {}", e.getMessage());
-System.err.println(e.getCause().getMessage() + "; unable to start server");
-System.exit(1);
+throw new IllegalStateException(e.getCause().getMessage() + "; unable to start server");
 }
 }
 
@@ -1571,8 +1557,7 @@ public class DatabaseDescriptor
 case offheap_buffers:
 if (!FileUtils.isCleanerAvailable())
 {
-logger.error("Could not free direct byte buffer: offheap_buffers is not a safe memtable_allocation_type without this ability, please adjust your config. This feature is only guaranteed to work on an Oracle JVM. Refusing to start.");
-System.exit(-1);
+throw new IllegalStateException("Could not free direct byte buffer: offheap_buffers is not a safe memtable_allocation_type without this ability, please adjust your config. This feature is only guaranteed to work on an Oracle JVM. Refusing to start.");
 }
 return new SlabPool(heapLimit, offHeapLimit, 
conf.memtable_cleanup_threshold, new 
ColumnFamilyStore.FlushLargestColumnFamily());
 case offheap_objects:

http://git-wip-us.apache.org/repos/asf/cassandra/blob/027006dc/src/java/org/apache/cassandra/service/CassandraDaemon.java

[jira] [Resolved] (CASSANDRA-7998) Remove the usage of System.exit() calls in core services

2014-10-15 Thread Brandon Williams (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-7998?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brandon Williams resolved CASSANDRA-7998.
-
Resolution: Fixed
  Reviewer: Brandon Williams
  Assignee: Heiko Braun

resolved in parent issue.

 Remove the usage of System.exit() calls in core services
 

 Key: CASSANDRA-7998
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7998
 Project: Cassandra
  Issue Type: Sub-task
  Components: Core
Reporter: Heiko Braun
Assignee: Heiko Braun
Priority: Minor

 The use of System.exit() prevents using the CassandraDaemon as a managed 
 service (managed from another Java process). The core services 
 (StorageService,DatabaseDescriptor, SSTableReader) should propagate 
 exceptions back to the callee so the decision to exit the VM (unmanaged case) 
 or further delegate that decision (managed case) can be handled in a well 
 defined place.
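The propagation pattern the ticket asks for can be sketched roughly as below; the class and method names are illustrative, not the actual CassandraDaemon API. The service throws instead of calling System.exit(), and the embedding process decides what to do with the failure:

```java
// Sketch only: hypothetical names illustrating the pattern from this ticket.
class ManagedServiceDemo {
    // Instead of System.exit(1), the service signals failure to its caller.
    static void startService(boolean badConfig) {
        if (badConfig)
            throw new IllegalArgumentException("Bad configuration; unable to start server");
    }

    // The embedding process (the "managed" case) decides how to react.
    static String runManaged(boolean badConfig) {
        try {
            startService(badConfig);
            return "started";
        } catch (RuntimeException e) {
            // A standalone launcher could call System.exit(1) here instead.
            return "failed: " + e.getMessage();
        }
    }

    public static void main(String[] args) {
        System.out.println(runManaged(false));
        System.out.println(runManaged(true));
    }
}
```

In the unmanaged case the catch block is the single well-defined place where the VM exit decision would live.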



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (CASSANDRA-8003) Allow the CassandraDaemon to be managed externally

2014-10-15 Thread Brandon Williams (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8003?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brandon Williams resolved CASSANDRA-8003.
-
Resolution: Fixed
  Reviewer: Brandon Williams
  Assignee: Heiko Braun

Resolved in parent issue.

 Allow the CassandraDaemon to be managed externally
 --

 Key: CASSANDRA-8003
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8003
 Project: Cassandra
  Issue Type: Sub-task
  Components: Core
Reporter: Heiko Braun
Assignee: Heiko Braun
Priority: Minor

 This is related to CASSANDRA-7998 and deals with the control flow, if the 
 CassandraDaemon is managed by another Java process. In that case it should 
 not exit the VM, but instead delegate that decision to the process that 
 created the daemon in the first place.





[jira] [Commented] (CASSANDRA-8122) Undeclare throwable exception while executing 'nodetool netstats localhost'

2014-10-15 Thread Philip Thompson (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8122?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14172925#comment-14172925
 ] 

Philip Thompson commented on CASSANDRA-8122:


Is this patch for 2.0?

 Undeclare throwable exception while executing 'nodetool netstats localhost'
 ---

 Key: CASSANDRA-8122
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8122
 Project: Cassandra
  Issue Type: Bug
  Components: Tools
 Environment: Cassandra: 2.0.9
Reporter: Vishal Mehta
Priority: Minor
 Attachments: CASSANDRA-8122.patch


 *Steps*
 # Stop cassandra service
 # Check netstats of nodetool using 'nodetool netstats localhost'
 # Start cassandra service
 # Again check netstats of nodetool using 'nodetool netstats localhost'
 *Expected output*
 Mode: STARTING
 Not sending any streams. (End of output - no further exceptions)
 *Observed output*
 {noformat}
  nodetool netstats localhost
 Mode: STARTING
 Not sending any streams.
 Exception in thread "main" java.lang.reflect.UndeclaredThrowableException
   at com.sun.proxy.$Proxy6.getReadRepairAttempted(Unknown Source)
   at 
 org.apache.cassandra.tools.NodeProbe.getReadRepairAttempted(NodeProbe.java:897)
   at 
 org.apache.cassandra.tools.NodeCmd.printNetworkStats(NodeCmd.java:726)
   at org.apache.cassandra.tools.NodeCmd.main(NodeCmd.java:1281)
 Caused by: javax.management.InstanceNotFoundException: 
 org.apache.cassandra.db:type=StorageProxy
   at 
 com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.getMBean(DefaultMBeanServerInterceptor.java:1095)
   at 
 com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.getAttribute(DefaultMBeanServerInterceptor.java:643)
   at 
 com.sun.jmx.mbeanserver.JmxMBeanServer.getAttribute(JmxMBeanServer.java:678)
   at 
 javax.management.remote.rmi.RMIConnectionImpl.doOperation(RMIConnectionImpl.java:1464)
   at 
 javax.management.remote.rmi.RMIConnectionImpl.access$300(RMIConnectionImpl.java:97)
   at 
 javax.management.remote.rmi.RMIConnectionImpl$PrivilegedOperation.run(RMIConnectionImpl.java:1328)
   at 
 javax.management.remote.rmi.RMIConnectionImpl.doPrivilegedOperation(RMIConnectionImpl.java:1420)
   at 
 javax.management.remote.rmi.RMIConnectionImpl.getAttribute(RMIConnectionImpl.java:657)
   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
   at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
   at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
   at java.lang.reflect.Method.invoke(Method.java:606)
   at sun.rmi.server.UnicastServerRef.dispatch(UnicastServerRef.java:322)
   at sun.rmi.transport.Transport$1.run(Transport.java:177)
   at sun.rmi.transport.Transport$1.run(Transport.java:174)
   at java.security.AccessController.doPrivileged(Native Method)
   at sun.rmi.transport.Transport.serviceCall(Transport.java:173)
   at 
 sun.rmi.transport.tcp.TCPTransport.handleMessages(TCPTransport.java:553)
   at 
 sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run0(TCPTransport.java:808)
   at 
 sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run(TCPTransport.java:667)
   at 
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
   at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
   at java.lang.Thread.run(Thread.java:724)
   at 
 sun.rmi.transport.StreamRemoteCall.exceptionReceivedFromServer(StreamRemoteCall.java:273)
   at 
 sun.rmi.transport.StreamRemoteCall.executeCall(StreamRemoteCall.java:251)
   at sun.rmi.server.UnicastRef.invoke(UnicastRef.java:160)
   at com.sun.jmx.remote.internal.PRef.invoke(Unknown Source)
   at 
 javax.management.remote.rmi.RMIConnectionImpl_Stub.getAttribute(Unknown 
 Source)
   at 
 javax.management.remote.rmi.RMIConnector$RemoteMBeanServerConnection.getAttribute(RMIConnector.java:902)
   at 
 javax.management.MBeanServerInvocationHandler.invoke(MBeanServerInvocationHandler.java:267)
   ... 4 more
 {noformat}
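The guard in the attached patch can be sketched roughly like this (names here are illustrative, not the actual NodeCmd/NodeProbe code): when the node reports it is still starting, skip the MBeans that are only registered after startup completes, instead of letting the JMX proxy throw.

```java
// Sketch of the "check if the server is starting" guard (hypothetical names).
class NetstatsGuardDemo {
    static String printNetworkStats(boolean isStarting) {
        StringBuilder out = new StringBuilder();
        out.append("Mode: ").append(isStarting ? "STARTING" : "NORMAL").append('\n');
        out.append("Not sending any streams.\n");
        if (isStarting)
            // StorageProxy's MBean is not registered yet; querying it would
            // surface as InstanceNotFoundException wrapped in an
            // UndeclaredThrowableException, as in the report above.
            return out.toString();
        out.append("Read Repair Attempted: ").append(readRepairAttempted()).append('\n');
        return out.toString();
    }

    static long readRepairAttempted() { return 0L; } // stands in for the JMX call

    public static void main(String[] args) {
        System.out.print(printNetworkStats(true));
    }
}
```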





[jira] [Commented] (CASSANDRA-7341) Emit metrics related to CAS/Paxos

2014-10-15 Thread sankalp kohli (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7341?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14172929#comment-14172929
 ] 

sankalp kohli commented on CASSANDRA-7341:
--

Yes. Do you need a patch for 2.0?

 Emit metrics related to CAS/Paxos
 -

 Key: CASSANDRA-7341
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7341
 Project: Cassandra
  Issue Type: Improvement
Reporter: sankalp kohli
Assignee: sankalp kohli
Priority: Minor
 Attachments: CASClientRequestMetrics.java, trunk-7341-v2.diff, 
 trunk-7341.diff


 We can emit metrics based on Paxos. One of them is when there is contention. 
 I will add more metric in this JIRA if it is helpful. 





[jira] [Commented] (CASSANDRA-7341) Emit metrics related to CAS/Paxos

2014-10-15 Thread Brandon Williams (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7341?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14172932#comment-14172932
 ] 

Brandon Williams commented on CASSANDRA-7341:
-

If you want it in 2.0 :) This one doesn't apply cleanly.

 Emit metrics related to CAS/Paxos
 -

 Key: CASSANDRA-7341
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7341
 Project: Cassandra
  Issue Type: Improvement
Reporter: sankalp kohli
Assignee: sankalp kohli
Priority: Minor
 Attachments: CASClientRequestMetrics.java, trunk-7341-v2.diff, 
 trunk-7341.diff


 We can emit metrics based on Paxos. One of them is when there is contention. 
 I will add more metric in this JIRA if it is helpful. 





[jira] [Commented] (CASSANDRA-7341) Emit metrics related to CAS/Paxos

2014-10-15 Thread sankalp kohli (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7341?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14172938#comment-14172938
 ] 

sankalp kohli commented on CASSANDRA-7341:
--

ok let me give you the 2.0 patch. 

 Emit metrics related to CAS/Paxos
 -

 Key: CASSANDRA-7341
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7341
 Project: Cassandra
  Issue Type: Improvement
Reporter: sankalp kohli
Assignee: sankalp kohli
Priority: Minor
 Attachments: CASClientRequestMetrics.java, trunk-7341-v2.diff, 
 trunk-7341.diff


 We can emit metrics based on Paxos. One of them is when there is contention. 
 I will add more metric in this JIRA if it is helpful. 





[jira] [Commented] (CASSANDRA-7341) Emit metrics related to CAS/Paxos

2014-10-15 Thread sankalp kohli (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7341?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14172983#comment-14172983
 ] 

sankalp kohli commented on CASSANDRA-7341:
--

I have ported the patch to 2.0 branch. Please review it

 Emit metrics related to CAS/Paxos
 -

 Key: CASSANDRA-7341
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7341
 Project: Cassandra
  Issue Type: Improvement
Reporter: sankalp kohli
Assignee: sankalp kohli
Priority: Minor
 Attachments: 7341_2.0.txt, CASClientRequestMetrics.java, 
 trunk-7341-v2.diff, trunk-7341.diff


 We can emit metrics based on Paxos. One of them is when there is contention. 
 I will add more metric in this JIRA if it is helpful. 





[jira] [Updated] (CASSANDRA-7341) Emit metrics related to CAS/Paxos

2014-10-15 Thread sankalp kohli (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-7341?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

sankalp kohli updated CASSANDRA-7341:
-
Attachment: 7341_2.0.txt

2.0 patch attached

 Emit metrics related to CAS/Paxos
 -

 Key: CASSANDRA-7341
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7341
 Project: Cassandra
  Issue Type: Improvement
Reporter: sankalp kohli
Assignee: sankalp kohli
Priority: Minor
 Attachments: 7341_2.0.txt, CASClientRequestMetrics.java, 
 trunk-7341-v2.diff, trunk-7341.diff


 We can emit metrics based on Paxos. One of them is when there is contention. 
 I will add more metric in this JIRA if it is helpful. 





[jira] [Commented] (CASSANDRA-6602) Compaction improvements to optimize time series data

2014-10-15 Thread JIRA

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6602?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14172990#comment-14172990
 ] 

Björn Hegerfors commented on CASSANDRA-6602:


[~krummas] Yep, I also think of seconds/minutes/etc. when I hear 'time unit'. I 
agree time_unit is not the best name for what it ended up being. I guess I've 
tested time_unit with values like 60,000,000 (minute) and 3,600,000,000 (hour), 
and then timeUnit made sense in the code, when setting the initial target size 
to 1 of whatever the 'time unit' is (e.g. 1 hour). But if you use something 
like 300,000,000 (5 minutes), calling that a 'time unit' is a bit iffy. It 
feels like 'minute' is the time unit, and 5 some multiplier on how big the 
initial target is (so 5 minute units, rather than one 5-minute unit).

Then you noted the other problem, that can cause even more confusion. The 
potentially differing timestamp formats is probably a harder nut to crack, put 
in relation with the naming. But the solution to it probably also affects the 
number of options and their names. You could even go with three options 
replacing time_unit: timestamp_resolution, time_unit, base_time. Then, for 5 
minutes with microsecond timestamps, you would specify (with longs or strings, 
I don't know) timestamp_resolution=1,000,000 (microseconds), time_unit=60 
(minutes), base_time=5. The time_unit that my code uses is simply the product 
of these options.

What remains, as far as I can see, is a 2-option solution: remove the middle 
option (time_unit) and default to minutes or seconds (I personally think 
seconds are nicer). Or even microseconds, but then base_time (or 
base_time_microseconds) has to be a big number and the product should be 
timestamp_resolution * base_time / 1,000,000. The latter choice is only there 
if someone believes that sub-second targets could be useful.

Regardless, I think it's important to make it clear to all users that they have 
to make sure that the 'timestamp resolution' is correct. One way would be to 
default to microseconds and simply put a visible warning in the documentation 
about DTCS that it expects microseconds, so if you use something else, you need 
to change the timestamp_resolution option. I don't know if Cassandra somehow 
prefers microseconds or if it likes to stay neutral and not estrange those who 
use something else. But CQL has that default, doesn't it? Another way to 
prevent bugs caused by non microsecond timestamps would be to simply require a 
timestamp_resolution to be defined. No defaults.

I don't know if I've helped narrowing anything down here, but these are all the 
alternatives that I can think of. Without knowing what conventions there might 
be to things like this, my preferred choice right now is probably these 
options: long timestamp_resolution (default: 1,000,000), long base_time_seconds 
(default: 3600? 300?). That and the warning in the documentation, somewhere 
visible. My time to ask: wdyt?
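The option arithmetic discussed above can be written out as a small sketch; the names are the proposed option names from this comment, not an existing API. The effective time unit the strategy works with is simply the product of the options:

```java
// Sketch of the proposed DTCS option arithmetic (proposed names, not an API).
class DtcsOptionsDemo {
    // 3-option form: timestamp_resolution * time_unit * base_time
    static long effectiveTimeUnit(long timestampResolution, long timeUnit, long baseTime) {
        return timestampResolution * timeUnit * baseTime;
    }

    // 2-option form: timestamp_resolution * base_time_seconds
    static long effectiveTimeUnitSeconds(long timestampResolution, long baseTimeSeconds) {
        return timestampResolution * baseTimeSeconds;
    }

    public static void main(String[] args) {
        // 5 minutes with microsecond timestamps: 1,000,000 * 60 * 5
        System.out.println(effectiveTimeUnit(1_000_000L, 60L, 5L)); // 300000000
    }
}
```

With the suggested defaults (timestamp_resolution = 1,000,000 and base_time_seconds = 300), both forms give the same 300,000,000-microsecond initial target.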

 Compaction improvements to optimize time series data
 

 Key: CASSANDRA-6602
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6602
 Project: Cassandra
  Issue Type: New Feature
  Components: Core
Reporter: Tupshin Harper
Assignee: Björn Hegerfors
  Labels: compaction, performance
 Fix For: 2.0.11

 Attachments: 1 week.txt, 8 weeks.txt, STCS 16 hours.txt, 
 TimestampViewer.java, 
 cassandra-2.0-CASSANDRA-6602-DateTieredCompactionStrategy.txt, 
 cassandra-2.0-CASSANDRA-6602-DateTieredCompactionStrategy_v2.txt, 
 cassandra-2.0-CASSANDRA-6602-DateTieredCompactionStrategy_v3.txt


 There are some unique characteristics of many/most time series use cases that 
 both provide challenges, as well as provide unique opportunities for 
 optimizations.
 One of the major challenges is in compaction. The existing compaction 
 strategies will tend to re-compact data on disk at least a few times over the 
 lifespan of each data point, greatly increasing the cpu and IO costs of that 
 write.
 Compaction exists to
 1) ensure that there aren't too many files on disk
 2) ensure that data that should be contiguous (part of the same partition) is 
 laid out contiguously
 3) delete data due to TTLs or tombstones
 The special characteristics of time series data allow us to optimize away all 
 three.
 Time series data
 1) tends to be delivered in time order, with relatively constrained exceptions
 2) often has a pre-determined and fixed expiration date
 3) Never gets deleted prior to TTL
 4) Has relatively predictable ingestion rates
 Note that I filed CASSANDRA-5561 and this ticket potentially replaces or 
 lowers the need for it. In that ticket, jbellis reasonably asks, how that 
 compaction strategy is better than disabling compaction.
 Taking that 

[jira] [Updated] (CASSANDRA-8108) ClassCastException in AbstractCellNameType

2014-10-15 Thread Tyler Hobbs (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8108?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tyler Hobbs updated CASSANDRA-8108:
---
Attachment: 8108.txt

There were a couple of problems.  The first was that when the cell name in the 
paging state was null, an empty composite would be used, which isn't a valid 
CellName.  Switching to a Composite for the slice start fixed that.  The second 
problem was that the limit for individual slice queries would be overwritten 
with the page size; in the case of a DISTINCT query with static columns, the 
limit for each slice query should be 1.

I'm not 100% confident that this is the best fix, so please review carefully.

I also added some documentation and did some minor renaming in 
AbstractQueryPager.

Besides 8108.txt, there's also a 
[branch|https://github.com/thobbs/cassandra/tree/CASSANDRA-8108] and a new 
[dtest|https://github.com/thobbs/cassandra-dtest/tree/CASSANDRA-8108].  I also 
ran the python driver paging tests and the tests from the [paging dtests 
branch|https://github.com/riptano/cassandra-dtest/pull/93] to check for 
regressions.
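The second part of the fix can be illustrated with a tiny sketch (illustrative only, not the actual AbstractQueryPager code): a DISTINCT query with static columns fetches at most one row per partition, so each per-partition slice query must keep a limit of 1 instead of inheriting the page size.

```java
// Illustrative rule for the per-slice limit described above.
class DistinctLimitDemo {
    static int perSliceLimit(boolean distinctWithStatics, int pageSize) {
        // DISTINCT with static columns: one row per partition, regardless of page size.
        return distinctWithStatics ? 1 : pageSize;
    }

    public static void main(String[] args) {
        System.out.println(perSliceLimit(true, 100));  // 1
        System.out.println(perSliceLimit(false, 100)); // 100
    }
}
```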

 ClassCastException in AbstractCellNameType
 --

 Key: CASSANDRA-8108
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8108
 Project: Cassandra
  Issue Type: Bug
  Components: Core
 Environment: Cassandra 2.1.0
 Datastax AMI on EC2 (Ubuntu Linux)
Reporter: David Hearnden
Assignee: Tyler Hobbs
 Fix For: 2.1.1

 Attachments: 8108.txt


 {noformat}
 java.lang.ClassCastException: 
 org.apache.cassandra.db.composites.Composites$EmptyComposite cannot be cast 
 to org.apache.cassandra.db.composites.CellName
   at 
 org.apache.cassandra.db.composites.AbstractCellNameType.cellFromByteBuffer(AbstractCellNameType.java:170)
  ~[apache-cassandra-2.1.0.jar:2.1.0]
   at 
 org.apache.cassandra.service.pager.SliceQueryPager.init(SliceQueryPager.java:57)
  ~[apache-cassandra-2.1.0.jar:2.1.0]
   at 
 org.apache.cassandra.service.pager.MultiPartitionPager.makePager(MultiPartitionPager.java:84)
  ~[apache-cassandra-2.1.0.jar:2.1.0]
   at 
 org.apache.cassandra.service.pager.MultiPartitionPager.init(MultiPartitionPager.java:68)
  ~[apache-cassandra-2.1.0.jar:2.1.0]
   at 
 org.apache.cassandra.service.pager.QueryPagers.pager(QueryPagers.java:101) 
 ~[apache-cassandra-2.1.0.jar:2.1.0]
   at 
 org.apache.cassandra.service.pager.QueryPagers.pager(QueryPagers.java:125) 
 ~[apache-cassandra-2.1.0.jar:2.1.0]
   at 
 org.apache.cassandra.cql3.statements.SelectStatement.execute(SelectStatement.java:215)
  ~[apache-cassandra-2.1.0.jar:2.1.0]
   at 
 org.apache.cassandra.cql3.statements.SelectStatement.execute(SelectStatement.java:60)
  ~[apache-cassandra-2.1.0.jar:2.1.0]
   at 
 org.apache.cassandra.cql3.QueryProcessor.processStatement(QueryProcessor.java:187)
  ~[apache-cassandra-2.1.0.jar:2.1.0]
   at 
 org.apache.cassandra.cql3.QueryProcessor.processPrepared(QueryProcessor.java:413)
  ~[apache-cassandra-2.1.0.jar:2.1.0]
   at 
 org.apache.cassandra.transport.messages.ExecuteMessage.execute(ExecuteMessage.java:133)
  ~[apache-cassandra-2.1.0.jar:2.1.0]
   at 
 org.apache.cassandra.transport.Message$Dispatcher.channelRead0(Message.java:422)
  [apache-cassandra-2.1.0.jar:2.1.0]
   at 
 org.apache.cassandra.transport.Message$Dispatcher.channelRead0(Message.java:318)
  [apache-cassandra-2.1.0.jar:2.1.0]
   at 
 io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:103)
  [netty-all-4.0.20.Final.jar:4.0.20.Final]
   at 
 io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:332)
  [netty-all-4.0.20.Final.jar:4.0.20.Final]
   at 
 io.netty.channel.AbstractChannelHandlerContext.access$700(AbstractChannelHandlerContext.java:31)
  [netty-all-4.0.20.Final.jar:4.0.20.Final]
   at 
 io.netty.channel.AbstractChannelHandlerContext$8.run(AbstractChannelHandlerContext.java:323)
  [netty-all-4.0.20.Final.jar:4.0.20.Final]
   at 
 java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471) 
 [na:1.7.0_51]
   at 
 org.apache.cassandra.concurrent.AbstractTracingAwareExecutorService$FutureTask.run(AbstractTracingAwareExecutorService.java:163)
  [apache-cassandra-2.1.0.jar:2.1.0]
   at org.apache.cassandra.concurrent.SEPWorker.run(SEPWorker.java:103) 
 [apache-cassandra-2.1.0.jar:2.1.0]
   at java.lang.Thread.run(Thread.java:744) [na:1.7.0_51]
 {noformat}





[jira] [Updated] (CASSANDRA-8096) Make cache serializers pluggable

2014-10-15 Thread Blake Eggleston (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8096?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Blake Eggleston updated CASSANDRA-8096:
---
Attachment: CASSANDRA-8096-v2.patch

v2 patch moves the auto saving cache file io out of AutoSavingCache. 
ICacheSaver implementations are now configurable, not ICacheSerializers. 
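The system-property plumbing such a change implies can be sketched as below; the property name and interface shape are assumptions for illustration, not the names used in the patch.

```java
// Sketch of "configurable via system properties": resolve an implementation
// class from a property, falling back to a default. Property and interface
// names here are hypothetical.
class PluggableSaverDemo {
    interface CacheSaver { String describe(); }

    public static class DefaultSaver implements CacheSaver {
        public String describe() { return "default"; }
    }

    static CacheSaver load() {
        String cls = System.getProperty("cassandra.cache_saver_class",
                                        DefaultSaver.class.getName());
        try {
            return (CacheSaver) Class.forName(cls).getDeclaredConstructor().newInstance();
        } catch (Exception e) {
            throw new RuntimeException("Could not load cache saver " + cls, e);
        }
    }

    public static void main(String[] args) {
        System.out.println(load().describe());
    }
}
```

Running with `-Dcassandra.cache_saver_class=com.example.MySaver` (a hypothetical class) would swap in a custom implementation without code changes.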

 Make cache serializers pluggable
 

 Key: CASSANDRA-8096
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8096
 Project: Cassandra
  Issue Type: Improvement
Reporter: Blake Eggleston
Assignee: Blake Eggleston
Priority: Minor
 Fix For: 2.1.2

 Attachments: CASSANDRA-8096-v2.patch, CASSANDRA-8096.patch


 Make cache serializers configurable via system properties.





[jira] [Commented] (CASSANDRA-8004) Run LCS for both repaired and unrepaired data

2014-10-15 Thread sankalp kohli (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8004?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14173057#comment-14173057
 ] 

sankalp kohli commented on CASSANDRA-8004:
--

it would make migrating to incremental repairs so much easier
+1. Incremental repair is what I like the most in 2.1 and this is very 
important for it. 
Let me review the new patch. 

 Run LCS for both repaired and unrepaired data
 -

 Key: CASSANDRA-8004
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8004
 Project: Cassandra
  Issue Type: Bug
Reporter: Marcus Eriksson
Assignee: Marcus Eriksson
  Labels: compaction
 Fix For: 2.1.2


 If a user has leveled compaction configured, we should run that for both the 
 unrepaired and the repaired data. I think this would make things a lot easier 
 for end users
 It would simplify migration to incremental repairs as well, if a user runs 
 incremental repair on its nice leveled unrepaired data, we wont need to drop 
 it all to L0, instead we can just start moving sstables from the unrepaired 
 leveling straight into the repaired leveling
 Idea could be to have two instances of LeveledCompactionStrategy and move 
 sstables between the instances after an incremental repair run (and let LCS 
 be totally oblivious to whether it handles repaired or unrepaired data). Same 
 should probably apply to any compaction strategy, run two instances and 
 remove all repaired/unrepaired logic from the strategy itself.
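The two-instance idea above can be sketched as follows; the types are placeholders (the real AbstractCompactionStrategy API differs), but they show the routing and the post-repair move that avoids dropping sstables back to L0.

```java
import java.util.ArrayList;
import java.util.List;

// Sketch: one wrapper holding two identical strategy instances, routing each
// sstable by its repaired flag. Hypothetical types, not Cassandra internals.
class SplitStrategyDemo {
    static class SSTable {
        final String name; boolean repaired;
        SSTable(String name, boolean repaired) { this.name = name; this.repaired = repaired; }
    }

    final List<SSTable> repairedInstance = new ArrayList<>();   // e.g. one LCS instance
    final List<SSTable> unrepairedInstance = new ArrayList<>(); // a second LCS instance

    void add(SSTable t) {
        (t.repaired ? repairedInstance : unrepairedInstance).add(t);
    }

    // After an incremental repair run, move the sstable between instances
    // instead of restarting its leveling from scratch.
    void promoteRepaired(SSTable t) {
        if (unrepairedInstance.remove(t)) {
            t.repaired = true;
            repairedInstance.add(t);
        }
    }

    public static void main(String[] args) {
        SplitStrategyDemo s = new SplitStrategyDemo();
        SSTable t = new SSTable("a", false);
        s.add(t);
        s.promoteRepaired(t);
        System.out.println(s.repairedInstance.size()); // 1
    }
}
```

Each instance stays oblivious to the repaired/unrepaired distinction, which is the point of the proposal.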





[jira] [Updated] (CASSANDRA-8004) Run LCS for both repaired and unrepaired data

2014-10-15 Thread sankalp kohli (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8004?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

sankalp kohli updated CASSANDRA-8004:
-
Reviewer: sankalp kohli

 Run LCS for both repaired and unrepaired data
 -

 Key: CASSANDRA-8004
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8004
 Project: Cassandra
  Issue Type: Bug
Reporter: Marcus Eriksson
Assignee: Marcus Eriksson
  Labels: compaction
 Fix For: 2.1.2


 If a user has leveled compaction configured, we should run that for both the 
 unrepaired and the repaired data. I think this would make things a lot easier 
 for end users
 It would simplify migration to incremental repairs as well, if a user runs 
 incremental repair on its nice leveled unrepaired data, we wont need to drop 
 it all to L0, instead we can just start moving sstables from the unrepaired 
 leveling straight into the repaired leveling
 Idea could be to have two instances of LeveledCompactionStrategy and move 
 sstables between the instances after an incremental repair run (and let LCS 
 be totally oblivious to whether it handles repaired or unrepaired data). Same 
 should probably apply to any compaction strategy, run two instances and 
 remove all repaired/unrepaired logic from the strategy itself.





[jira] [Updated] (CASSANDRA-8126) Review disk failure mode handling

2014-10-15 Thread Jeremy Hanna (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8126?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeremy Hanna updated CASSANDRA-8126:

Issue Type: Bug  (was: Improvement)

 Review disk failure mode handling
 -

 Key: CASSANDRA-8126
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8126
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Reporter: Jeremy Hanna

 Our disk failure modes are great in most circumstances, but there are a 
 couple where they may not make sense.
 Take the example of trying to snapshot your data on a node.  If permissions 
 aren't set up properly, the snapshot may fail which triggers a disk failure 
 which brings down the server.
 On the other hand, if you're trying to truncate a table, it may make sense to 
 bring down the node if it's unable to snapshot because it's unable to 
 properly make a hardlink backup of the data that's getting deleted - which is 
 the expectation.  This may be debatable.
 Perhaps in certain cases we can simply throw obvious errors and not bring 
 down the server.  In other cases, we should be clear about why we are 
 bringing down the server - perhaps for specific cases like the second case, 
 having a special output to indicate why it's going down.  I say special 
 output because it's not obvious why truncate to bring down any nodes in their 
 cluster.





[jira] [Updated] (CASSANDRA-8126) Review disk failure mode handling

2014-10-15 Thread Jeremy Hanna (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8126?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeremy Hanna updated CASSANDRA-8126:

Summary: Review disk failure mode handling  (was: Review disk failure modes 
if permissions are in the way)

 Review disk failure mode handling
 -

 Key: CASSANDRA-8126
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8126
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: Jeremy Hanna

 Our disk failure modes are great in most circumstances, but there are a 
 couple where they may not make sense.
 Take the example of trying to snapshot your data on a node.  If permissions 
 aren't set up properly, the snapshot may fail which triggers a disk failure 
 which brings down the server.
 On the other hand, if you're trying to truncate a table, it may make sense to 
 bring down the node if it's unable to snapshot because it's unable to 
 properly make a hardlink backup of the data that's getting deleted - which is 
 the expectation.  This may be debatable.
 Perhaps in certain cases we can simply throw obvious errors and not bring 
 down the server.  In other cases, we should be clear about why we are 
 bringing down the server - perhaps for specific cases like the second case, 
 having a special output to indicate why it's going down.  I say special 
 output because it's not obvious why truncate to bring down any nodes in their 
 cluster.





[jira] [Updated] (CASSANDRA-8126) Review disk failure mode handling

2014-10-15 Thread Jeremy Hanna (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8126?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeremy Hanna updated CASSANDRA-8126:

Description: 
Our disk failure modes are great in most circumstances, but there are a couple 
where they may not make sense.

Take the example of trying to snapshot your data on a node.  If permissions 
aren't set up properly, the snapshot may fail which triggers a disk failure 
which brings down the server.

On the other hand, if you're trying to truncate a table, it may make sense to 
bring down the node if it's unable to snapshot because it's unable to properly 
make a hardlink backup of the data that's getting deleted - which is the 
expectation.  This may be debatable.

Perhaps in certain cases we can simply throw obvious errors and not bring down 
the server.  In other cases, we should be clear about why we are bringing down 
the server - perhaps for specific cases like the second case, having a special 
output to indicate why it's going down.  I say special output because it's not 
obvious why truncate would bring down any nodes in their cluster.

  was:
Our disk failure modes are great in most circumstances, but there are a couple 
where they may not make sense.

Take the example of trying to snapshot your data on a node.  If permissions 
aren't set up properly, the snapshot may fail which triggers a disk failure 
which brings down the server.

On the other hand, if you're trying to truncate a table, it may make sense to 
bring down the node if it's unable to snapshot because it's unable to properly 
make a hardlink backup of the data that's getting deleted - which is the 
expectation.  This may be debatable.

Perhaps in certain cases we can simply throw obvious errors and not bring down 
the server.  In other cases, we should be clear about why we are bringing down 
the server - perhaps for specific cases like the second case, having a special 
output to indicate why it's going down.  I say special output because it's not 
obvious why truncate to bring down any nodes in their cluster.


 Review disk failure mode handling
 -

 Key: CASSANDRA-8126
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8126
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Reporter: Jeremy Hanna

 Our disk failure modes are great in most circumstances, but there are a 
 couple where they may not make sense.
 Take the example of trying to snapshot your data on a node.  If permissions 
 aren't set up properly, the snapshot may fail which triggers a disk failure 
 which brings down the server.
 On the other hand, if you're trying to truncate a table, it may make sense to 
 bring down the node if it's unable to snapshot because it's unable to 
 properly make a hardlink backup of the data that's getting deleted - which is 
 the expectation.  This may be debatable.
 Perhaps in certain cases we can simply throw obvious errors and not bring 
 down the server.  In other cases, we should be clear about why we are 
 bringing down the server - perhaps for specific cases like the second case, 
 having a special output to indicate why it's going down.  I say special 
 output because it's not obvious why truncate would bring down any nodes in 
 their cluster.





[jira] [Created] (CASSANDRA-8126) Review disk failure modes if permissions are in the way

2014-10-15 Thread Jeremy Hanna (JIRA)
Jeremy Hanna created CASSANDRA-8126:
---

 Summary: Review disk failure modes if permissions are in the way
 Key: CASSANDRA-8126
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8126
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: Jeremy Hanna


Our disk failure modes are great in most circumstances, but there are a couple 
where they may not make sense.

Take the example of trying to snapshot your data on a node.  If permissions 
aren't set up properly, the snapshot may fail which triggers a disk failure 
which brings down the server.

On the other hand, if you're trying to truncate a table, it may make sense to 
bring down the node if it's unable to snapshot because it's unable to properly 
make a hardlink backup of the data that's getting deleted - which is the 
expectation.  This may be debatable.

Perhaps in certain cases we can simply throw obvious errors and not bring down 
the server.  In other cases, we should be clear about why we are bringing down 
the server - perhaps for specific cases like the second case, having a special 
output to indicate why it's going down.  I say special output because it's not 
obvious why truncate would bring down any nodes in their cluster.
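
For reference, the "bring down the server" behavior discussed above is the disk 
failure handling already configured in cassandra.yaml.  A minimal illustrative 
fragment (not part of any patch on this ticket; values as of the 2.0/2.1 line):

```yaml
# cassandra.yaml fragment (illustrative only).
# disk_failure_policy controls what a node does when a disk error is detected:
#   stop        - shut down gossip and client transports (node appears down)
#   best_effort - blacklist the failed disk and serve from the remaining disks
#   ignore      - keep responding to requests as before the failure
disk_failure_policy: stop
```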



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-8062) IllegalArgumentException passing blob as tuple value element in list

2014-10-15 Thread Tyler Hobbs (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8062?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tyler Hobbs updated CASSANDRA-8062:
---
Fix Version/s: 2.1.1

The problem is that Tuples.InValue assumes the v3 protocol when deserializing 
the list of tuples, so the read of the collection size is incorrect, as you 
suspected.
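
To make the failure mode concrete, here is a small standalone sketch -- NOT 
Cassandra's actual Tuples.InValue code; the class and method names are invented 
-- of how the native protocol's collection encoding differs between v2 and v3, 
and why decoding a v2 payload with v3 rules yields a garbage element count:

```java
import java.nio.ByteBuffer;

// In native protocol v2 a collection's element count (and each element's
// length) is a 2-byte unsigned short; in v3 both are 4-byte ints.  Decoding
// a v2-encoded payload with v3 rules reads the count from the wrong bytes,
// and the bogus size later makes Buffer.limit(...) throw
// IllegalArgumentException, as in the stack trace quoted in this ticket.
public class CollectionSizeDemo {

    // Encode a list of single-byte elements using the given protocol version.
    static ByteBuffer encode(byte[] elements, int version) {
        ByteBuffer buf = ByteBuffer.allocate(4 + elements.length * 5);
        if (version >= 3) buf.putInt(elements.length);
        else buf.putShort((short) elements.length);
        for (byte e : elements) {
            if (version >= 3) buf.putInt(1);        // element length
            else buf.putShort((short) 1);
            buf.put(e);                             // element payload
        }
        buf.flip();
        return buf;
    }

    // Read only the element count, using the given protocol version's rules.
    static int readCount(ByteBuffer buf, int version) {
        return version >= 3 ? buf.getInt() : (buf.getShort() & 0xFFFF);
    }

    public static void main(String[] args) {
        ByteBuffer v2Payload = encode(new byte[] { 1, 2 }, 2);
        // Correct: decode with the version the payload was written with.
        System.out.println(readCount(v2Payload.duplicate(), 2));  // prints 2
        // The bug: a v2 payload decoded as v3 fuses the 2-byte count with the
        // first element's length bytes into one garbage int.
        System.out.println(readCount(v2Payload.duplicate(), 3));  // prints 131073
    }
}
```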

 IllegalArgumentException passing blob as tuple value element in list
 

 Key: CASSANDRA-8062
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8062
 Project: Cassandra
  Issue Type: Bug
  Components: Core
 Environment: Windows 7, DataStax 2.1.0 Cassandra server, Java 
 cassandra-driver-2.1.1 
Reporter: Bill Mitchell
Assignee: Tyler Hobbs
 Fix For: 2.1.1


 I am using the same table schema as described in earlier reports, e.g., 
 CASSANDRA-7105:
 {code}
 CREATE TABLE sr (siteid uuid, listid bigint, partition int, createdate 
 timestamp, emailcrypt blob, emailaddr text, properties text, removedate 
 timestamp, removeimportid bigint,
 PRIMARY KEY ((siteid, listid, partition), createdate, emailcrypt)
 ) WITH CLUSTERING ORDER BY (createdate DESC, emailcrypt DESC);
 {code}
 I am trying to take advantage of the new Tuple support to issue a query to 
 request multiple rows in a single wide row by (createdate,emailcrypt) pair.  
 I declare a new TupleType that covers the clustering columns and then issue 
 an IN predicate against a list of these values:
 {code}
 private static final TupleType dateEmailTupleType = 
 TupleType.of(DataType.timestamp(), DataType.blob());
 ...
 List<TupleValue> partitionKeys = new ArrayList<TupleValue>(recipKeys.size());
 ...
 BoundStatement boundStatement = new BoundStatement(preparedStatement);
 boundStatement = boundStatement.bind(siteID, partition, listID);
 boundStatement.setList(3, partitionKeys);
 {code}
 When I issue a SELECT against this table, the server fails apparently trying 
 to break apart the list values:
 {code}
 DEBUG [SharedPool-Worker-1] 2014-10-05 14:20:15,312 Message.java:420 - 
 Received: PREPARE SELECT emailCrypt, emailAddr, removeDate, removeImportID, 
 properties FROM sr WHERE siteID = ? AND partition = ? AND listID = ? AND ( 
 createDate, emailCrypt ) IN ? ;, v=2
 DEBUG [SharedPool-Worker-1] 2014-10-05 14:20:15,323 Tracing.java:157 - 
 request complete
 DEBUG [SharedPool-Worker-1] 2014-10-05 14:20:15,323 Message.java:433 - 
 Responding: RESULT PREPARED a18ff9151e8bd3b13b48a0ba56ecb784 
 [siteid(testdb_1412536748414, sr), 
 org.apache.cassandra.db.marshal.UUIDType][partition(testdb_1412536748414, 
 sr), org.apache.cassandra.db.marshal.Int32Type][listid(testdb_1412536748414, 
 sr), 
 org.apache.cassandra.db.marshal.LongType][in(createdate,emailcrypt)(testdb_1412536748414,
  sr), 
 org.apache.cassandra.db.marshal.ListType(org.apache.cassandra.db.marshal.TupleType(org.apache.cassandra.db.marshal.ReversedType(org.apache.cassandra.db.marshal.TimestampType),org.apache.cassandra.db.marshal.ReversedType(org.apache.cassandra.db.marshal.BytesType)))]
  (resultMetadata=[emailcrypt(testdb_1412536748414, sr), 
 org.apache.cassandra.db.marshal.ReversedType(org.apache.cassandra.db.marshal.BytesType)][emailaddr(testdb_1412536748414,
  sr), 
 org.apache.cassandra.db.marshal.UTF8Type][removedate(testdb_1412536748414, 
 sr), 
 org.apache.cassandra.db.marshal.TimestampType][removeimportid(testdb_1412536748414,
  sr), 
 org.apache.cassandra.db.marshal.LongType][properties(testdb_1412536748414, 
 sr), org.apache.cassandra.db.marshal.UTF8Type]), v=2
 DEBUG [SharedPool-Worker-1] 2014-10-05 14:20:15,363 Message.java:420 - 
 Received: EXECUTE a18ff9151e8bd3b13b48a0ba56ecb784 with 4 values at 
 consistency QUORUM, v=2
 DEBUG [SharedPool-Worker-2] 2014-10-05 14:20:15,380 Message.java:420 - 
 Received: EXECUTE a18ff9151e8bd3b13b48a0ba56ecb784 with 4 values at 
 consistency QUORUM, v=2
 DEBUG [SharedPool-Worker-5] 2014-10-05 14:20:15,402 Message.java:420 - 
 Received: EXECUTE a18ff9151e8bd3b13b48a0ba56ecb784 with 4 values at 
 consistency QUORUM, v=2
 ERROR [SharedPool-Worker-5] 2014-10-05 14:20:16,125 ErrorMessage.java:218 - 
 Unexpected exception during request
 java.lang.IllegalArgumentException: null
   at java.nio.Buffer.limit(Unknown Source) ~[na:1.7.0_25]
   at 
 org.apache.cassandra.utils.ByteBufferUtil.readBytes(ByteBufferUtil.java:539) 
 ~[apache-cassandra-2.1.0.jar:2.1.0]
   at 
 org.apache.cassandra.serializers.CollectionSerializer.readValue(CollectionSerializer.java:122)
  ~[apache-cassandra-2.1.0.jar:2.1.0]
   at 
 org.apache.cassandra.serializers.ListSerializer.deserializeForNativeProtocol(ListSerializer.java:87)
  ~[apache-cassandra-2.1.0.jar:2.1.0]
   at 
 org.apache.cassandra.serializers.ListSerializer.deserializeForNativeProtocol(ListSerializer.java:27)
  

[jira] [Updated] (CASSANDRA-8062) IllegalArgumentException passing blob as tuple value element in list

2014-10-15 Thread Tyler Hobbs (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8062?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tyler Hobbs updated CASSANDRA-8062:
---
Attachment: 8062.txt

8062.txt uses the proper protocol version when deserializing IN value lists of 
tuples.  I have a [new 
dtest|https://github.com/thobbs/cassandra-dtest/tree/CASSANDRA-8062] that 
reproduces the issue as well.

 IllegalArgumentException passing blob as tuple value element in list
 

 Key: CASSANDRA-8062
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8062
 Project: Cassandra
  Issue Type: Bug
  Components: Core
 Environment: Windows 7, DataStax 2.1.0 Cassandra server, Java 
 cassandra-driver-2.1.1 
Reporter: Bill Mitchell
Assignee: Tyler Hobbs
 Fix For: 2.1.1

 Attachments: 8062.txt


 I am using the same table schema as described in earlier reports, e.g., 
 CASSANDRA-7105:
 {code}
 CREATE TABLE sr (siteid uuid, listid bigint, partition int, createdate 
 timestamp, emailcrypt blob, emailaddr text, properties text, removedate 
 timestamp, removeimportid bigint,
 PRIMARY KEY ((siteid, listid, partition), createdate, emailcrypt)
 ) WITH CLUSTERING ORDER BY (createdate DESC, emailcrypt DESC);
 {code}
 I am trying to take advantage of the new Tuple support to issue a query to 
 request multiple rows in a single wide row by (createdate,emailcrypt) pair.  
 I declare a new TupleType that covers the clustering columns and then issue 
 an IN predicate against a list of these values:
 {code}
 private static final TupleType dateEmailTupleType = 
 TupleType.of(DataType.timestamp(), DataType.blob());
 ...
 List<TupleValue> partitionKeys = new ArrayList<TupleValue>(recipKeys.size());
 ...
 BoundStatement boundStatement = new BoundStatement(preparedStatement);
 boundStatement = boundStatement.bind(siteID, partition, listID);
 boundStatement.setList(3, partitionKeys);
 {code}
 When I issue a SELECT against this table, the server fails apparently trying 
 to break apart the list values:
 {code}
 DEBUG [SharedPool-Worker-1] 2014-10-05 14:20:15,312 Message.java:420 - 
 Received: PREPARE SELECT emailCrypt, emailAddr, removeDate, removeImportID, 
 properties FROM sr WHERE siteID = ? AND partition = ? AND listID = ? AND ( 
 createDate, emailCrypt ) IN ? ;, v=2
 DEBUG [SharedPool-Worker-1] 2014-10-05 14:20:15,323 Tracing.java:157 - 
 request complete
 DEBUG [SharedPool-Worker-1] 2014-10-05 14:20:15,323 Message.java:433 - 
 Responding: RESULT PREPARED a18ff9151e8bd3b13b48a0ba56ecb784 
 [siteid(testdb_1412536748414, sr), 
 org.apache.cassandra.db.marshal.UUIDType][partition(testdb_1412536748414, 
 sr), org.apache.cassandra.db.marshal.Int32Type][listid(testdb_1412536748414, 
 sr), 
 org.apache.cassandra.db.marshal.LongType][in(createdate,emailcrypt)(testdb_1412536748414,
  sr), 
 org.apache.cassandra.db.marshal.ListType(org.apache.cassandra.db.marshal.TupleType(org.apache.cassandra.db.marshal.ReversedType(org.apache.cassandra.db.marshal.TimestampType),org.apache.cassandra.db.marshal.ReversedType(org.apache.cassandra.db.marshal.BytesType)))]
  (resultMetadata=[emailcrypt(testdb_1412536748414, sr), 
 org.apache.cassandra.db.marshal.ReversedType(org.apache.cassandra.db.marshal.BytesType)][emailaddr(testdb_1412536748414,
  sr), 
 org.apache.cassandra.db.marshal.UTF8Type][removedate(testdb_1412536748414, 
 sr), 
 org.apache.cassandra.db.marshal.TimestampType][removeimportid(testdb_1412536748414,
  sr), 
 org.apache.cassandra.db.marshal.LongType][properties(testdb_1412536748414, 
 sr), org.apache.cassandra.db.marshal.UTF8Type]), v=2
 DEBUG [SharedPool-Worker-1] 2014-10-05 14:20:15,363 Message.java:420 - 
 Received: EXECUTE a18ff9151e8bd3b13b48a0ba56ecb784 with 4 values at 
 consistency QUORUM, v=2
 DEBUG [SharedPool-Worker-2] 2014-10-05 14:20:15,380 Message.java:420 - 
 Received: EXECUTE a18ff9151e8bd3b13b48a0ba56ecb784 with 4 values at 
 consistency QUORUM, v=2
 DEBUG [SharedPool-Worker-5] 2014-10-05 14:20:15,402 Message.java:420 - 
 Received: EXECUTE a18ff9151e8bd3b13b48a0ba56ecb784 with 4 values at 
 consistency QUORUM, v=2
 ERROR [SharedPool-Worker-5] 2014-10-05 14:20:16,125 ErrorMessage.java:218 - 
 Unexpected exception during request
 java.lang.IllegalArgumentException: null
   at java.nio.Buffer.limit(Unknown Source) ~[na:1.7.0_25]
   at 
 org.apache.cassandra.utils.ByteBufferUtil.readBytes(ByteBufferUtil.java:539) 
 ~[apache-cassandra-2.1.0.jar:2.1.0]
   at 
 org.apache.cassandra.serializers.CollectionSerializer.readValue(CollectionSerializer.java:122)
  ~[apache-cassandra-2.1.0.jar:2.1.0]
   at 
 org.apache.cassandra.serializers.ListSerializer.deserializeForNativeProtocol(ListSerializer.java:87)
  ~[apache-cassandra-2.1.0.jar:2.1.0]
   at 
 

[jira] [Updated] (CASSANDRA-8062) IllegalArgumentException passing blob as tuple value element in list

2014-10-15 Thread Tyler Hobbs (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8062?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tyler Hobbs updated CASSANDRA-8062:
---
Since Version: 2.1.0

 IllegalArgumentException passing blob as tuple value element in list
 

 Key: CASSANDRA-8062
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8062
 Project: Cassandra
  Issue Type: Bug
  Components: Core
 Environment: Windows 7, DataStax 2.1.0 Cassandra server, Java 
 cassandra-driver-2.1.1 
Reporter: Bill Mitchell
Assignee: Tyler Hobbs
 Fix For: 2.1.1

 Attachments: 8062.txt


 I am using the same table schema as described in earlier reports, e.g., 
 CASSANDRA-7105:
 {code}
 CREATE TABLE sr (siteid uuid, listid bigint, partition int, createdate 
 timestamp, emailcrypt blob, emailaddr text, properties text, removedate 
 timestamp, removeimportid bigint,
 PRIMARY KEY ((siteid, listid, partition), createdate, emailcrypt)
 ) WITH CLUSTERING ORDER BY (createdate DESC, emailcrypt DESC);
 {code}
 I am trying to take advantage of the new Tuple support to issue a query to 
 request multiple rows in a single wide row by (createdate,emailcrypt) pair.  
 I declare a new TupleType that covers the clustering columns and then issue 
 an IN predicate against a list of these values:
 {code}
 private static final TupleType dateEmailTupleType = 
 TupleType.of(DataType.timestamp(), DataType.blob());
 ...
 List<TupleValue> partitionKeys = new ArrayList<TupleValue>(recipKeys.size());
 ...
 BoundStatement boundStatement = new BoundStatement(preparedStatement);
 boundStatement = boundStatement.bind(siteID, partition, listID);
 boundStatement.setList(3, partitionKeys);
 {code}
 When I issue a SELECT against this table, the server fails apparently trying 
 to break apart the list values:
 {code}
 DEBUG [SharedPool-Worker-1] 2014-10-05 14:20:15,312 Message.java:420 - 
 Received: PREPARE SELECT emailCrypt, emailAddr, removeDate, removeImportID, 
 properties FROM sr WHERE siteID = ? AND partition = ? AND listID = ? AND ( 
 createDate, emailCrypt ) IN ? ;, v=2
 DEBUG [SharedPool-Worker-1] 2014-10-05 14:20:15,323 Tracing.java:157 - 
 request complete
 DEBUG [SharedPool-Worker-1] 2014-10-05 14:20:15,323 Message.java:433 - 
 Responding: RESULT PREPARED a18ff9151e8bd3b13b48a0ba56ecb784 
 [siteid(testdb_1412536748414, sr), 
 org.apache.cassandra.db.marshal.UUIDType][partition(testdb_1412536748414, 
 sr), org.apache.cassandra.db.marshal.Int32Type][listid(testdb_1412536748414, 
 sr), 
 org.apache.cassandra.db.marshal.LongType][in(createdate,emailcrypt)(testdb_1412536748414,
  sr), 
 org.apache.cassandra.db.marshal.ListType(org.apache.cassandra.db.marshal.TupleType(org.apache.cassandra.db.marshal.ReversedType(org.apache.cassandra.db.marshal.TimestampType),org.apache.cassandra.db.marshal.ReversedType(org.apache.cassandra.db.marshal.BytesType)))]
  (resultMetadata=[emailcrypt(testdb_1412536748414, sr), 
 org.apache.cassandra.db.marshal.ReversedType(org.apache.cassandra.db.marshal.BytesType)][emailaddr(testdb_1412536748414,
  sr), 
 org.apache.cassandra.db.marshal.UTF8Type][removedate(testdb_1412536748414, 
 sr), 
 org.apache.cassandra.db.marshal.TimestampType][removeimportid(testdb_1412536748414,
  sr), 
 org.apache.cassandra.db.marshal.LongType][properties(testdb_1412536748414, 
 sr), org.apache.cassandra.db.marshal.UTF8Type]), v=2
 DEBUG [SharedPool-Worker-1] 2014-10-05 14:20:15,363 Message.java:420 - 
 Received: EXECUTE a18ff9151e8bd3b13b48a0ba56ecb784 with 4 values at 
 consistency QUORUM, v=2
 DEBUG [SharedPool-Worker-2] 2014-10-05 14:20:15,380 Message.java:420 - 
 Received: EXECUTE a18ff9151e8bd3b13b48a0ba56ecb784 with 4 values at 
 consistency QUORUM, v=2
 DEBUG [SharedPool-Worker-5] 2014-10-05 14:20:15,402 Message.java:420 - 
 Received: EXECUTE a18ff9151e8bd3b13b48a0ba56ecb784 with 4 values at 
 consistency QUORUM, v=2
 ERROR [SharedPool-Worker-5] 2014-10-05 14:20:16,125 ErrorMessage.java:218 - 
 Unexpected exception during request
 java.lang.IllegalArgumentException: null
   at java.nio.Buffer.limit(Unknown Source) ~[na:1.7.0_25]
   at 
 org.apache.cassandra.utils.ByteBufferUtil.readBytes(ByteBufferUtil.java:539) 
 ~[apache-cassandra-2.1.0.jar:2.1.0]
   at 
 org.apache.cassandra.serializers.CollectionSerializer.readValue(CollectionSerializer.java:122)
  ~[apache-cassandra-2.1.0.jar:2.1.0]
   at 
 org.apache.cassandra.serializers.ListSerializer.deserializeForNativeProtocol(ListSerializer.java:87)
  ~[apache-cassandra-2.1.0.jar:2.1.0]
   at 
 org.apache.cassandra.serializers.ListSerializer.deserializeForNativeProtocol(ListSerializer.java:27)
  ~[apache-cassandra-2.1.0.jar:2.1.0]
   at 
 

[jira] [Assigned] (CASSANDRA-8087) Multiple non-DISTINCT rows returned when page_size set

2014-10-15 Thread Tyler Hobbs (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8087?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tyler Hobbs reassigned CASSANDRA-8087:
--

Assignee: Tyler Hobbs

 Multiple non-DISTINCT rows returned when page_size set
 --

 Key: CASSANDRA-8087
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8087
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Reporter: Adam Holmberg
Assignee: Tyler Hobbs
Priority: Minor
 Fix For: 2.0.11


 Using the following statements to reproduce:
 {code}
 CREATE TABLE test (
 k int,
 p int,
 s int static,
 PRIMARY KEY (k, p)
 );
 INSERT INTO test (k, p) VALUES (1, 1);
 INSERT INTO test (k, p) VALUES (1, 2);
 SELECT DISTINCT k, s FROM test ;
 {code}
 Native clients that set result_page_size in the query message receive 
 multiple non-distinct rows back (one per clustered value p in row k).
 This is only reproduced on 2.0.10; it does not appear in 2.1.0.
 It does not appear in cqlsh on 2.0.10 because cqlsh uses the Thrift interface.
 See https://datastax-oss.atlassian.net/browse/PYTHON-164 for background



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-8108) Errors paging DISTINCT queries on static columns

2014-10-15 Thread Tyler Hobbs (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8108?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tyler Hobbs updated CASSANDRA-8108:
---
Summary: Errors paging DISTINCT queries on static columns  (was: 
ClassCastException in AbstractCellNameType)

 Errors paging DISTINCT queries on static columns
 

 Key: CASSANDRA-8108
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8108
 Project: Cassandra
  Issue Type: Bug
  Components: Core
 Environment: Cassandra 2.1.0
 Datastax AMI on EC2 (Ubuntu Linux)
Reporter: David Hearnden
Assignee: Tyler Hobbs
 Fix For: 2.1.1

 Attachments: 8108.txt


 {noformat}
 java.lang.ClassCastException: 
 org.apache.cassandra.db.composites.Composites$EmptyComposite cannot be cast 
 to org.apache.cassandra.db.composites.CellName
   at 
 org.apache.cassandra.db.composites.AbstractCellNameType.cellFromByteBuffer(AbstractCellNameType.java:170)
  ~[apache-cassandra-2.1.0.jar:2.1.0]
   at 
 org.apache.cassandra.service.pager.SliceQueryPager.init(SliceQueryPager.java:57)
  ~[apache-cassandra-2.1.0.jar:2.1.0]
   at 
 org.apache.cassandra.service.pager.MultiPartitionPager.makePager(MultiPartitionPager.java:84)
  ~[apache-cassandra-2.1.0.jar:2.1.0]
   at 
 org.apache.cassandra.service.pager.MultiPartitionPager.init(MultiPartitionPager.java:68)
  ~[apache-cassandra-2.1.0.jar:2.1.0]
   at 
 org.apache.cassandra.service.pager.QueryPagers.pager(QueryPagers.java:101) 
 ~[apache-cassandra-2.1.0.jar:2.1.0]
   at 
 org.apache.cassandra.service.pager.QueryPagers.pager(QueryPagers.java:125) 
 ~[apache-cassandra-2.1.0.jar:2.1.0]
   at 
 org.apache.cassandra.cql3.statements.SelectStatement.execute(SelectStatement.java:215)
  ~[apache-cassandra-2.1.0.jar:2.1.0]
   at 
 org.apache.cassandra.cql3.statements.SelectStatement.execute(SelectStatement.java:60)
  ~[apache-cassandra-2.1.0.jar:2.1.0]
   at 
 org.apache.cassandra.cql3.QueryProcessor.processStatement(QueryProcessor.java:187)
  ~[apache-cassandra-2.1.0.jar:2.1.0]
   at 
 org.apache.cassandra.cql3.QueryProcessor.processPrepared(QueryProcessor.java:413)
  ~[apache-cassandra-2.1.0.jar:2.1.0]
   at 
 org.apache.cassandra.transport.messages.ExecuteMessage.execute(ExecuteMessage.java:133)
  ~[apache-cassandra-2.1.0.jar:2.1.0]
   at 
 org.apache.cassandra.transport.Message$Dispatcher.channelRead0(Message.java:422)
  [apache-cassandra-2.1.0.jar:2.1.0]
   at 
 org.apache.cassandra.transport.Message$Dispatcher.channelRead0(Message.java:318)
  [apache-cassandra-2.1.0.jar:2.1.0]
   at 
 io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:103)
  [netty-all-4.0.20.Final.jar:4.0.20.Final]
   at 
 io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:332)
  [netty-all-4.0.20.Final.jar:4.0.20.Final]
   at 
 io.netty.channel.AbstractChannelHandlerContext.access$700(AbstractChannelHandlerContext.java:31)
  [netty-all-4.0.20.Final.jar:4.0.20.Final]
   at 
 io.netty.channel.AbstractChannelHandlerContext$8.run(AbstractChannelHandlerContext.java:323)
  [netty-all-4.0.20.Final.jar:4.0.20.Final]
   at 
 java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471) 
 [na:1.7.0_51]
   at 
 org.apache.cassandra.concurrent.AbstractTracingAwareExecutorService$FutureTask.run(AbstractTracingAwareExecutorService.java:163)
  [apache-cassandra-2.1.0.jar:2.1.0]
   at org.apache.cassandra.concurrent.SEPWorker.run(SEPWorker.java:103) 
 [apache-cassandra-2.1.0.jar:2.1.0]
   at java.lang.Thread.run(Thread.java:744) [na:1.7.0_51]
 {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8108) Errors paging DISTINCT queries on static columns

2014-10-15 Thread Tyler Hobbs (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8108?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14173185#comment-14173185
 ] 

Tyler Hobbs commented on CASSANDRA-8108:


It looks like the bug behind the second fix I mentioned is also the cause of 
CASSANDRA-8087, so I'll resolve that as a duplicate of this and attach a patch 
for 2.0.

 Errors paging DISTINCT queries on static columns
 

 Key: CASSANDRA-8108
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8108
 Project: Cassandra
  Issue Type: Bug
  Components: Core
 Environment: Cassandra 2.1.0
 Datastax AMI on EC2 (Ubuntu Linux)
Reporter: David Hearnden
Assignee: Tyler Hobbs
 Fix For: 2.0.11, 2.1.1

 Attachments: 8108.txt


 {noformat}
 java.lang.ClassCastException: 
 org.apache.cassandra.db.composites.Composites$EmptyComposite cannot be cast 
 to org.apache.cassandra.db.composites.CellName
   at 
 org.apache.cassandra.db.composites.AbstractCellNameType.cellFromByteBuffer(AbstractCellNameType.java:170)
  ~[apache-cassandra-2.1.0.jar:2.1.0]
   at 
 org.apache.cassandra.service.pager.SliceQueryPager.init(SliceQueryPager.java:57)
  ~[apache-cassandra-2.1.0.jar:2.1.0]
   at 
 org.apache.cassandra.service.pager.MultiPartitionPager.makePager(MultiPartitionPager.java:84)
  ~[apache-cassandra-2.1.0.jar:2.1.0]
   at 
 org.apache.cassandra.service.pager.MultiPartitionPager.init(MultiPartitionPager.java:68)
  ~[apache-cassandra-2.1.0.jar:2.1.0]
   at 
 org.apache.cassandra.service.pager.QueryPagers.pager(QueryPagers.java:101) 
 ~[apache-cassandra-2.1.0.jar:2.1.0]
   at 
 org.apache.cassandra.service.pager.QueryPagers.pager(QueryPagers.java:125) 
 ~[apache-cassandra-2.1.0.jar:2.1.0]
   at 
 org.apache.cassandra.cql3.statements.SelectStatement.execute(SelectStatement.java:215)
  ~[apache-cassandra-2.1.0.jar:2.1.0]
   at 
 org.apache.cassandra.cql3.statements.SelectStatement.execute(SelectStatement.java:60)
  ~[apache-cassandra-2.1.0.jar:2.1.0]
   at 
 org.apache.cassandra.cql3.QueryProcessor.processStatement(QueryProcessor.java:187)
  ~[apache-cassandra-2.1.0.jar:2.1.0]
   at 
 org.apache.cassandra.cql3.QueryProcessor.processPrepared(QueryProcessor.java:413)
  ~[apache-cassandra-2.1.0.jar:2.1.0]
   at 
 org.apache.cassandra.transport.messages.ExecuteMessage.execute(ExecuteMessage.java:133)
  ~[apache-cassandra-2.1.0.jar:2.1.0]
   at 
 org.apache.cassandra.transport.Message$Dispatcher.channelRead0(Message.java:422)
  [apache-cassandra-2.1.0.jar:2.1.0]
   at 
 org.apache.cassandra.transport.Message$Dispatcher.channelRead0(Message.java:318)
  [apache-cassandra-2.1.0.jar:2.1.0]
   at 
 io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:103)
  [netty-all-4.0.20.Final.jar:4.0.20.Final]
   at 
 io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:332)
  [netty-all-4.0.20.Final.jar:4.0.20.Final]
   at 
 io.netty.channel.AbstractChannelHandlerContext.access$700(AbstractChannelHandlerContext.java:31)
  [netty-all-4.0.20.Final.jar:4.0.20.Final]
   at 
 io.netty.channel.AbstractChannelHandlerContext$8.run(AbstractChannelHandlerContext.java:323)
  [netty-all-4.0.20.Final.jar:4.0.20.Final]
   at 
 java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471) 
 [na:1.7.0_51]
   at 
 org.apache.cassandra.concurrent.AbstractTracingAwareExecutorService$FutureTask.run(AbstractTracingAwareExecutorService.java:163)
  [apache-cassandra-2.1.0.jar:2.1.0]
   at org.apache.cassandra.concurrent.SEPWorker.run(SEPWorker.java:103) 
 [apache-cassandra-2.1.0.jar:2.1.0]
   at java.lang.Thread.run(Thread.java:744) [na:1.7.0_51]
 {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-8108) Errors paging DISTINCT queries on static columns

2014-10-15 Thread Tyler Hobbs (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8108?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tyler Hobbs updated CASSANDRA-8108:
---
Fix Version/s: 2.0.11

 Errors paging DISTINCT queries on static columns
 

 Key: CASSANDRA-8108
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8108
 Project: Cassandra
  Issue Type: Bug
  Components: Core
 Environment: Cassandra 2.1.0
 Datastax AMI on EC2 (Ubuntu Linux)
Reporter: David Hearnden
Assignee: Tyler Hobbs
 Fix For: 2.0.11, 2.1.1

 Attachments: 8108.txt


 {noformat}
 java.lang.ClassCastException: 
 org.apache.cassandra.db.composites.Composites$EmptyComposite cannot be cast 
 to org.apache.cassandra.db.composites.CellName
   at 
 org.apache.cassandra.db.composites.AbstractCellNameType.cellFromByteBuffer(AbstractCellNameType.java:170)
  ~[apache-cassandra-2.1.0.jar:2.1.0]
   at 
 org.apache.cassandra.service.pager.SliceQueryPager.init(SliceQueryPager.java:57)
  ~[apache-cassandra-2.1.0.jar:2.1.0]
   at 
 org.apache.cassandra.service.pager.MultiPartitionPager.makePager(MultiPartitionPager.java:84)
  ~[apache-cassandra-2.1.0.jar:2.1.0]
   at 
 org.apache.cassandra.service.pager.MultiPartitionPager.init(MultiPartitionPager.java:68)
  ~[apache-cassandra-2.1.0.jar:2.1.0]
   at 
 org.apache.cassandra.service.pager.QueryPagers.pager(QueryPagers.java:101) 
 ~[apache-cassandra-2.1.0.jar:2.1.0]
   at 
 org.apache.cassandra.service.pager.QueryPagers.pager(QueryPagers.java:125) 
 ~[apache-cassandra-2.1.0.jar:2.1.0]
   at 
 org.apache.cassandra.cql3.statements.SelectStatement.execute(SelectStatement.java:215)
  ~[apache-cassandra-2.1.0.jar:2.1.0]
   at 
 org.apache.cassandra.cql3.statements.SelectStatement.execute(SelectStatement.java:60)
  ~[apache-cassandra-2.1.0.jar:2.1.0]
   at 
 org.apache.cassandra.cql3.QueryProcessor.processStatement(QueryProcessor.java:187)
  ~[apache-cassandra-2.1.0.jar:2.1.0]
   at 
 org.apache.cassandra.cql3.QueryProcessor.processPrepared(QueryProcessor.java:413)
  ~[apache-cassandra-2.1.0.jar:2.1.0]
   at 
 org.apache.cassandra.transport.messages.ExecuteMessage.execute(ExecuteMessage.java:133)
  ~[apache-cassandra-2.1.0.jar:2.1.0]
   at 
 org.apache.cassandra.transport.Message$Dispatcher.channelRead0(Message.java:422)
  [apache-cassandra-2.1.0.jar:2.1.0]
   at 
 org.apache.cassandra.transport.Message$Dispatcher.channelRead0(Message.java:318)
  [apache-cassandra-2.1.0.jar:2.1.0]
   at 
 io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:103)
  [netty-all-4.0.20.Final.jar:4.0.20.Final]
   at 
 io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:332)
  [netty-all-4.0.20.Final.jar:4.0.20.Final]
   at 
 io.netty.channel.AbstractChannelHandlerContext.access$700(AbstractChannelHandlerContext.java:31)
  [netty-all-4.0.20.Final.jar:4.0.20.Final]
   at 
 io.netty.channel.AbstractChannelHandlerContext$8.run(AbstractChannelHandlerContext.java:323)
  [netty-all-4.0.20.Final.jar:4.0.20.Final]
   at 
 java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471) 
 [na:1.7.0_51]
   at 
 org.apache.cassandra.concurrent.AbstractTracingAwareExecutorService$FutureTask.run(AbstractTracingAwareExecutorService.java:163)
  [apache-cassandra-2.1.0.jar:2.1.0]
   at org.apache.cassandra.concurrent.SEPWorker.run(SEPWorker.java:103) 
 [apache-cassandra-2.1.0.jar:2.1.0]
   at java.lang.Thread.run(Thread.java:744) [na:1.7.0_51]
 {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (CASSANDRA-8087) Multiple non-DISTINCT rows returned when page_size set

2014-10-15 Thread Tyler Hobbs (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8087?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tyler Hobbs resolved CASSANDRA-8087.

Resolution: Duplicate

One of the fixes in CASSANDRA-8108 resolves this, so I'm marking this as a 
duplicate.

 Multiple non-DISTINCT rows returned when page_size set
 --

 Key: CASSANDRA-8087
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8087
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Reporter: Adam Holmberg
Assignee: Tyler Hobbs
Priority: Minor
 Fix For: 2.0.11


 Using the following statements to reproduce:
 {code}
 CREATE TABLE test (
 k int,
 p int,
 s int static,
 PRIMARY KEY (k, p)
 );
 INSERT INTO test (k, p) VALUES (1, 1);
 INSERT INTO test (k, p) VALUES (1, 2);
 SELECT DISTINCT k, s FROM test ;
 {code}
 Native clients that set result_page_size in the query message receive 
 multiple non-distinct rows back (one per clustered value p in row k).
 This is only reproduced on 2.0.10; it does not appear in 2.1.0.
 It does not appear in cqlsh on 2.0.10 because cqlsh uses the Thrift interface.
 See https://datastax-oss.atlassian.net/browse/PYTHON-164 for background



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-8108) Errors paging DISTINCT queries on static columns

2014-10-15 Thread Tyler Hobbs (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8108?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tyler Hobbs updated CASSANDRA-8108:
---
Attachment: 8108-2.0.txt

 Errors paging DISTINCT queries on static columns
 

 Key: CASSANDRA-8108
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8108
 Project: Cassandra
  Issue Type: Bug
  Components: Core
 Environment: Cassandra 2.1.0
 Datastax AMI on EC2 (Ubuntu Linux)
Reporter: David Hearnden
Assignee: Tyler Hobbs
 Fix For: 2.0.11, 2.1.1

 Attachments: 8108-2.0.txt, 8108.txt


 {noformat}
 java.lang.ClassCastException: 
 org.apache.cassandra.db.composites.Composites$EmptyComposite cannot be cast 
 to org.apache.cassandra.db.composites.CellName
   at 
 org.apache.cassandra.db.composites.AbstractCellNameType.cellFromByteBuffer(AbstractCellNameType.java:170)
  ~[apache-cassandra-2.1.0.jar:2.1.0]
   at 
 org.apache.cassandra.service.pager.SliceQueryPager.init(SliceQueryPager.java:57)
  ~[apache-cassandra-2.1.0.jar:2.1.0]
   at 
 org.apache.cassandra.service.pager.MultiPartitionPager.makePager(MultiPartitionPager.java:84)
  ~[apache-cassandra-2.1.0.jar:2.1.0]
   at 
 org.apache.cassandra.service.pager.MultiPartitionPager.init(MultiPartitionPager.java:68)
  ~[apache-cassandra-2.1.0.jar:2.1.0]
   at 
 org.apache.cassandra.service.pager.QueryPagers.pager(QueryPagers.java:101) 
 ~[apache-cassandra-2.1.0.jar:2.1.0]
   at 
 org.apache.cassandra.service.pager.QueryPagers.pager(QueryPagers.java:125) 
 ~[apache-cassandra-2.1.0.jar:2.1.0]
   at 
 org.apache.cassandra.cql3.statements.SelectStatement.execute(SelectStatement.java:215)
  ~[apache-cassandra-2.1.0.jar:2.1.0]
   at 
 org.apache.cassandra.cql3.statements.SelectStatement.execute(SelectStatement.java:60)
  ~[apache-cassandra-2.1.0.jar:2.1.0]
   at 
 org.apache.cassandra.cql3.QueryProcessor.processStatement(QueryProcessor.java:187)
  ~[apache-cassandra-2.1.0.jar:2.1.0]
   at 
 org.apache.cassandra.cql3.QueryProcessor.processPrepared(QueryProcessor.java:413)
  ~[apache-cassandra-2.1.0.jar:2.1.0]
   at 
 org.apache.cassandra.transport.messages.ExecuteMessage.execute(ExecuteMessage.java:133)
  ~[apache-cassandra-2.1.0.jar:2.1.0]
   at 
 org.apache.cassandra.transport.Message$Dispatcher.channelRead0(Message.java:422)
  [apache-cassandra-2.1.0.jar:2.1.0]
   at 
 org.apache.cassandra.transport.Message$Dispatcher.channelRead0(Message.java:318)
  [apache-cassandra-2.1.0.jar:2.1.0]
   at 
 io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:103)
  [netty-all-4.0.20.Final.jar:4.0.20.Final]
   at 
 io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:332)
  [netty-all-4.0.20.Final.jar:4.0.20.Final]
   at 
 io.netty.channel.AbstractChannelHandlerContext.access$700(AbstractChannelHandlerContext.java:31)
  [netty-all-4.0.20.Final.jar:4.0.20.Final]
   at 
 io.netty.channel.AbstractChannelHandlerContext$8.run(AbstractChannelHandlerContext.java:323)
  [netty-all-4.0.20.Final.jar:4.0.20.Final]
   at 
 java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471) 
 [na:1.7.0_51]
   at 
 org.apache.cassandra.concurrent.AbstractTracingAwareExecutorService$FutureTask.run(AbstractTracingAwareExecutorService.java:163)
  [apache-cassandra-2.1.0.jar:2.1.0]
   at org.apache.cassandra.concurrent.SEPWorker.run(SEPWorker.java:103) 
 [apache-cassandra-2.1.0.jar:2.1.0]
   at java.lang.Thread.run(Thread.java:744) [na:1.7.0_51]
 {noformat}





[jira] [Commented] (CASSANDRA-8062) IllegalArgumentException passing blob as tuple value element in list

2014-10-15 Thread Bill Mitchell (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8062?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14173200#comment-14173200
 ] 

Bill Mitchell commented on CASSANDRA-8062:
--

Great, Tyler.  I applied patch 8062.txt to my copy of the 2.1 source and it 
worked like a champ.  

 IllegalArgumentException passing blob as tuple value element in list
 

 Key: CASSANDRA-8062
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8062
 Project: Cassandra
  Issue Type: Bug
  Components: Core
 Environment: Windows 7, DataStax 2.1.0 Cassandra server, Java 
 cassandra-driver-2.1.1 
Reporter: Bill Mitchell
Assignee: Tyler Hobbs
 Fix For: 2.1.1

 Attachments: 8062.txt


 I am using the same table schema as described in earlier reports, e.g., 
 CASSANDRA-7105:
 {code}
 CREATE TABLE sr (siteid uuid, listid bigint, partition int, createdate 
 timestamp, emailcrypt blob, emailaddr text, properties text, removedate 
 timestamp, removeimportid bigint,
 PRIMARY KEY ((siteid, listid, partition), createdate, emailcrypt)
 ) WITH CLUSTERING ORDER BY (createdate DESC, emailcrypt DESC);
 {code}
 I am trying to take advantage of the new Tuple support to issue a query to 
 request multiple rows in a single wide row by (createdate,emailcrypt) pair.  
 I declare a new TupleType that covers the clustering columns and then issue 
 an IN predicate against a list of these values:
 {code}
 private static final TupleType dateEmailTupleType = 
 TupleType.of(DataType.timestamp(), DataType.blob());
 ...
 List<TupleValue> partitionKeys = new ArrayList<TupleValue>(recipKeys.size());
 ...
 BoundStatement boundStatement = new BoundStatement(preparedStatement);
 boundStatement = boundStatement.bind(siteID, partition, listID);
 boundStatement.setList(3, partitionKeys);
 {code}
 When I issue a SELECT against this table, the server fails apparently trying 
 to break apart the list values:
 {code}
 DEBUG [SharedPool-Worker-1] 2014-10-05 14:20:15,312 Message.java:420 - 
 Received: PREPARE SELECT emailCrypt, emailAddr, removeDate, removeImportID, 
 properties FROM sr WHERE siteID = ? AND partition = ? AND listID = ? AND ( 
 createDate, emailCrypt ) IN ? ;, v=2
 DEBUG [SharedPool-Worker-1] 2014-10-05 14:20:15,323 Tracing.java:157 - 
 request complete
 DEBUG [SharedPool-Worker-1] 2014-10-05 14:20:15,323 Message.java:433 - 
 Responding: RESULT PREPARED a18ff9151e8bd3b13b48a0ba56ecb784 
 [siteid(testdb_1412536748414, sr), 
 org.apache.cassandra.db.marshal.UUIDType][partition(testdb_1412536748414, 
 sr), org.apache.cassandra.db.marshal.Int32Type][listid(testdb_1412536748414, 
 sr), 
 org.apache.cassandra.db.marshal.LongType][in(createdate,emailcrypt)(testdb_1412536748414,
  sr), 
 org.apache.cassandra.db.marshal.ListType(org.apache.cassandra.db.marshal.TupleType(org.apache.cassandra.db.marshal.ReversedType(org.apache.cassandra.db.marshal.TimestampType),org.apache.cassandra.db.marshal.ReversedType(org.apache.cassandra.db.marshal.BytesType)))]
  (resultMetadata=[emailcrypt(testdb_1412536748414, sr), 
 org.apache.cassandra.db.marshal.ReversedType(org.apache.cassandra.db.marshal.BytesType)][emailaddr(testdb_1412536748414,
  sr), 
 org.apache.cassandra.db.marshal.UTF8Type][removedate(testdb_1412536748414, 
 sr), 
 org.apache.cassandra.db.marshal.TimestampType][removeimportid(testdb_1412536748414,
  sr), 
 org.apache.cassandra.db.marshal.LongType][properties(testdb_1412536748414, 
 sr), org.apache.cassandra.db.marshal.UTF8Type]), v=2
 DEBUG [SharedPool-Worker-1] 2014-10-05 14:20:15,363 Message.java:420 - 
 Received: EXECUTE a18ff9151e8bd3b13b48a0ba56ecb784 with 4 values at 
 consistency QUORUM, v=2
 DEBUG [SharedPool-Worker-2] 2014-10-05 14:20:15,380 Message.java:420 - 
 Received: EXECUTE a18ff9151e8bd3b13b48a0ba56ecb784 with 4 values at 
 consistency QUORUM, v=2
 DEBUG [SharedPool-Worker-5] 2014-10-05 14:20:15,402 Message.java:420 - 
 Received: EXECUTE a18ff9151e8bd3b13b48a0ba56ecb784 with 4 values at 
 consistency QUORUM, v=2
 ERROR [SharedPool-Worker-5] 2014-10-05 14:20:16,125 ErrorMessage.java:218 - 
 Unexpected exception during request
 java.lang.IllegalArgumentException: null
   at java.nio.Buffer.limit(Unknown Source) ~[na:1.7.0_25]
   at org.apache.cassandra.utils.ByteBufferUtil.readBytes(ByteBufferUtil.java:539) ~[apache-cassandra-2.1.0.jar:2.1.0]
   at org.apache.cassandra.serializers.CollectionSerializer.readValue(CollectionSerializer.java:122) ~[apache-cassandra-2.1.0.jar:2.1.0]
   at org.apache.cassandra.serializers.ListSerializer.deserializeForNativeProtocol(ListSerializer.java:87) ~[apache-cassandra-2.1.0.jar:2.1.0]
   at org.apache.cassandra.serializers.ListSerializer.deserializeForNativeProtocol(ListSerializer.java:27)

[jira] [Commented] (CASSANDRA-8076) Expose an mbean method to poll for repair job status

2014-10-15 Thread Mike Bulman (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8076?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14173276#comment-14173276
 ] 

Mike Bulman commented on CASSANDRA-8076:


I think that would work just fine. If we're running a previously calculated 
subrange, all we care about is when that subrange's repair has finished (or 
has finished being attempted), so whether C* divides it up under the hood 
doesn't matter.

[~philip.doctor] ?

 Expose an mbean method to poll for repair job status
 

 Key: CASSANDRA-8076
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8076
 Project: Cassandra
  Issue Type: Improvement
Reporter: Philip S Doctor

 Given the int reply-id from forceRepairAsync, allow a client to request the 
 status of this ID via jmx.





[jira] [Commented] (CASSANDRA-6602) Compaction improvements to optimize time series data

2014-10-15 Thread Marcus Eriksson (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6602?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14173399#comment-14173399
 ] 

Marcus Eriksson commented on CASSANDRA-6602:


how about:
timestamp_resolution = 'MICROSECONDS' (and use TimeUnit.valueOf(...) to convert)
base_time_seconds = 3600
max_sstable_age_days = 365
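
The `TimeUnit.valueOf(...)` conversion suggested above could look like the
following minimal sketch (the `toMillis` helper and class name are
illustrative, not the actual patch code):

```java
import java.util.concurrent.TimeUnit;

public class TimestampResolution {
    // Convert a raw cell timestamp to milliseconds using the configured
    // timestamp_resolution option, e.g. "MICROSECONDS" or "MILLISECONDS".
    static long toMillis(long rawTimestamp, String resolution) {
        TimeUnit unit = TimeUnit.valueOf(resolution);
        return unit.toMillis(rawTimestamp);
    }

    public static void main(String[] args) {
        // 1,000,000 microseconds is 1,000 milliseconds
        System.out.println(toMillis(1000000L, "MICROSECONDS")); // prints 1000
    }
}
```

An invalid option value would surface as an `IllegalArgumentException` from
`TimeUnit.valueOf`, which gives option validation for free.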


 Compaction improvements to optimize time series data
 

 Key: CASSANDRA-6602
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6602
 Project: Cassandra
  Issue Type: New Feature
  Components: Core
Reporter: Tupshin Harper
Assignee: Björn Hegerfors
  Labels: compaction, performance
 Fix For: 2.0.11

 Attachments: 1 week.txt, 8 weeks.txt, STCS 16 hours.txt, 
 TimestampViewer.java, 
 cassandra-2.0-CASSANDRA-6602-DateTieredCompactionStrategy.txt, 
 cassandra-2.0-CASSANDRA-6602-DateTieredCompactionStrategy_v2.txt, 
 cassandra-2.0-CASSANDRA-6602-DateTieredCompactionStrategy_v3.txt


 There are some unique characteristics of many/most time series use cases that 
 both pose challenges and provide unique opportunities for optimization.
 One of the major challenges is in compaction. The existing compaction 
 strategies will tend to re-compact data on disk at least a few times over the 
 lifespan of each data point, greatly increasing the cpu and IO costs of that 
 write.
 Compaction exists to
 1) ensure that there aren't too many files on disk
 2) ensure that data that should be contiguous (part of the same partition) is 
 laid out contiguously
 3) delete data due to TTLs or tombstones
 The special characteristics of time series data allow us to optimize away all 
 three.
 Time series data
 1) tends to be delivered in time order, with relatively constrained exceptions
 2) often has a pre-determined and fixed expiration date
 3) never gets deleted prior to TTL
 4) has relatively predictable ingestion rates
 Note that I filed CASSANDRA-5561 and this ticket potentially replaces or 
 lowers the need for it. In that ticket, jbellis reasonably asks how that 
 compaction strategy is better than simply disabling compaction.
 Taking that to heart, here is a compaction-strategy-less approach that could 
 be extremely efficient for time-series use cases that follow the above 
 pattern.
 (For context, I'm thinking of an example use case involving lots of streams 
 of time-series data with a 5GB per day ingestion rate, and a 1000 day 
 retention with TTL, resulting in an eventual steady state of 5TB per node)
 1) You have an extremely large memtable (preferably off heap, if/when doable) 
 for the table, and that memtable is sized to be able to hold a lengthy window 
 of time. A typical period might be one day. At the end of that period, you 
 flush the contents of the memtable to an sstable and move to the next one. 
 This is basically identical to current behaviour, but with thresholds 
 adjusted so that you can ensure flushing at predictable intervals. (Open 
 question is whether predictable intervals is actually necessary, or whether 
 just waiting until the huge memtable is nearly full is sufficient)
 2) Combine the behaviour with CASSANDRA-5228 so that sstables will be 
 efficiently dropped once all of their columns have expired. (As another side note, it 
 might be valuable to have a modified version of CASSANDRA-3974 that doesn't 
 bother storing per-column TTL since it is required that all columns have the 
 same TTL)
 3) Be able to mark column families as read/write only (no explicit deletes), 
 so no tombstones.
 4) Optionally add back an additional type of delete that would delete all 
 data earlier than a particular timestamp, resulting in immediate dropping of 
 obsoleted sstables.
 The result is that for in-order delivered data, Every cell will be laid out 
 optimally on disk on the first pass, and over the course of 1000 days and 5TB 
 of data, there will only be 1000 5GB sstables, so the number of filehandles 
 will be reasonable.
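
The time-window grouping described above (each flush window yielding one
sstable, compacted only with sstables from the same window) can be sketched
roughly as follows. This is an illustrative sketch, not the ticket's actual
implementation; class and method names are hypothetical:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import java.util.Map;
import java.util.TreeMap;

public class TimeWindowBucketer {
    // Round a timestamp down to the start of its fixed-size window.
    static long windowStart(long timestampMillis, long windowMillis) {
        return (timestampMillis / windowMillis) * windowMillis;
    }

    // Group sstables (represented here by their max cell timestamp) into
    // time windows; only sstables sharing a window would compact together.
    static Map<Long, List<Long>> bucket(List<Long> sstableMaxTimestamps,
                                        long windowMillis) {
        Map<Long, List<Long>> buckets = new TreeMap<>();
        for (long ts : sstableMaxTimestamps) {
            buckets.computeIfAbsent(windowStart(ts, windowMillis),
                                    k -> new ArrayList<>()).add(ts);
        }
        return buckets;
    }

    public static void main(String[] args) {
        long day = 24L * 3600 * 1000;
        // Four sstables spanning three distinct day windows
        List<Long> ts = Arrays.asList(100L, day + 5, day + 7, 2 * day + 1);
        System.out.println(bucket(ts, day).size()); // prints 3
    }
}
```

With in-order delivery each window holds exactly one sstable, matching the
1000-sstable steady state described above for a 1000-day retention.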
 For exceptions (out-of-order delivery), most cases will be caught by the 
 extended (24 hour+) memtable flush times and merged correctly automatically. 
 For those that were slightly askew at flush time, or were delivered so far 
 out of order that they go in the wrong sstable, there is relatively low 
 overhead to reading from two sstables for a time slice, instead of one, and 
 that overhead would be incurred relatively rarely unless out-of-order 
 delivery was the common case, in which case, this strategy should not be used.
 Another possible optimization to address out-of-order would be to maintain 
 more than one time-centric memtable in memory at a time (e.g. two 12-hour 
 ones), and then you always insert into whichever one of the two owns the 
 appropriate range of time. By delaying flushing the ahead