[jira] [Comment Edited] (CASSANDRA-13259) Use platform specific X.509 default algorithm

2017-03-10 Thread Jason Brown (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13259?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15905828#comment-15905828
 ] 

Jason Brown edited comment on CASSANDRA-13259 at 3/11/17 2:49 AM:
--

wrt {{store_type}}, can Java 8 correctly figure out the difference between a 
PKCS12 and a JKS store? Further, what if somebody went bananas and used a JCEKS 
(I'm not totally sure that case applies to TLS)? I agree with you that one 
declared {{store_type}} is not correct for all situations (it has to cover both 
the key and trust stores), but that leads us logically to separate 
{{store_type}} config options for the keystore and the truststore. The 
{{javax.net.ssl.*}} properties do allow differentiating the store types, but 
see the next paragraph.
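
For illustration, a minimal Java sketch of why the declared type matters on Java 8 (the separate {{keystore_type}}/{{truststore_type}} option names below are hypothetical, not existing config): {{KeyStore.getInstance()}} does not sniff the file format, so loading a PKCS12 file as "JKS" fails at {{load()}}.
{code}
// Minimal sketch: on Java 8 the declared store type must match the file format.
// Separate "keystore_type" / "truststore_type" options are hypothetical here,
// mirroring the per-store configuration discussed above.
import java.io.FileInputStream;
import java.io.InputStream;
import java.security.KeyStore;

public class StoreTypeSketch
{
    static KeyStore load(String path, String storeType, char[] password) throws Exception
    {
        KeyStore ks = KeyStore.getInstance(storeType); // e.g. "JKS", "PKCS12", "JCEKS"
        try (InputStream in = new FileInputStream(path))
        {
            ks.load(in, password); // throws IOException if the format does not match
        }
        return ks;
    }

    public static void main(String[] args) throws Exception
    {
        KeyStore keystore   = load("conf/.keystore",   "JKS",    "cassandra".toCharArray());
        KeyStore truststore = load("conf/.truststore", "PKCS12", "cassandra".toCharArray());
        System.out.println(keystore.size() + " / " + truststore.size());
    }
}
{code}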

wrt JVM-based properties ({{javax.net.ssl.*}}), we currently allow users to 
have a different configuration for client-server and internode (peer-to-peer) 
communications. By removing those options in favor of the JVM-based properties, 
operators who previously had separate configs would be forced to use the same 
config for both, and I'm not sure how big of a breakage that is (in terms of 
the actual number of operators/clusters affected).

Also, I spoke with one of the netty developers, and they ignore the 
{{javax.net.ssl.*}} properties. Thus I don't think the JVM-based properties are 
the way to go.

UPDATE: We could *still* use the {{javax.net.ssl.*}} properties, but we would 
need to plumb them through ourselves to netty. So perhaps there's an advantage 
for the operator (using the properties is "consistent" with JVM conventions), 
but we incur a larger cost in supporting those options and correctly 
translating them into netty-land.
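
To make the "plumb them through ourselves" point concrete, a rough sketch (assuming netty 4's {{SslContextBuilder}}; not actual Cassandra code) of reading the standard {{javax.net.ssl.*}} properties and handing the resulting factories to netty:
{code}
// Rough sketch only: netty ignores javax.net.ssl.*, so we would have to read
// the standard JSSE properties ourselves and feed the factories to netty.
import java.io.FileInputStream;
import java.io.InputStream;
import java.security.KeyStore;
import javax.net.ssl.KeyManagerFactory;
import javax.net.ssl.TrustManagerFactory;
import io.netty.handler.ssl.SslContext;
import io.netty.handler.ssl.SslContextBuilder;

public final class JsseToNettySketch
{
    public static SslContext buildServerContext() throws Exception
    {
        char[] ksPassword = System.getProperty("javax.net.ssl.keyStorePassword", "").toCharArray();
        KeyStore ks = loadStore(System.getProperty("javax.net.ssl.keyStore"),
                                System.getProperty("javax.net.ssl.keyStoreType", KeyStore.getDefaultType()),
                                ksPassword);
        KeyManagerFactory kmf = KeyManagerFactory.getInstance(KeyManagerFactory.getDefaultAlgorithm());
        kmf.init(ks, ksPassword);

        KeyStore ts = loadStore(System.getProperty("javax.net.ssl.trustStore"),
                                System.getProperty("javax.net.ssl.trustStoreType", KeyStore.getDefaultType()),
                                System.getProperty("javax.net.ssl.trustStorePassword", "").toCharArray());
        TrustManagerFactory tmf = TrustManagerFactory.getInstance(TrustManagerFactory.getDefaultAlgorithm());
        tmf.init(ts);

        return SslContextBuilder.forServer(kmf).trustManager(tmf).build();
    }

    private static KeyStore loadStore(String path, String type, char[] password) throws Exception
    {
        KeyStore store = KeyStore.getInstance(type);
        try (InputStream in = new FileInputStream(path))
        {
            store.load(in, password);
        }
        return store;
    }
}
{code}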



was (Author: jasobrown):
wrt {{store_type}}, can Java 8 correctly figure out the difference between a 
PKCS12 and a JKS store? Further, what if somebody went bananas and used a JCEKS 
(I'm not totally sure that case applies to TLS)? I agree with you that one 
declared {{store_type}} is not correct for all situations (it has to cover both 
the key and trust stores), but that leads us logically to separate 
{{store_type}} config options for the keystore and the truststore. The 
{{javax.net.ssl.*}} properties do allow differentiating the store types, but 
see the next paragraph.

wrt JVM-based properties ({{javax.net.ssl.*}}), we currently allow users to 
have a different configuration for client-server and internode (peer-to-peer) 
communications. By removing those options in favor of the JVM-based properties, 
operators who previously had separate configs would be forced to use the same 
config for both, and I'm not sure how big of a breakage that is (in terms of 
the actual number of operators/clusters affected).

Also, I spoke with one of the netty developers, and they ignore the 
{{javax.net.ssl.*}} properties. Thus I don't think the JVM-based properties are 
the way to go.


> Use platform specific X.509 default algorithm
> -
>
> Key: CASSANDRA-13259
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13259
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Configuration
>Reporter: Stefan Podkowinski
>Assignee: Stefan Podkowinski
>Priority: Minor
> Fix For: 4.x
>
>
> We should replace the hardcoded "SunX509" default algorithm and use the JRE 
> default instead. This implementation will currently not work on less popular 
> platforms (e.g. IBM) and won't get any further updates.
> See also:
> https://bugs.openjdk.java.net/browse/JDK-8169745



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (CASSANDRA-13308) Hint files not being deleted on nodetool decommission

2017-03-10 Thread Arijit (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13308?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15906001#comment-15906001
 ] 

Arijit commented on CASSANDRA-13308:


My workaround for now is to delete the hint files for a node before starting 
Cassandra and running "nodetool decommission" on it (since the decommission is 
otherwise taking quite long). Does that sound legitimate?

> Hint files not being deleted on nodetool decommission
> -
>
> Key: CASSANDRA-13308
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13308
> Project: Cassandra
>  Issue Type: Bug
>  Components: Streaming and Messaging
> Environment: Using Cassandra version 3.0.9
>Reporter: Arijit
> Attachments: 28207.stack, logs, logs_decommissioned_node
>
>
> How to reproduce the issue I'm seeing:
> Shut down Cassandra on one node of the cluster and wait until we accumulate a 
> ton of hints. Start Cassandra on the node and immediately run "nodetool 
> decommission" on it.
> The node streams its replicas and marks itself as DECOMMISSIONED, but other 
> nodes do not seem to see this message. "nodetool status" shows the 
> decommissioned node in state "UL" on all other nodes (it is also present in 
> system.peers), and Cassandra logs show that gossip tasks on nodes are not 
> proceeding (number of pending tasks keeps increasing). Jstack suggests that a 
> gossip task is blocked on hints dispatch (I can provide traces if this is not 
> obvious). Because the cluster is large and there are a lot of hints, this is 
> taking a while. 
> On inspecting "/var/lib/cassandra/hints" on the nodes, I see a bunch of hint 
> files for the decommissioned node. Documentation seems to suggest that these 
> hints should be deleted during "nodetool decommission", but it does not seem 
> to be the case here. This is the bug being reported.
> To recover from this scenario, if I manually delete hint files on the nodes, 
> the hints dispatcher threads throw a bunch of exceptions and the 
> decommissioned node is now in state "DL" (perhaps it missed some gossip 
> messages?). The node is still in my "system.peers" table
> Restarting Cassandra on all nodes after this step does not fix the issue (the 
> node remains in the peers table). In fact, after this point the 
> decommissioned node is in state "DN"



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (CASSANDRA-13323) IncomingTcpConnection closed due to one bad message

2017-03-10 Thread Simon Zhou (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13323?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Simon Zhou updated CASSANDRA-13323:
---
Description: 
We got this exception:
{code}
WARN  [MessagingService-Incoming-/] 2017-02-14 17:33:33,177 
IncomingTcpConnection.java:101 - UnknownColumnFamilyException reading from 
socket; closing
org.apache.cassandra.db.UnknownColumnFamilyException: Couldn't find table for 
cfId 2a3ab630-df74-11e6-9f81-b56251e1559e. If a table was just created, this is 
likely due to the schema not being fully propagated.  Please wait for schema 
agreement on table creation.
at 
org.apache.cassandra.config.CFMetaData$Serializer.deserialize(CFMetaData.java:1336)
 ~[apache-cassandra-3.0.10.jar:3.0.10]
at 
org.apache.cassandra.db.partitions.PartitionUpdate$PartitionUpdateSerializer.deserialize30(PartitionUpdate.java:660)
 ~[apache-cassandra-3.0.10.jar:3.0.10]
at 
org.apache.cassandra.db.partitions.PartitionUpdate$PartitionUpdateSerializer.deserialize(PartitionUpdate.java:635)
 ~[apache-cassandra-3.0.10.jar:3.0.10]
at 
org.apache.cassandra.service.paxos.Commit$CommitSerializer.deserialize(Commit.java:131)
 ~[apache-cassandra-3.0.10.jar:3.0.10]
at 
org.apache.cassandra.service.paxos.Commit$CommitSerializer.deserialize(Commit.java:113)
 ~[apache-cassandra-3.0.10.jar:3.0.10]
at org.apache.cassandra.net.MessageIn.read(MessageIn.java:98) 
~[apache-cassandra-3.0.10.jar:3.0.10]
at 
org.apache.cassandra.net.IncomingTcpConnection.receiveMessage(IncomingTcpConnection.java:201)
 ~[apache-cassandra-3.0.10.jar:3.0.10]
at 
org.apache.cassandra.net.IncomingTcpConnection.receiveMessages(IncomingTcpConnection.java:178)
 ~[apache-cassandra-3.0.10.jar:3.0.10]
at 
org.apache.cassandra.net.IncomingTcpConnection.run(IncomingTcpConnection.java:92)
 ~[apache-cassandra-3.0.10.jar:3.0.10]
{code}

Also, we saw this log on another host, indicating it needs to re-connect:
{code}
INFO  [HANDSHAKE-/] 2017-02-21 13:37:50,216 OutboundTcpConnection.java:515 
- Handshaking version with /
{code}

The reason is that the node was receiving hinted data for a dropped table. This 
may happen with other messages as well. On the Cassandra side, 
IncomingTcpConnection shouldn't close on just one bad message, even though it 
will be restarted shortly afterwards by the SocketThread in MessagingService.
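
A rough sketch of the shape of change being suggested (simplified, hypothetical names; the patch attached to this ticket is authoritative): handle the per-message failure inside the receive loop and skip the offending payload instead of tearing down the connection.
{code}
// Illustrative only (hypothetical, simplified names -- not the actual patch):
// a failure while deserializing one message is logged/skipped, so the
// connection stays open for the messages that follow.
import java.io.DataInputStream;
import java.io.IOException;

final class ResilientReceiveLoop
{
    static final class UnknownTableException extends IOException
    {
        final int remainingPayloadBytes;
        UnknownTableException(int remainingPayloadBytes)
        {
            super("unknown table");
            this.remainingPayloadBytes = remainingPayloadBytes;
        }
    }

    void receiveMessages(DataInputStream in) throws IOException
    {
        while (true)
        {
            try
            {
                receiveMessage(in); // may fail for a single bad message
            }
            catch (UnknownTableException e)
            {
                // e.g. a hint for a table that has since been dropped: skip the
                // rest of this message so the stream stays aligned, then continue.
                in.skipBytes(e.remainingPayloadBytes);
            }
        }
    }

    void receiveMessage(DataInputStream in) throws IOException
    {
        // placeholder for the MessageIn.read(...) style deserialization
    }
}
{code}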

  was:
We got this exception:
{code}
WARN  [MessagingService-Incoming-/] 2017-02-14 17:33:33,177 
IncomingTcpConnection.java:101 - UnknownColumnFamilyException reading from 
socket; closing
org.apache.cassandra.db.UnknownColumnFamilyException: Couldn't find table for 
cfId 2a3ab630-df74-11e6-9f81-b56251e1559e. If a table was just created, this is 
likely due to the schema not being fully propagated.  Please wait for schema 
agreement on table creation.
at 
org.apache.cassandra.config.CFMetaData$Serializer.deserialize(CFMetaData.java:1336)
 ~[apache-cassandra-3.0.10.jar:3.0.10]
at 
org.apache.cassandra.db.partitions.PartitionUpdate$PartitionUpdateSerializer.deserialize30(PartitionUpdate.java:660)
 ~[apache-cassandra-3.0.10.jar:3.0.10]
at 
org.apache.cassandra.db.partitions.PartitionUpdate$PartitionUpdateSerializer.deserialize(PartitionUpdate.java:635)
 ~[apache-cassandra-3.0.10.jar:3.0.10]
at 
org.apache.cassandra.service.paxos.Commit$CommitSerializer.deserialize(Commit.java:131)
 ~[apache-cassandra-3.0.10.jar:3.0.10]
at 
org.apache.cassandra.service.paxos.Commit$CommitSerializer.deserialize(Commit.java:113)
 ~[apache-cassandra-3.0.10.jar:3.0.10]
at org.apache.cassandra.net.MessageIn.read(MessageIn.java:98) 
~[apache-cassandra-3.0.10.jar:3.0.10]
at 
org.apache.cassandra.net.IncomingTcpConnection.receiveMessage(IncomingTcpConnection.java:201)
 ~[apache-cassandra-3.0.10.jar:3.0.10]
at 
org.apache.cassandra.net.IncomingTcpConnection.receiveMessages(IncomingTcpConnection.java:178)
 ~[apache-cassandra-3.0.10.jar:3.0.10]
at 
org.apache.cassandra.net.IncomingTcpConnection.run(IncomingTcpConnection.java:92)
 ~[apache-cassandra-3.0.10.jar:3.0.10]
{code}

Also, we saw this log on another host, indicating it needs to re-connect:
{code}
INFO  [HANDSHAKE-/] 2017-02-21 13:37:50,216 OutboundTcpConnection.java:515 
- Handshaking version with /
{code}

The reason is that another node was sending hinted data to this node. However, 
the hinted data was for a table that had been dropped. This may happen with 
other messages as well. On the Cassandra side, IncomingTcpConnection shouldn't 
close on just one bad message, even though it will be restarted shortly 
afterwards by the SocketThread in MessagingService.


> IncomingTcpConnection closed due to one bad message
> ---
>
> Key: CASSANDRA-13323
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13323
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Simon Zhou
>  

[jira] [Updated] (CASSANDRA-13323) IncomingTcpConnection closed due to one bad message

2017-03-10 Thread Simon Zhou (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13323?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Simon Zhou updated CASSANDRA-13323:
---
Status: Patch Available  (was: Open)

> IncomingTcpConnection closed due to one bad message
> ---
>
> Key: CASSANDRA-13323
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13323
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Simon Zhou
>Assignee: Simon Zhou
> Fix For: 3.0.13
>
> Attachments: CASSANDRA-13323-v1.patch
>
>
> We got this exception:
> {code}
> WARN  [MessagingService-Incoming-/] 2017-02-14 17:33:33,177 
> IncomingTcpConnection.java:101 - UnknownColumnFamilyException reading from 
> socket; closing
> org.apache.cassandra.db.UnknownColumnFamilyException: Couldn't find table for 
> cfId 2a3ab630-df74-11e6-9f81-b56251e1559e. If a table was just created, this 
> is likely due to the schema not being fully propagated.  Please wait for 
> schema agreement on table creation.
> at 
> org.apache.cassandra.config.CFMetaData$Serializer.deserialize(CFMetaData.java:1336)
>  ~[apache-cassandra-3.0.10.jar:3.0.10]
> at 
> org.apache.cassandra.db.partitions.PartitionUpdate$PartitionUpdateSerializer.deserialize30(PartitionUpdate.java:660)
>  ~[apache-cassandra-3.0.10.jar:3.0.10]
> at 
> org.apache.cassandra.db.partitions.PartitionUpdate$PartitionUpdateSerializer.deserialize(PartitionUpdate.java:635)
>  ~[apache-cassandra-3.0.10.jar:3.0.10]
> at 
> org.apache.cassandra.service.paxos.Commit$CommitSerializer.deserialize(Commit.java:131)
>  ~[apache-cassandra-3.0.10.jar:3.0.10]
> at 
> org.apache.cassandra.service.paxos.Commit$CommitSerializer.deserialize(Commit.java:113)
>  ~[apache-cassandra-3.0.10.jar:3.0.10]
> at org.apache.cassandra.net.MessageIn.read(MessageIn.java:98) 
> ~[apache-cassandra-3.0.10.jar:3.0.10]
> at 
> org.apache.cassandra.net.IncomingTcpConnection.receiveMessage(IncomingTcpConnection.java:201)
>  ~[apache-cassandra-3.0.10.jar:3.0.10]
> at 
> org.apache.cassandra.net.IncomingTcpConnection.receiveMessages(IncomingTcpConnection.java:178)
>  ~[apache-cassandra-3.0.10.jar:3.0.10]
> at 
> org.apache.cassandra.net.IncomingTcpConnection.run(IncomingTcpConnection.java:92)
>  ~[apache-cassandra-3.0.10.jar:3.0.10]
> {code}
> Also, we saw this log on another host, indicating it needs to re-connect:
> {code}
> INFO  [HANDSHAKE-/] 2017-02-21 13:37:50,216 
> OutboundTcpConnection.java:515 - Handshaking version with /
> {code}
> The reason is that another node was sending hinted data to this node. However, 
> the hinted data was for a table that had been dropped. This may happen with 
> other messages as well. On the Cassandra side, IncomingTcpConnection shouldn't 
> close on just one bad message, even though it will be restarted shortly 
> afterwards by the SocketThread in MessagingService.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (CASSANDRA-13323) IncomingTcpConnection closed due to one bad message

2017-03-10 Thread Simon Zhou (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13323?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Simon Zhou updated CASSANDRA-13323:
---
Attachment: CASSANDRA-13323-v1.patch

> IncomingTcpConnection closed due to one bad message
> ---
>
> Key: CASSANDRA-13323
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13323
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Simon Zhou
>Assignee: Simon Zhou
> Fix For: 3.0.13
>
> Attachments: CASSANDRA-13323-v1.patch
>
>
> We got this exception:
> {code}
> WARN  [MessagingService-Incoming-/] 2017-02-14 17:33:33,177 
> IncomingTcpConnection.java:101 - UnknownColumnFamilyException reading from 
> socket; closing
> org.apache.cassandra.db.UnknownColumnFamilyException: Couldn't find table for 
> cfId 2a3ab630-df74-11e6-9f81-b56251e1559e. If a table was just created, this 
> is likely due to the schema not being fully propagated.  Please wait for 
> schema agreement on table creation.
> at 
> org.apache.cassandra.config.CFMetaData$Serializer.deserialize(CFMetaData.java:1336)
>  ~[apache-cassandra-3.0.10.jar:3.0.10]
> at 
> org.apache.cassandra.db.partitions.PartitionUpdate$PartitionUpdateSerializer.deserialize30(PartitionUpdate.java:660)
>  ~[apache-cassandra-3.0.10.jar:3.0.10]
> at 
> org.apache.cassandra.db.partitions.PartitionUpdate$PartitionUpdateSerializer.deserialize(PartitionUpdate.java:635)
>  ~[apache-cassandra-3.0.10.jar:3.0.10]
> at 
> org.apache.cassandra.service.paxos.Commit$CommitSerializer.deserialize(Commit.java:131)
>  ~[apache-cassandra-3.0.10.jar:3.0.10]
> at 
> org.apache.cassandra.service.paxos.Commit$CommitSerializer.deserialize(Commit.java:113)
>  ~[apache-cassandra-3.0.10.jar:3.0.10]
> at org.apache.cassandra.net.MessageIn.read(MessageIn.java:98) 
> ~[apache-cassandra-3.0.10.jar:3.0.10]
> at 
> org.apache.cassandra.net.IncomingTcpConnection.receiveMessage(IncomingTcpConnection.java:201)
>  ~[apache-cassandra-3.0.10.jar:3.0.10]
> at 
> org.apache.cassandra.net.IncomingTcpConnection.receiveMessages(IncomingTcpConnection.java:178)
>  ~[apache-cassandra-3.0.10.jar:3.0.10]
> at 
> org.apache.cassandra.net.IncomingTcpConnection.run(IncomingTcpConnection.java:92)
>  ~[apache-cassandra-3.0.10.jar:3.0.10]
> {code}
> Also, we saw this log on another host, indicating it needs to re-connect:
> {code}
> INFO  [HANDSHAKE-/] 2017-02-21 13:37:50,216 
> OutboundTcpConnection.java:515 - Handshaking version with /
> {code}
> The reason is that another node was sending hinted data to this node. However, 
> the hinted data was for a table that had been dropped. This may happen with 
> other messages as well. On the Cassandra side, IncomingTcpConnection shouldn't 
> close on just one bad message, even though it will be restarted shortly 
> afterwards by the SocketThread in MessagingService.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (CASSANDRA-13323) IncomingTcpConnection closed due to one bad message

2017-03-10 Thread Simon Zhou (JIRA)
Simon Zhou created CASSANDRA-13323:
--

 Summary: IncomingTcpConnection closed due to one bad message
 Key: CASSANDRA-13323
 URL: https://issues.apache.org/jira/browse/CASSANDRA-13323
 Project: Cassandra
  Issue Type: Bug
Reporter: Simon Zhou
Assignee: Simon Zhou
 Fix For: 3.0.13


We got this exception:
{code}
WARN  [MessagingService-Incoming-/] 2017-02-14 17:33:33,177 
IncomingTcpConnection.java:101 - UnknownColumnFamilyException reading from 
socket; closing
org.apache.cassandra.db.UnknownColumnFamilyException: Couldn't find table for 
cfId 2a3ab630-df74-11e6-9f81-b56251e1559e. If a table was just created, this is 
likely due to the schema not being fully propagated.  Please wait for schema 
agreement on table creation.
at 
org.apache.cassandra.config.CFMetaData$Serializer.deserialize(CFMetaData.java:1336)
 ~[apache-cassandra-3.0.10.jar:3.0.10]
at 
org.apache.cassandra.db.partitions.PartitionUpdate$PartitionUpdateSerializer.deserialize30(PartitionUpdate.java:660)
 ~[apache-cassandra-3.0.10.jar:3.0.10]
at 
org.apache.cassandra.db.partitions.PartitionUpdate$PartitionUpdateSerializer.deserialize(PartitionUpdate.java:635)
 ~[apache-cassandra-3.0.10.jar:3.0.10]
at 
org.apache.cassandra.service.paxos.Commit$CommitSerializer.deserialize(Commit.java:131)
 ~[apache-cassandra-3.0.10.jar:3.0.10]
at 
org.apache.cassandra.service.paxos.Commit$CommitSerializer.deserialize(Commit.java:113)
 ~[apache-cassandra-3.0.10.jar:3.0.10]
at org.apache.cassandra.net.MessageIn.read(MessageIn.java:98) 
~[apache-cassandra-3.0.10.jar:3.0.10]
at 
org.apache.cassandra.net.IncomingTcpConnection.receiveMessage(IncomingTcpConnection.java:201)
 ~[apache-cassandra-3.0.10.jar:3.0.10]
at 
org.apache.cassandra.net.IncomingTcpConnection.receiveMessages(IncomingTcpConnection.java:178)
 ~[apache-cassandra-3.0.10.jar:3.0.10]
at 
org.apache.cassandra.net.IncomingTcpConnection.run(IncomingTcpConnection.java:92)
 ~[apache-cassandra-3.0.10.jar:3.0.10]
{code}

Also, we saw this log on another host, indicating it needs to re-connect:
{code}
INFO  [HANDSHAKE-/] 2017-02-21 13:37:50,216 OutboundTcpConnection.java:515 
- Handshaking version with /
{code}

The reason is that another node was sending hinted data to this node. However, 
the hinted data was for a table that had been dropped. This may happen with 
other messages as well. On the Cassandra side, IncomingTcpConnection shouldn't 
close on just one bad message, even though it will be restarted shortly 
afterwards by the SocketThread in MessagingService.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (CASSANDRA-13314) Config file based SSL settings

2017-03-10 Thread Jason Brown (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13314?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15905833#comment-15905833
 ] 

Jason Brown commented on CASSANDRA-13314:
-

See my comments on CASSANDRA-13259, but TL;DR netty doesn't use the 
{{javax.net.ssl.*}} properties.

> Config file based SSL settings
> --
>
> Key: CASSANDRA-13314
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13314
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Configuration, Tools
>Reporter: Stefan Podkowinski
>Assignee: Stefan Podkowinski
>Priority: Minor
> Fix For: 4.x
>
>
> As a follow-up to CASSANDRA-13259, I'd like to continue discussing how we can 
> make SSL less awkward to use and further move SSL related code out of our 
> code base. Currently we construct our own SSLContext in SSLFactory based on 
> EncryptionOptions passed by the MessagingService or any individual tool where 
> we need to offer SSL support. This leads to a situation where the user not 
> only has to learn how to enable the correct settings in cassandra.yaml, but 
> these settings must also be reflected in each tool's own command line 
> options. As argued in CASSANDRA-13259, these settings could equally be handled 
> by setting the appropriate system and security properties 
> ([overview|http://docs.oracle.com/javase/8/docs/technotes/guides/security/jsse/JSSERefGuide.html#InstallationAndCustomization]), 
> and we should just point the user to the right files to do that (jvm.options 
> and java.security) and make sure that the daemon and all affected tools will 
> source them. 
> Since giving this a quick try on my WIP branch, I've noticed the following 
> issues in doing so:
> * Keystore passwords will show up in the process list 
> (-Djavax.net.ssl.keyStorePassword=..). We should keep the password setting in 
> cassandra.yaml and the CLIs and do a System.setProperty() if it has been 
> provided (a sketch follows after this list). 
> * It's only possible to configure settings for a single default 
> key-/truststore. Since we currently allow configuring both 
> ServerEncryptionOptions and ClientEncryptionOptions with different settings, 
> we'd have to make this a breaking change. I don't really see why you would 
> want to use different stores for node-to-node and node-to-client, but that 
> wouldn't be possible anymore. 
> * This would probably only make sense if we really remove the affected CLI 
> options, or we'll end up with just another way to configure this stuff. This 
> will break existing scripts and obsolete existing documentation.
> Any opinions?
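
As a rough illustration of the first bullet above (keeping the password out of the process list), a sketch assuming the passwords are read from cassandra.yaml or a CLI prompt and then exported to the standard JSSE properties programmatically:
{code}
// Sketch only: export passwords read from cassandra.yaml (or a CLI prompt) to
// the standard JSSE properties, so they never appear as -D flags in `ps` output.
public final class SslPasswordBootstrap
{
    public static void apply(String keystorePassword, String truststorePassword)
    {
        if (keystorePassword != null)
            System.setProperty("javax.net.ssl.keyStorePassword", keystorePassword);
        if (truststorePassword != null)
            System.setProperty("javax.net.ssl.trustStorePassword", truststorePassword);
    }
}
{code}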



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (CASSANDRA-13259) Use platform specific X.509 default algorithm

2017-03-10 Thread Jason Brown (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13259?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15905828#comment-15905828
 ] 

Jason Brown commented on CASSANDRA-13259:
-

wrt {{store_type}}, can Java 8 correctly figure out the difference between a 
PKCS12 and a JKS store? Further, what if somebody went bananas and used a JCEKS 
(I'm not totally sure that case applies to TLS)? I agree with you that one 
declared {{store_type}} is not correct for all situations (it has to cover both 
the key and trust stores), but that leads us logically to separate 
{{store_type}} config options for the keystore and the truststore. The 
{{javax.net.ssl.*}} properties do allow differentiating the store types, but 
see the next paragraph.

wrt JVM-based properties ({{javax.net.ssl.*}}), we currently allow users to 
have a different configuration for client-server and internode (peer-to-peer) 
communications. By removing those options in favor of the JVM-based properties, 
operators who previously had separate configs would be forced to use the same 
config for both, and I'm not sure how big of a breakage that is (in terms of 
the actual number of operators/clusters affected).

Also, I spoke with one of the netty developers, and they ignore the 
{{javax.net.ssl.*}} properties. Thus I don't think the JVM-based properties are 
the way to go.


> Use platform specific X.509 default algorithm
> -
>
> Key: CASSANDRA-13259
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13259
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Configuration
>Reporter: Stefan Podkowinski
>Assignee: Stefan Podkowinski
>Priority: Minor
> Fix For: 4.x
>
>
> We should replace the hardcoded "SunX509" default algorithm and use the JRE 
> default instead. This implementation will currently not work on less popular 
> platforms (e.g. IBM) and won't get any further updates.
> See also:
> https://bugs.openjdk.java.net/browse/JDK-8169745



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Assigned] (CASSANDRA-13320) upgradesstables fails after upgrading from 2.1.x to 3.0.11

2017-03-10 Thread Jeff Jirsa (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13320?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeff Jirsa reassigned CASSANDRA-13320:
--

Assignee: Zhongxiang Zheng

> upgradesstables fails after upgrading from 2.1.x to 3.0.11
> --
>
> Key: CASSANDRA-13320
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13320
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Zhongxiang Zheng
>Assignee: Zhongxiang Zheng
> Attachments: 13320.patch
>
>
> I tried to execute {{nodetool upgradesstables}} after upgrading cluster from 
> 2.1.16 to 3.0.11, but it fails when upgrading a table with 2i.
> This problem can be reproduced as follows.
> {code}
> $ ccm create test -v 2.1.16 -n 1 -s
> $ ccm node1 cqlsh  -e "CREATE KEYSPACE test WITH replication = 
> {'class':'SimpleStrategy', 'replication_factor':1}"
> $ ccm node1 cqlsh  -e "CREATE TABLE test.test(k1 text, k2 text, PRIMARY KEY( 
> k1 ));"
> $ ccm node1 cqlsh  -e "CREATE INDEX k2 ON test.test(k2);"
>  
> $ ccm node1 cqlsh  -e "INSERT INTO test.test (k1, k2 ) VALUES ( 'a', 'a') ;"
> $ ccm node1 cqlsh  -e "INSERT INTO test.test (k1, k2 ) VALUES ( 'a', 'b') ;"
>  
> $ ccm node1 nodetool flush
>  
> $ for i in `seq 1 `; do ccm node${i} stop; ccm node${i} setdir -v3.0.11;ccm 
> node${i} start; done
> $ ccm node1 nodetool upgradesstables test test
> Traceback (most recent call last):
>   File "/home/y/bin/ccm", line 86, in 
> cmd.run()
>   File "/home/y/lib/python2.7/site-packages/ccmlib/cmds/node_cmds.py", line 
> 267, in run
> stdout, stderr = self.node.nodetool(" ".join(self.args[1:]))
>   File "/home/y/lib/python2.7/site-packages/ccmlib/node.py", line 742, in 
> nodetool
> raise NodetoolError(" ".join(args), exit_status, stdout, stderr)
> ccmlib.node.NodetoolError: Nodetool command 
> '/home/zzheng/.ccm/repository/3.0.11/bin/nodetool -h localhost -p 7100 
> upgradesstables test test' failed; exit status: 2; stderr: WARN  06:29:08 
> Only 10476 MB free across all data volumes. Consider adding more capacity to 
> your cluster or removing obsolete snapshots
> error: null
> -- StackTrace --
> java.lang.AssertionError
>   at org.apache.cassandra.db.rows.Rows.collectStats(Rows.java:70)
>   at 
> org.apache.cassandra.io.sstable.format.big.BigTableWriter$StatsCollector.applyToRow(BigTableWriter.java:197)
>   at 
> org.apache.cassandra.db.transform.BaseRows.applyOne(BaseRows.java:116)
>   at org.apache.cassandra.db.transform.BaseRows.add(BaseRows.java:107)
>   at 
> org.apache.cassandra.db.transform.UnfilteredRows.add(UnfilteredRows.java:41)
>   at 
> org.apache.cassandra.db.transform.Transformation.add(Transformation.java:156)
>   at 
> org.apache.cassandra.db.transform.Transformation.apply(Transformation.java:122)
>   at 
> org.apache.cassandra.io.sstable.format.big.BigTableWriter.append(BigTableWriter.java:147)
>   at 
> org.apache.cassandra.io.sstable.SSTableRewriter.append(SSTableRewriter.java:125)
>   at 
> org.apache.cassandra.db.compaction.writers.DefaultCompactionWriter.realAppend(DefaultCompactionWriter.java:57)
>   at 
> org.apache.cassandra.db.compaction.writers.CompactionAwareWriter.append(CompactionAwareWriter.java:109)
>   at 
> org.apache.cassandra.db.compaction.CompactionTask.runMayThrow(CompactionTask.java:195)
>   at 
> org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28)
>   at 
> org.apache.cassandra.db.compaction.CompactionTask.executeInternal(CompactionTask.java:89)
>   at 
> org.apache.cassandra.db.compaction.AbstractCompactionTask.execute(AbstractCompactionTask.java:61)
>   at 
> org.apache.cassandra.db.compaction.CompactionManager$5.execute(CompactionManager.java:415)
>   at 
> org.apache.cassandra.db.compaction.CompactionManager$2.call(CompactionManager.java:307)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>   at 
> org.apache.cassandra.concurrent.NamedThreadFactory.lambda$threadLocalDeallocator$0(NamedThreadFactory.java:79)
>   at java.lang.Thread.run(Thread.java:745)
> {code}
> The result of dumping the 2i sstable is as follows.
> {code}
> [
> {"key": "a",
>  "cells": [["61",1488961273,1488961269822817,"d"]]},
> {"key": "b",
>  "cells": [["61","",1488961273015759]]}
> ]
> {code}
> This problem is caused by the tombstone row. When this row is processed in 
> {{LegacyLayout.java}}, it is treated as a row marker.
> 

[jira] [Updated] (CASSANDRA-13130) Strange result of several list updates in a single request

2017-03-10 Thread Benjamin Lerer (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13130?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Benjamin Lerer updated CASSANDRA-13130:
---
Resolution: Fixed
Status: Resolved  (was: Ready to Commit)

Committed into 2.2 at 5ef8a8b408d4c492f7f2ffbbbe6fce237140c7cb and merged into 
3.0, 3.11 and trunk

> Strange result of several list updates in a single request
> --
>
> Key: CASSANDRA-13130
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13130
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Mikhail Krupitskiy
>Assignee: Benjamin Lerer
>Priority: Trivial
> Fix For: 2.2.x, 3.0.x, 3.11.x, 4.x
>
>
> Let's assume that we have a row with the 'listColumn' column and value 
> \{1,2,3,4\}.
> For me it seems logical to expect that the following two pieces of code will 
> end up with the same result, but that isn't so.
> Code1:
> {code}
> UPDATE t SET listColumn[2] = 7, listColumn[2] = 8  WHERE id = 1;
> {code}
> Expected result: listColumn=\{1,2,8,4\} 
> Actual result: listColumn=\{1,2,7,8,4\}
> Code2:
> {code}
> UPDATE t SET listColumn[2] = 7  WHERE id = 1;
> UPDATE t SET listColumn[2] = 8  WHERE id = 1;
> {code}
> Expected result: listColumn=\{1,2,8,4\} 
> Actual result: listColumn=\{1,2,8,4\}
> So the question is why Code1 and Code2 give different results?
> Looks like Code1 should give the same result as Code2.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (CASSANDRA-13320) upgradesstables fails after upgrading from 2.1.x to 3.0.11

2017-03-10 Thread Benjamin Lerer (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13320?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Benjamin Lerer updated CASSANDRA-13320:
---
Reviewer: Benjamin Lerer

> upgradesstables fails after upgrading from 2.1.x to 3.0.11
> --
>
> Key: CASSANDRA-13320
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13320
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Zhongxiang Zheng
> Attachments: 13320.patch
>
>
> I tried to execute {{nodetool upgradesstables}} after upgrading cluster from 
> 2.1.16 to 3.0.11, but it fails when upgrading a table with 2i.
> This problem can be reproduced as follows.
> {code}
> $ ccm create test -v 2.1.16 -n 1 -s
> $ ccm node1 cqlsh  -e "CREATE KEYSPACE test WITH replication = 
> {'class':'SimpleStrategy', 'replication_factor':1}"
> $ ccm node1 cqlsh  -e "CREATE TABLE test.test(k1 text, k2 text, PRIMARY KEY( 
> k1 ));"
> $ ccm node1 cqlsh  -e "CREATE INDEX k2 ON test.test(k2);"
>  
> $ ccm node1 cqlsh  -e "INSERT INTO test.test (k1, k2 ) VALUES ( 'a', 'a') ;"
> $ ccm node1 cqlsh  -e "INSERT INTO test.test (k1, k2 ) VALUES ( 'a', 'b') ;"
>  
> $ ccm node1 nodetool flush
>  
> $ for i in `seq 1 `; do ccm node${i} stop; ccm node${i} setdir -v3.0.11;ccm 
> node${i} start; done
> $ ccm node1 nodetool upgradesstables test test
> Traceback (most recent call last):
>   File "/home/y/bin/ccm", line 86, in 
> cmd.run()
>   File "/home/y/lib/python2.7/site-packages/ccmlib/cmds/node_cmds.py", line 
> 267, in run
> stdout, stderr = self.node.nodetool(" ".join(self.args[1:]))
>   File "/home/y/lib/python2.7/site-packages/ccmlib/node.py", line 742, in 
> nodetool
> raise NodetoolError(" ".join(args), exit_status, stdout, stderr)
> ccmlib.node.NodetoolError: Nodetool command 
> '/home/zzheng/.ccm/repository/3.0.11/bin/nodetool -h localhost -p 7100 
> upgradesstables test test' failed; exit status: 2; stderr: WARN  06:29:08 
> Only 10476 MB free across all data volumes. Consider adding more capacity to 
> your cluster or removing obsolete snapshots
> error: null
> -- StackTrace --
> java.lang.AssertionError
>   at org.apache.cassandra.db.rows.Rows.collectStats(Rows.java:70)
>   at 
> org.apache.cassandra.io.sstable.format.big.BigTableWriter$StatsCollector.applyToRow(BigTableWriter.java:197)
>   at 
> org.apache.cassandra.db.transform.BaseRows.applyOne(BaseRows.java:116)
>   at org.apache.cassandra.db.transform.BaseRows.add(BaseRows.java:107)
>   at 
> org.apache.cassandra.db.transform.UnfilteredRows.add(UnfilteredRows.java:41)
>   at 
> org.apache.cassandra.db.transform.Transformation.add(Transformation.java:156)
>   at 
> org.apache.cassandra.db.transform.Transformation.apply(Transformation.java:122)
>   at 
> org.apache.cassandra.io.sstable.format.big.BigTableWriter.append(BigTableWriter.java:147)
>   at 
> org.apache.cassandra.io.sstable.SSTableRewriter.append(SSTableRewriter.java:125)
>   at 
> org.apache.cassandra.db.compaction.writers.DefaultCompactionWriter.realAppend(DefaultCompactionWriter.java:57)
>   at 
> org.apache.cassandra.db.compaction.writers.CompactionAwareWriter.append(CompactionAwareWriter.java:109)
>   at 
> org.apache.cassandra.db.compaction.CompactionTask.runMayThrow(CompactionTask.java:195)
>   at 
> org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28)
>   at 
> org.apache.cassandra.db.compaction.CompactionTask.executeInternal(CompactionTask.java:89)
>   at 
> org.apache.cassandra.db.compaction.AbstractCompactionTask.execute(AbstractCompactionTask.java:61)
>   at 
> org.apache.cassandra.db.compaction.CompactionManager$5.execute(CompactionManager.java:415)
>   at 
> org.apache.cassandra.db.compaction.CompactionManager$2.call(CompactionManager.java:307)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>   at 
> org.apache.cassandra.concurrent.NamedThreadFactory.lambda$threadLocalDeallocator$0(NamedThreadFactory.java:79)
>   at java.lang.Thread.run(Thread.java:745)
> {code}
> The result of dumping the 2i sstable is as follows.
> {code}
> [
> {"key": "a",
>  "cells": [["61",1488961273,1488961269822817,"d"]]},
> {"key": "b",
>  "cells": [["61","",1488961273015759]]}
> ]
> {code}
> This problem is caused by the tombstone row. When this row is processed in 
> {{LegacyLayout.java}}, it is treated as a row marker.
> 

[jira] [Commented] (CASSANDRA-13320) upgradesstables fails after upgrading from 2.1.x to 3.0.11

2017-03-10 Thread Benjamin Lerer (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13320?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15905651#comment-15905651
 ] 

Benjamin Lerer commented on CASSANDRA-13320:


I will try to look into that next week.

> upgradesstables fails after upgrading from 2.1.x to 3.0.11
> --
>
> Key: CASSANDRA-13320
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13320
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Zhongxiang Zheng
> Attachments: 13320.patch
>
>
> I tried to execute {{nodetool upgradesstables}} after upgrading cluster from 
> 2.1.16 to 3.0.11, but it fails when upgrading a table with 2i.
> This problem can be reproduced as follows.
> {code}
> $ ccm create test -v 2.1.16 -n 1 -s
> $ ccm node1 cqlsh  -e "CREATE KEYSPACE test WITH replication = 
> {'class':'SimpleStrategy', 'replication_factor':1}"
> $ ccm node1 cqlsh  -e "CREATE TABLE test.test(k1 text, k2 text, PRIMARY KEY( 
> k1 ));"
> $ ccm node1 cqlsh  -e "CREATE INDEX k2 ON test.test(k2);"
>  
> $ ccm node1 cqlsh  -e "INSERT INTO test.test (k1, k2 ) VALUES ( 'a', 'a') ;"
> $ ccm node1 cqlsh  -e "INSERT INTO test.test (k1, k2 ) VALUES ( 'a', 'b') ;"
>  
> $ ccm node1 nodetool flush
>  
> $ for i in `seq 1 `; do ccm node${i} stop; ccm node${i} setdir -v3.0.11;ccm 
> node${i} start; done
> $ ccm node1 nodetool upgradesstables test test
> Traceback (most recent call last):
>   File "/home/y/bin/ccm", line 86, in 
> cmd.run()
>   File "/home/y/lib/python2.7/site-packages/ccmlib/cmds/node_cmds.py", line 
> 267, in run
> stdout, stderr = self.node.nodetool(" ".join(self.args[1:]))
>   File "/home/y/lib/python2.7/site-packages/ccmlib/node.py", line 742, in 
> nodetool
> raise NodetoolError(" ".join(args), exit_status, stdout, stderr)
> ccmlib.node.NodetoolError: Nodetool command 
> '/home/zzheng/.ccm/repository/3.0.11/bin/nodetool -h localhost -p 7100 
> upgradesstables test test' failed; exit status: 2; stderr: WARN  06:29:08 
> Only 10476 MB free across all data volumes. Consider adding more capacity to 
> your cluster or removing obsolete snapshots
> error: null
> -- StackTrace --
> java.lang.AssertionError
>   at org.apache.cassandra.db.rows.Rows.collectStats(Rows.java:70)
>   at 
> org.apache.cassandra.io.sstable.format.big.BigTableWriter$StatsCollector.applyToRow(BigTableWriter.java:197)
>   at 
> org.apache.cassandra.db.transform.BaseRows.applyOne(BaseRows.java:116)
>   at org.apache.cassandra.db.transform.BaseRows.add(BaseRows.java:107)
>   at 
> org.apache.cassandra.db.transform.UnfilteredRows.add(UnfilteredRows.java:41)
>   at 
> org.apache.cassandra.db.transform.Transformation.add(Transformation.java:156)
>   at 
> org.apache.cassandra.db.transform.Transformation.apply(Transformation.java:122)
>   at 
> org.apache.cassandra.io.sstable.format.big.BigTableWriter.append(BigTableWriter.java:147)
>   at 
> org.apache.cassandra.io.sstable.SSTableRewriter.append(SSTableRewriter.java:125)
>   at 
> org.apache.cassandra.db.compaction.writers.DefaultCompactionWriter.realAppend(DefaultCompactionWriter.java:57)
>   at 
> org.apache.cassandra.db.compaction.writers.CompactionAwareWriter.append(CompactionAwareWriter.java:109)
>   at 
> org.apache.cassandra.db.compaction.CompactionTask.runMayThrow(CompactionTask.java:195)
>   at 
> org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28)
>   at 
> org.apache.cassandra.db.compaction.CompactionTask.executeInternal(CompactionTask.java:89)
>   at 
> org.apache.cassandra.db.compaction.AbstractCompactionTask.execute(AbstractCompactionTask.java:61)
>   at 
> org.apache.cassandra.db.compaction.CompactionManager$5.execute(CompactionManager.java:415)
>   at 
> org.apache.cassandra.db.compaction.CompactionManager$2.call(CompactionManager.java:307)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>   at 
> org.apache.cassandra.concurrent.NamedThreadFactory.lambda$threadLocalDeallocator$0(NamedThreadFactory.java:79)
>   at java.lang.Thread.run(Thread.java:745)
> {code}
> The result of dumping the 2i sstable is as follows.
> {code}
> [
> {"key": "a",
>  "cells": [["61",1488961273,1488961269822817,"d"]]},
> {"key": "b",
>  "cells": [["61","",1488961273015759]]}
> ]
> {code}
> This problem is caused by the tombstone row. When this row is processed in 
> {{LegacyLayout.java}}, it is treated as a row marker.
> 

[jira] [Assigned] (CASSANDRA-13320) upgradesstables fails after upgrading from 2.1.x to 3.0.11

2017-03-10 Thread Benjamin Lerer (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13320?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Benjamin Lerer reassigned CASSANDRA-13320:
--

Assignee: Benjamin Lerer

> upgradesstables fails after upgrading from 2.1.x to 3.0.11
> --
>
> Key: CASSANDRA-13320
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13320
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Zhongxiang Zheng
>Assignee: Benjamin Lerer
> Attachments: 13320.patch
>
>
> I tried to execute {{nodetool upgradesstables}} after upgrading cluster from 
> 2.1.16 to 3.0.11, but it fails when upgrading a table with 2i.
> This problem can be reproduced as follows.
> {code}
> $ ccm create test -v 2.1.16 -n 1 -s
> $ ccm node1 cqlsh  -e "CREATE KEYSPACE test WITH replication = 
> {'class':'SimpleStrategy', 'replication_factor':1}"
> $ ccm node1 cqlsh  -e "CREATE TABLE test.test(k1 text, k2 text, PRIMARY KEY( 
> k1 ));"
> $ ccm node1 cqlsh  -e "CREATE INDEX k2 ON test.test(k2);"
>  
> $ ccm node1 cqlsh  -e "INSERT INTO test.test (k1, k2 ) VALUES ( 'a', 'a') ;"
> $ ccm node1 cqlsh  -e "INSERT INTO test.test (k1, k2 ) VALUES ( 'a', 'b') ;"
>  
> $ ccm node1 nodetool flush
>  
> $ for i in `seq 1 `; do ccm node${i} stop; ccm node${i} setdir -v3.0.11;ccm 
> node${i} start; done
> $ ccm node1 nodetool upgradesstables test test
> Traceback (most recent call last):
>   File "/home/y/bin/ccm", line 86, in 
> cmd.run()
>   File "/home/y/lib/python2.7/site-packages/ccmlib/cmds/node_cmds.py", line 
> 267, in run
> stdout, stderr = self.node.nodetool(" ".join(self.args[1:]))
>   File "/home/y/lib/python2.7/site-packages/ccmlib/node.py", line 742, in 
> nodetool
> raise NodetoolError(" ".join(args), exit_status, stdout, stderr)
> ccmlib.node.NodetoolError: Nodetool command 
> '/home/zzheng/.ccm/repository/3.0.11/bin/nodetool -h localhost -p 7100 
> upgradesstables test test' failed; exit status: 2; stderr: WARN  06:29:08 
> Only 10476 MB free across all data volumes. Consider adding more capacity to 
> your cluster or removing obsolete snapshots
> error: null
> -- StackTrace --
> java.lang.AssertionError
>   at org.apache.cassandra.db.rows.Rows.collectStats(Rows.java:70)
>   at 
> org.apache.cassandra.io.sstable.format.big.BigTableWriter$StatsCollector.applyToRow(BigTableWriter.java:197)
>   at 
> org.apache.cassandra.db.transform.BaseRows.applyOne(BaseRows.java:116)
>   at org.apache.cassandra.db.transform.BaseRows.add(BaseRows.java:107)
>   at 
> org.apache.cassandra.db.transform.UnfilteredRows.add(UnfilteredRows.java:41)
>   at 
> org.apache.cassandra.db.transform.Transformation.add(Transformation.java:156)
>   at 
> org.apache.cassandra.db.transform.Transformation.apply(Transformation.java:122)
>   at 
> org.apache.cassandra.io.sstable.format.big.BigTableWriter.append(BigTableWriter.java:147)
>   at 
> org.apache.cassandra.io.sstable.SSTableRewriter.append(SSTableRewriter.java:125)
>   at 
> org.apache.cassandra.db.compaction.writers.DefaultCompactionWriter.realAppend(DefaultCompactionWriter.java:57)
>   at 
> org.apache.cassandra.db.compaction.writers.CompactionAwareWriter.append(CompactionAwareWriter.java:109)
>   at 
> org.apache.cassandra.db.compaction.CompactionTask.runMayThrow(CompactionTask.java:195)
>   at 
> org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28)
>   at 
> org.apache.cassandra.db.compaction.CompactionTask.executeInternal(CompactionTask.java:89)
>   at 
> org.apache.cassandra.db.compaction.AbstractCompactionTask.execute(AbstractCompactionTask.java:61)
>   at 
> org.apache.cassandra.db.compaction.CompactionManager$5.execute(CompactionManager.java:415)
>   at 
> org.apache.cassandra.db.compaction.CompactionManager$2.call(CompactionManager.java:307)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>   at 
> org.apache.cassandra.concurrent.NamedThreadFactory.lambda$threadLocalDeallocator$0(NamedThreadFactory.java:79)
>   at java.lang.Thread.run(Thread.java:745)
> {code}
> The result of dumping the 2i sstable is as follows.
> {code}
> [
> {"key": "a",
>  "cells": [["61",1488961273,1488961269822817,"d"]]},
> {"key": "b",
>  "cells": [["61","",1488961273015759]]}
> ]
> {code}
> This problem is caused by the tombstone row. When this row is processed in 
> {{LegacyLayout.java}}, it is treated as a row marker.
> 

[jira] [Assigned] (CASSANDRA-13320) upgradesstables fails after upgrading from 2.1.x to 3.0.11

2017-03-10 Thread Benjamin Lerer (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13320?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Benjamin Lerer reassigned CASSANDRA-13320:
--

Assignee: (was: Benjamin Lerer)

> upgradesstables fails after upgrading from 2.1.x to 3.0.11
> --
>
> Key: CASSANDRA-13320
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13320
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Zhongxiang Zheng
> Attachments: 13320.patch
>
>
> I tried to execute {{nodetool upgradesstables}} after upgrading cluster from 
> 2.1.16 to 3.0.11, but it fails when upgrading a table with 2i.
> This problem can be reproduced as follows.
> {code}
> $ ccm create test -v 2.1.16 -n 1 -s
> $ ccm node1 cqlsh  -e "CREATE KEYSPACE test WITH replication = 
> {'class':'SimpleStrategy', 'replication_factor':1}"
> $ ccm node1 cqlsh  -e "CREATE TABLE test.test(k1 text, k2 text, PRIMARY KEY( 
> k1 ));"
> $ ccm node1 cqlsh  -e "CREATE INDEX k2 ON test.test(k2);"
>  
> $ ccm node1 cqlsh  -e "INSERT INTO test.test (k1, k2 ) VALUES ( 'a', 'a') ;"
> $ ccm node1 cqlsh  -e "INSERT INTO test.test (k1, k2 ) VALUES ( 'a', 'b') ;"
>  
> $ ccm node1 nodetool flush
>  
> $ for i in `seq 1 `; do ccm node${i} stop; ccm node${i} setdir -v3.0.11;ccm 
> node${i} start; done
> $ ccm node1 nodetool upgradesstables test test
> Traceback (most recent call last):
>   File "/home/y/bin/ccm", line 86, in 
> cmd.run()
>   File "/home/y/lib/python2.7/site-packages/ccmlib/cmds/node_cmds.py", line 
> 267, in run
> stdout, stderr = self.node.nodetool(" ".join(self.args[1:]))
>   File "/home/y/lib/python2.7/site-packages/ccmlib/node.py", line 742, in 
> nodetool
> raise NodetoolError(" ".join(args), exit_status, stdout, stderr)
> ccmlib.node.NodetoolError: Nodetool command 
> '/home/zzheng/.ccm/repository/3.0.11/bin/nodetool -h localhost -p 7100 
> upgradesstables test test' failed; exit status: 2; stderr: WARN  06:29:08 
> Only 10476 MB free across all data volumes. Consider adding more capacity to 
> your cluster or removing obsolete snapshots
> error: null
> -- StackTrace --
> java.lang.AssertionError
>   at org.apache.cassandra.db.rows.Rows.collectStats(Rows.java:70)
>   at 
> org.apache.cassandra.io.sstable.format.big.BigTableWriter$StatsCollector.applyToRow(BigTableWriter.java:197)
>   at 
> org.apache.cassandra.db.transform.BaseRows.applyOne(BaseRows.java:116)
>   at org.apache.cassandra.db.transform.BaseRows.add(BaseRows.java:107)
>   at 
> org.apache.cassandra.db.transform.UnfilteredRows.add(UnfilteredRows.java:41)
>   at 
> org.apache.cassandra.db.transform.Transformation.add(Transformation.java:156)
>   at 
> org.apache.cassandra.db.transform.Transformation.apply(Transformation.java:122)
>   at 
> org.apache.cassandra.io.sstable.format.big.BigTableWriter.append(BigTableWriter.java:147)
>   at 
> org.apache.cassandra.io.sstable.SSTableRewriter.append(SSTableRewriter.java:125)
>   at 
> org.apache.cassandra.db.compaction.writers.DefaultCompactionWriter.realAppend(DefaultCompactionWriter.java:57)
>   at 
> org.apache.cassandra.db.compaction.writers.CompactionAwareWriter.append(CompactionAwareWriter.java:109)
>   at 
> org.apache.cassandra.db.compaction.CompactionTask.runMayThrow(CompactionTask.java:195)
>   at 
> org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28)
>   at 
> org.apache.cassandra.db.compaction.CompactionTask.executeInternal(CompactionTask.java:89)
>   at 
> org.apache.cassandra.db.compaction.AbstractCompactionTask.execute(AbstractCompactionTask.java:61)
>   at 
> org.apache.cassandra.db.compaction.CompactionManager$5.execute(CompactionManager.java:415)
>   at 
> org.apache.cassandra.db.compaction.CompactionManager$2.call(CompactionManager.java:307)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>   at 
> org.apache.cassandra.concurrent.NamedThreadFactory.lambda$threadLocalDeallocator$0(NamedThreadFactory.java:79)
>   at java.lang.Thread.run(Thread.java:745)
> {code}
> The result of dumping the 2i sstable is as follows.
> {code}
> [
> {"key": "a",
>  "cells": [["61",1488961273,1488961269822817,"d"]]},
> {"key": "b",
>  "cells": [["61","",1488961273015759]]}
> ]
> {code}
> This problem is caused by the tombstone row. When this row is processed in 
> {{LegacyLayout.java}}, it is treated as a row marker.
> 

[jira] [Commented] (CASSANDRA-13320) upgradesstables fails after upgrading from 2.1.x to 3.0.11

2017-03-10 Thread Jeff Jirsa (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13320?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15905643#comment-15905643
 ] 

Jeff Jirsa commented on CASSANDRA-13320:


For troubleshooting, in case it helps someone, here are some dumps of the 
sstable data post-upgradesstables with 3.0.10:

{code}
$ ~/.ccm/repository/3.0.10/tools/bin/sstabledump 
~/.ccm/test/node1/data0/test/test-0c0e762005c511e7990409d9d370a92a/mc-2-big-Data.db
[
  {
"partition" : {
  "key" : [ "a" ],
  "position" : 0
},
"rows" : [
  {
"type" : "row",
"position" : 15,
"liveness_info" : { "tstamp" : "2017-03-10T19:09:14.478850Z" },
"cells" : [
  { "name" : "k2", "value" : "b" }
]
  }
]
  }
]

$ ~/.ccm/repository/3.0.10/tools/bin/sstabledump 
~/.ccm/test/node1/data0/test/test-0c0e762005c511e7990409d9d370a92a/.k2/mc-2-big-Data.db
[
  {
"partition" : {
  "key" : [ "a" ],
  "position" : 0
},
"rows" : [
  {
"type" : "row",
"position" : 15,
"clustering" : [ "61" ],
"liveness_info" : { "tstamp" : "2017-03-10T19:09:13.979340Z" },
"cells" : [ ]
  }
]
  },
  {
"partition" : {
  "key" : [ "b" ],
  "position" : 23
},
"rows" : [
  {
"type" : "row",
"position" : 38,
"clustering" : [ "61" ],
"liveness_info" : { "tstamp" : "2017-03-10T19:09:14.478850Z" },
"cells" : [ ]
  }
]
  }
]
{code}

and here are dumps with 3.0.11 + the patch from [~zzheng]:

{code}
$ ~/.ccm/repository/3.0.11/tools/bin/sstabledump 
~/.ccm/test/node1/data0/test/test-a4feee4005cb11e79e0709d9d370a92a/mc-2-big-Data.db
[
  {
"partition" : {
  "key" : [ "a" ],
  "position" : 0
},
"rows" : [
  {
"type" : "row",
"position" : 22,
"liveness_info" : { "tstamp" : "2017-03-10T20:10:03.973045Z" },
"cells" : [
  { "name" : "k2", "value" : "b" }
]
  }
]
  }
]
$ ~/.ccm/repository/3.0.11/tools/bin/sstabledump 
~/.ccm/test/node1/data0/test/test-a4feee4005cb11e79e0709d9d370a92a/.k2/mc-2-big-Data.db
[
  {
"partition" : {
  "key" : [ "a" ],
  "position" : 0
},
"rows" : [
  {
"type" : "row",
"position" : 26,
"clustering" : [ "61" ],
"deletion_info" : { "marked_deleted" : "2017-03-10T20:09:59.667091Z", 
"local_delete_time" : "2017-03-10T20:10:03Z" },
"cells" : [ ]
  }
]
  },
  {
"partition" : {
  "key" : [ "b" ],
  "position" : 27
},
"rows" : [
  {
"type" : "row",
"position" : 52,
"clustering" : [ "61" ],
"liveness_info" : { "tstamp" : "2017-03-10T20:10:03.973045Z" },
"cells" : [ ]
  }
]
  }
]

{code}



> upgradesstables fails after upgrading from 2.1.x to 3.0.11
> --
>
> Key: CASSANDRA-13320
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13320
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Zhongxiang Zheng
> Attachments: 13320.patch
>
>
> I tried to execute {{nodetool upgradesstables}} after upgrading cluster from 
> 2.1.16 to 3.0.11, but it fails when upgrading a table with 2i.
> This problem can be reproduced as follows.
> {code}
> $ ccm create test -v 2.1.16 -n 1 -s
> $ ccm node1 cqlsh  -e "CREATE KEYSPACE test WITH replication = 
> {'class':'SimpleStrategy', 'replication_factor':1}"
> $ ccm node1 cqlsh  -e "CREATE TABLE test.test(k1 text, k2 text, PRIMARY KEY( 
> k1 ));"
> $ ccm node1 cqlsh  -e "CREATE INDEX k2 ON test.test(k2);"
>  
> $ ccm node1 cqlsh  -e "INSERT INTO test.test (k1, k2 ) VALUES ( 'a', 'a') ;"
> $ ccm node1 cqlsh  -e "INSERT INTO test.test (k1, k2 ) VALUES ( 'a', 'b') ;"
>  
> $ ccm node1 nodetool flush
>  
> $ for i in `seq 1 `; do ccm node${i} stop; ccm node${i} setdir -v3.0.11;ccm 
> node${i} start; done
> $ ccm node1 nodetool upgradesstables test test
> Traceback (most recent call last):
>   File "/home/y/bin/ccm", line 86, in 
> cmd.run()
>   File "/home/y/lib/python2.7/site-packages/ccmlib/cmds/node_cmds.py", line 
> 267, in run
> stdout, stderr = self.node.nodetool(" ".join(self.args[1:]))
>   File "/home/y/lib/python2.7/site-packages/ccmlib/node.py", line 742, in 
> nodetool
> raise NodetoolError(" ".join(args), exit_status, stdout, stderr)
> ccmlib.node.NodetoolError: Nodetool command 
> '/home/zzheng/.ccm/repository/3.0.11/bin/nodetool -h localhost -p 7100 
> upgradesstables test test' failed; exit status: 2; stderr: WARN  06:29:08 
> Only 10476 MB free across all data volumes. Consider adding more capacity to 
> your cluster or removing obsolete snapshots
> error: null
> -- StackTrace --
> java.lang.AssertionError
>   at org.apache.cassandra.db.rows.Rows.collectStats(Rows.java:70)
>   at 
> 

[jira] [Commented] (CASSANDRA-13320) upgradesstables fails after upgrading from 2.1.x to 3.0.11

2017-03-10 Thread Jeff Jirsa (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13320?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=1590#comment-1590
 ] 

Jeff Jirsa commented on CASSANDRA-13320:


[~slebresne] or [~blerer] - given your work on CASSANDRA-12620, is either of 
you available to comment on the correctness of this? 


> upgradesstables fails after upgrading from 2.1.x to 3.0.11
> --
>
> Key: CASSANDRA-13320
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13320
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Zhongxiang Zheng
> Attachments: 13320.patch
>
>
> I tried to execute {{nodetool upgradesstables}} after upgrading cluster from 
> 2.1.16 to 3.0.11, but it fails when upgrading a table with 2i.
> This problem can be reproduced as follows.
> {code}
> $ ccm create test -v 2.1.16 -n 1 -s
> $ ccm node1 cqlsh  -e "CREATE KEYSPACE test WITH replication = 
> {'class':'SimpleStrategy', 'replication_factor':1}"
> $ ccm node1 cqlsh  -e "CREATE TABLE test.test(k1 text, k2 text, PRIMARY KEY( 
> k1 ));"
> $ ccm node1 cqlsh  -e "CREATE INDEX k2 ON test.test(k2);"
>  
> $ ccm node1 cqlsh  -e "INSERT INTO test.test (k1, k2 ) VALUES ( 'a', 'a') ;"
> $ ccm node1 cqlsh  -e "INSERT INTO test.test (k1, k2 ) VALUES ( 'a', 'b') ;"
>  
> $ ccm node1 nodetool flush
>  
> $ for i in `seq 1 `; do ccm node${i} stop; ccm node${i} setdir -v3.0.11;ccm 
> node${i} start; done
> $ ccm node1 nodetool upgradesstables test test
> Traceback (most recent call last):
>   File "/home/y/bin/ccm", line 86, in 
> cmd.run()
>   File "/home/y/lib/python2.7/site-packages/ccmlib/cmds/node_cmds.py", line 
> 267, in run
> stdout, stderr = self.node.nodetool(" ".join(self.args[1:]))
>   File "/home/y/lib/python2.7/site-packages/ccmlib/node.py", line 742, in 
> nodetool
> raise NodetoolError(" ".join(args), exit_status, stdout, stderr)
> ccmlib.node.NodetoolError: Nodetool command 
> '/home/zzheng/.ccm/repository/3.0.11/bin/nodetool -h localhost -p 7100 
> upgradesstables test test' failed; exit status: 2; stderr: WARN  06:29:08 
> Only 10476 MB free across all data volumes. Consider adding more capacity to 
> your cluster or removing obsolete snapshots
> error: null
> -- StackTrace --
> java.lang.AssertionError
>   at org.apache.cassandra.db.rows.Rows.collectStats(Rows.java:70)
>   at 
> org.apache.cassandra.io.sstable.format.big.BigTableWriter$StatsCollector.applyToRow(BigTableWriter.java:197)
>   at 
> org.apache.cassandra.db.transform.BaseRows.applyOne(BaseRows.java:116)
>   at org.apache.cassandra.db.transform.BaseRows.add(BaseRows.java:107)
>   at 
> org.apache.cassandra.db.transform.UnfilteredRows.add(UnfilteredRows.java:41)
>   at 
> org.apache.cassandra.db.transform.Transformation.add(Transformation.java:156)
>   at 
> org.apache.cassandra.db.transform.Transformation.apply(Transformation.java:122)
>   at 
> org.apache.cassandra.io.sstable.format.big.BigTableWriter.append(BigTableWriter.java:147)
>   at 
> org.apache.cassandra.io.sstable.SSTableRewriter.append(SSTableRewriter.java:125)
>   at 
> org.apache.cassandra.db.compaction.writers.DefaultCompactionWriter.realAppend(DefaultCompactionWriter.java:57)
>   at 
> org.apache.cassandra.db.compaction.writers.CompactionAwareWriter.append(CompactionAwareWriter.java:109)
>   at 
> org.apache.cassandra.db.compaction.CompactionTask.runMayThrow(CompactionTask.java:195)
>   at 
> org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28)
>   at 
> org.apache.cassandra.db.compaction.CompactionTask.executeInternal(CompactionTask.java:89)
>   at 
> org.apache.cassandra.db.compaction.AbstractCompactionTask.execute(AbstractCompactionTask.java:61)
>   at 
> org.apache.cassandra.db.compaction.CompactionManager$5.execute(CompactionManager.java:415)
>   at 
> org.apache.cassandra.db.compaction.CompactionManager$2.call(CompactionManager.java:307)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>   at 
> org.apache.cassandra.concurrent.NamedThreadFactory.lambda$threadLocalDeallocator$0(NamedThreadFactory.java:79)
>   at java.lang.Thread.run(Thread.java:745)
> {code}
> The result of dumping the 2i sstable is as follows.
> {code}
> [
> {"key": "a",
>  "cells": [["61",1488961273,1488961269822817,"d"]]},
> {"key": "b",
>  "cells": [["61","",1488961273015759]]}
> ]
> {code}
> This problem is caused by the tombstone row. When this row is processed in 
> 

[jira] [Commented] (CASSANDRA-13289) Make it possible to monitor an ideal consistency level separate from actual consistency level

2017-03-10 Thread Jason Brown (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13289?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15905541#comment-15905541
 ] 

Jason Brown commented on CASSANDRA-13289:
-

Some thoughts:

- maybe only instantiate 
{{AbstractWriteResponseHandler#responsesAndExpirations}} in 
{{#setIdealCLResponseHandler()}}, and thus only create the {{AtomicInteger}} 
when you know you are actually going to use it (see the sketch after this list).
- if the ideal CL and the requested CL are the same, should we even bother 
capturing metrics about it? I'm kinda mixed on it...
- what happens if the user mixes non-CAS consistency levels with CAS 
consistency levels (or vice versa)? I think the behavior will be correct (we 
won't inadvertently violate paxos semantics), but the semantic difference 
between CAS and non-CAS requests might not be meaningful. So perhaps ignore the 
idealCl if the CL types are different? wdyt?
- how will timed out message metrics be affected? We create an entry in 
{{MessagingService#callbacks}} for each peer contacted for an operation (just 
talking reads/mutations right now), and say the request CL is satisfied, but 
the idealCL doesn't hear back from some nodes. In that case we'll increment the 
timeouts, {{ConnectionMetrics.totalTimeouts.mark()}}, even though they weren't 
explicitly part of the user's request. It might be confusing to users or 
operators. I'm not sure how hard it is to code around that, or if it's 
worthwhile. If we feel it's not, perhaps we just document it in the yaml that 
"you may see higher than usual timeout counts". Thoughts?
- calling it "ideal consistency level" doesn't sound quite right. Maybe 
something like "alternative" or "secondary" might work. It might be good to 
point out that the emphasis here should be on discovering the latencies a 
different CL would bring, and not necessarily the impact on data consistency 
itself.
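
For the first point, a rough sketch of the lazy allocation I have in mind; the 
names here are only illustrative and may not match the patch:
{code}
import java.util.concurrent.atomic.AtomicInteger;

public class IdealCLTracker
{
    // stays null unless an ideal CL was actually configured for this write
    private volatile AtomicInteger responsesAndExpirations;

    public void setIdealCLResponseHandler(int blockFor)
    {
        // only pay for the AtomicInteger when we know it will be used
        responsesAndExpirations = new AtomicInteger(blockFor);
    }

    public void onResponse()
    {
        AtomicInteger tracker = responsesAndExpirations;
        if (tracker == null)
            return; // no ideal CL configured, nothing to track
        if (tracker.decrementAndGet() == 0)
            recordIdealCLReached();
    }

    private void recordIdealCLReached()
    {
        // update the per-keyspace metric / latency here
    }
}
{code}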



> Make it possible to monitor an ideal consistency level separate from actual 
> consistency level
> -
>
> Key: CASSANDRA-13289
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13289
> Project: Cassandra
>  Issue Type: New Feature
>  Components: Core
>Reporter: Ariel Weisberg
>Assignee: Ariel Weisberg
> Fix For: 4.0
>
>
> As an operator there are several issues related to multi-datacenter 
> replication and consistency you may want to have more information on from 
> your production database.
> For instance. If your application writes at LOCAL_QUORUM how often are those 
> writes failing to achieve EACH_QUORUM at other data centers. If you failed 
> your application over to one of those data centers roughly how inconsistent 
> might it be given the number of writes that didn't propagate since the last 
> incremental repair?
> You might also want to know roughly what the latency of writes would be if 
> you switched to a different consistency level. For instance you are writing 
> at LOCAL_QUORUM and want to know what would happen if you switched to 
> EACH_QUORUM.
> The proposed change is to allow an ideal_consistency_level to be specified in 
> cassandra.yaml as well as get/set via JMX. If no ideal consistency level is 
> specified no additional tracking is done.
> If an ideal consistency level is specified, then the 
> {{AbstractWriteResponseHandler}} will contain a delegate WriteResponseHandler 
> that tracks whether the ideal consistency level is met before a write times 
> out. It also tracks the latency for achieving the ideal CL for successful 
> writes.
> These two metrics would be reported on a per keyspace basis.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Assigned] (CASSANDRA-13196) test failure in snitch_test.TestGossipingPropertyFileSnitch.test_prefer_local_reconnect_on_listen_address

2017-03-10 Thread Alex Petrov (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13196?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alex Petrov reassigned CASSANDRA-13196:
---

Assignee: Alex Petrov

> test failure in 
> snitch_test.TestGossipingPropertyFileSnitch.test_prefer_local_reconnect_on_listen_address
> -
>
> Key: CASSANDRA-13196
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13196
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Michael Shuler
>Assignee: Alex Petrov
>  Labels: dtest, test-failure
> Attachments: node1_debug.log, node1_gc.log, node1.log, 
> node2_debug.log, node2_gc.log, node2.log
>
>
> example failure:
> http://cassci.datastax.com/job/trunk_dtest/1487/testReport/snitch_test/TestGossipingPropertyFileSnitch/test_prefer_local_reconnect_on_listen_address
> {code}
> {novnode}
> Error Message
> Error from server: code=2200 [Invalid query] message="keyspace keyspace1 does 
> not exist"
>  >> begin captured logging << 
> dtest: DEBUG: cluster ccm directory: /tmp/dtest-k6b0iF
> dtest: DEBUG: Done setting configuration options:
> {   'initial_token': None,
> 'num_tokens': '32',
> 'phi_convict_threshold': 5,
> 'range_request_timeout_in_ms': 1,
> 'read_request_timeout_in_ms': 1,
> 'request_timeout_in_ms': 1,
> 'truncate_request_timeout_in_ms': 1,
> 'write_request_timeout_in_ms': 1}
> cassandra.policies: INFO: Using datacenter 'dc1' for DCAwareRoundRobinPolicy 
> (via host '127.0.0.1'); if incorrect, please specify a local_dc to the 
> constructor, or limit contact points to local cluster nodes
> cassandra.cluster: INFO: New Cassandra host  discovered
> - >> end captured logging << -
> Stacktrace
>   File "/usr/lib/python2.7/unittest/case.py", line 329, in run
> testMethod()
>   File "/home/automaton/cassandra-dtest/snitch_test.py", line 87, in 
> test_prefer_local_reconnect_on_listen_address
> new_rows = list(session.execute("SELECT * FROM {}".format(stress_table)))
>   File "/home/automaton/src/cassandra-driver/cassandra/cluster.py", line 
> 1998, in execute
> return self.execute_async(query, parameters, trace, custom_payload, 
> timeout, execution_profile, paging_state).result()
>   File "/home/automaton/src/cassandra-driver/cassandra/cluster.py", line 
> 3784, in result
> raise self._final_exception
> 'Error from server: code=2200 [Invalid query] message="keyspace keyspace1 
> does not exist"\n >> begin captured logging << 
> \ndtest: DEBUG: cluster ccm directory: 
> /tmp/dtest-k6b0iF\ndtest: DEBUG: Done setting configuration options:\n{   
> \'initial_token\': None,\n\'num_tokens\': \'32\',\n
> \'phi_convict_threshold\': 5,\n\'range_request_timeout_in_ms\': 1,\n  
>   \'read_request_timeout_in_ms\': 1,\n\'request_timeout_in_ms\': 
> 1,\n\'truncate_request_timeout_in_ms\': 1,\n
> \'write_request_timeout_in_ms\': 1}\ncassandra.policies: INFO: Using 
> datacenter \'dc1\' for DCAwareRoundRobinPolicy (via host \'127.0.0.1\'); if 
> incorrect, please specify a local_dc to the constructor, or limit contact 
> points to local cluster nodes\ncassandra.cluster: INFO: New Cassandra host 
>  discovered\n- >> end captured 
> logging << -'
> {novnode}
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Assigned] (CASSANDRA-13196) test failure in snitch_test.TestGossipingPropertyFileSnitch.test_prefer_local_reconnect_on_listen_address

2017-03-10 Thread Alex Petrov (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13196?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alex Petrov reassigned CASSANDRA-13196:
---

Assignee: Aleksandr Sorokoumov  (was: Alex Petrov)

> test failure in 
> snitch_test.TestGossipingPropertyFileSnitch.test_prefer_local_reconnect_on_listen_address
> -
>
> Key: CASSANDRA-13196
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13196
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Michael Shuler
>Assignee: Aleksandr Sorokoumov
>  Labels: dtest, test-failure
> Attachments: node1_debug.log, node1_gc.log, node1.log, 
> node2_debug.log, node2_gc.log, node2.log
>
>
> example failure:
> http://cassci.datastax.com/job/trunk_dtest/1487/testReport/snitch_test/TestGossipingPropertyFileSnitch/test_prefer_local_reconnect_on_listen_address
> {code}
> {novnode}
> Error Message
> Error from server: code=2200 [Invalid query] message="keyspace keyspace1 does 
> not exist"
>  >> begin captured logging << 
> dtest: DEBUG: cluster ccm directory: /tmp/dtest-k6b0iF
> dtest: DEBUG: Done setting configuration options:
> {   'initial_token': None,
> 'num_tokens': '32',
> 'phi_convict_threshold': 5,
> 'range_request_timeout_in_ms': 1,
> 'read_request_timeout_in_ms': 1,
> 'request_timeout_in_ms': 1,
> 'truncate_request_timeout_in_ms': 1,
> 'write_request_timeout_in_ms': 1}
> cassandra.policies: INFO: Using datacenter 'dc1' for DCAwareRoundRobinPolicy 
> (via host '127.0.0.1'); if incorrect, please specify a local_dc to the 
> constructor, or limit contact points to local cluster nodes
> cassandra.cluster: INFO: New Cassandra host  discovered
> - >> end captured logging << -
> Stacktrace
>   File "/usr/lib/python2.7/unittest/case.py", line 329, in run
> testMethod()
>   File "/home/automaton/cassandra-dtest/snitch_test.py", line 87, in 
> test_prefer_local_reconnect_on_listen_address
> new_rows = list(session.execute("SELECT * FROM {}".format(stress_table)))
>   File "/home/automaton/src/cassandra-driver/cassandra/cluster.py", line 
> 1998, in execute
> return self.execute_async(query, parameters, trace, custom_payload, 
> timeout, execution_profile, paging_state).result()
>   File "/home/automaton/src/cassandra-driver/cassandra/cluster.py", line 
> 3784, in result
> raise self._final_exception
> 'Error from server: code=2200 [Invalid query] message="keyspace keyspace1 
> does not exist"\n >> begin captured logging << 
> \ndtest: DEBUG: cluster ccm directory: 
> /tmp/dtest-k6b0iF\ndtest: DEBUG: Done setting configuration options:\n{   
> \'initial_token\': None,\n\'num_tokens\': \'32\',\n
> \'phi_convict_threshold\': 5,\n\'range_request_timeout_in_ms\': 1,\n  
>   \'read_request_timeout_in_ms\': 1,\n\'request_timeout_in_ms\': 
> 1,\n\'truncate_request_timeout_in_ms\': 1,\n
> \'write_request_timeout_in_ms\': 1}\ncassandra.policies: INFO: Using 
> datacenter \'dc1\' for DCAwareRoundRobinPolicy (via host \'127.0.0.1\'); if 
> incorrect, please specify a local_dc to the constructor, or limit contact 
> points to local cluster nodes\ncassandra.cluster: INFO: New Cassandra host 
>  discovered\n- >> end captured 
> logging << -'
> {novnode}
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (CASSANDRA-13315) Semantically meaningful Consistency Levels

2017-03-10 Thread Jon Haddad (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13315?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15905491#comment-15905491
 ] 

Jon Haddad commented on CASSANDRA-13315:


Seems like picking an even number of replicas should warn the user, regardless 
of this JIRA.

> Semantically meaningful Consistency Levels
> --
>
> Key: CASSANDRA-13315
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13315
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Ryan Svihla
>
> New users really struggle with consistency levels and fall into a large number 
> of tarpits trying to decide on the right one.
> 1. There are a LOT of consistency levels and it's up to the end user to 
> reason about which combinations are valid and whether they really do what they 
> intend. Is there any reason why write at ALL and read at CL TWO is better 
> than read at CL ONE? 
> 2. They require a good understanding of failure modes to do well. It's not 
> uncommon for people to use CL one and wonder why their data is missing.
> 3. The serial consistency level "bucket" is confusing to even write about and 
> easy to get wrong even for experienced users.
> So I propose the following steps (EDIT based on Jonathan's comment):
> 1. Remove the "serial consistency" level of consistency levels and just have 
> all consistency levels in one bucket to set, conditions still need to be 
> required for SERIAL/LOCAL_SERIAL
> 2. add 3 new consistency levels pointing to existing ones but that convey 
> intent much more cleanly:
> EDIT: better names based on comments.
>* EVENTUALLY = LOCAL_ONE reads and writes
>* STRONG = LOCAL_QUORUM reads and writes
>* SERIAL = LOCAL_SERIAL reads and writes (though a ton of folks dont know 
> what SERIAL means, so this is why I suggested TRANSACTIONAL even if it's not as 
> correct as I'd like)
> For global levels of this I propose keeping the old ones around; they're 
> rarely used in the field except by accident or by particularly opinionated and 
> advanced users.
> Drivers should put the new consistency levels in a new package and docs 
> should be updated to suggest their use. Likewise, setting a default CL should 
> only offer those three settings and apply it for reads and writes at the 
> same time.
> I'm going to suggest CQLSH should default to HIGHLY_CONSISTENT. New sysadmins 
> get surprised by this frequently, and I can think of a couple of very major 
> escalations because people were confused about what the default behavior was.
> The benefit of all this change is that we greatly shrink the surface area one 
> has to understand when learning Cassandra, and we have far fewer bad initial 
> experiences and surprises. New users will be able to wrap their brains around 
> those 3 ideas more readily than "what happens when I have RF=2, QUORUM writes 
> and ONE reads". Advanced users still get access to all the existing levels, 
> while new users don't have to learn all the ins and outs of distributed theory 
> just to write data and be able to read it back.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (CASSANDRA-13216) testall failure in org.apache.cassandra.net.MessagingServiceTest.testDroppedMessages

2017-03-10 Thread Jeff Jirsa (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13216?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeff Jirsa updated CASSANDRA-13216:
---
Status: Ready to Commit  (was: Patch Available)

> testall failure in 
> org.apache.cassandra.net.MessagingServiceTest.testDroppedMessages
> 
>
> Key: CASSANDRA-13216
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13216
> Project: Cassandra
>  Issue Type: Bug
>  Components: Testing
>Reporter: Sean McCarthy
>Assignee: Alex Petrov
>  Labels: test-failure, testall
> Attachments: TEST-org.apache.cassandra.net.MessagingServiceTest.log
>
>
> example failure:
> http://cassci.datastax.com/job/cassandra-3.11_testall/81/testReport/org.apache.cassandra.net/MessagingServiceTest/testDroppedMessages
> {code}
> Error Message
> expected:<... dropped latency: 27[30 ms and Mean cross-node dropped latency: 
> 2731] ms> but was:<... dropped latency: 27[28 ms and Mean cross-node dropped 
> latency: 2730] ms>
> {code}{code}
> Stacktrace
> junit.framework.AssertionFailedError: expected:<... dropped latency: 27[30 ms 
> and Mean cross-node dropped latency: 2731] ms> but was:<... dropped latency: 
> 27[28 ms and Mean cross-node dropped latency: 2730] ms>
>   at 
> org.apache.cassandra.net.MessagingServiceTest.testDroppedMessages(MessagingServiceTest.java:83)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (CASSANDRA-13216) testall failure in org.apache.cassandra.net.MessagingServiceTest.testDroppedMessages

2017-03-10 Thread Jeff Jirsa (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13216?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeff Jirsa updated CASSANDRA-13216:
---
Reviewer: Michael Kjellman

> testall failure in 
> org.apache.cassandra.net.MessagingServiceTest.testDroppedMessages
> 
>
> Key: CASSANDRA-13216
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13216
> Project: Cassandra
>  Issue Type: Bug
>  Components: Testing
>Reporter: Sean McCarthy
>Assignee: Alex Petrov
>  Labels: test-failure, testall
> Attachments: TEST-org.apache.cassandra.net.MessagingServiceTest.log
>
>
> example failure:
> http://cassci.datastax.com/job/cassandra-3.11_testall/81/testReport/org.apache.cassandra.net/MessagingServiceTest/testDroppedMessages
> {code}
> Error Message
> expected:<... dropped latency: 27[30 ms and Mean cross-node dropped latency: 
> 2731] ms> but was:<... dropped latency: 27[28 ms and Mean cross-node dropped 
> latency: 2730] ms>
> {code}{code}
> Stacktrace
> junit.framework.AssertionFailedError: expected:<... dropped latency: 27[30 ms 
> and Mean cross-node dropped latency: 2731] ms> but was:<... dropped latency: 
> 27[28 ms and Mean cross-node dropped latency: 2730] ms>
>   at 
> org.apache.cassandra.net.MessagingServiceTest.testDroppedMessages(MessagingServiceTest.java:83)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (CASSANDRA-13315) Semantically meaningful Consistency Levels

2017-03-10 Thread Romain Hardouin (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13315?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15905471#comment-15905471
 ] 

Romain Hardouin commented on CASSANDRA-13315:
-

If we keep all the current CLs and it can help newcomers, then why not. 
That said, it won't prevent the most common mistake I've seen: RF=2 with 
LOCAL_QUORUM. "What? My setup is not fault tolerant?"
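
For reference, the arithmetic behind that mistake (nothing Cassandra-specific 
here, just the quorum formula):
{code}
// quorum = floor(RF / 2) + 1
static int quorum(int rf)
{
    return rf / 2 + 1;
}

// quorum(2) == 2 : with RF=2, LOCAL_QUORUM needs every local replica,
//                  so a single node down already fails the request
// quorum(3) == 2 : with RF=3, one replica can be down and requests still succeed
{code}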

> Semantically meaningful Consistency Levels
> --
>
> Key: CASSANDRA-13315
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13315
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Ryan Svihla
>
> New users really struggle with consistency levels and fall into a large number 
> of tarpits trying to decide on the right one.
> 1. There are a LOT of consistency levels and it's up to the end user to 
> reason about which combinations are valid and whether they really do what they 
> intend. Is there any reason why write at ALL and read at CL TWO is better 
> than read at CL ONE? 
> 2. They require a good understanding of failure modes to do well. It's not 
> uncommon for people to use CL one and wonder why their data is missing.
> 3. The serial consistency level "bucket" is confusing to even write about and 
> easy to get wrong even for experienced users.
> So I propose the following steps (EDIT based on Jonathan's comment):
> 1. Remove the "serial consistency" level of consistency levels and just have 
> all consistency levels in one bucket to set, conditions still need to be 
> required for SERIAL/LOCAL_SERIAL
> 2. add 3 new consistency levels pointing to existing ones but that convey 
> intent much more cleanly:
> EDIT: better names based on comments.
>* EVENTUALLY = LOCAL_ONE reads and writes
>* STRONG = LOCAL_QUORUM reads and writes
>* SERIAL = LOCAL_SERIAL reads and writes (though a ton of folks dont know 
> what SERIAL means, so this is why I suggested TRANSACTIONAL even if it's not as 
> correct as I'd like)
> For global levels of this I propose keeping the old ones around; they're 
> rarely used in the field except by accident or by particularly opinionated and 
> advanced users.
> Drivers should put the new consistency levels in a new package and docs 
> should be updated to suggest their use. Likewise, setting a default CL should 
> only offer those three settings and apply it for reads and writes at the 
> same time.
> I'm going to suggest CQLSH should default to HIGHLY_CONSISTENT. New sysadmins 
> get surprised by this frequently, and I can think of a couple of very major 
> escalations because people were confused about what the default behavior was.
> The benefit of all this change is that we greatly shrink the surface area one 
> has to understand when learning Cassandra, and we have far fewer bad initial 
> experiences and surprises. New users will be able to wrap their brains around 
> those 3 ideas more readily than "what happens when I have RF=2, QUORUM writes 
> and ONE reads". Advanced users still get access to all the existing levels, 
> while new users don't have to learn all the ins and outs of distributed theory 
> just to write data and be able to read it back.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (CASSANDRA-13321) Add a checksum component for the sstable metadata (-Statistics.db) file

2017-03-10 Thread Jason Brown (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13321?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15905449#comment-15905449
 ] 

Jason Brown commented on CASSANDRA-13321:
-

[~krummas] elaborated a little further in an offline conversation:
{quote}
a problem is when we change the repairedAt in an existing file for example, 
some other thread could want to deserialize the compaction metadata (which is 
in the same file) and fail because the file changed while it was deserializing, 
if we safely instead add a new file we will avoid that
{quote}

I agree with this idea, and I'll leave it to Marcus to make the magic happen.
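
For reference, the pattern I understand Marcus to be describing is the usual 
write-then-atomic-rename; a purely illustrative sketch, not taken from the patch:
{code}
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;

public final class StatsFileRewrite
{
    // Write the updated metadata to a temp file, then swap it in with an atomic
    // rename (on POSIX, rename() replaces the target in one step), so a concurrent
    // reader always sees either the complete old file or the complete new one.
    public static void replaceAtomically(Path statsFile, byte[] newContents) throws IOException
    {
        Path tmp = statsFile.resolveSibling(statsFile.getFileName() + ".tmp");
        Files.write(tmp, newContents);
        Files.move(tmp, statsFile, StandardCopyOption.ATOMIC_MOVE);
    }
}
{code}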

> Add a checksum component for the sstable metadata (-Statistics.db) file
> ---
>
> Key: CASSANDRA-13321
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13321
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Marcus Eriksson
>Assignee: Marcus Eriksson
> Fix For: 4.x
>
>
> Since we keep important information in the sstable metadata file now, we 
> should add a checksum component for it. One danger being if a bit gets 
> flipped in repairedAt we could consider the sstable repaired when it is not.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Comment Edited] (CASSANDRA-13265) Expiration in OutboundTcpConnection can block the reader Thread

2017-03-10 Thread Christian Esken (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13265?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15905409#comment-15905409
 ] 

Christian Esken edited comment on CASSANDRA-13265 at 3/10/17 5:05 PM:
--

I committed the change containing the configuration, as I would like some 
feedback whether I am on the right path. Please note that I did not yet have 
time for tests (planned for next Monday), but I thought it is better to give a 
chance to review the current changes.

I also had to add back "AtomicBoolean backlogExpirationActive". Otherwise I 
cannot guarantee that only a single Thread is iterating the Queue, especially 
if a small expiration interval (1 ms or 0 ms) is configured. The "AtomicLong 
backlogNextExpirationTime" could now be "volatile long".


was (Author: cesken):
I committed the change containing the configuration, as I would like some 
feedback whether I am on the right path. Please note that I did not yet have 
time for tests (planned for next Monday), but I thought it is better to give a 
chance to review the current changes.

Please note that I had to add back "AtomicBoolean backlogExpirationActive" 
Otherwise I cannot guarantee that only a single Thread is iterating the Queue, 
especially if a small expiration interval (1ms, or 0ms) is configured. The 
"AtomicLong backlogNextExpirationTime" could now be "volatile long".

> Expiration in OutboundTcpConnection can block the reader Thread
> ---
>
> Key: CASSANDRA-13265
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13265
> Project: Cassandra
>  Issue Type: Bug
> Environment: Cassandra 3.0.9
> Java HotSpot(TM) 64-Bit Server VM version 25.112-b15 (Java version 
> 1.8.0_112-b15)
> Linux 3.16
>Reporter: Christian Esken
>Assignee: Christian Esken
> Attachments: cassandra.pb-cache4-dus.2017-02-17-19-36-26.chist.xz, 
> cassandra.pb-cache4-dus.2017-02-17-19-36-26.td.xz
>
>
> I observed that sometimes a single node in a Cassandra cluster fails to 
> communicate to the other nodes. This can happen at any time, during peak load 
> or low load. Restarting that single node from the cluster fixes the issue.
> Before going in to details, I want to state that I have analyzed the 
> situation and am already developing a possible fix. Here is the analysis so 
> far:
> - A Threaddump in this situation showed  324 Threads in the 
> OutboundTcpConnection class that want to lock the backlog queue for doing 
> expiration.
> - A class histogram shows 262508 instances of 
> OutboundTcpConnection$QueuedMessage.
> What is the effect of it? As soon as the Cassandra node has reached a certain 
> amount of queued messages, it starts thrashing itself to death. Each of the 
> Thread fully locks the Queue for reading and writing by calling 
> iterator.next(), making the situation worse and worse.
> - Writing: Only after 262508 locking operation it can progress with actually 
> writing to the Queue.
> - Reading: Is also blocked, as 324 Threads try to do iterator.next(), and 
> fully lock the Queue
> This means: Writing blocks the Queue for reading, and readers might even be 
> starved which makes the situation even worse.
> -
> The setup is:
>  - 3-node cluster
>  - replication factor 2
>  - Consistency LOCAL_ONE
>  - No remote DC's
>  - high write throughput (10 INSERT statements per second and more during 
> peak times).
>  



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (CASSANDRA-13265) Expiration in OutboundTcpConnection can block the reader Thread

2017-03-10 Thread Christian Esken (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13265?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15905409#comment-15905409
 ] 

Christian Esken commented on CASSANDRA-13265:
-

I committed the change containing the configuration, as I would like some 
feedback whether I am on the right path. Please note that I did not yet have 
time for tests (planned for next Monday), but I thought it is better to give a 
chance to review the current changes.

Please note that I had to add back "AtomicBoolean backlogExpirationActive" 
Otherwise I cannot guarantee that only a single Thread is iterating the Queue, 
especially if a small expiration interval (1ms, or 0ms) is configured. The 
"AtomicLong backlogNextExpirationTime" could now be "volatile long".

> Expiration in OutboundTcpConnection can block the reader Thread
> ---
>
> Key: CASSANDRA-13265
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13265
> Project: Cassandra
>  Issue Type: Bug
> Environment: Cassandra 3.0.9
> Java HotSpot(TM) 64-Bit Server VM version 25.112-b15 (Java version 
> 1.8.0_112-b15)
> Linux 3.16
>Reporter: Christian Esken
>Assignee: Christian Esken
> Attachments: cassandra.pb-cache4-dus.2017-02-17-19-36-26.chist.xz, 
> cassandra.pb-cache4-dus.2017-02-17-19-36-26.td.xz
>
>
> I observed that sometimes a single node in a Cassandra cluster fails to 
> communicate to the other nodes. This can happen at any time, during peak load 
> or low load. Restarting that single node from the cluster fixes the issue.
> Before going in to details, I want to state that I have analyzed the 
> situation and am already developing a possible fix. Here is the analysis so 
> far:
> - A Threaddump in this situation showed  324 Threads in the 
> OutboundTcpConnection class that want to lock the backlog queue for doing 
> expiration.
> - A class histogram shows 262508 instances of 
> OutboundTcpConnection$QueuedMessage.
> What is the effect of it? As soon as the Cassandra node has reached a certain 
> amount of queued messages, it starts thrashing itself to death. Each of the 
> Thread fully locks the Queue for reading and writing by calling 
> iterator.next(), making the situation worse and worse.
> - Writing: Only after 262508 locking operation it can progress with actually 
> writing to the Queue.
> - Reading: Is also blocked, as 324 Threads try to do iterator.next(), and 
> fully lock the Queue
> This means: Writing blocks the Queue for reading, and readers might even be 
> starved which makes the situation even worse.
> -
> The setup is:
>  - 3-node cluster
>  - replication factor 2
>  - Consistency LOCAL_ONE
>  - No remote DC's
>  - high write throughput (10 INSERT statements per second and more during 
> peak times).
>  



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (CASSANDRA-13269) Snapshot support for custom secondary indices

2017-03-10 Thread Michael Shuler (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13269?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Shuler updated CASSANDRA-13269:
---
Fix Version/s: (was: 3.0.12)
   (was: 3.11.0)
   3.11.x
   3.0.x

> Snapshot support for custom secondary indices
> -
>
> Key: CASSANDRA-13269
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13269
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Core
>Reporter: vincent royer
>Priority: Trivial
>  Labels: features
> Fix For: 3.0.x, 3.11.x
>
> Attachments: 0001-CASSANDRA-13269-custom-indices-snapshot.patch
>
>
> Enhance the index API to support snapshot of custom secondary indices.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (CASSANDRA-13270) Add function hooks to deliver Elasticsearch as a Cassandra plugin

2017-03-10 Thread Michael Shuler (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13270?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Shuler updated CASSANDRA-13270:
---
Fix Version/s: (was: 3.0.12)
   (was: 3.11.0)
   4.x

> Add function hooks to deliver Elasticsearch as a Cassandra plugin
> -
>
> Key: CASSANDRA-13270
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13270
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Core
>Reporter: vincent royer
>Priority: Minor
>  Labels: features
> Fix For: 4.x
>
> Attachments: 0001-CASSANDRA-13270-elasticsearch-as-a-plugin.patch
>
>   Original Estimate: 1h
>  Remaining Estimate: 1h
>
> With these basic modifications (see the patch) and the following tickets, the 
> Elassandra project (see https://github.com/strapdata/elassandra) could be an 
> Elasticsearch plugin for Cassandra.
> * CASSANDRA-12837 Add multi-threaded support to nodetool rebuild_index.
> * CASSANDRA-13267 Add CQL functions.
> * CASSANDRA-13268 Allow to create custom secondary index on static columns.
> * CASSANDRA-13269 Snapshot support for custom secondary indices



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (CASSANDRA-13197) +=/-= shortcut syntax bugs/inconsistencies

2017-03-10 Thread Alex Petrov (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13197?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15905326#comment-15905326
 ] 

Alex Petrov commented on CASSANDRA-13197:
-

I've started working on a patch. For literal values it's quite simple to 
provide some sensible error messages. For non-literal values and prepared 
statements it's going to be much more complicated, and I do not think it 
actually makes sense to implement on the statement level, since CQL is 
dynamically typed and we resolve the types only at the last moment, right 
before evaluation. We might want to tackle more complex scenarios in the future 
though.

> +=/-= shortcut syntax bugs/inconsistencies
> --
>
> Key: CASSANDRA-13197
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13197
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Kishan Karunaratne
>Assignee: Alex Petrov
>
> CASSANDRA-12232 introduced (+=/-=) shortcuts for counters and collection 
> types. I ran into some bugs/inconsistencies.
> Given the schema:
> {noformat}
> CREATE TABLE simplex.collection_table (k int PRIMARY KEY, d_l List, d_s 
> Set, d_m Map, d_t Tuple);
> {noformat}
> 1) Using -= on a list column removes all elements that match the value, 
> instead of the first or last occurrence of it. Is this expected?
> {noformat}
> Given d_l = [0, 1, 2, 1, 1]
> UPDATE collection_table SET d_l -= [1] WHERE k=0;
> yields 
> [0, 2]
> {noformat}
> 2) I can't seem to remove a map key/value pair:
> {noformat}
> Given d_m = {0: 0, 1: 1}
> UPDATE collection_table SET d_m -= {1:1} WHERE k=0;
> yields
> Invalid map literal for d_m of type frozen
> {noformat}
> However {noformat}UPDATE collection_table SET d_m -= {1} WHERE k=0;{noformat} 
> does work.
> 3) Tuples are immutable so it make sense that +=/-= doesn't apply. However 
> the error message could be better, now that other collection types are 
> allowed:
> {noformat}
> UPDATE collection_table SET d_t += (1) WHERE k=0;
> yields
> Invalid operation (d_t = d_t + (1)) for non counter column d_t
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


svn commit: r1786374 - in /cassandra/site: publish/download/index.html src/_data/releases.yaml src/download.md

2017-03-10 Thread mshuler
Author: mshuler
Date: Fri Mar 10 16:03:01 2017
New Revision: 1786374

URL: http://svn.apache.org/viewvc?rev=1786374=rev
Log:
Update site for 3.0.12 release

Modified:
cassandra/site/publish/download/index.html
cassandra/site/src/_data/releases.yaml
cassandra/site/src/download.md

Modified: cassandra/site/publish/download/index.html
URL: 
http://svn.apache.org/viewvc/cassandra/site/publish/download/index.html?rev=1786374=1786373=1786374=diff
==
--- cassandra/site/publish/download/index.html (original)
+++ cassandra/site/publish/download/index.html Fri Mar 10 16:03:01 2017
@@ -110,7 +110,7 @@ released against the most recent bug fix
 The following older Cassandra releases are still supported:
 
 
-  Apache Cassandra 3.0 is supported until 6 months after 4.0 
release (date TBD). The latest release is http://www.apache.org/dyn/closer.lua/cassandra/3.0.11/apache-cassandra-3.0.11-bin.tar.gz;>3.0.11
 (http://www.apache.org/dist/cassandra/3.0.11/apache-cassandra-3.0.11-bin.tar.gz.asc;>pgp,
 http://www.apache.org/dist/cassandra/3.0.11/apache-cassandra-3.0.11-bin.tar.gz.md5;>md5
 and http://www.apache.org/dist/cassandra/3.0.11/apache-cassandra-3.0.11-bin.tar.gz.sha1;>sha1),
 released on 2017-02-21.
+  Apache Cassandra 3.0 is supported until 6 months after 4.0 
release (date TBD). The latest release is http://www.apache.org/dyn/closer.lua/cassandra/3.0.12/apache-cassandra-3.0.12-bin.tar.gz;>3.0.12
 (http://www.apache.org/dist/cassandra/3.0.12/apache-cassandra-3.0.12-bin.tar.gz.asc;>pgp,
 http://www.apache.org/dist/cassandra/3.0.12/apache-cassandra-3.0.12-bin.tar.gz.md5;>md5
 and http://www.apache.org/dist/cassandra/3.0.12/apache-cassandra-3.0.12-bin.tar.gz.sha1;>sha1),
 released on 2017-03-10.
   Apache Cassandra 2.2 is supported until 4.0 release (date 
TBD). The latest release is http://www.apache.org/dyn/closer.lua/cassandra/2.2.9/apache-cassandra-2.2.9-bin.tar.gz;>2.2.9
 (http://www.apache.org/dist/cassandra/2.2.9/apache-cassandra-2.2.9-bin.tar.gz.asc;>pgp,
 http://www.apache.org/dist/cassandra/2.2.9/apache-cassandra-2.2.9-bin.tar.gz.md5;>md5
 and http://www.apache.org/dist/cassandra/2.2.9/apache-cassandra-2.2.9-bin.tar.gz.sha1;>sha1),
 released on 2017-02-21.
   Apache Cassandra 2.1 is supported until 4.0 release (date 
TBD) with critical fixes only. The latest release is
 http://www.apache.org/dyn/closer.lua/cassandra/2.1.17/apache-cassandra-2.1.17-bin.tar.gz;>2.1.17
 (http://www.apache.org/dist/cassandra/2.1.17/apache-cassandra-2.1.17-bin.tar.gz.asc;>pgp,
 http://www.apache.org/dist/cassandra/2.1.17/apache-cassandra-2.1.17-bin.tar.gz.md5;>md5
 and http://www.apache.org/dist/cassandra/2.1.17/apache-cassandra-2.1.17-bin.tar.gz.sha1;>sha1),
 released on 2017-02-21.

Modified: cassandra/site/src/_data/releases.yaml
URL: 
http://svn.apache.org/viewvc/cassandra/site/src/_data/releases.yaml?rev=1786374=1786373=1786374=diff
==
--- cassandra/site/src/_data/releases.yaml (original)
+++ cassandra/site/src/_data/releases.yaml Fri Mar 10 16:03:01 2017
@@ -7,8 +7,8 @@ latest:
 #  date: 2016-04-01
 
 "3.0":
-  name: 3.0.11
-  date: 2017-02-21
+  name: 3.0.12
+  date: 2017-03-10
 
 "2.2":
   name: 2.2.9

Modified: cassandra/site/src/download.md
URL: 
http://svn.apache.org/viewvc/cassandra/site/src/download.md?rev=1786374=1786373=1786374=diff
==
--- cassandra/site/src/download.md (original)
+++ cassandra/site/src/download.md Fri Mar 10 16:03:01 2017
@@ -35,10 +35,10 @@ Older (unsupported) versions of Cassandr
 * For older pre-tick-tock releases, the `` is the major 
version number, without dot, and with an
   appended `x`. So currently it can one of `21x`, `22x` or `30x`.
 
-* Add the Apache repository of Cassandra to 
`/etc/apt/sources.list.d/cassandra.sources.list`, for example for version 3.9:
+* Add the Apache repository of Cassandra to 
`/etc/apt/sources.list.d/cassandra.sources.list`, for example for version 3.10:
 
 ```
-echo "deb http://www.apache.org/dist/cassandra/debian 39x main" | sudo tee -a 
/etc/apt/sources.list.d/cassandra.sources.list
+echo "deb http://www.apache.org/dist/cassandra/debian 310x main" | sudo tee -a 
/etc/apt/sources.list.d/cassandra.sources.list
 ```
 
 * Add the Apache Cassandra repository keys:
@@ -56,7 +56,7 @@ sudo apt-get update
 * If you encounter this error:
 
 ```
-GPG error: http://www.apache.org 39x InRelease: The following signatures 
couldn't be verified because the public key is not available: NO_PUBKEY 
A278B781FE4B2BDA
+GPG error: http://www.apache.org 310x InRelease: The following signatures 
couldn't be verified because the public key is not available: NO_PUBKEY 
A278B781FE4B2BDA
 ```
 Then add the public key A278B781FE4B2BDA as follows:
 




[jira] [Comment Edited] (CASSANDRA-13265) Expiration in OutboundTcpConnection can block the reader Thread

2017-03-10 Thread Christian Esken (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13265?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15904949#comment-15904949
 ] 

Christian Esken edited comment on CASSANDRA-13265 at 3/10/17 3:48 PM:
--

I am nearly done with the configuration, and have two questions about it:

1.  How to handle the default value? My approach is to pre-configure the 
default value in Config:
{code}
public static final int otc_backlog_expiration_interval_in_ms_default = 200;
public volatile Integer otc_backlog_expiration_interval_in_ms = 
otc_backlog_expiration_interval_in_ms_default;
{code}

Additionally I will handle null values, that might have been set via JMX in the 
getter of DatabaseDescriptor:
{code}
public static Integer getOtcBacklogExpirationInterval()
{
Integer confValue = conf.otc_backlog_expiration_interval_in_ms;
return confValue != null ? confValue : 
Config.otc_backlog_expiration_interval_in_ms_default;
}
{code}
Is that OK? Should I also handle other illegal values in that getter (negative 
values), or reject them in the setter?  I have not found a  code example in 
Cassandra that handles bad values uniformly for MBean and Config.

2. How to read the config value? I am seeing some 
{{Integer.getInteger(propName, defaultValue)}}, but this looks strange to me. I 
think changes from JMX would not even be reflected. Thus I am calling the 
getter from above: {{DatabaseDescriptor.getOtcBacklogExpirationInterval()}}. Is 
the latter OK?
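
For question 1, one option would be to reject bad values at the setter (JMX) 
boundary and keep the null handling in the getter as above; a sketch (the 
method name is just a guess):
{code}
public static void setOtcBacklogExpirationInterval(int intervalInMs)
{
    // reject obviously bad values where they are set (JMX / programmatic),
    // so the getter only has to deal with null (unset)
    if (intervalInMs < 0)
        throw new IllegalArgumentException(
            "otc_backlog_expiration_interval_in_ms must be >= 0, got " + intervalInMs);
    conf.otc_backlog_expiration_interval_in_ms = intervalInMs;
}
{code}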



was (Author: cesken):
I am nearly done with the configuration, and have two questions about it:

1.  How to handle the default value? My approach is to pre-configure the 
default value in Config:
{code}
public static final int otc_backlog_expiration_interval_in_ms_default = 200;
public volatile Integer otc_backlog_expiration_interval_in_ms = 
otc_backlog_expiration_interval_in_ms_default;
{code}

Additionally I will handle null values, that might have been set via JMX in the 
getter of DatabaseDescriptor:
{code}
public static Integer getOtcBacklogExpirationInterval()
{
Integer confValue = conf.otc_backlog_expiration_interval_in_ms;
return confValue != null ? confValue : 
Config.otc_backlog_expiration_interval_in_ms_default;
}
{code}

2. How to read the config value? I am seeing some 
{{Integer.getInteger(propName, defaultValue)}}, but this looks strange to me. I 
think changes from JMX would not even be reflected. Thus I am calling the 
getter from above: {{DatabaseDescriptor.getOtcBacklogExpirationInterval()}}. Is 
the latter OK?


> Expiration in OutboundTcpConnection can block the reader Thread
> ---
>
> Key: CASSANDRA-13265
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13265
> Project: Cassandra
>  Issue Type: Bug
> Environment: Cassandra 3.0.9
> Java HotSpot(TM) 64-Bit Server VM version 25.112-b15 (Java version 
> 1.8.0_112-b15)
> Linux 3.16
>Reporter: Christian Esken
>Assignee: Christian Esken
> Attachments: cassandra.pb-cache4-dus.2017-02-17-19-36-26.chist.xz, 
> cassandra.pb-cache4-dus.2017-02-17-19-36-26.td.xz
>
>
> I observed that sometimes a single node in a Cassandra cluster fails to 
> communicate to the other nodes. This can happen at any time, during peak load 
> or low load. Restarting that single node from the cluster fixes the issue.
> Before going in to details, I want to state that I have analyzed the 
> situation and am already developing a possible fix. Here is the analysis so 
> far:
> - A Threaddump in this situation showed  324 Threads in the 
> OutboundTcpConnection class that want to lock the backlog queue for doing 
> expiration.
> - A class histogram shows 262508 instances of 
> OutboundTcpConnection$QueuedMessage.
> What is the effect of it? As soon as the Cassandra node has reached a certain 
> amount of queued messages, it starts thrashing itself to death. Each of the 
> Thread fully locks the Queue for reading and writing by calling 
> iterator.next(), making the situation worse and worse.
> - Writing: Only after 262508 locking operation it can progress with actually 
> writing to the Queue.
> - Reading: Is also blocked, as 324 Threads try to do iterator.next(), and 
> fully lock the Queue
> This means: Writing blocks the Queue for reading, and readers might even be 
> starved which makes the situation even worse.
> -
> The setup is:
>  - 3-node cluster
>  - replication factor 2
>  - Consistency LOCAL_ONE
>  - No remote DC's
>  - high write throughput (10 INSERT statements per second and more during 
> peak times).
>  



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Comment Edited] (CASSANDRA-13265) Expiration in OutboundTcpConnection can block the reader Thread

2017-03-10 Thread Christian Esken (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13265?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15904949#comment-15904949
 ] 

Christian Esken edited comment on CASSANDRA-13265 at 3/10/17 3:42 PM:
--

I am nearly done with the configuration, and have two questions about it:

1.  How to handle the default value? My approach is to pre-configure the 
default value in Config:
{code}
public static final int otc_backlog_expiration_interval_in_ms_default = 200;
public volatile Integer otc_backlog_expiration_interval_in_ms = 
otc_backlog_expiration_interval_in_ms_default;
{code}

Additionally I will handle null values, that might have been set via JMX in the 
getter of DatabaseDescriptor:
{code}
public static Integer getOtcBacklogExpirationInterval()
{
Integer confValue = conf.otc_backlog_expiration_interval_in_ms;
return confValue != null ? confValue : 
Config.otc_backlog_expiration_interval_in_ms_default;
}
{code}

2. How to read the config value? I am seeing some 
{{Integer.getInteger(propName, defaultValue)}}, but this looks strange to me. I 
think changes from JMX would not even be reflected. Thus I am calling the 
getter from above: {{DatabaseDescriptor.getOtcBacklogExpirationInterval()}}. Is 
the latter OK?



was (Author: cesken):
I am nearly done with the configuration, and have two questions about it:

1.  How to handle the default value? My approach is to pre-configure the 
default value in Config:
{code}
public static final int otc_backlog_expiration_interval_in_ms_default = 200;
public volatile Integer otc_backlog_expiration_interval_in_ms = 
otc_backlog_expiration_interval_in_ms_default;
{code}

Additionally I will handle null values, that might have been set via JMX in the 
getter of DatabaseDescriptor:
{code}
public static Integer getOtcBacklogExpirationInterval()
{
Integer confValue = conf.otc_backlog_expiration_interval_in_ms;
return confValue != null ? confValue : 
Config.otc_backlog_expiration_interval_in_ms_default;
}
{code}

2. How to read the config value? I am seeing some 
{{Integer.getInteger(propName, defaultValue)}}, but this looks strange to me. I 
think changes from JMX would not even be reflected. Thus I am calling the 
getter from above: {{DatabaseDescriptor.getOtcBacklogExpirationInterval()}}. Is 
hte latter OK?


> Expiration in OutboundTcpConnection can block the reader Thread
> ---
>
> Key: CASSANDRA-13265
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13265
> Project: Cassandra
>  Issue Type: Bug
> Environment: Cassandra 3.0.9
> Java HotSpot(TM) 64-Bit Server VM version 25.112-b15 (Java version 
> 1.8.0_112-b15)
> Linux 3.16
>Reporter: Christian Esken
>Assignee: Christian Esken
> Attachments: cassandra.pb-cache4-dus.2017-02-17-19-36-26.chist.xz, 
> cassandra.pb-cache4-dus.2017-02-17-19-36-26.td.xz
>
>
> I observed that sometimes a single node in a Cassandra cluster fails to 
> communicate to the other nodes. This can happen at any time, during peak load 
> or low load. Restarting that single node from the cluster fixes the issue.
> Before going in to details, I want to state that I have analyzed the 
> situation and am already developing a possible fix. Here is the analysis so 
> far:
> - A Threaddump in this situation showed  324 Threads in the 
> OutboundTcpConnection class that want to lock the backlog queue for doing 
> expiration.
> - A class histogram shows 262508 instances of 
> OutboundTcpConnection$QueuedMessage.
> What is the effect of it? As soon as the Cassandra node has reached a certain 
> amount of queued messages, it starts thrashing itself to death. Each of the 
> Thread fully locks the Queue for reading and writing by calling 
> iterator.next(), making the situation worse and worse.
> - Writing: Only after 262508 locking operation it can progress with actually 
> writing to the Queue.
> - Reading: Is also blocked, as 324 Threads try to do iterator.next(), and 
> fully lock the Queue
> This means: Writing blocks the Queue for reading, and readers might even be 
> starved which makes the situation even worse.
> -
> The setup is:
>  - 3-node cluster
>  - replication factor 2
>  - Consistency LOCAL_ONE
>  - No remote DC's
>  - high write throughput (10 INSERT statements per second and more during 
> peak times).
>  



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (CASSANDRA-13322) testall failure in org.apache.cassandra.io.compress.CompressedRandomAccessReaderTest.testDataCorruptionDetection-compression

2017-03-10 Thread Sean McCarthy (JIRA)
Sean McCarthy created CASSANDRA-13322:
-

 Summary: testall failure in 
org.apache.cassandra.io.compress.CompressedRandomAccessReaderTest.testDataCorruptionDetection-compression
 Key: CASSANDRA-13322
 URL: https://issues.apache.org/jira/browse/CASSANDRA-13322
 Project: Cassandra
  Issue Type: Bug
  Components: Testing
Reporter: Sean McCarthy


example failure:

http://cassci.datastax.com/job/cassandra-2.2_testall/658/testReport/org.apache.cassandra.io.compress/CompressedRandomAccessReaderTest/testDataCorruptionDetection_compression

{code}
Stacktrace

junit.framework.AssertionFailedError: 
at 
org.apache.cassandra.io.compress.CompressedRandomAccessReaderTest.testDataCorruptionDetection(CompressedRandomAccessReaderTest.java:218)
{code}{code}
Standard Output

WARN  10:58:45 open(null, O_RDONLY) failed, errno (14).
WARN  10:58:45 open(null, O_RDONLY) failed, errno (14).
WARN  10:58:45 open(null, O_RDONLY) failed, errno (14).
WARN  10:58:45 open(null, O_RDONLY) failed, errno (14).
{code}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


svn commit: r18668 - in /release/cassandra: 3.0.12/ debian/dists/30x/ debian/dists/30x/main/binary-amd64/ debian/dists/30x/main/binary-i386/ debian/dists/30x/main/source/ debian/pool/main/c/cassandra/

2017-03-10 Thread mshuler
Author: mshuler
Date: Fri Mar 10 15:22:55 2017
New Revision: 18668

Log:
Release Apache Cassandra 3.0.12

Added:
release/cassandra/3.0.12/
release/cassandra/3.0.12/apache-cassandra-3.0.12-bin.tar.gz   (with props)
release/cassandra/3.0.12/apache-cassandra-3.0.12-bin.tar.gz.asc
release/cassandra/3.0.12/apache-cassandra-3.0.12-bin.tar.gz.asc.md5
release/cassandra/3.0.12/apache-cassandra-3.0.12-bin.tar.gz.asc.sha1
release/cassandra/3.0.12/apache-cassandra-3.0.12-bin.tar.gz.md5
release/cassandra/3.0.12/apache-cassandra-3.0.12-bin.tar.gz.sha1
release/cassandra/3.0.12/apache-cassandra-3.0.12-src.tar.gz   (with props)
release/cassandra/3.0.12/apache-cassandra-3.0.12-src.tar.gz.asc
release/cassandra/3.0.12/apache-cassandra-3.0.12-src.tar.gz.asc.md5
release/cassandra/3.0.12/apache-cassandra-3.0.12-src.tar.gz.asc.sha1
release/cassandra/3.0.12/apache-cassandra-3.0.12-src.tar.gz.md5
release/cassandra/3.0.12/apache-cassandra-3.0.12-src.tar.gz.sha1

release/cassandra/debian/pool/main/c/cassandra/cassandra-tools_3.0.12_all.deb   
(with props)
release/cassandra/debian/pool/main/c/cassandra/cassandra_3.0.12.diff.gz   
(with props)
release/cassandra/debian/pool/main/c/cassandra/cassandra_3.0.12.dsc
release/cassandra/debian/pool/main/c/cassandra/cassandra_3.0.12.orig.tar.gz 
  (with props)

release/cassandra/debian/pool/main/c/cassandra/cassandra_3.0.12.orig.tar.gz.asc
release/cassandra/debian/pool/main/c/cassandra/cassandra_3.0.12_all.deb   
(with props)
Modified:
release/cassandra/debian/dists/30x/InRelease
release/cassandra/debian/dists/30x/Release
release/cassandra/debian/dists/30x/Release.gpg
release/cassandra/debian/dists/30x/main/binary-amd64/Packages
release/cassandra/debian/dists/30x/main/binary-amd64/Packages.gz
release/cassandra/debian/dists/30x/main/binary-i386/Packages
release/cassandra/debian/dists/30x/main/binary-i386/Packages.gz
release/cassandra/debian/dists/30x/main/source/Sources.gz

Added: release/cassandra/3.0.12/apache-cassandra-3.0.12-bin.tar.gz
==
Binary file - no diff available.

Propchange: release/cassandra/3.0.12/apache-cassandra-3.0.12-bin.tar.gz
--
svn:mime-type = application/octet-stream

Added: release/cassandra/3.0.12/apache-cassandra-3.0.12-bin.tar.gz.asc
==
--- release/cassandra/3.0.12/apache-cassandra-3.0.12-bin.tar.gz.asc (added)
+++ release/cassandra/3.0.12/apache-cassandra-3.0.12-bin.tar.gz.asc Fri Mar 10 
15:22:55 2017
@@ -0,0 +1,17 @@
+-BEGIN PGP SIGNATURE-
+Version: GnuPG v1
+
+iQIcBAABCAAGBQJYvtkOAAoJEKJ4t4H+SyvaW60QAIh9Cf+ONtkEuUYKKuBeXGfd
+pgqMExZJF61rT/hcTelToNNY3yioF1K1X8G1wY1xQm5k0Thjec6k6uzTidODiGt1
+ToD6bhqdgThUkXibQDNrCNm/8LDQdu34wzryPlLncjywcCG1YNH/5fzadStocVuB
+esR8+fQDkFLPqtQZ+MQKrZo4esvV8HRo4SF+XW5jyaDbzIiiaeqvmvSafYuX1MDh
+3xQS4nr1v24ZNcQvxSWBxukMpwEjLFN9mmQA+icdjDVM0ePxjQGkjNasXOzHsm6M
+vqSuqi6tux2QuJN9Qblod9h+dhvoHr3az/qthpFJtUAnyj19PziD9tT2Iu0Ichb0
+yfAHG5jwfqYtS9mCbv1qmRtqGLpdriAge2VmWokkVHOTHEFGdUDHPg7u4vQEJIeR
+2KsHKiwNdHft68wTBdOhZ+xKE06K9KkT8bWRUrWm6sqDvuvEyneKna/pjw0a9kq2
+gMXhLckqyVPS6a1FDULGweJbww0E6PuZVD09/PGqegdM3eAPglJgeRdfWweUVPZg
+b1TtRACDObDFovXzCsTxWqHwNOyqSMzd1Jc9FyUfm1raNfioqzn7E+IfyirFgcoM
+fo4/cULnahXpw1HkqIJ+HlbQr4iPVi7CbZAIgZvH+Asd1iBtYZasuxaDJGrU2xfW
+irH1NTJQG29YEVhhtj8R
+=GpNW
+-END PGP SIGNATURE-

Added: release/cassandra/3.0.12/apache-cassandra-3.0.12-bin.tar.gz.asc.md5
==
--- release/cassandra/3.0.12/apache-cassandra-3.0.12-bin.tar.gz.asc.md5 (added)
+++ release/cassandra/3.0.12/apache-cassandra-3.0.12-bin.tar.gz.asc.md5 Fri Mar 
10 15:22:55 2017
@@ -0,0 +1 @@
+790576da89965e9d4003d57a1f28d861
\ No newline at end of file

Added: release/cassandra/3.0.12/apache-cassandra-3.0.12-bin.tar.gz.asc.sha1
==
--- release/cassandra/3.0.12/apache-cassandra-3.0.12-bin.tar.gz.asc.sha1 (added)
+++ release/cassandra/3.0.12/apache-cassandra-3.0.12-bin.tar.gz.asc.sha1 Fri 
Mar 10 15:22:55 2017
@@ -0,0 +1 @@
+2a92b2ea8be22043bd5986d7e03cd3be3f9d5b12
\ No newline at end of file

Added: release/cassandra/3.0.12/apache-cassandra-3.0.12-bin.tar.gz.md5
==
--- release/cassandra/3.0.12/apache-cassandra-3.0.12-bin.tar.gz.md5 (added)
+++ release/cassandra/3.0.12/apache-cassandra-3.0.12-bin.tar.gz.md5 Fri Mar 10 
15:22:55 2017
@@ -0,0 +1 @@
+71ebbfdae273a59ca202c4019e1f74a7
\ No newline at end of file

Added: release/cassandra/3.0.12/apache-cassandra-3.0.12-bin.tar.gz.sha1
==
--- 

[cassandra] Git Push Summary

2017-03-10 Thread mshuler
Repository: cassandra
Updated Tags:  refs/tags/cassandra-3.0.12 [created] 7056a42ae


[cassandra] Git Push Summary

2017-03-10 Thread mshuler
Repository: cassandra
Updated Tags:  refs/tags/3.0.12-tentative [deleted] 50560aaf0


[jira] [Updated] (CASSANDRA-13321) Add a checksum component for the sstable metadata (-Statistics.db) file

2017-03-10 Thread Marcus Eriksson (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13321?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marcus Eriksson updated CASSANDRA-13321:

Status: Open  (was: Patch Available)

> Add a checksum component for the sstable metadata (-Statistics.db) file
> ---
>
> Key: CASSANDRA-13321
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13321
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Marcus Eriksson
>Assignee: Marcus Eriksson
> Fix For: 4.x
>
>
> Since we keep important information in the sstable metadata file now, we 
> should add a checksum component for it. One danger being if a bit gets 
> flipped in repairedAt we could consider the sstable repaired when it is not.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (CASSANDRA-13216) testall failure in org.apache.cassandra.net.MessagingServiceTest.testDroppedMessages

2017-03-10 Thread Jason Brown (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13216?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15905190#comment-15905190
 ] 

Jason Brown commented on CASSANDRA-13216:
-

OK, looked it over, and I'm +1, as well.

> testall failure in 
> org.apache.cassandra.net.MessagingServiceTest.testDroppedMessages
> 
>
> Key: CASSANDRA-13216
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13216
> Project: Cassandra
>  Issue Type: Bug
>  Components: Testing
>Reporter: Sean McCarthy
>Assignee: Alex Petrov
>  Labels: test-failure, testall
> Attachments: TEST-org.apache.cassandra.net.MessagingServiceTest.log
>
>
> example failure:
> http://cassci.datastax.com/job/cassandra-3.11_testall/81/testReport/org.apache.cassandra.net/MessagingServiceTest/testDroppedMessages
> {code}
> Error Message
> expected:<... dropped latency: 27[30 ms and Mean cross-node dropped latency: 
> 2731] ms> but was:<... dropped latency: 27[28 ms and Mean cross-node dropped 
> latency: 2730] ms>
> {code}{code}
> Stacktrace
> junit.framework.AssertionFailedError: expected:<... dropped latency: 27[30 ms 
> and Mean cross-node dropped latency: 2731] ms> but was:<... dropped latency: 
> 27[28 ms and Mean cross-node dropped latency: 2730] ms>
>   at 
> org.apache.cassandra.net.MessagingServiceTest.testDroppedMessages(MessagingServiceTest.java:83)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (CASSANDRA-13321) Add a checksum component for the sstable metadata (-Statistics.db) file

2017-03-10 Thread Marcus Eriksson (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13321?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15905186#comment-15905186
 ] 

Marcus Eriksson commented on CASSANDRA-13321:
-

Good points; I fixed the tests and the method rename in my last commit.

I also agree that we need to do something about the double renaming and will work 
on this. My current plan is to add a "version" number to the sstable metadata file, 
so that we can atomically switch which file we are using and remove the old files 
afterwards. If old files are still on disk, that is no problem, since we only ever 
use the latest version.
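A minimal sketch of that idea (hypothetical class and file names, not the actual patch):

{code}
import java.io.File;
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.StandardOpenOption;
import java.util.concurrent.atomic.AtomicInteger;

// Sketch of the versioned metadata file idea: write the new Statistics file under the
// next version number, make it durable, switch to it, and clean up older versions lazily.
public class VersionedMetadataWriter
{
    private final AtomicInteger currentVersion = new AtomicInteger(0);

    public File rewrite(File directory, ByteBuffer serializedMetadata) throws IOException
    {
        int next = currentVersion.get() + 1;
        File candidate = new File(directory, "Statistics-" + next + ".db");
        try (FileChannel channel = FileChannel.open(candidate.toPath(),
                                                    StandardOpenOption.CREATE_NEW,
                                                    StandardOpenOption.WRITE))
        {
            channel.write(serializedMetadata.duplicate());
            channel.force(true); // the new version must be durable before we switch to it
        }
        currentVersion.set(next); // readers always pick the highest version present on disk
        // older Statistics-*.db files can now be deleted; leftovers are harmless since
        // only the latest version is ever read
        return candidate;
    }
}
{code}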

> Add a checksum component for the sstable metadata (-Statistics.db) file
> ---
>
> Key: CASSANDRA-13321
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13321
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Marcus Eriksson
>Assignee: Marcus Eriksson
> Fix For: 4.x
>
>
> Since we keep important information in the sstable metadata file now, we 
> should add a checksum component for it. One danger is that if a bit gets 
> flipped in repairedAt, we could consider the sstable repaired when it is not.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (CASSANDRA-13216) testall failure in org.apache.cassandra.net.MessagingServiceTest.testDroppedMessages

2017-03-10 Thread Jason Brown (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13216?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15905182#comment-15905182
 ] 

Jason Brown commented on CASSANDRA-13216:
-

[~mkjellman] Looks like you are reviewing it :P. I'll give it a look over as 
well.

> testall failure in 
> org.apache.cassandra.net.MessagingServiceTest.testDroppedMessages
> 
>
> Key: CASSANDRA-13216
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13216
> Project: Cassandra
>  Issue Type: Bug
>  Components: Testing
>Reporter: Sean McCarthy
>Assignee: Alex Petrov
>  Labels: test-failure, testall
> Attachments: TEST-org.apache.cassandra.net.MessagingServiceTest.log
>
>
> example failure:
> http://cassci.datastax.com/job/cassandra-3.11_testall/81/testReport/org.apache.cassandra.net/MessagingServiceTest/testDroppedMessages
> {code}
> Error Message
> expected:<... dropped latency: 27[30 ms and Mean cross-node dropped latency: 
> 2731] ms> but was:<... dropped latency: 27[28 ms and Mean cross-node dropped 
> latency: 2730] ms>
> {code}{code}
> Stacktrace
> junit.framework.AssertionFailedError: expected:<... dropped latency: 27[30 ms 
> and Mean cross-node dropped latency: 2731] ms> but was:<... dropped latency: 
> 27[28 ms and Mean cross-node dropped latency: 2730] ms>
>   at 
> org.apache.cassandra.net.MessagingServiceTest.testDroppedMessages(MessagingServiceTest.java:83)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (CASSANDRA-13020) Hint delivery fails when prefer_local enabled

2017-03-10 Thread Stefan Podkowinski (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13020?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stefan Podkowinski updated CASSANDRA-13020:
---
Since Version: 3.0.0 rc1
  Summary: Hint delivery fails when prefer_local enabled  (was: Stuck 
in LEAVING state (Transferring all hints to null))

> Hint delivery fails when prefer_local enabled
> -
>
> Key: CASSANDRA-13020
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13020
> Project: Cassandra
>  Issue Type: Bug
>  Components: Streaming and Messaging
> Environment: v3.0.9
>Reporter: Aleksandr Ivanov
>Assignee: Stefan Podkowinski
>  Labels: decommission, hints
>
> I tried to decommission one node.
> The node sent all data to another node and got stuck in the LEAVING state.
> The log shows an exception in the HintsDispatcher thread.
> Could this be the reason it is stuck in the LEAVING state?
> command output:
> {noformat}
> root@cas-node6:~# time nodetool decommission
> error: null
> -- StackTrace --
> java.lang.NullPointerException
> at 
> java.util.concurrent.ConcurrentHashMap.replaceNode(ConcurrentHashMap.java:1106)
> at 
> java.util.concurrent.ConcurrentHashMap.remove(ConcurrentHashMap.java:1097)
> at 
> org.apache.cassandra.hints.HintsDispatchExecutor$DispatchHintsTask.run(HintsDispatchExecutor.java:203)
> at 
> java.util.stream.ForEachOps$ForEachOp$OfRef.accept(ForEachOps.java:184)
> at 
> java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:193)
> at 
> java.util.concurrent.ConcurrentHashMap$ValueSpliterator.forEachRemaining(ConcurrentHashMap.java:3566)
> at 
> java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:481)
> at 
> java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:471)
> at 
> java.util.stream.ForEachOps$ForEachOp.evaluateSequential(ForEachOps.java:151)
> at 
> java.util.stream.ForEachOps$ForEachOp$OfRef.evaluateSequential(ForEachOps.java:174)
> at 
> java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234)
> at 
> java.util.stream.ReferencePipeline.forEach(ReferencePipeline.java:418)
> at 
> org.apache.cassandra.hints.HintsDispatchExecutor$TransferHintsTask.transfer(HintsDispatchExecutor.java:168)
> at 
> org.apache.cassandra.hints.HintsDispatchExecutor$TransferHintsTask.run(HintsDispatchExecutor.java:141)
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at java.lang.Thread.run(Thread.java:745)
> real    147m7.483s
> user    0m17.388s
> sys     0m1.968s
> {noformat}
> nodetool netstats:
> {noformat}
> root@cas-node6:~# nodetool netstats
> Mode: LEAVING
> Not sending any streams.
> Read Repair Statistics:
> Attempted: 35082
> Mismatch (Blocking): 18
> Mismatch (Background): 0
> Pool Name           Active   Pending   Completed   Dropped
> Large messages      n/a      1         0           0
> Small messages      n/a      0         16109860    112
> Gossip messages     n/a      0         287074      0
> {noformat}
> Log:
> {noformat}
> INFO  [RMI TCP Connection(58)-127.0.0.1] 2016-12-07 12:52:59,467 
> StorageService.java:1170 - LEAVING: sleeping 3 ms for batch processing 
> and pending range setup
> INFO  [RMI TCP Connection(58)-127.0.0.1] 2016-12-07 12:53:39,455 
> StorageService.java:1170 - LEAVING: replaying batch log and streaming data to 
> other nodes
> INFO  [RMI TCP Connection(58)-127.0.0.1] 2016-12-07 12:53:39,910 
> StreamResultFuture.java:87 - [Stream #2cc874c0-bc7c-11e6-b0df-e7f1ecd3dcfb] 
> Executing streaming plan for Unbootstrap
> INFO  [StreamConnectionEstablisher:1] 2016-12-07 12:53:39,911 
> StreamSession.java:239 - [Stream #2cc874c0-bc7c-11e6-b0df-e7f1ecd3dcfb] 
> Starting streaming to /10.10.10.17
> INFO  [StreamConnectionEstablisher:2] 2016-12-07 12:53:39,911 
> StreamSession.java:232 - [Stream #2cc874c0-bc7c-11e6-b0df-e7f1ecd3dcfb] 
> Session does not have any tasks.
> INFO  [StreamConnectionEstablisher:3] 2016-12-07 12:53:39,912 
> StreamSession.java:232 - [Stream #2cc874c0-bc7c-11e6-b0df-e7f1ecd3dcfb] 
> Session does not have any tasks.
> INFO  [StreamConnectionEstablisher:4] 2016-12-07 12:53:39,912 
> StreamSession.java:232 - [Stream #2cc874c0-bc7c-11e6-b0df-e7f1ecd3dcfb] 
> Session does not have any tasks.
> INFO  [RMI TCP Connection(58)-127.0.0.1] 2016-12-07 12:53:39,912 
> StorageService.java:1170 - LEAVING: streaming hints to other nodes
> INFO  

[jira] [Updated] (CASSANDRA-13020) Stuck in LEAVING state (Transferring all hints to null)

2017-03-10 Thread Stefan Podkowinski (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13020?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stefan Podkowinski updated CASSANDRA-13020:
---
Assignee: Stefan Podkowinski
  Status: Patch Available  (was: Open)

||3.0||3.11||trunk||
|[branch|https://github.com/spodkowinski/cassandra/tree/CASSANDRA-13020-3.0]|[branch|https://github.com/spodkowinski/cassandra/tree/CASSANDRA-13020-3.11]|[branch|https://github.com/spodkowinski/cassandra/tree/CASSANDRA-13020-trunk]|
|[dtest|http://cassci.datastax.com/view/Dev/view/spodkowinski/job/spodkowinski-CASSANDRA-13020-3.0-dtest/]|[dtest|http://cassci.datastax.com/view/Dev/view/spodkowinski/job/spodkowinski-CASSANDRA-13020-3.11-dtest/]|[dtest|http://cassci.datastax.com/view/Dev/view/spodkowinski/job/spodkowinski-CASSANDRA-13020-trunk-dtest/]|
|[testall|http://cassci.datastax.com/view/Dev/view/spodkowinski/job/spodkowinski-CASSANDRA-13020-3.0-testall/]|[testall|http://cassci.datastax.com/view/Dev/view/spodkowinski/job/spodkowinski-CASSANDRA-13020-3.11-testall/]|[testall|http://cassci.datastax.com/view/Dev/view/spodkowinski/job/spodkowinski-CASSANDRA-13020-trunk-testall/]|

Anyone up for review?

Assumptions:
* tokenMetadata will only contain public broadcast addresses as keys, so we must 
not use the internal IP for retrieving the nodeID (a minimal sketch of this is below)
* Hints will only be streamed to public addresses in the end anyway
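A rough sketch of what that first assumption means for the host id lookup (hypothetical class and method names, just for illustration, not the patch itself):

{code}
import java.net.InetAddress;
import java.util.Map;
import java.util.UUID;

// Hypothetical illustration: with prefer_local enabled a peer may be reachable via an
// internal address, but tokenMetadata-style maps are keyed by the public broadcast
// address, so the host id lookup for hint transfer must use the broadcast address.
public final class HintTargetResolver
{
    private final Map<InetAddress, UUID> hostIdByBroadcastAddress;

    public HintTargetResolver(Map<InetAddress, UUID> hostIdByBroadcastAddress)
    {
        this.hostIdByBroadcastAddress = hostIdByBroadcastAddress;
    }

    public UUID hostIdFor(InetAddress broadcastAddress)
    {
        // looking this up with the internal (prefer_local) address would return null
        // and later blow up with an NPE in the hints dispatcher
        UUID hostId = hostIdByBroadcastAddress.get(broadcastAddress);
        if (hostId == null)
            throw new IllegalStateException("Unknown endpoint " + broadcastAddress);
        return hostId;
    }
}
{code}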

> Stuck in LEAVING state (Transferring all hints to null)
> ---
>
> Key: CASSANDRA-13020
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13020
> Project: Cassandra
>  Issue Type: Bug
>  Components: Streaming and Messaging
> Environment: v3.0.9
>Reporter: Aleksandr Ivanov
>Assignee: Stefan Podkowinski
>  Labels: decommission, hints
>
> I tried to decommission one node.
> The node sent all data to another node and got stuck in the LEAVING state.
> The log shows an exception in the HintsDispatcher thread.
> Could this be the reason it is stuck in the LEAVING state?
> command output:
> {noformat}
> root@cas-node6:~# time nodetool decommission
> error: null
> -- StackTrace --
> java.lang.NullPointerException
> at 
> java.util.concurrent.ConcurrentHashMap.replaceNode(ConcurrentHashMap.java:1106)
> at 
> java.util.concurrent.ConcurrentHashMap.remove(ConcurrentHashMap.java:1097)
> at 
> org.apache.cassandra.hints.HintsDispatchExecutor$DispatchHintsTask.run(HintsDispatchExecutor.java:203)
> at 
> java.util.stream.ForEachOps$ForEachOp$OfRef.accept(ForEachOps.java:184)
> at 
> java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:193)
> at 
> java.util.concurrent.ConcurrentHashMap$ValueSpliterator.forEachRemaining(ConcurrentHashMap.java:3566)
> at 
> java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:481)
> at 
> java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:471)
> at 
> java.util.stream.ForEachOps$ForEachOp.evaluateSequential(ForEachOps.java:151)
> at 
> java.util.stream.ForEachOps$ForEachOp$OfRef.evaluateSequential(ForEachOps.java:174)
> at 
> java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234)
> at 
> java.util.stream.ReferencePipeline.forEach(ReferencePipeline.java:418)
> at 
> org.apache.cassandra.hints.HintsDispatchExecutor$TransferHintsTask.transfer(HintsDispatchExecutor.java:168)
> at 
> org.apache.cassandra.hints.HintsDispatchExecutor$TransferHintsTask.run(HintsDispatchExecutor.java:141)
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at java.lang.Thread.run(Thread.java:745)
> real    147m7.483s
> user    0m17.388s
> sys     0m1.968s
> {noformat}
> nodetool netstats:
> {noformat}
> root@cas-node6:~# nodetool netstats
> Mode: LEAVING
> Not sending any streams.
> Read Repair Statistics:
> Attempted: 35082
> Mismatch (Blocking): 18
> Mismatch (Background): 0
> Pool Name           Active   Pending   Completed   Dropped
> Large messages      n/a      1         0           0
> Small messages      n/a      0         16109860    112
> Gossip messages     n/a      0         287074      0
> {noformat}
> Log:
> {noformat}
> INFO  [RMI TCP Connection(58)-127.0.0.1] 2016-12-07 12:52:59,467 
> StorageService.java:1170 - LEAVING: sleeping 3 ms for batch processing 
> and pending range setup
> INFO  [RMI TCP Connection(58)-127.0.0.1] 2016-12-07 12:53:39,455 
> StorageService.java:1170 - LEAVING: replaying batch log 

[jira] [Commented] (CASSANDRA-13321) Add a checksum component for the sstable metadata (-Statistics.db) file

2017-03-10 Thread Jason Brown (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13321?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15905094#comment-15905094
 ] 

Jason Brown commented on CASSANDRA-13321:
-

The changes look good on the whole, but there are some failing unit tests.

Minor nits:
- maybe rename {{IMetadataSerializer#writeMetadata()}} to 
{{IMetadataSerializer#serializeWithChecksum()}}, as the difference between 
{{#serialize()}} and {{#writeMetadata()}} was confusing to me. Also, a javadoc 
comment would be a nice addition (rough sketch below).
- I'm not sure it's necessary (or possible), but since two files are being 
modified in {{MetadataSerializer#rewriteSSTableMetadata()}}, would it be possible or 
useful to use a transaction? That way we can guard against only one of the 
files being written or renamed correctly.
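Roughly the shape I have in mind for the rename plus javadoc (a sketch with simplified signatures, not the actual interface):

{code}
import java.io.IOException;
import java.nio.ByteBuffer;

// Sketch only: simplified signatures to show the proposed naming and javadoc split.
interface MetadataSerializerSketch
{
    /**
     * Serializes the metadata components into a buffer; performs no file I/O and
     * writes no checksum.
     */
    ByteBuffer serialize() throws IOException;

    /**
     * Serializes the components, writes the -Statistics.db file and the accompanying
     * checksum component, so readers can detect bit flips (e.g. in repairedAt).
     */
    void serializeWithChecksum() throws IOException;
}
{code}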

> Add a checksum component for the sstable metadata (-Statistics.db) file
> ---
>
> Key: CASSANDRA-13321
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13321
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Marcus Eriksson
>Assignee: Marcus Eriksson
> Fix For: 4.x
>
>
> Since we keep important information in the sstable metadata file now, we 
> should add a checksum component for it. One danger is that if a bit gets 
> flipped in repairedAt, we could consider the sstable repaired when it is not.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (CASSANDRA-13321) Add a checksum component for the sstable metadata (-Statistics.db) file

2017-03-10 Thread Jason Brown (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13321?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Brown updated CASSANDRA-13321:

Reviewer: Jason Brown

> Add a checksum component for the sstable metadata (-Statistics.db) file
> ---
>
> Key: CASSANDRA-13321
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13321
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Marcus Eriksson
>Assignee: Marcus Eriksson
> Fix For: 4.x
>
>
> Since we keep important information in the sstable metadata file now, we 
> should add a checksum component for it. One danger is that if a bit gets 
> flipped in repairedAt, we could consider the sstable repaired when it is not.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (CASSANDRA-12915) SASI: Index intersection with an empty range really inefficient

2017-03-10 Thread Alex Petrov (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12915?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15905037#comment-15905037
 ] 

Alex Petrov commented on CASSANDRA-12915:
-

Committed as 
[2c111d15bb080283b9b98d48fab4bcf4db515b5a|https://github.com/apache/cassandra/commit/2c111d15bb080283b9b98d48fab4bcf4db515b5a]
 to 3.11 and merged up to trunk.

A small side note: this was my very first time committing someone else's code to 
the repository, and I accidentally forgot to {{--amend}} the commit with the author 
name, so the patch got pushed under my credentials. Since we're a big project we 
cannot really amend the history on the primary branches and force-push. I've 
included proper attribution of authorship in the patch comment, but the GitHub 
history will unfortunately still show my email. I'm really sorry about that and 
will take extra care next time. Thank you for understanding.

> SASI: Index intersection with an empty range really inefficient
> ---
>
> Key: CASSANDRA-12915
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12915
> Project: Cassandra
>  Issue Type: Improvement
>  Components: sasi
>Reporter: Corentin Chary
>Assignee: Corentin Chary
> Fix For: 3.11.x, 4.x
>
>
> It looks like RangeIntersectionIterator.java can be pretty inefficient in 
> some cases. Let's take the following query:
> SELECT data FROM table WHERE index1 = 'foo' AND index2 = 'bar';
> In this case:
> * index1 = 'foo' will match 2 items
> * index2 = 'bar' will match ~300k items
> On my setup, the query will take ~1 sec, most of the time being spent in 
> disk.TokenTree.getTokenAt().
> If I patch RangeIntersectionIterator so that it doesn't try to do the 
> intersection (and effectively only uses 'index1'), the query will run in a few 
> tenths of a millisecond.
> I see multiple solutions for that:
> * Add a static threshold to avoid the use of the index for the intersection 
> when we know it will be slow. Probably when the range size factor is very 
> small and the range size is big.
> * CASSANDRA-10765



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (CASSANDRA-12915) SASI: Index intersection with an empty range really inefficient

2017-03-10 Thread Alex Petrov (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12915?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alex Petrov updated CASSANDRA-12915:

Resolution: Fixed
Status: Resolved  (was: Ready to Commit)

> SASI: Index intersection with an empty range really inefficient
> ---
>
> Key: CASSANDRA-12915
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12915
> Project: Cassandra
>  Issue Type: Improvement
>  Components: sasi
>Reporter: Corentin Chary
>Assignee: Corentin Chary
> Fix For: 3.11.x, 4.x
>
>
> It looks like RangeIntersectionIterator.java can be pretty inefficient in 
> some cases. Let's take the following query:
> SELECT data FROM table WHERE index1 = 'foo' AND index2 = 'bar';
> In this case:
> * index1 = 'foo' will match 2 items
> * index2 = 'bar' will match ~300k items
> On my setup, the query will take ~1 sec, most of the time being spent in 
> disk.TokenTree.getTokenAt().
> If I patch RangeIntersectionIterator so that it doesn't try to do the 
> intersection (and effectively only uses 'index1'), the query will run in a few 
> tenths of a millisecond.
> I see multiple solutions for that:
> * Add a static threshold to avoid the use of the index for the intersection 
> when we know it will be slow. Probably when the range size factor is very 
> small and the range size is big.
> * CASSANDRA-10765



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[2/3] cassandra git commit: Improve SASI range iterator efficiency on intersection with an empty range.

2017-03-10 Thread ifesdjeen
Improve SASI range iterator efficiency on intersection with an empty range.

Patch by Corentin Chary; reviewed by Alex Petrov for CASSANDRA-12915.

Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/2c111d15
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/2c111d15
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/2c111d15

Branch: refs/heads/trunk
Commit: 2c111d15bb080283b9b98d48fab4bcf4db515b5a
Parents: 9efa682
Author: Alex Petrov 
Authored: Fri Mar 10 13:28:16 2017 +0100
Committer: Alex Petrov 
Committed: Fri Mar 10 13:31:15 2017 +0100

--
 CHANGES.txt |  1 +
 .../cassandra/index/sasi/TermIterator.java  |  2 +-
 .../index/sasi/plan/QueryController.java|  3 -
 .../sasi/utils/RangeIntersectionIterator.java   |  9 ++-
 .../index/sasi/utils/RangeIterator.java | 79 +++-
 .../index/sasi/utils/RangeUnionIterator.java|  9 ++-
 .../cassandra/index/sasi/SASIIndexTest.java |  6 +-
 .../index/sasi/utils/LongIteratorTest.java  | 56 ++
 .../utils/RangeIntersectionIteratorTest.java| 78 ++-
 .../sasi/utils/RangeUnionIteratorTest.java  | 72 +-
 10 files changed, 283 insertions(+), 32 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/2c111d15/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index acef1c2..302a028 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 3.11.0
+ * Improve SASI range iterator efficiency on intersection with an empty range 
(CASSANDRA-12915).
  * Fix equality comparisons of columns using the duration type 
(CASSANDRA-13174)
  * Obfuscate password in stress-graphs (CASSANDRA-12233)
  * Move to FastThreadLocalThread and FastThreadLocal (CASSANDRA-13034)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/2c111d15/src/java/org/apache/cassandra/index/sasi/TermIterator.java
--
diff --git a/src/java/org/apache/cassandra/index/sasi/TermIterator.java 
b/src/java/org/apache/cassandra/index/sasi/TermIterator.java
index 03dea18..85f81b0 100644
--- a/src/java/org/apache/cassandra/index/sasi/TermIterator.java
+++ b/src/java/org/apache/cassandra/index/sasi/TermIterator.java
@@ -157,7 +157,7 @@ public class TermIterator extends RangeIterator
 e.checkpoint();
 
 RangeIterator ranges = 
RangeUnionIterator.build(tokens);
-return ranges == null ? null : new TermIterator(e, ranges, 
referencedIndexes);
+return new TermIterator(e, ranges, referencedIndexes);
 }
 catch (Throwable ex)
 {

http://git-wip-us.apache.org/repos/asf/cassandra/blob/2c111d15/src/java/org/apache/cassandra/index/sasi/plan/QueryController.java
--
diff --git a/src/java/org/apache/cassandra/index/sasi/plan/QueryController.java 
b/src/java/org/apache/cassandra/index/sasi/plan/QueryController.java
index 155cd4f..22fca68 100644
--- a/src/java/org/apache/cassandra/index/sasi/plan/QueryController.java
+++ b/src/java/org/apache/cassandra/index/sasi/plan/QueryController.java
@@ -144,9 +144,6 @@ public class QueryController
 @SuppressWarnings("resource") // RangeIterators are closed by 
releaseIndexes
 RangeIterator index = TermIterator.build(e.getKey(), 
e.getValue());
 
-if (index == null)
-continue;
-
 builder.add(index);
 perIndexUnions.add(index);
 }

http://git-wip-us.apache.org/repos/asf/cassandra/blob/2c111d15/src/java/org/apache/cassandra/index/sasi/utils/RangeIntersectionIterator.java
--
diff --git 
a/src/java/org/apache/cassandra/index/sasi/utils/RangeIntersectionIterator.java 
b/src/java/org/apache/cassandra/index/sasi/utils/RangeIntersectionIterator.java
index 02d9527..bd8c725 100644
--- 
a/src/java/org/apache/cassandra/index/sasi/utils/RangeIntersectionIterator.java
+++ 
b/src/java/org/apache/cassandra/index/sasi/utils/RangeIntersectionIterator.java
@@ -59,10 +59,13 @@ public class RangeIntersectionIterator
 
 protected RangeIterator buildIterator()
 {
-// if the range is disjoint we can simply return empty
-// iterator of any type, because it's not going to produce any 
results.
+// if the range is disjoint or we have an intersection with an 
empty set,
+// we can simply return an empty iterator, because it's not going 
to produce any results.
 if 

[1/3] cassandra git commit: Improve SASI range iterator efficiency on intersection with an empty range.

2017-03-10 Thread ifesdjeen
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-3.11 9efa682b3 -> 2c111d15b
  refs/heads/trunk 957ad8c0b -> 67e9a5ffd


Improve SASI range iterator efficiency on intersection with an empty range.

Patch by Corentin Chary; reviewed by Alex Petrov for CASSANDRA-12915.

Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/2c111d15
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/2c111d15
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/2c111d15

Branch: refs/heads/cassandra-3.11
Commit: 2c111d15bb080283b9b98d48fab4bcf4db515b5a
Parents: 9efa682
Author: Alex Petrov 
Authored: Fri Mar 10 13:28:16 2017 +0100
Committer: Alex Petrov 
Committed: Fri Mar 10 13:31:15 2017 +0100

--
 CHANGES.txt |  1 +
 .../cassandra/index/sasi/TermIterator.java  |  2 +-
 .../index/sasi/plan/QueryController.java|  3 -
 .../sasi/utils/RangeIntersectionIterator.java   |  9 ++-
 .../index/sasi/utils/RangeIterator.java | 79 +++-
 .../index/sasi/utils/RangeUnionIterator.java|  9 ++-
 .../cassandra/index/sasi/SASIIndexTest.java |  6 +-
 .../index/sasi/utils/LongIteratorTest.java  | 56 ++
 .../utils/RangeIntersectionIteratorTest.java| 78 ++-
 .../sasi/utils/RangeUnionIteratorTest.java  | 72 +-
 10 files changed, 283 insertions(+), 32 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/2c111d15/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index acef1c2..302a028 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 3.11.0
+ * Improve SASI range iterator efficiency on intersection with an empty range 
(CASSANDRA-12915).
  * Fix equality comparisons of columns using the duration type 
(CASSANDRA-13174)
  * Obfuscate password in stress-graphs (CASSANDRA-12233)
  * Move to FastThreadLocalThread and FastThreadLocal (CASSANDRA-13034)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/2c111d15/src/java/org/apache/cassandra/index/sasi/TermIterator.java
--
diff --git a/src/java/org/apache/cassandra/index/sasi/TermIterator.java 
b/src/java/org/apache/cassandra/index/sasi/TermIterator.java
index 03dea18..85f81b0 100644
--- a/src/java/org/apache/cassandra/index/sasi/TermIterator.java
+++ b/src/java/org/apache/cassandra/index/sasi/TermIterator.java
@@ -157,7 +157,7 @@ public class TermIterator extends RangeIterator
 e.checkpoint();
 
 RangeIterator ranges = 
RangeUnionIterator.build(tokens);
-return ranges == null ? null : new TermIterator(e, ranges, 
referencedIndexes);
+return new TermIterator(e, ranges, referencedIndexes);
 }
 catch (Throwable ex)
 {

http://git-wip-us.apache.org/repos/asf/cassandra/blob/2c111d15/src/java/org/apache/cassandra/index/sasi/plan/QueryController.java
--
diff --git a/src/java/org/apache/cassandra/index/sasi/plan/QueryController.java 
b/src/java/org/apache/cassandra/index/sasi/plan/QueryController.java
index 155cd4f..22fca68 100644
--- a/src/java/org/apache/cassandra/index/sasi/plan/QueryController.java
+++ b/src/java/org/apache/cassandra/index/sasi/plan/QueryController.java
@@ -144,9 +144,6 @@ public class QueryController
 @SuppressWarnings("resource") // RangeIterators are closed by 
releaseIndexes
 RangeIterator index = TermIterator.build(e.getKey(), 
e.getValue());
 
-if (index == null)
-continue;
-
 builder.add(index);
 perIndexUnions.add(index);
 }

http://git-wip-us.apache.org/repos/asf/cassandra/blob/2c111d15/src/java/org/apache/cassandra/index/sasi/utils/RangeIntersectionIterator.java
--
diff --git 
a/src/java/org/apache/cassandra/index/sasi/utils/RangeIntersectionIterator.java 
b/src/java/org/apache/cassandra/index/sasi/utils/RangeIntersectionIterator.java
index 02d9527..bd8c725 100644
--- 
a/src/java/org/apache/cassandra/index/sasi/utils/RangeIntersectionIterator.java
+++ 
b/src/java/org/apache/cassandra/index/sasi/utils/RangeIntersectionIterator.java
@@ -59,10 +59,13 @@ public class RangeIntersectionIterator
 
 protected RangeIterator buildIterator()
 {
-// if the range is disjoint we can simply return empty
-// iterator of any type, because it's not going to produce any 
results.
+// if the range is disjoint or we have an 

[3/3] cassandra git commit: Merge branch 'cassandra-3.11' into trunk

2017-03-10 Thread ifesdjeen
Merge branch 'cassandra-3.11' into trunk


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/67e9a5ff
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/67e9a5ff
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/67e9a5ff

Branch: refs/heads/trunk
Commit: 67e9a5ffd27e78d19d4a82f2f6d31cce44fd6b32
Parents: 957ad8c 2c111d1
Author: Alex Petrov 
Authored: Fri Mar 10 13:31:59 2017 +0100
Committer: Alex Petrov 
Committed: Fri Mar 10 13:31:59 2017 +0100

--
 CHANGES.txt |  1 +
 .../cassandra/index/sasi/TermIterator.java  |  2 +-
 .../index/sasi/plan/QueryController.java|  3 -
 .../sasi/utils/RangeIntersectionIterator.java   |  9 ++-
 .../index/sasi/utils/RangeIterator.java | 79 +++-
 .../index/sasi/utils/RangeUnionIterator.java|  9 ++-
 .../cassandra/index/sasi/SASIIndexTest.java |  6 +-
 .../index/sasi/utils/LongIteratorTest.java  | 56 ++
 .../utils/RangeIntersectionIteratorTest.java| 78 ++-
 .../sasi/utils/RangeUnionIteratorTest.java  | 72 +-
 10 files changed, 283 insertions(+), 32 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/67e9a5ff/CHANGES.txt
--
diff --cc CHANGES.txt
index 0e5bc25,302a028..3acc2b4
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -1,47 -1,7 +1,48 @@@
 +4.0
 + * Add the currentTimestamp, currentDate, currentTime and currentTimeUUID 
functions (CASSANDRA-13132)
 + * Remove config option index_interval (CASSANDRA-10671)
 + * Reduce lock contention for collection types and serializers 
(CASSANDRA-13271)
 + * Make it possible to override MessagingService.Verb ids (CASSANDRA-13283)
 + * Avoid synchronized on prepareForRepair in ActiveRepairService 
(CASSANDRA-9292)
 + * Adds the ability to use uncompressed chunks in compressed files 
(CASSANDRA-10520)
 + * Don't flush sstables when streaming for incremental repair 
(CASSANDRA-13226)
 + * Remove unused method (CASSANDRA-13227)
 + * Fix minor bugs related to #9143 (CASSANDRA-13217)
 + * Output warning if user increases RF (CASSANDRA-13079)
 + * Remove pre-3.0 streaming compatibility code for 4.0 (CASSANDRA-13081)
 + * Add support for + and - operations on dates (CASSANDRA-11936)
 + * Fix consistency of incrementally repaired data (CASSANDRA-9143)
 + * Increase commitlog version (CASSANDRA-13161)
 + * Make TableMetadata immutable, optimize Schema (CASSANDRA-9425)
 + * Refactor ColumnCondition (CASSANDRA-12981)
 + * Parallelize streaming of different keyspaces (CASSANDRA-4663)
 + * Improved compactions metrics (CASSANDRA-13015)
 + * Speed-up start-up sequence by avoiding un-needed flushes (CASSANDRA-13031)
 + * Use Caffeine (W-TinyLFU) for on-heap caches (CASSANDRA-10855)
 + * Thrift removal (CASSANDRA-5)
 + * Remove pre-3.0 compatibility code for 4.0 (CASSANDRA-12716)
 + * Add column definition kind to dropped columns in schema (CASSANDRA-12705)
 + * Add (automate) Nodetool Documentation (CASSANDRA-12672)
 + * Update bundled cqlsh python driver to 3.7.0 (CASSANDRA-12736)
 + * Reject invalid replication settings when creating or altering a keyspace 
(CASSANDRA-12681)
 + * Clean up the SSTableReader#getScanner API wrt removal of RateLimiter 
(CASSANDRA-12422)
 + * Use new token allocation for non bootstrap case as well (CASSANDRA-13080)
 + * Avoid byte-array copy when key cache is disabled (CASSANDRA-13084)
 + * Require forceful decommission if number of nodes is less than replication 
factor (CASSANDRA-12510)
 + * Allow IN restrictions on column families with collections (CASSANDRA-12654)
 + * Log message size in trace message in OutboundTcpConnection 
(CASSANDRA-13028)
 + * Add timeUnit Days for cassandra-stress (CASSANDRA-13029)
 + * Add mutation size and batch metrics (CASSANDRA-12649)
 + * Add method to get size of endpoints to TokenMetadata (CASSANDRA-12999)
 + * Expose time spent waiting in thread pool queue (CASSANDRA-8398)
 + * Conditionally update index built status to avoid unnecessary flushes 
(CASSANDRA-12969)
 + * cqlsh auto completion: refactor definition of compaction strategy options 
(CASSANDRA-12946)
 + * Add support for arithmetic operators (CASSANDRA-11935)
 +
 +
  3.11.0
+  * Improve SASI range iterator efficiency on intersection with an empty range 
(CASSANDRA-12915).
   * Fix equality comparisons of columns using the duration type 
(CASSANDRA-13174)
 - * Obfuscate password in stress-graphs (CASSANDRA-12233)
   * Move to FastThreadLocalThread and FastThreadLocal (CASSANDRA-13034)
   * nodetool stopdaemon errors out (CASSANDRA-13030)
   * Tables in system_distributed should not use gcgs of 0 (CASSANDRA-12954)


[jira] [Commented] (CASSANDRA-13309) i couldnot able to run the cqlsh service.i am getting an syntax error in cqlsh.py file when i was trying to run cqlsh from bin folder

2017-03-10 Thread Yu LIU (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13309?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15904984#comment-15904984
 ] 

Yu LIU commented on CASSANDRA-13309:


Are you using Python 2.7? Python 3 uses {{as}} in place of the comma in the except 
clause ({{except ImportError as e:}} instead of {{except ImportError, e:}}), so 
running cqlsh.py under Python 3 fails with exactly this SyntaxError.

> i couldnot able to run the cqlsh service.i am getting an syntax error in 
> cqlsh.py file when i was trying to run cqlsh from bin folder
> -
>
> Key: CASSANDRA-13309
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13309
> Project: Cassandra
>  Issue Type: Bug
>  Components: CQL
> Environment: windows 10 64bit 
>Reporter: RANGU MANIKAR
> Fix For: 3.0.11
>
>
> C:\Program Files\apache-cassandra-3.0.11\bin>cqlsh
>  File "C:\Program Files\apache-cassandra-3.0.11\bin\\cqlsh.py", line 141
> except ImportError, e:
>   ^
> SyntaxError: invalid syntax
> C:\Program Files\apache-cassandra-3.0.11\bin>



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Comment Edited] (CASSANDRA-13265) Expiration in OutboundTcpConnection can block the reader Thread

2017-03-10 Thread Christian Esken (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13265?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15904949#comment-15904949
 ] 

Christian Esken edited comment on CASSANDRA-13265 at 3/10/17 11:46 AM:
---

I am nearly done with the configuration, and have two questions about it:

1.  How to handle the default value? My approach is to pre-configure the 
default value in Config:
{code}
public static final int otc_backlog_expiration_interval_in_ms_default = 200;
public volatile Integer otc_backlog_expiration_interval_in_ms = 
otc_backlog_expiration_interval_in_ms_default;
{code}

Additionally I will handle null values that might have been set via JMX in the 
getter of DatabaseDescriptor:
{code}
public static Integer getOtcBacklogExpirationInterval()
{
Integer confValue = conf.otc_backlog_expiration_interval_in_ms;
return confValue != null ? confValue : 
Config.otc_backlog_expiration_interval_in_ms_default;
}
{code}

2. How to read the config value? I am seeing some 
{{Integer.getInteger(propName, defaultValue)}}, but this looks strange to me. I 
think changes from JMX would not even be reflected. Thus I am calling the 
getter from above: {{DatabaseDescriptor.getOtcBacklogExpirationInterval()}}. Is 
the latter OK?
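For context, a minimal sketch of how the consumer side would read it (hypothetical class, just to illustrate why I prefer the getter over {{Integer.getInteger}}):

{code}
import java.util.concurrent.TimeUnit;

import org.apache.cassandra.config.DatabaseDescriptor;

// Hypothetical illustration: the expiration pass re-reads the interval through the
// DatabaseDescriptor getter every time, so a value changed via JMX takes effect
// immediately, whereas Integer.getInteger(...) would only consult a system property.
public class BacklogExpirationTask implements Runnable
{
    private volatile long lastExpirationNanos = System.nanoTime();

    public void run()
    {
        long intervalMs = DatabaseDescriptor.getOtcBacklogExpirationInterval();
        long now = System.nanoTime();
        if (now - lastExpirationNanos >= TimeUnit.MILLISECONDS.toNanos(intervalMs))
        {
            lastExpirationNanos = now;
            expireBacklog();
        }
    }

    private void expireBacklog()
    {
        // drop queued messages that have exceeded their timeout
    }
}
{code}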



was (Author: cesken):
I am nearly done with the configuration, and have two questions about it:

1.  How to handle the default value? My approach is to pre-configure the 
default value in Config:
{code}
public static final int otc_backlog_expiration_interval_in_ms_default = 200;
public volatile Integer otc_backlog_expiration_interval_in_ms = 
otc_backlog_expiration_interval_in_ms_default;
{code}

Additionally I will handle null values that might have been set via JMX in the 
getter of DatabaseDescriptor:
{code}
public static Integer getOtcBacklogExpirationInterval()
{
Integer confValue = conf.otc_backlog_expiration_interval_in_ms;
return confValue != null ? confValue : 
Config.otc_backlog_expiration_interval_in_ms_default;
}
{code}

2. How to read the config value? I am seeing some Integer.getInteger(propName, 
defaultValue), but this looks strange to me. I think changes from JMX would not 
even be reflected. Thus I am calling the getter from above: 
{{DatabaseDescriptor.getOtcBacklogExpirationInterval()}}. Is the latter OK?


> Expiration in OutboundTcpConnection can block the reader Thread
> ---
>
> Key: CASSANDRA-13265
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13265
> Project: Cassandra
>  Issue Type: Bug
> Environment: Cassandra 3.0.9
> Java HotSpot(TM) 64-Bit Server VM version 25.112-b15 (Java version 
> 1.8.0_112-b15)
> Linux 3.16
>Reporter: Christian Esken
>Assignee: Christian Esken
> Attachments: cassandra.pb-cache4-dus.2017-02-17-19-36-26.chist.xz, 
> cassandra.pb-cache4-dus.2017-02-17-19-36-26.td.xz
>
>
> I observed that sometimes a single node in a Cassandra cluster fails to 
> communicate to the other nodes. This can happen at any time, during peak load 
> or low load. Restarting that single node from the cluster fixes the issue.
> Before going into details, I want to state that I have analyzed the 
> situation and am already developing a possible fix. Here is the analysis so 
> far:
> - A Threaddump in this situation showed  324 Threads in the 
> OutboundTcpConnection class that want to lock the backlog queue for doing 
> expiration.
> - A class histogram shows 262508 instances of 
> OutboundTcpConnection$QueuedMessage.
> What is the effect of it? As soon as the Cassandra node has reached a certain 
> amount of queued messages, it starts thrashing itself to death. Each of the 
> threads fully locks the Queue for reading and writing by calling 
> iterator.next(), making the situation worse and worse.
> - Writing: Only after 262508 locking operations can it progress with actually 
> writing to the Queue.
> - Reading: is also blocked, as 324 threads try to do iterator.next() and 
> fully lock the Queue.
> This means: Writing blocks the Queue for reading, and readers might even be 
> starved which makes the situation even worse.
> -
> The setup is:
>  - 3-node cluster
>  - replication factor 2
>  - Consistency LOCAL_ONE
>  - No remote DC's
>  - high write throughput (10 INSERT statements per second and more during 
> peak times).
>  



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Comment Edited] (CASSANDRA-13265) Expiration in OutboundTcpConnection can block the reader Thread

2017-03-10 Thread Christian Esken (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13265?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15904949#comment-15904949
 ] 

Christian Esken edited comment on CASSANDRA-13265 at 3/10/17 11:46 AM:
---

I am nearly done with the configuration, and have two questions about it:

1.  How to handle the default value? My approach is to pre-configure the 
default value in Config:
{code}
public static final int otc_backlog_expiration_interval_in_ms_default = 200;
public volatile Integer otc_backlog_expiration_interval_in_ms = 
otc_backlog_expiration_interval_in_ms_default;
{code}

Additionally I will handle null values that might have been set via JMX in the 
getter of DatabaseDescriptor:
{code}
public static Integer getOtcBacklogExpirationInterval()
{
Integer confValue = conf.otc_backlog_expiration_interval_in_ms;
return confValue != null ? confValue : 
Config.otc_backlog_expiration_interval_in_ms_default;
}
{code}

2. How to read the config value? I am seeing some Integer.getInteger(propName, 
defaultValue), but this looks strange to me. I think changes from JMX would not 
even be reflected. Thus I am calling the getter from above: 
{{DatabaseDescriptor.getOtcBacklogExpirationInterval()}}. Is the latter OK?



was (Author: cesken):
I am nearly done with the configuration, and have two questions about it:

1.  How to handle the default value? My approach is to pre-configure the 
default value in Config:
{code}
public static final int otc_backlog_expiration_interval_in_ms_default = 200;
public volatile Integer otc_backlog_expiration_interval_in_ms = 
otc_backlog_expiration_interval_in_ms_default;
{code}

Additionally I will handle null values that might come in via MBean in the 
getter of DatabaseDescriptor:
{code}
public static Integer getOtcBacklogExpirationInterval()
{
Integer confValue = conf.otc_backlog_expiration_interval_in_ms;
return confValue != null ? confValue : 
Config.otc_backlog_expiration_interval_in_ms_default;
}
{code}

2. How to read the config value? I am seeing some Integer.getInteger(propName, 
defaultValue), but this looks strange to me. I think changes from JMX would not 
even be reflected. Thus I am calling the getter from above: 
{{DatabaseDescriptor.getOtcBacklogExpirationInterval()}}. Is the latter OK?


> Expiration in OutboundTcpConnection can block the reader Thread
> ---
>
> Key: CASSANDRA-13265
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13265
> Project: Cassandra
>  Issue Type: Bug
> Environment: Cassandra 3.0.9
> Java HotSpot(TM) 64-Bit Server VM version 25.112-b15 (Java version 
> 1.8.0_112-b15)
> Linux 3.16
>Reporter: Christian Esken
>Assignee: Christian Esken
> Attachments: cassandra.pb-cache4-dus.2017-02-17-19-36-26.chist.xz, 
> cassandra.pb-cache4-dus.2017-02-17-19-36-26.td.xz
>
>
> I observed that sometimes a single node in a Cassandra cluster fails to 
> communicate to the other nodes. This can happen at any time, during peak load 
> or low load. Restarting that single node from the cluster fixes the issue.
> Before going into details, I want to state that I have analyzed the 
> situation and am already developing a possible fix. Here is the analysis so 
> far:
> - A Threaddump in this situation showed  324 Threads in the 
> OutboundTcpConnection class that want to lock the backlog queue for doing 
> expiration.
> - A class histogram shows 262508 instances of 
> OutboundTcpConnection$QueuedMessage.
> What is the effect of it? As soon as the Cassandra node has reached a certain 
> amount of queued messages, it starts thrashing itself to death. Each of the 
> threads fully locks the Queue for reading and writing by calling 
> iterator.next(), making the situation worse and worse.
> - Writing: Only after 262508 locking operations can it progress with actually 
> writing to the Queue.
> - Reading: is also blocked, as 324 threads try to do iterator.next() and 
> fully lock the Queue.
> This means: Writing blocks the Queue for reading, and readers might even be 
> starved which makes the situation even worse.
> -
> The setup is:
>  - 3-node cluster
>  - replication factor 2
>  - Consistency LOCAL_ONE
>  - No remote DC's
>  - high write throughput (10 INSERT statements per second and more during 
> peak times).
>  



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Comment Edited] (CASSANDRA-13265) Expiration in OutboundTcpConnection can block the reader Thread

2017-03-10 Thread Christian Esken (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13265?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15904949#comment-15904949
 ] 

Christian Esken edited comment on CASSANDRA-13265 at 3/10/17 11:45 AM:
---

I am nearly done with the configuration, and have two questions about it:

1.  How to handle the default value? My approach is to pre-configure the 
default value in Config:
{code}
public static final int otc_backlog_expiration_interval_in_ms_default = 200;
public volatile Integer otc_backlog_expiration_interval_in_ms = 
otc_backlog_expiration_interval_in_ms_default;
{code}

Additionally I will handle null values that might come in via MBean in the 
getter of DatabaseDescriptor:
{code}
public static Integer getOtcBacklogExpirationInterval()
{
Integer confValue = conf.otc_backlog_expiration_interval_in_ms;
return confValue != null ? confValue : 
Config.otc_backlog_expiration_interval_in_ms_default;
}
{code}

2. How to read the config value? I am seeing some Integer.getInteger(propName, 
defaultValue), but this looks strange to me. I think changes from JMX would not 
even be reflected. Thus I am calling the getter from above: 
{{DatabaseDescriptor.getOtcBacklogExpirationInterval()}}. Is the latter OK?



was (Author: cesken):
I am nearly done with the configuration, and have two questions about it:

1.  How to handle the default value? My approach is to pre-configure the 
default value in Config:
{code}
public static final int otc_backlog_expiration_interval_in_ms_default = 200;
public volatile Integer otc_backlog_expiration_interval_in_ms = 
otc_backlog_expiration_interval_in_ms_default;
{code}


- Additionally I will handle null values that might come in via MBean in the 
getter of DatabaseDescriptor:
{code}
public static Integer getOtcBacklogExpirationInterval()
{
Integer confValue = conf.otc_backlog_expiration_interval_in_ms;
return confValue != null ? confValue : 
Config.otc_backlog_expiration_interval_in_ms_default;
}
{code}

2. How to read the config value? I am seeing some Integer.getInteger(propName, 
defaultValue), but this looks strange to me. I think changes from JMX would not 
even be reflected. Thus I am calling the getter from above: 
{{DatabaseDescriptor.getOtcBacklogExpirationInterval()}}. Is the latter OK?


> Expiration in OutboundTcpConnection can block the reader Thread
> ---
>
> Key: CASSANDRA-13265
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13265
> Project: Cassandra
>  Issue Type: Bug
> Environment: Cassandra 3.0.9
> Java HotSpot(TM) 64-Bit Server VM version 25.112-b15 (Java version 
> 1.8.0_112-b15)
> Linux 3.16
>Reporter: Christian Esken
>Assignee: Christian Esken
> Attachments: cassandra.pb-cache4-dus.2017-02-17-19-36-26.chist.xz, 
> cassandra.pb-cache4-dus.2017-02-17-19-36-26.td.xz
>
>
> I observed that sometimes a single node in a Cassandra cluster fails to 
> communicate to the other nodes. This can happen at any time, during peak load 
> or low load. Restarting that single node from the cluster fixes the issue.
> Before going into details, I want to state that I have analyzed the 
> situation and am already developing a possible fix. Here is the analysis so 
> far:
> - A Threaddump in this situation showed  324 Threads in the 
> OutboundTcpConnection class that want to lock the backlog queue for doing 
> expiration.
> - A class histogram shows 262508 instances of 
> OutboundTcpConnection$QueuedMessage.
> What is the effect of it? As soon as the Cassandra node has reached a certain 
> amount of queued messages, it starts thrashing itself to death. Each of the 
> threads fully locks the Queue for reading and writing by calling 
> iterator.next(), making the situation worse and worse.
> - Writing: Only after 262508 locking operations can it progress with actually 
> writing to the Queue.
> - Reading: is also blocked, as 324 threads try to do iterator.next() and 
> fully lock the Queue.
> This means: Writing blocks the Queue for reading, and readers might even be 
> starved which makes the situation even worse.
> -
> The setup is:
>  - 3-node cluster
>  - replication factor 2
>  - Consistency LOCAL_ONE
>  - No remote DC's
>  - high write throughput (10 INSERT statements per second and more during 
> peak times).
>  



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (CASSANDRA-13265) Expiration in OutboundTcpConnection can block the reader Thread

2017-03-10 Thread Christian Esken (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13265?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15904949#comment-15904949
 ] 

Christian Esken commented on CASSANDRA-13265:
-

I am nearly done with the configuration, and have two questions about it:

1.  How to handle the default value? My approach is to pre-configure the 
default value in Config:
{code}
public static final int otc_backlog_expiration_interval_in_ms_default = 200;
public volatile Integer otc_backlog_expiration_interval_in_ms = 
otc_backlog_expiration_interval_in_ms_default;
{code}


- Additionally I will handle null values that might come in via MBean in the 
getter of DatabaseDescriptor:
{code}
public static Integer getOtcBacklogExpirationInterval()
{
Integer confValue = conf.otc_backlog_expiration_interval_in_ms;
return confValue != null ? confValue : 
Config.otc_backlog_expiration_interval_in_ms_default;
}
{code}

2. How to read the config value? I am seeing some Integer.getInteger(propName, 
defaultValue), but this looks strange to me. I think changes from JMX would not 
even be reflected. Thus I am calling the getter from above: 
{{DatabaseDescriptor.getOtcBacklogExpirationInterval()}}. Is the latter OK?


> Expiration in OutboundTcpConnection can block the reader Thread
> ---
>
> Key: CASSANDRA-13265
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13265
> Project: Cassandra
>  Issue Type: Bug
> Environment: Cassandra 3.0.9
> Java HotSpot(TM) 64-Bit Server VM version 25.112-b15 (Java version 
> 1.8.0_112-b15)
> Linux 3.16
>Reporter: Christian Esken
>Assignee: Christian Esken
> Attachments: cassandra.pb-cache4-dus.2017-02-17-19-36-26.chist.xz, 
> cassandra.pb-cache4-dus.2017-02-17-19-36-26.td.xz
>
>
> I observed that sometimes a single node in a Cassandra cluster fails to 
> communicate to the other nodes. This can happen at any time, during peak load 
> or low load. Restarting that single node from the cluster fixes the issue.
> Before going into details, I want to state that I have analyzed the 
> situation and am already developing a possible fix. Here is the analysis so 
> far:
> - A Threaddump in this situation showed  324 Threads in the 
> OutboundTcpConnection class that want to lock the backlog queue for doing 
> expiration.
> - A class histogram shows 262508 instances of 
> OutboundTcpConnection$QueuedMessage.
> What is the effect of it? As soon as the Cassandra node has reached a certain 
> amount of queued messages, it starts thrashing itself to death. Each of the 
> threads fully locks the Queue for reading and writing by calling 
> iterator.next(), making the situation worse and worse.
> - Writing: Only after 262508 locking operations can it progress with actually 
> writing to the Queue.
> - Reading: is also blocked, as 324 threads try to do iterator.next() and 
> fully lock the Queue.
> This means: Writing blocks the Queue for reading, and readers might even be 
> starved which makes the situation even worse.
> -
> The setup is:
>  - 3-node cluster
>  - replication factor 2
>  - Consistency LOCAL_ONE
>  - No remote DC's
>  - high write throughput (10 INSERT statements per second and more during 
> peak times).
>  



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[1/3] cassandra git commit: Ninja: fix missing variable re-assignment in SASI test

2017-03-10 Thread ifesdjeen
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-3.11 7b3415d0b -> 9efa682b3
  refs/heads/trunk 753d004cd -> 957ad8c0b


Ninja: fix missing variable re-assignment in SASI test

Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/9efa682b
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/9efa682b
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/9efa682b

Branch: refs/heads/cassandra-3.11
Commit: 9efa682b3e72c76818be582080bd3329ecdf74e3
Parents: 7b3415d
Author: Alex Petrov 
Authored: Fri Mar 10 11:31:54 2017 +0100
Committer: Alex Petrov 
Committed: Fri Mar 10 11:34:34 2017 +0100

--
 test/unit/org/apache/cassandra/index/sasi/SASIIndexTest.java | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/9efa682b/test/unit/org/apache/cassandra/index/sasi/SASIIndexTest.java
--
diff --git a/test/unit/org/apache/cassandra/index/sasi/SASIIndexTest.java 
b/test/unit/org/apache/cassandra/index/sasi/SASIIndexTest.java
index 0b4e9e2..399aa40 100644
--- a/test/unit/org/apache/cassandra/index/sasi/SASIIndexTest.java
+++ b/test/unit/org/apache/cassandra/index/sasi/SASIIndexTest.java
@@ -2061,7 +2061,7 @@ public class SASIIndexTest
 // expected since CONTAINS + analyzed only support LIKE
 }
 
-QueryProcessor.executeOnceInternal(String.format("SELECT * FROM %s.%s 
WHERE v LIKE 'Pav%%';", KS_NAME, containsTable));
+results = QueryProcessor.executeOnceInternal(String.format("SELECT * 
FROM %s.%s WHERE v LIKE 'Pav%%';", KS_NAME, containsTable));
 Assert.assertNotNull(results);
 Assert.assertEquals(1, results.size());
 



[3/3] cassandra git commit: Merge branch 'cassandra-3.11' into trunk

2017-03-10 Thread ifesdjeen
Merge branch 'cassandra-3.11' into trunk


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/957ad8c0
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/957ad8c0
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/957ad8c0

Branch: refs/heads/trunk
Commit: 957ad8c0b127f118212785b8364576c6f42f4034
Parents: 753d004 9efa682
Author: Alex Petrov 
Authored: Fri Mar 10 11:36:09 2017 +0100
Committer: Alex Petrov 
Committed: Fri Mar 10 11:36:09 2017 +0100

--
 test/unit/org/apache/cassandra/index/sasi/SASIIndexTest.java | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/957ad8c0/test/unit/org/apache/cassandra/index/sasi/SASIIndexTest.java
--



[2/3] cassandra git commit: Ninja: fix missing variable re-assignment in SASI test

2017-03-10 Thread ifesdjeen
Ninja: fix missing variable re-assignment in SASI test

Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/9efa682b
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/9efa682b
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/9efa682b

Branch: refs/heads/trunk
Commit: 9efa682b3e72c76818be582080bd3329ecdf74e3
Parents: 7b3415d
Author: Alex Petrov 
Authored: Fri Mar 10 11:31:54 2017 +0100
Committer: Alex Petrov 
Committed: Fri Mar 10 11:34:34 2017 +0100

--
 test/unit/org/apache/cassandra/index/sasi/SASIIndexTest.java | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/9efa682b/test/unit/org/apache/cassandra/index/sasi/SASIIndexTest.java
--
diff --git a/test/unit/org/apache/cassandra/index/sasi/SASIIndexTest.java 
b/test/unit/org/apache/cassandra/index/sasi/SASIIndexTest.java
index 0b4e9e2..399aa40 100644
--- a/test/unit/org/apache/cassandra/index/sasi/SASIIndexTest.java
+++ b/test/unit/org/apache/cassandra/index/sasi/SASIIndexTest.java
@@ -2061,7 +2061,7 @@ public class SASIIndexTest
 // expected since CONTAINS + analyzed only support LIKE
 }
 
-QueryProcessor.executeOnceInternal(String.format("SELECT * FROM %s.%s 
WHERE v LIKE 'Pav%%';", KS_NAME, containsTable));
+results = QueryProcessor.executeOnceInternal(String.format("SELECT * 
FROM %s.%s WHERE v LIKE 'Pav%%';", KS_NAME, containsTable));
 Assert.assertNotNull(results);
 Assert.assertEquals(1, results.size());
 



[jira] [Updated] (CASSANDRA-12915) SASI: Index intersection with an empty range really inefficient

2017-03-10 Thread Alex Petrov (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12915?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alex Petrov updated CASSANDRA-12915:

Status: Ready to Commit  (was: Patch Available)

> SASI: Index intersection with an empty range really inefficient
> ---
>
> Key: CASSANDRA-12915
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12915
> Project: Cassandra
>  Issue Type: Improvement
>  Components: sasi
>Reporter: Corentin Chary
>Assignee: Corentin Chary
> Fix For: 3.11.x, 4.x
>
>
> It looks like RangeIntersectionIterator.java can be pretty inefficient in 
> some cases. Let's take the following query:
> SELECT data FROM table WHERE index1 = 'foo' AND index2 = 'bar';
> In this case:
> * index1 = 'foo' will match 2 items
> * index2 = 'bar' will match ~300k items
> On my setup, the query will take ~1 sec, most of the time being spent in 
> disk.TokenTree.getTokenAt().
> If I patch RangeIntersectionIterator so that it doesn't try to do the 
> intersection (and effectively only uses 'index1'), the query will run in a few 
> tenths of a millisecond.
> I see multiple solutions for that:
> * Add a static threshold to avoid the use of the index for the intersection 
> when we know it will be slow. Probably when the range size factor is very 
> small and the range size is big.
> * CASSANDRA-10765



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[07/10] cassandra git commit: Merge branch cassandra-2.2 into cassandra-3.0

2017-03-10 Thread blerer
Merge branch cassandra-2.2 into cassandra-3.0


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/aeca1d2b
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/aeca1d2b
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/aeca1d2b

Branch: refs/heads/trunk
Commit: aeca1d2bd8e395a2897c3e36224f49b586babd4e
Parents: 31dec3d 5ef8a8b
Author: Benjamin Lerer 
Authored: Fri Mar 10 10:01:01 2017 +0100
Committer: Benjamin Lerer 
Committed: Fri Mar 10 10:02:21 2017 +0100

--
 CHANGES.txt |   1 +
 .../apache/cassandra/cql3/UpdateParameters.java |  24 -
 .../org/apache/cassandra/db/rows/BTreeRow.java  |  43 ++--
 src/java/org/apache/cassandra/db/rows/Row.java  |   6 ++
 .../org/apache/cassandra/utils/btree/BTree.java |  19 
 .../validation/entities/CollectionsTest.java| 100 +++
 .../apache/cassandra/db/rows/RowBuilder.java|   7 ++
 7 files changed, 191 insertions(+), 9 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/aeca1d2b/CHANGES.txt
--
diff --cc CHANGES.txt
index 1876922,09e4039..52a794b
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -1,20 -1,6 +1,21 @@@
 -2.2.10
 +3.0.13
 + * Slice.isEmpty() returns false for some empty slices (CASSANDRA-13305)
 + * Add formatted row output to assertEmpty in CQL Tester (CASSANDRA-13238)
 +Merged from 2.2:
+  * Fix queries updating multiple time the same list (CASSANDRA-13130)
   * Fix GRANT/REVOKE when keyspace isn't specified (CASSANDRA-13053)
 +
 +
 +3.0.12
 + * Prevent data loss on upgrade 2.1 - 3.0 by adding component separator to 
LogRecord absolute path (CASSANDRA-13294)
 + * Improve testing on macOS by eliminating sigar logging (CASSANDRA-13233)
 + * Cqlsh copy-from should error out when csv contains invalid data for 
collections (CASSANDRA-13071)
 + * Update c.yaml doc for offheap memtables (CASSANDRA-13179)
 + * Faster StreamingHistogram (CASSANDRA-13038)
 + * Legacy deserializer can create unexpected boundary range tombstones 
(CASSANDRA-13237)
 + * Remove unnecessary assertion from AntiCompactionTest (CASSANDRA-13070)
 + * Fix cqlsh COPY for dates before 1900 (CASSANDRA-13185)
 +Merged from 2.2:
   * Avoid race on receiver by starting streaming sender thread after sending 
init message (CASSANDRA-12886)
   * Fix "multiple versions of ant detected..." when running ant test 
(CASSANDRA-13232)
   * Coalescing strategy sleeps too much (CASSANDRA-13090)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/aeca1d2b/src/java/org/apache/cassandra/cql3/UpdateParameters.java
--
diff --cc src/java/org/apache/cassandra/cql3/UpdateParameters.java
index 0c58097,65edef7..d902dec
--- a/src/java/org/apache/cassandra/cql3/UpdateParameters.java
+++ b/src/java/org/apache/cassandra/cql3/UpdateParameters.java
@@@ -80,134 -59,71 +80,156 @@@ public class UpdateParameter
  throw new InvalidRequestException(String.format("Out of bound 
timestamp, must be in [%d, %d]", Long.MIN_VALUE + 1, Long.MAX_VALUE));
  }
  
 -public Cell makeColumn(CellName name, ByteBuffer value) throws 
InvalidRequestException
 +public void newRow(Clustering clustering) throws InvalidRequestException
 +{
 +if (metadata.isDense() && !metadata.isCompound())
 +{
 +// If it's a COMPACT STORAGE table with a single clustering 
column, the clustering value is
 +// translated in Thrift to the full Thrift column name, and for 
backward compatibility we
 +// don't want to allow that to be empty (even though this would 
be fine for the storage engine).
 +assert clustering.size() == 1;
 +ByteBuffer value = clustering.get(0);
 +if (value == null || !value.hasRemaining())
 +throw new InvalidRequestException("Invalid empty or null 
value for column " + metadata.clusteringColumns().get(0).name);
 +}
 +
 +if (clustering == Clustering.STATIC_CLUSTERING)
 +{
 +if (staticBuilder == null)
 +staticBuilder = BTreeRow.unsortedBuilder(nowInSec);
 +builder = staticBuilder;
 +}
 +else
 +{
 +if (regularBuilder == null)
 +regularBuilder = BTreeRow.unsortedBuilder(nowInSec);
 +builder = regularBuilder;
 +}
 +
 +builder.newRow(clustering);
 +}
 +
 +public Clustering currentClustering()
 +{
 +return builder.clustering();
 +}
 +
 +public void addPrimaryKeyLivenessInfo()
 +{
 +builder.addPrimaryKeyLivenessInfo(LivenessInfo.create(metadata, 
timestamp, ttl, 

[08/10] cassandra git commit: Merge branch cassandra-3.0 into cassandra-3.11

2017-03-10 Thread blerer
Merge branch cassandra-3.0 into cassandra-3.11


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/7b3415d0
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/7b3415d0
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/7b3415d0

Branch: refs/heads/cassandra-3.11
Commit: 7b3415d0b06843aca4410ce9cbd6d68ff37e3978
Parents: dc65a57 aeca1d2
Author: Benjamin Lerer 
Authored: Fri Mar 10 10:06:27 2017 +0100
Committer: Benjamin Lerer 
Committed: Fri Mar 10 10:07:14 2017 +0100

--
 CHANGES.txt |   1 +
 .../apache/cassandra/cql3/UpdateParameters.java |  24 -
 .../org/apache/cassandra/db/rows/BTreeRow.java  |  43 ++--
 src/java/org/apache/cassandra/db/rows/Row.java  |   6 ++
 .../org/apache/cassandra/utils/btree/BTree.java |  20 
 .../validation/entities/CollectionsTest.java| 100 +++
 .../apache/cassandra/db/rows/RowBuilder.java|   7 ++
 7 files changed, 192 insertions(+), 9 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/7b3415d0/CHANGES.txt
--
diff --cc CHANGES.txt
index 2772fc2,52a794b..acef1c2
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -32,139 -42,6 +32,140 @@@ Merged from 3.0
 live rows in sstabledump (CASSANDRA-13177)
   * Provide user workaround when system_schema.columns does not contain entries
 for a table that's in system_schema.tables (CASSANDRA-13180)
 +Merged from 2.2:
++ * Fix queries updating multiple time the same list (CASSANDRA-13130)
 + * Fix GRANT/REVOKE when keyspace isn't specified (CASSANDRA-13053)
 + * Fix flaky LongLeveledCompactionStrategyTest (CASSANDRA-12202)
 + * Fix failing COPY TO STDOUT (CASSANDRA-12497)
 + * Fix ColumnCounter::countAll behaviour for reverse queries (CASSANDRA-13222)
 + * Exceptions encountered calling getSeeds() breaks OTC thread 
(CASSANDRA-13018)
 + * Fix negative mean latency metric (CASSANDRA-12876)
 + * Use only one file pointer when creating commitlog segments 
(CASSANDRA-12539)
 +Merged from 2.1:
 + * Remove unused repositories (CASSANDRA-13278)
 + * Log stacktrace of uncaught exceptions (CASSANDRA-13108)
 + * Use portable stderr for java error in startup (CASSANDRA-13211)
 + * Fix Thread Leak in OutboundTcpConnection (CASSANDRA-13204)
 + * Coalescing strategy can enter infinite loop (CASSANDRA-13159)
 +
 +
 +3.10
 + * Fix secondary index queries regression (CASSANDRA-13013)
 + * Add duration type to the protocol V5 (CASSANDRA-12850)
 + * Fix duration type validation (CASSANDRA-13143)
 + * Fix flaky GcCompactionTest (CASSANDRA-12664)
 + * Fix TestHintedHandoff.hintedhandoff_decom_test (CASSANDRA-13058)
 + * Fixed query monitoring for range queries (CASSANDRA-13050)
 + * Remove outboundBindAny configuration property (CASSANDRA-12673)
 + * Use correct bounds for all-data range when filtering (CASSANDRA-12666)
 + * Remove timing window in test case (CASSANDRA-12875)
 + * Resolve unit testing without JCE security libraries installed 
(CASSANDRA-12945)
 + * Fix inconsistencies in cassandra-stress load balancing policy 
(CASSANDRA-12919)
 + * Fix validation of non-frozen UDT cells (CASSANDRA-12916)
 + * Don't shut down socket input/output on StreamSession (CASSANDRA-12903)
 + * Fix Murmur3PartitionerTest (CASSANDRA-12858)
 + * Move cqlsh syntax rules into separate module and allow easier 
customization (CASSANDRA-12897)
 + * Fix CommitLogSegmentManagerTest (CASSANDRA-12283)
 + * Fix cassandra-stress truncate option (CASSANDRA-12695)
 + * Fix crossNode value when receiving messages (CASSANDRA-12791)
 + * Don't load MX4J beans twice (CASSANDRA-12869)
 + * Extend native protocol request flags, add versions to SUPPORTED, and 
introduce ProtocolVersion enum (CASSANDRA-12838)
 + * Set JOINING mode when running pre-join tasks (CASSANDRA-12836)
 + * remove net.mintern.primitive library due to license issue (CASSANDRA-12845)
 + * Properly format IPv6 addresses when logging JMX service URL 
(CASSANDRA-12454)
 + * Optimize the vnode allocation for single replica per DC (CASSANDRA-12777)
 + * Use non-token restrictions for bounds when token restrictions are 
overridden (CASSANDRA-12419)
 + * Fix CQLSH auto completion for PER PARTITION LIMIT (CASSANDRA-12803)
 + * Use different build directories for Eclipse and Ant (CASSANDRA-12466)
 + * Avoid potential AttributeError in cqlsh due to no table metadata 
(CASSANDRA-12815)
 + * Fix RandomReplicationAwareTokenAllocatorTest.testExistingCluster 
(CASSANDRA-12812)
 + * Upgrade commons-codec to 1.9 (CASSANDRA-12790)
 + * Make the fanout size for LeveledCompactionStrategy to be configurable 
(CASSANDRA-11550)
 + * Add duration data type (CASSANDRA-11873)
 + * Fix timeout in ReplicationAwareTokenAllocatorTest 

[06/10] cassandra git commit: Merge branch cassandra-2.2 into cassandra-3.0

2017-03-10 Thread blerer
Merge branch cassandra-2.2 into cassandra-3.0


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/aeca1d2b
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/aeca1d2b
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/aeca1d2b

Branch: refs/heads/cassandra-3.0
Commit: aeca1d2bd8e395a2897c3e36224f49b586babd4e
Parents: 31dec3d 5ef8a8b
Author: Benjamin Lerer 
Authored: Fri Mar 10 10:01:01 2017 +0100
Committer: Benjamin Lerer 
Committed: Fri Mar 10 10:02:21 2017 +0100

--
 CHANGES.txt |   1 +
 .../apache/cassandra/cql3/UpdateParameters.java |  24 -
 .../org/apache/cassandra/db/rows/BTreeRow.java  |  43 ++--
 src/java/org/apache/cassandra/db/rows/Row.java  |   6 ++
 .../org/apache/cassandra/utils/btree/BTree.java |  19 
 .../validation/entities/CollectionsTest.java| 100 +++
 .../apache/cassandra/db/rows/RowBuilder.java|   7 ++
 7 files changed, 191 insertions(+), 9 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/aeca1d2b/CHANGES.txt
--
diff --cc CHANGES.txt
index 1876922,09e4039..52a794b
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -1,20 -1,6 +1,21 @@@
 -2.2.10
 +3.0.13
 + * Slice.isEmpty() returns false for some empty slices (CASSANDRA-13305)
 + * Add formatted row output to assertEmpty in CQL Tester (CASSANDRA-13238)
 +Merged from 2.2:
+  * Fix queries updating multiple time the same list (CASSANDRA-13130)
   * Fix GRANT/REVOKE when keyspace isn't specified (CASSANDRA-13053)
 +
 +
 +3.0.12
 + * Prevent data loss on upgrade 2.1 - 3.0 by adding component separator to 
LogRecord absolute path (CASSANDRA-13294)
 + * Improve testing on macOS by eliminating sigar logging (CASSANDRA-13233)
 + * Cqlsh copy-from should error out when csv contains invalid data for 
collections (CASSANDRA-13071)
 + * Update c.yaml doc for offheap memtables (CASSANDRA-13179)
 + * Faster StreamingHistogram (CASSANDRA-13038)
 + * Legacy deserializer can create unexpected boundary range tombstones 
(CASSANDRA-13237)
 + * Remove unnecessary assertion from AntiCompactionTest (CASSANDRA-13070)
 + * Fix cqlsh COPY for dates before 1900 (CASSANDRA-13185)
 +Merged from 2.2:
   * Avoid race on receiver by starting streaming sender thread after sending 
init message (CASSANDRA-12886)
   * Fix "multiple versions of ant detected..." when running ant test 
(CASSANDRA-13232)
   * Coalescing strategy sleeps too much (CASSANDRA-13090)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/aeca1d2b/src/java/org/apache/cassandra/cql3/UpdateParameters.java
--
diff --cc src/java/org/apache/cassandra/cql3/UpdateParameters.java
index 0c58097,65edef7..d902dec
--- a/src/java/org/apache/cassandra/cql3/UpdateParameters.java
+++ b/src/java/org/apache/cassandra/cql3/UpdateParameters.java
@@@ -80,134 -59,71 +80,156 @@@ public class UpdateParameter
  throw new InvalidRequestException(String.format("Out of bound 
timestamp, must be in [%d, %d]", Long.MIN_VALUE + 1, Long.MAX_VALUE));
  }
  
 -public Cell makeColumn(CellName name, ByteBuffer value) throws 
InvalidRequestException
 +public void newRow(Clustering clustering) throws InvalidRequestException
 +{
 +if (metadata.isDense() && !metadata.isCompound())
 +{
 +// If it's a COMPACT STORAGE table with a single clustering 
column, the clustering value is
 +// translated in Thrift to the full Thrift column name, and for 
backward compatibility we
 +// don't want to allow that to be empty (even though this would 
be fine for the storage engine).
 +assert clustering.size() == 1;
 +ByteBuffer value = clustering.get(0);
 +if (value == null || !value.hasRemaining())
 +throw new InvalidRequestException("Invalid empty or null 
value for column " + metadata.clusteringColumns().get(0).name);
 +}
 +
 +if (clustering == Clustering.STATIC_CLUSTERING)
 +{
 +if (staticBuilder == null)
 +staticBuilder = BTreeRow.unsortedBuilder(nowInSec);
 +builder = staticBuilder;
 +}
 +else
 +{
 +if (regularBuilder == null)
 +regularBuilder = BTreeRow.unsortedBuilder(nowInSec);
 +builder = regularBuilder;
 +}
 +
 +builder.newRow(clustering);
 +}
 +
 +public Clustering currentClustering()
 +{
 +return builder.clustering();
 +}
 +
 +public void addPrimaryKeyLivenessInfo()
 +{
 +builder.addPrimaryKeyLivenessInfo(LivenessInfo.create(metadata, 
timestamp, 

[05/10] cassandra git commit: Merge branch cassandra-2.2 into cassandra-3.0

2017-03-10 Thread blerer
Merge branch cassandra-2.2 into cassandra-3.0


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/aeca1d2b
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/aeca1d2b
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/aeca1d2b

Branch: refs/heads/cassandra-3.11
Commit: aeca1d2bd8e395a2897c3e36224f49b586babd4e
Parents: 31dec3d 5ef8a8b
Author: Benjamin Lerer 
Authored: Fri Mar 10 10:01:01 2017 +0100
Committer: Benjamin Lerer 
Committed: Fri Mar 10 10:02:21 2017 +0100

--
 CHANGES.txt |   1 +
 .../apache/cassandra/cql3/UpdateParameters.java |  24 -
 .../org/apache/cassandra/db/rows/BTreeRow.java  |  43 ++--
 src/java/org/apache/cassandra/db/rows/Row.java  |   6 ++
 .../org/apache/cassandra/utils/btree/BTree.java |  19 
 .../validation/entities/CollectionsTest.java| 100 +++
 .../apache/cassandra/db/rows/RowBuilder.java|   7 ++
 7 files changed, 191 insertions(+), 9 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/aeca1d2b/CHANGES.txt
--
diff --cc CHANGES.txt
index 1876922,09e4039..52a794b
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -1,20 -1,6 +1,21 @@@
 -2.2.10
 +3.0.13
 + * Slice.isEmpty() returns false for some empty slices (CASSANDRA-13305)
 + * Add formatted row output to assertEmpty in CQL Tester (CASSANDRA-13238)
 +Merged from 2.2:
+  * Fix queries updating multiple time the same list (CASSANDRA-13130)
   * Fix GRANT/REVOKE when keyspace isn't specified (CASSANDRA-13053)
 +
 +
 +3.0.12
 + * Prevent data loss on upgrade 2.1 - 3.0 by adding component separator to 
LogRecord absolute path (CASSANDRA-13294)
 + * Improve testing on macOS by eliminating sigar logging (CASSANDRA-13233)
 + * Cqlsh copy-from should error out when csv contains invalid data for 
collections (CASSANDRA-13071)
 + * Update c.yaml doc for offheap memtables (CASSANDRA-13179)
 + * Faster StreamingHistogram (CASSANDRA-13038)
 + * Legacy deserializer can create unexpected boundary range tombstones 
(CASSANDRA-13237)
 + * Remove unnecessary assertion from AntiCompactionTest (CASSANDRA-13070)
 + * Fix cqlsh COPY for dates before 1900 (CASSANDRA-13185)
 +Merged from 2.2:
   * Avoid race on receiver by starting streaming sender thread after sending 
init message (CASSANDRA-12886)
   * Fix "multiple versions of ant detected..." when running ant test 
(CASSANDRA-13232)
   * Coalescing strategy sleeps too much (CASSANDRA-13090)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/aeca1d2b/src/java/org/apache/cassandra/cql3/UpdateParameters.java
--
diff --cc src/java/org/apache/cassandra/cql3/UpdateParameters.java
index 0c58097,65edef7..d902dec
--- a/src/java/org/apache/cassandra/cql3/UpdateParameters.java
+++ b/src/java/org/apache/cassandra/cql3/UpdateParameters.java
@@@ -80,134 -59,71 +80,156 @@@ public class UpdateParameter
  throw new InvalidRequestException(String.format("Out of bound 
timestamp, must be in [%d, %d]", Long.MIN_VALUE + 1, Long.MAX_VALUE));
  }
  
 -public Cell makeColumn(CellName name, ByteBuffer value) throws 
InvalidRequestException
 +public void newRow(Clustering clustering) throws InvalidRequestException
 +{
 +if (metadata.isDense() && !metadata.isCompound())
 +{
 +// If it's a COMPACT STORAGE table with a single clustering 
column, the clustering value is
 +// translated in Thrift to the full Thrift column name, and for 
backward compatibility we
 +// don't want to allow that to be empty (even though this would 
be fine for the storage engine).
 +assert clustering.size() == 1;
 +ByteBuffer value = clustering.get(0);
 +if (value == null || !value.hasRemaining())
 +throw new InvalidRequestException("Invalid empty or null 
value for column " + metadata.clusteringColumns().get(0).name);
 +}
 +
 +if (clustering == Clustering.STATIC_CLUSTERING)
 +{
 +if (staticBuilder == null)
 +staticBuilder = BTreeRow.unsortedBuilder(nowInSec);
 +builder = staticBuilder;
 +}
 +else
 +{
 +if (regularBuilder == null)
 +regularBuilder = BTreeRow.unsortedBuilder(nowInSec);
 +builder = regularBuilder;
 +}
 +
 +builder.newRow(clustering);
 +}
 +
 +public Clustering currentClustering()
 +{
 +return builder.clustering();
 +}
 +
 +public void addPrimaryKeyLivenessInfo()
 +{
 +builder.addPrimaryKeyLivenessInfo(LivenessInfo.create(metadata, 
timestamp, 

[09/10] cassandra git commit: Merge branch cassandra-3.0 into cassandra-3.11

2017-03-10 Thread blerer
Merge branch cassandra-3.0 into cassandra-3.11


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/7b3415d0
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/7b3415d0
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/7b3415d0

Branch: refs/heads/trunk
Commit: 7b3415d0b06843aca4410ce9cbd6d68ff37e3978
Parents: dc65a57 aeca1d2
Author: Benjamin Lerer 
Authored: Fri Mar 10 10:06:27 2017 +0100
Committer: Benjamin Lerer 
Committed: Fri Mar 10 10:07:14 2017 +0100

--
 CHANGES.txt |   1 +
 .../apache/cassandra/cql3/UpdateParameters.java |  24 -
 .../org/apache/cassandra/db/rows/BTreeRow.java  |  43 ++--
 src/java/org/apache/cassandra/db/rows/Row.java  |   6 ++
 .../org/apache/cassandra/utils/btree/BTree.java |  20 
 .../validation/entities/CollectionsTest.java| 100 +++
 .../apache/cassandra/db/rows/RowBuilder.java|   7 ++
 7 files changed, 192 insertions(+), 9 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/7b3415d0/CHANGES.txt
--
diff --cc CHANGES.txt
index 2772fc2,52a794b..acef1c2
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -32,139 -42,6 +32,140 @@@ Merged from 3.0
 live rows in sstabledump (CASSANDRA-13177)
   * Provide user workaround when system_schema.columns does not contain entries
 for a table that's in system_schema.tables (CASSANDRA-13180)
 +Merged from 2.2:
++ * Fix queries updating multiple time the same list (CASSANDRA-13130)
 + * Fix GRANT/REVOKE when keyspace isn't specified (CASSANDRA-13053)
 + * Fix flaky LongLeveledCompactionStrategyTest (CASSANDRA-12202)
 + * Fix failing COPY TO STDOUT (CASSANDRA-12497)
 + * Fix ColumnCounter::countAll behaviour for reverse queries (CASSANDRA-13222)
 + * Exceptions encountered calling getSeeds() breaks OTC thread 
(CASSANDRA-13018)
 + * Fix negative mean latency metric (CASSANDRA-12876)
 + * Use only one file pointer when creating commitlog segments 
(CASSANDRA-12539)
 +Merged from 2.1:
 + * Remove unused repositories (CASSANDRA-13278)
 + * Log stacktrace of uncaught exceptions (CASSANDRA-13108)
 + * Use portable stderr for java error in startup (CASSANDRA-13211)
 + * Fix Thread Leak in OutboundTcpConnection (CASSANDRA-13204)
 + * Coalescing strategy can enter infinite loop (CASSANDRA-13159)
 +
 +
 +3.10
 + * Fix secondary index queries regression (CASSANDRA-13013)
 + * Add duration type to the protocol V5 (CASSANDRA-12850)
 + * Fix duration type validation (CASSANDRA-13143)
 + * Fix flaky GcCompactionTest (CASSANDRA-12664)
 + * Fix TestHintedHandoff.hintedhandoff_decom_test (CASSANDRA-13058)
 + * Fixed query monitoring for range queries (CASSANDRA-13050)
 + * Remove outboundBindAny configuration property (CASSANDRA-12673)
 + * Use correct bounds for all-data range when filtering (CASSANDRA-12666)
 + * Remove timing window in test case (CASSANDRA-12875)
 + * Resolve unit testing without JCE security libraries installed 
(CASSANDRA-12945)
 + * Fix inconsistencies in cassandra-stress load balancing policy 
(CASSANDRA-12919)
 + * Fix validation of non-frozen UDT cells (CASSANDRA-12916)
 + * Don't shut down socket input/output on StreamSession (CASSANDRA-12903)
 + * Fix Murmur3PartitionerTest (CASSANDRA-12858)
 + * Move cqlsh syntax rules into separate module and allow easier 
customization (CASSANDRA-12897)
 + * Fix CommitLogSegmentManagerTest (CASSANDRA-12283)
 + * Fix cassandra-stress truncate option (CASSANDRA-12695)
 + * Fix crossNode value when receiving messages (CASSANDRA-12791)
 + * Don't load MX4J beans twice (CASSANDRA-12869)
 + * Extend native protocol request flags, add versions to SUPPORTED, and 
introduce ProtocolVersion enum (CASSANDRA-12838)
 + * Set JOINING mode when running pre-join tasks (CASSANDRA-12836)
 + * remove net.mintern.primitive library due to license issue (CASSANDRA-12845)
 + * Properly format IPv6 addresses when logging JMX service URL 
(CASSANDRA-12454)
 + * Optimize the vnode allocation for single replica per DC (CASSANDRA-12777)
 + * Use non-token restrictions for bounds when token restrictions are 
overridden (CASSANDRA-12419)
 + * Fix CQLSH auto completion for PER PARTITION LIMIT (CASSANDRA-12803)
 + * Use different build directories for Eclipse and Ant (CASSANDRA-12466)
 + * Avoid potential AttributeError in cqlsh due to no table metadata 
(CASSANDRA-12815)
 + * Fix RandomReplicationAwareTokenAllocatorTest.testExistingCluster 
(CASSANDRA-12812)
 + * Upgrade commons-codec to 1.9 (CASSANDRA-12790)
 + * Make the fanout size for LeveledCompactionStrategy to be configurable 
(CASSANDRA-11550)
 + * Add duration data type (CASSANDRA-11873)
 + * Fix timeout in ReplicationAwareTokenAllocatorTest 

[03/10] cassandra git commit: Fix queries updating multiple time the same list

2017-03-10 Thread blerer
Fix queries updating multiple time the same list

patch by Benjamin Lerer; reviewed by Sylvain Lebresne for CASSANDRA-13130


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/5ef8a8b4
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/5ef8a8b4
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/5ef8a8b4

Branch: refs/heads/cassandra-3.11
Commit: 5ef8a8b408d4c492f7f2ffbbbe6fce237140c7cb
Parents: e4be2d0
Author: Benjamin Lerer 
Authored: Fri Mar 10 09:57:20 2017 +0100
Committer: Benjamin Lerer 
Committed: Fri Mar 10 09:57:20 2017 +0100

--
 CHANGES.txt |   1 +
 src/java/org/apache/cassandra/cql3/Lists.java   |  10 +-
 .../apache/cassandra/cql3/UpdateParameters.java |  31 +-
 .../validation/entities/CollectionsTest.java| 100 +++
 4 files changed, 135 insertions(+), 7 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/5ef8a8b4/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 0982de9..09e4039 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 2.2.10
+ * Fix queries updating multiple time the same list (CASSANDRA-13130)
  * Fix GRANT/REVOKE when keyspace isn't specified (CASSANDRA-13053)
  * Avoid race on receiver by starting streaming sender thread after sending 
init message (CASSANDRA-12886)
  * Fix "multiple versions of ant detected..." when running ant test 
(CASSANDRA-13232)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/5ef8a8b4/src/java/org/apache/cassandra/cql3/Lists.java
--
diff --git a/src/java/org/apache/cassandra/cql3/Lists.java 
b/src/java/org/apache/cassandra/cql3/Lists.java
index da8c48a..cc75476 100644
--- a/src/java/org/apache/cassandra/cql3/Lists.java
+++ b/src/java/org/apache/cassandra/cql3/Lists.java
@@ -21,15 +21,18 @@ import static 
org.apache.cassandra.cql3.Constants.UNSET_VALUE;
 
 import java.nio.ByteBuffer;
 import java.util.ArrayList;
+import java.util.Collections;
 import java.util.List;
 import java.util.concurrent.atomic.AtomicReference;
 
+import org.apache.cassandra.config.CFMetaData;
 import org.apache.cassandra.config.ColumnDefinition;
 import org.apache.cassandra.cql3.functions.Function;
 import org.apache.cassandra.db.Cell;
 import org.apache.cassandra.db.ColumnFamily;
 import org.apache.cassandra.db.composites.CellName;
 import org.apache.cassandra.db.composites.Composite;
+import org.apache.cassandra.db.composites.CompositesBuilder;
 import org.apache.cassandra.db.marshal.Int32Type;
 import org.apache.cassandra.db.marshal.ListType;
 import org.apache.cassandra.exceptions.InvalidRequestException;
@@ -349,7 +352,7 @@ public abstract class Lists
 if (index == ByteBufferUtil.UNSET_BYTE_BUFFER)
 throw new InvalidRequestException("Invalid unset value for 
list index");
 
-List existingList = params.getPrefetchedList(rowKey, 
column.name);
+List existingList = params.getPrefetchedList(rowKey, 
column.name, cf);
 int idx = ByteBufferUtil.toInt(index);
 if (existingList == null || existingList.size() == 0)
 throw new InvalidRequestException("Attempted to set an element 
on a list which is null");
@@ -458,7 +461,7 @@ public abstract class Lists
 public void execute(ByteBuffer rowKey, ColumnFamily cf, Composite 
prefix, UpdateParameters params) throws InvalidRequestException
 {
 assert column.type.isMultiCell() : "Attempted to delete from a 
frozen list";
-List existingList = params.getPrefetchedList(rowKey, 
column.name);
+List existingList = params.getPrefetchedList(rowKey, 
column.name, cf);
 // We want to call bind before possibly returning to reject 
queries where the value provided is not a list.
 Term.Terminal value = t.bind(params.options);
 
@@ -505,7 +508,8 @@ public abstract class Lists
 if (index == Constants.UNSET_VALUE)
 return;
 
-List existingList = params.getPrefetchedList(rowKey, 
column.name);
+List existingList = params.getPrefetchedList(rowKey, 
column.name, cf);
+
 int idx = 
ByteBufferUtil.toInt(index.get(params.options.getProtocolVersion()));
 if (existingList == null || existingList.size() == 0)
 throw new InvalidRequestException("Attempted to delete an 
element from a list which is null");

http://git-wip-us.apache.org/repos/asf/cassandra/blob/5ef8a8b4/src/java/org/apache/cassandra/cql3/UpdateParameters.java
--
diff --git 

[01/10] cassandra git commit: Fix queries updating multiple time the same list

2017-03-10 Thread blerer
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-2.2 e4be2d06b -> 5ef8a8b40
  refs/heads/cassandra-3.0 31dec3d54 -> aeca1d2bd
  refs/heads/cassandra-3.11 dc65a5765 -> 7b3415d0b
  refs/heads/trunk 9e8e8914d -> 753d004cd


Fix queries updating multiple time the same list

patch by Benjamin Lerer; reviewed by Sylvain Lebresne for CASSANDRA-13130


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/5ef8a8b4
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/5ef8a8b4
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/5ef8a8b4

Branch: refs/heads/cassandra-2.2
Commit: 5ef8a8b408d4c492f7f2ffbbbe6fce237140c7cb
Parents: e4be2d0
Author: Benjamin Lerer 
Authored: Fri Mar 10 09:57:20 2017 +0100
Committer: Benjamin Lerer 
Committed: Fri Mar 10 09:57:20 2017 +0100

--
 CHANGES.txt |   1 +
 src/java/org/apache/cassandra/cql3/Lists.java   |  10 +-
 .../apache/cassandra/cql3/UpdateParameters.java |  31 +-
 .../validation/entities/CollectionsTest.java| 100 +++
 4 files changed, 135 insertions(+), 7 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/5ef8a8b4/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 0982de9..09e4039 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 2.2.10
+ * Fix queries updating multiple time the same list (CASSANDRA-13130)
  * Fix GRANT/REVOKE when keyspace isn't specified (CASSANDRA-13053)
  * Avoid race on receiver by starting streaming sender thread after sending 
init message (CASSANDRA-12886)
  * Fix "multiple versions of ant detected..." when running ant test 
(CASSANDRA-13232)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/5ef8a8b4/src/java/org/apache/cassandra/cql3/Lists.java
--
diff --git a/src/java/org/apache/cassandra/cql3/Lists.java 
b/src/java/org/apache/cassandra/cql3/Lists.java
index da8c48a..cc75476 100644
--- a/src/java/org/apache/cassandra/cql3/Lists.java
+++ b/src/java/org/apache/cassandra/cql3/Lists.java
@@ -21,15 +21,18 @@ import static 
org.apache.cassandra.cql3.Constants.UNSET_VALUE;
 
 import java.nio.ByteBuffer;
 import java.util.ArrayList;
+import java.util.Collections;
 import java.util.List;
 import java.util.concurrent.atomic.AtomicReference;
 
+import org.apache.cassandra.config.CFMetaData;
 import org.apache.cassandra.config.ColumnDefinition;
 import org.apache.cassandra.cql3.functions.Function;
 import org.apache.cassandra.db.Cell;
 import org.apache.cassandra.db.ColumnFamily;
 import org.apache.cassandra.db.composites.CellName;
 import org.apache.cassandra.db.composites.Composite;
+import org.apache.cassandra.db.composites.CompositesBuilder;
 import org.apache.cassandra.db.marshal.Int32Type;
 import org.apache.cassandra.db.marshal.ListType;
 import org.apache.cassandra.exceptions.InvalidRequestException;
@@ -349,7 +352,7 @@ public abstract class Lists
 if (index == ByteBufferUtil.UNSET_BYTE_BUFFER)
 throw new InvalidRequestException("Invalid unset value for 
list index");
 
-List existingList = params.getPrefetchedList(rowKey, 
column.name);
+List existingList = params.getPrefetchedList(rowKey, 
column.name, cf);
 int idx = ByteBufferUtil.toInt(index);
 if (existingList == null || existingList.size() == 0)
 throw new InvalidRequestException("Attempted to set an element 
on a list which is null");
@@ -458,7 +461,7 @@ public abstract class Lists
 public void execute(ByteBuffer rowKey, ColumnFamily cf, Composite 
prefix, UpdateParameters params) throws InvalidRequestException
 {
 assert column.type.isMultiCell() : "Attempted to delete from a 
frozen list";
-List existingList = params.getPrefetchedList(rowKey, 
column.name);
+List existingList = params.getPrefetchedList(rowKey, 
column.name, cf);
 // We want to call bind before possibly returning to reject 
queries where the value provided is not a list.
 Term.Terminal value = t.bind(params.options);
 
@@ -505,7 +508,8 @@ public abstract class Lists
 if (index == Constants.UNSET_VALUE)
 return;
 
-List existingList = params.getPrefetchedList(rowKey, 
column.name);
+List existingList = params.getPrefetchedList(rowKey, 
column.name, cf);
+
 int idx = 
ByteBufferUtil.toInt(index.get(params.options.getProtocolVersion()));
 if (existingList == null || existingList.size() == 0)
 throw new InvalidRequestException("Attempted to delete an 
element from a 

[10/10] cassandra git commit: Merge branch cassandra-3.11 into trunk

2017-03-10 Thread blerer
Merge branch cassandra-3.11 into trunk


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/753d004c
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/753d004c
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/753d004c

Branch: refs/heads/trunk
Commit: 753d004cda7ac4ed636b5e7f0b712ba0e987d368
Parents: 9e8e891 7b3415d
Author: Benjamin Lerer 
Authored: Fri Mar 10 10:14:13 2017 +0100
Committer: Benjamin Lerer 
Committed: Fri Mar 10 10:15:07 2017 +0100

--
 CHANGES.txt |   1 +
 .../apache/cassandra/cql3/UpdateParameters.java |  24 -
 .../org/apache/cassandra/db/rows/BTreeRow.java  |  43 ++--
 src/java/org/apache/cassandra/db/rows/Row.java  |   6 ++
 .../org/apache/cassandra/utils/btree/BTree.java |  20 
 .../validation/entities/CollectionsTest.java| 100 +++
 .../apache/cassandra/db/rows/RowBuilder.java|   7 ++
 7 files changed, 192 insertions(+), 9 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/753d004c/CHANGES.txt
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/753d004c/src/java/org/apache/cassandra/cql3/UpdateParameters.java
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/753d004c/src/java/org/apache/cassandra/db/rows/BTreeRow.java
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/753d004c/src/java/org/apache/cassandra/db/rows/Row.java
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/753d004c/test/unit/org/apache/cassandra/cql3/validation/entities/CollectionsTest.java
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/753d004c/test/unit/org/apache/cassandra/db/rows/RowBuilder.java
--
diff --cc test/unit/org/apache/cassandra/db/rows/RowBuilder.java
index 5eed774,ede2ccd..21522a5
--- a/test/unit/org/apache/cassandra/db/rows/RowBuilder.java
+++ b/test/unit/org/apache/cassandra/db/rows/RowBuilder.java
@@@ -37,8 -38,14 +38,14 @@@ public class RowBuilder implements Row.
  public Clustering clustering = null;
  public LivenessInfo livenessInfo = null;
  public Row.Deletion deletionTime = null;
 -public List> complexDeletions = new 
LinkedList<>();
 +public List> complexDeletions = new 
LinkedList<>();
  
+ @Override
+ public Builder copy()
+ {
+ throw new UnsupportedOperationException();
+ }
+ 
  public void addCell(Cell cell)
  {
  cells.add(cell);



[04/10] cassandra git commit: Fix queries updating multiple time the same list

2017-03-10 Thread blerer
Fix queries updating multiple time the same list

patch by Benjamin Lerer; reviewed by Sylvain Lebresne for CASSANDRA-13130


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/5ef8a8b4
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/5ef8a8b4
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/5ef8a8b4

Branch: refs/heads/trunk
Commit: 5ef8a8b408d4c492f7f2ffbbbe6fce237140c7cb
Parents: e4be2d0
Author: Benjamin Lerer 
Authored: Fri Mar 10 09:57:20 2017 +0100
Committer: Benjamin Lerer 
Committed: Fri Mar 10 09:57:20 2017 +0100

--
 CHANGES.txt |   1 +
 src/java/org/apache/cassandra/cql3/Lists.java   |  10 +-
 .../apache/cassandra/cql3/UpdateParameters.java |  31 +-
 .../validation/entities/CollectionsTest.java| 100 +++
 4 files changed, 135 insertions(+), 7 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/5ef8a8b4/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 0982de9..09e4039 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 2.2.10
+ * Fix queries updating multiple time the same list (CASSANDRA-13130)
  * Fix GRANT/REVOKE when keyspace isn't specified (CASSANDRA-13053)
  * Avoid race on receiver by starting streaming sender thread after sending 
init message (CASSANDRA-12886)
  * Fix "multiple versions of ant detected..." when running ant test 
(CASSANDRA-13232)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/5ef8a8b4/src/java/org/apache/cassandra/cql3/Lists.java
--
diff --git a/src/java/org/apache/cassandra/cql3/Lists.java 
b/src/java/org/apache/cassandra/cql3/Lists.java
index da8c48a..cc75476 100644
--- a/src/java/org/apache/cassandra/cql3/Lists.java
+++ b/src/java/org/apache/cassandra/cql3/Lists.java
@@ -21,15 +21,18 @@ import static 
org.apache.cassandra.cql3.Constants.UNSET_VALUE;
 
 import java.nio.ByteBuffer;
 import java.util.ArrayList;
+import java.util.Collections;
 import java.util.List;
 import java.util.concurrent.atomic.AtomicReference;
 
+import org.apache.cassandra.config.CFMetaData;
 import org.apache.cassandra.config.ColumnDefinition;
 import org.apache.cassandra.cql3.functions.Function;
 import org.apache.cassandra.db.Cell;
 import org.apache.cassandra.db.ColumnFamily;
 import org.apache.cassandra.db.composites.CellName;
 import org.apache.cassandra.db.composites.Composite;
+import org.apache.cassandra.db.composites.CompositesBuilder;
 import org.apache.cassandra.db.marshal.Int32Type;
 import org.apache.cassandra.db.marshal.ListType;
 import org.apache.cassandra.exceptions.InvalidRequestException;
@@ -349,7 +352,7 @@ public abstract class Lists
 if (index == ByteBufferUtil.UNSET_BYTE_BUFFER)
 throw new InvalidRequestException("Invalid unset value for 
list index");
 
-List existingList = params.getPrefetchedList(rowKey, 
column.name);
+List existingList = params.getPrefetchedList(rowKey, 
column.name, cf);
 int idx = ByteBufferUtil.toInt(index);
 if (existingList == null || existingList.size() == 0)
 throw new InvalidRequestException("Attempted to set an element 
on a list which is null");
@@ -458,7 +461,7 @@ public abstract class Lists
 public void execute(ByteBuffer rowKey, ColumnFamily cf, Composite 
prefix, UpdateParameters params) throws InvalidRequestException
 {
 assert column.type.isMultiCell() : "Attempted to delete from a 
frozen list";
-List existingList = params.getPrefetchedList(rowKey, 
column.name);
+List existingList = params.getPrefetchedList(rowKey, 
column.name, cf);
 // We want to call bind before possibly returning to reject 
queries where the value provided is not a list.
 Term.Terminal value = t.bind(params.options);
 
@@ -505,7 +508,8 @@ public abstract class Lists
 if (index == Constants.UNSET_VALUE)
 return;
 
-List existingList = params.getPrefetchedList(rowKey, 
column.name);
+List existingList = params.getPrefetchedList(rowKey, 
column.name, cf);
+
 int idx = 
ByteBufferUtil.toInt(index.get(params.options.getProtocolVersion()));
 if (existingList == null || existingList.size() == 0)
 throw new InvalidRequestException("Attempted to delete an 
element from a list which is null");

http://git-wip-us.apache.org/repos/asf/cassandra/blob/5ef8a8b4/src/java/org/apache/cassandra/cql3/UpdateParameters.java
--
diff --git 

[02/10] cassandra git commit: Fix queries updating multiple time the same list

2017-03-10 Thread blerer
Fix queries updating multiple time the same list

patch by Benjamin Lerer; reviewed by Sylvain Lebresne for CASSANDRA-13130


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/5ef8a8b4
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/5ef8a8b4
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/5ef8a8b4

Branch: refs/heads/cassandra-3.0
Commit: 5ef8a8b408d4c492f7f2ffbbbe6fce237140c7cb
Parents: e4be2d0
Author: Benjamin Lerer 
Authored: Fri Mar 10 09:57:20 2017 +0100
Committer: Benjamin Lerer 
Committed: Fri Mar 10 09:57:20 2017 +0100

--
 CHANGES.txt |   1 +
 src/java/org/apache/cassandra/cql3/Lists.java   |  10 +-
 .../apache/cassandra/cql3/UpdateParameters.java |  31 +-
 .../validation/entities/CollectionsTest.java| 100 +++
 4 files changed, 135 insertions(+), 7 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/5ef8a8b4/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 0982de9..09e4039 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 2.2.10
+ * Fix queries updating multiple time the same list (CASSANDRA-13130)
  * Fix GRANT/REVOKE when keyspace isn't specified (CASSANDRA-13053)
  * Avoid race on receiver by starting streaming sender thread after sending 
init message (CASSANDRA-12886)
  * Fix "multiple versions of ant detected..." when running ant test 
(CASSANDRA-13232)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/5ef8a8b4/src/java/org/apache/cassandra/cql3/Lists.java
--
diff --git a/src/java/org/apache/cassandra/cql3/Lists.java 
b/src/java/org/apache/cassandra/cql3/Lists.java
index da8c48a..cc75476 100644
--- a/src/java/org/apache/cassandra/cql3/Lists.java
+++ b/src/java/org/apache/cassandra/cql3/Lists.java
@@ -21,15 +21,18 @@ import static 
org.apache.cassandra.cql3.Constants.UNSET_VALUE;
 
 import java.nio.ByteBuffer;
 import java.util.ArrayList;
+import java.util.Collections;
 import java.util.List;
 import java.util.concurrent.atomic.AtomicReference;
 
+import org.apache.cassandra.config.CFMetaData;
 import org.apache.cassandra.config.ColumnDefinition;
 import org.apache.cassandra.cql3.functions.Function;
 import org.apache.cassandra.db.Cell;
 import org.apache.cassandra.db.ColumnFamily;
 import org.apache.cassandra.db.composites.CellName;
 import org.apache.cassandra.db.composites.Composite;
+import org.apache.cassandra.db.composites.CompositesBuilder;
 import org.apache.cassandra.db.marshal.Int32Type;
 import org.apache.cassandra.db.marshal.ListType;
 import org.apache.cassandra.exceptions.InvalidRequestException;
@@ -349,7 +352,7 @@ public abstract class Lists
 if (index == ByteBufferUtil.UNSET_BYTE_BUFFER)
 throw new InvalidRequestException("Invalid unset value for 
list index");
 
-List existingList = params.getPrefetchedList(rowKey, 
column.name);
+List existingList = params.getPrefetchedList(rowKey, 
column.name, cf);
 int idx = ByteBufferUtil.toInt(index);
 if (existingList == null || existingList.size() == 0)
 throw new InvalidRequestException("Attempted to set an element 
on a list which is null");
@@ -458,7 +461,7 @@ public abstract class Lists
 public void execute(ByteBuffer rowKey, ColumnFamily cf, Composite 
prefix, UpdateParameters params) throws InvalidRequestException
 {
 assert column.type.isMultiCell() : "Attempted to delete from a 
frozen list";
-List existingList = params.getPrefetchedList(rowKey, 
column.name);
+List existingList = params.getPrefetchedList(rowKey, 
column.name, cf);
 // We want to call bind before possibly returning to reject 
queries where the value provided is not a list.
 Term.Terminal value = t.bind(params.options);
 
@@ -505,7 +508,8 @@ public abstract class Lists
 if (index == Constants.UNSET_VALUE)
 return;
 
-List existingList = params.getPrefetchedList(rowKey, 
column.name);
+List existingList = params.getPrefetchedList(rowKey, 
column.name, cf);
+
 int idx = 
ByteBufferUtil.toInt(index.get(params.options.getProtocolVersion()));
 if (existingList == null || existingList.size() == 0)
 throw new InvalidRequestException("Attempted to delete an 
element from a list which is null");

http://git-wip-us.apache.org/repos/asf/cassandra/blob/5ef8a8b4/src/java/org/apache/cassandra/cql3/UpdateParameters.java
--
diff --git 

[jira] [Updated] (CASSANDRA-13321) Add a checksum component for the sstable metadata (-Statistics.db) file

2017-03-10 Thread Marcus Eriksson (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13321?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marcus Eriksson updated CASSANDRA-13321:

Status: Patch Available  (was: Open)

https://github.com/krummas/cassandra/tree/marcuse/metadatachecksum_trunk

http://cassci.datastax.com/view/Dev/view/krummas/job/krummas-marcuse-metadatachecksum_trunk-dtest/
http://cassci.datastax.com/view/Dev/view/krummas/job/krummas-marcuse-metadatachecksum_trunk-testall/

> Add a checksum component for the sstable metadata (-Statistics.db) file
> ---
>
> Key: CASSANDRA-13321
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13321
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Marcus Eriksson
>Assignee: Marcus Eriksson
> Fix For: 4.x
>
>
> Since we keep important information in the sstable metadata file now, we 
> should add a checksum component for it. One danger is that if a bit gets 
> flipped in repairedAt we could consider the sstable repaired when it is not.
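
As a rough illustration of the idea (not taken from the linked branch): compute 
a plain CRC32 over the -Statistics.db bytes, store it next to the file, and 
verify it before trusting fields such as repairedAt. The file naming and helper 
methods below are assumptions for the sketch, not Cassandra APIs.

{code}
// Illustrative sketch only: checksum an sstable metadata file with CRC32 and
// refuse to trust it when the stored digest no longer matches.
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.zip.CRC32;

final class MetadataChecksum
{
    static long checksumOf(Path statsFile) throws IOException
    {
        CRC32 crc = new CRC32();
        crc.update(Files.readAllBytes(statsFile));
        return crc.getValue();
    }

    static void writeChecksum(Path statsFile, Path digestFile) throws IOException
    {
        Files.write(digestFile, Long.toString(checksumOf(statsFile)).getBytes());
    }

    static boolean verify(Path statsFile, Path digestFile) throws IOException
    {
        long expected = Long.parseLong(new String(Files.readAllBytes(digestFile)).trim());
        return expected == checksumOf(statsFile);
    }

    public static void main(String[] args) throws IOException
    {
        Path stats = Files.createTempFile("ma-1-big-Statistics", ".db");
        Path digest = Files.createTempFile("ma-1-big-Statistics", ".crc32");
        Files.write(stats, "repairedAt=1489104000".getBytes());
        writeChecksum(stats, digest);
        System.out.println(verify(stats, digest)); // true
        Files.write(stats, "repairedAt=1489104001".getBytes()); // simulated bit flip
        System.out.println(verify(stats, digest)); // false
    }
}
{code}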



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (CASSANDRA-13321) Add a checksum component for the sstable metadata (-Statistics.db) file

2017-03-10 Thread Marcus Eriksson (JIRA)
Marcus Eriksson created CASSANDRA-13321:
---

 Summary: Add a checksum component for the sstable metadata 
(-Statistics.db) file
 Key: CASSANDRA-13321
 URL: https://issues.apache.org/jira/browse/CASSANDRA-13321
 Project: Cassandra
  Issue Type: Improvement
Reporter: Marcus Eriksson
Assignee: Marcus Eriksson
 Fix For: 4.x


Since we keep important information in the sstable metadata file now, we should 
add a checksum component for it. One danger is that if a bit gets flipped in 
repairedAt we could consider the sstable repaired when it is not.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (CASSANDRA-13320) upgradesstables fails after upgrading from 2.1.x to 3.0.11

2017-03-10 Thread Zhongxiang Zheng (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13320?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhongxiang Zheng updated CASSANDRA-13320:
-
Attachment: 13320.patch

> upgradesstables fails after upgrading from 2.1.x to 3.0.11
> --
>
> Key: CASSANDRA-13320
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13320
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Zhongxiang Zheng
> Attachments: 13320.patch
>
>
> I tried to execute {{nodetool upgradesstables}} after upgrading cluster from 
> 2.1.16 to 3.0.11, but it fails when upgrading a table with 2i.
> This problem can be reproduced as follows.
> {code}
> $ ccm create test -v 2.1.16 -n 1 -s
> $ ccm node1 cqlsh  -e "CREATE KEYSPACE test WITH replication = 
> {'class':'SimpleStrategy', 'replication_factor':1}"
> $ ccm node1 cqlsh  -e "CREATE TABLE test.test(k1 text, k2 text, PRIMARY KEY( 
> k1 ));"
> $ ccm node1 cqlsh  -e "CREATE INDEX k2 ON test.test(k2);"
>  
> $ ccm node1 cqlsh  -e "INSERT INTO test.test (k1, k2 ) VALUES ( 'a', 'a') ;"
> $ ccm node1 cqlsh  -e "INSERT INTO test.test (k1, k2 ) VALUES ( 'a', 'b') ;"
>  
> $ ccm node1 nodetool flush
>  
> $ for i in `seq 1 `; do ccm node${i} stop; ccm node${i} setdir -v3.0.11;ccm 
> node${i} start; done
> $ ccm node1 nodetool upgradesstables test test
> Traceback (most recent call last):
>   File "/home/y/bin/ccm", line 86, in 
> cmd.run()
>   File "/home/y/lib/python2.7/site-packages/ccmlib/cmds/node_cmds.py", line 
> 267, in run
> stdout, stderr = self.node.nodetool(" ".join(self.args[1:]))
>   File "/home/y/lib/python2.7/site-packages/ccmlib/node.py", line 742, in 
> nodetool
> raise NodetoolError(" ".join(args), exit_status, stdout, stderr)
> ccmlib.node.NodetoolError: Nodetool command 
> '/home/zzheng/.ccm/repository/3.0.11/bin/nodetool -h localhost -p 7100 
> upgradesstables test test' failed; exit status: 2; stderr: WARN  06:29:08 
> Only 10476 MB free across all data volumes. Consider adding more capacity to 
> your cluster or removing obsolete snapshots
> error: null
> -- StackTrace --
> java.lang.AssertionError
>   at org.apache.cassandra.db.rows.Rows.collectStats(Rows.java:70)
>   at 
> org.apache.cassandra.io.sstable.format.big.BigTableWriter$StatsCollector.applyToRow(BigTableWriter.java:197)
>   at 
> org.apache.cassandra.db.transform.BaseRows.applyOne(BaseRows.java:116)
>   at org.apache.cassandra.db.transform.BaseRows.add(BaseRows.java:107)
>   at 
> org.apache.cassandra.db.transform.UnfilteredRows.add(UnfilteredRows.java:41)
>   at 
> org.apache.cassandra.db.transform.Transformation.add(Transformation.java:156)
>   at 
> org.apache.cassandra.db.transform.Transformation.apply(Transformation.java:122)
>   at 
> org.apache.cassandra.io.sstable.format.big.BigTableWriter.append(BigTableWriter.java:147)
>   at 
> org.apache.cassandra.io.sstable.SSTableRewriter.append(SSTableRewriter.java:125)
>   at 
> org.apache.cassandra.db.compaction.writers.DefaultCompactionWriter.realAppend(DefaultCompactionWriter.java:57)
>   at 
> org.apache.cassandra.db.compaction.writers.CompactionAwareWriter.append(CompactionAwareWriter.java:109)
>   at 
> org.apache.cassandra.db.compaction.CompactionTask.runMayThrow(CompactionTask.java:195)
>   at 
> org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28)
>   at 
> org.apache.cassandra.db.compaction.CompactionTask.executeInternal(CompactionTask.java:89)
>   at 
> org.apache.cassandra.db.compaction.AbstractCompactionTask.execute(AbstractCompactionTask.java:61)
>   at 
> org.apache.cassandra.db.compaction.CompactionManager$5.execute(CompactionManager.java:415)
>   at 
> org.apache.cassandra.db.compaction.CompactionManager$2.call(CompactionManager.java:307)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>   at 
> org.apache.cassandra.concurrent.NamedThreadFactory.lambda$threadLocalDeallocator$0(NamedThreadFactory.java:79)
>   at java.lang.Thread.run(Thread.java:745)
> {code}
> The result of dumping the 2i sstable is as follows.
> {code}
> [
> {"key": "a",
>  "cells": [["61",1488961273,1488961269822817,"d"]]},
> {"key": "b",
>  "cells": [["61","",1488961273015759]]}
> ]
> {code}
> This problem is caused by the tombstone row. When this row is processed in 
> {{LegacyLayout.java}}, it will be treated as a row marker.
> 

[jira] [Updated] (CASSANDRA-13320) upgradesstables fails after upgrading from 2.1.x to 3.0.11

2017-03-10 Thread Zhongxiang Zheng (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13320?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhongxiang Zheng updated CASSANDRA-13320:
-
Status: Patch Available  (was: Open)

> upgradesstables fails after upgrading from 2.1.x to 3.0.11
> --
>
> Key: CASSANDRA-13320
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13320
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Zhongxiang Zheng
>
> I tried to execute {{nodetool upgradesstables}} after upgrading cluster from 
> 2.1.16 to 3.0.11, but it fails when upgrading a table with 2i.
> This problem can be reproduced as follows.
> {code}
> $ ccm create test -v 2.1.16 -n 1 -s
> $ ccm node1 cqlsh  -e "CREATE KEYSPACE test WITH replication = 
> {'class':'SimpleStrategy', 'replication_factor':1}"
> $ ccm node1 cqlsh  -e "CREATE TABLE test.test(k1 text, k2 text, PRIMARY KEY( 
> k1 ));"
> $ ccm node1 cqlsh  -e "CREATE INDEX k2 ON test.test(k2);"
>  
> $ ccm node1 cqlsh  -e "INSERT INTO test.test (k1, k2 ) VALUES ( 'a', 'a') ;"
> $ ccm node1 cqlsh  -e "INSERT INTO test.test (k1, k2 ) VALUES ( 'a', 'b') ;"
>  
> $ ccm node1 nodetool flush
>  
> $ for i in `seq 1 `; do ccm node${i} stop; ccm node${i} setdir -v3.0.11;ccm 
> node${i} start; done
> $ ccm node1 nodetool upgradesstables test test
> Traceback (most recent call last):
>   File "/home/y/bin/ccm", line 86, in 
> cmd.run()
>   File "/home/y/lib/python2.7/site-packages/ccmlib/cmds/node_cmds.py", line 
> 267, in run
> stdout, stderr = self.node.nodetool(" ".join(self.args[1:]))
>   File "/home/y/lib/python2.7/site-packages/ccmlib/node.py", line 742, in 
> nodetool
> raise NodetoolError(" ".join(args), exit_status, stdout, stderr)
> ccmlib.node.NodetoolError: Nodetool command 
> '/home/zzheng/.ccm/repository/3.0.11/bin/nodetool -h localhost -p 7100 
> upgradesstables test test' failed; exit status: 2; stderr: WARN  06:29:08 
> Only 10476 MB free across all data volumes. Consider adding more capacity to 
> your cluster or removing obsolete snapshots
> error: null
> -- StackTrace --
> java.lang.AssertionError
>   at org.apache.cassandra.db.rows.Rows.collectStats(Rows.java:70)
>   at 
> org.apache.cassandra.io.sstable.format.big.BigTableWriter$StatsCollector.applyToRow(BigTableWriter.java:197)
>   at 
> org.apache.cassandra.db.transform.BaseRows.applyOne(BaseRows.java:116)
>   at org.apache.cassandra.db.transform.BaseRows.add(BaseRows.java:107)
>   at 
> org.apache.cassandra.db.transform.UnfilteredRows.add(UnfilteredRows.java:41)
>   at 
> org.apache.cassandra.db.transform.Transformation.add(Transformation.java:156)
>   at 
> org.apache.cassandra.db.transform.Transformation.apply(Transformation.java:122)
>   at 
> org.apache.cassandra.io.sstable.format.big.BigTableWriter.append(BigTableWriter.java:147)
>   at 
> org.apache.cassandra.io.sstable.SSTableRewriter.append(SSTableRewriter.java:125)
>   at 
> org.apache.cassandra.db.compaction.writers.DefaultCompactionWriter.realAppend(DefaultCompactionWriter.java:57)
>   at 
> org.apache.cassandra.db.compaction.writers.CompactionAwareWriter.append(CompactionAwareWriter.java:109)
>   at 
> org.apache.cassandra.db.compaction.CompactionTask.runMayThrow(CompactionTask.java:195)
>   at 
> org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28)
>   at 
> org.apache.cassandra.db.compaction.CompactionTask.executeInternal(CompactionTask.java:89)
>   at 
> org.apache.cassandra.db.compaction.AbstractCompactionTask.execute(AbstractCompactionTask.java:61)
>   at 
> org.apache.cassandra.db.compaction.CompactionManager$5.execute(CompactionManager.java:415)
>   at 
> org.apache.cassandra.db.compaction.CompactionManager$2.call(CompactionManager.java:307)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>   at 
> org.apache.cassandra.concurrent.NamedThreadFactory.lambda$threadLocalDeallocator$0(NamedThreadFactory.java:79)
>   at java.lang.Thread.run(Thread.java:745)
> {code}
> The result of dumping the 2i sstable is as follows.
> {code}
> [
> {"key": "a",
>  "cells": [["61",1488961273,1488961269822817,"d"]]},
> {"key": "b",
>  "cells": [["61","",1488961273015759]]}
> ]
> {code}
> This problem is caused by the tombstone row. When this row is processed in 
> {{LegacyLayout.java}}, it will be treated as a row marker.
> https://github.com/apache/cassandra/blob/cassandra-3.0/src/java/org/apache/cassandra/db/LegacyLayout.java#L1195
> 

[jira] [Created] (CASSANDRA-13320) upgradesstables fails after upgrading from 2.1.x to 3.0.11

2017-03-10 Thread Zhongxiang Zheng (JIRA)
Zhongxiang Zheng created CASSANDRA-13320:


 Summary: upgradesstables fails after upgrading from 2.1.x to 3.0.11
 Key: CASSANDRA-13320
 URL: https://issues.apache.org/jira/browse/CASSANDRA-13320
 Project: Cassandra
  Issue Type: Bug
Reporter: Zhongxiang Zheng


I tried to execute {{nodetool upgradesstables}} after upgrading cluster from 
2.1.16 to 3.0.11, but it fails when upgrading a table with 2i.

This problem can be reproduced as follows.
{code}
$ ccm create test -v 2.1.16 -n 1 -s
$ ccm node1 cqlsh  -e "CREATE KEYSPACE test WITH replication = 
{'class':'SimpleStrategy', 'replication_factor':1}"
$ ccm node1 cqlsh  -e "CREATE TABLE test.test(k1 text, k2 text, PRIMARY KEY( k1 
));"
$ ccm node1 cqlsh  -e "CREATE INDEX k2 ON test.test(k2);"
 
$ ccm node1 cqlsh  -e "INSERT INTO test.test (k1, k2 ) VALUES ( 'a', 'a') ;"
$ ccm node1 cqlsh  -e "INSERT INTO test.test (k1, k2 ) VALUES ( 'a', 'b') ;"
 
$ ccm node1 nodetool flush
 
$ for i in `seq 1 `; do ccm node${i} stop; ccm node${i} setdir -v3.0.11;ccm 
node${i} start; done
$ ccm node1 nodetool upgradesstables test test
Traceback (most recent call last):
  File "/home/y/bin/ccm", line 86, in 
cmd.run()
  File "/home/y/lib/python2.7/site-packages/ccmlib/cmds/node_cmds.py", line 
267, in run
stdout, stderr = self.node.nodetool(" ".join(self.args[1:]))
  File "/home/y/lib/python2.7/site-packages/ccmlib/node.py", line 742, in 
nodetool
raise NodetoolError(" ".join(args), exit_status, stdout, stderr)
ccmlib.node.NodetoolError: Nodetool command 
'/home/zzheng/.ccm/repository/3.0.11/bin/nodetool -h localhost -p 7100 
upgradesstables test test' failed; exit status: 2; stderr: WARN  06:29:08 Only 
10476 MB free across all data volumes. Consider adding more capacity to your 
cluster or removing obsolete snapshots
error: null
-- StackTrace --
java.lang.AssertionError
at org.apache.cassandra.db.rows.Rows.collectStats(Rows.java:70)
at 
org.apache.cassandra.io.sstable.format.big.BigTableWriter$StatsCollector.applyToRow(BigTableWriter.java:197)
at 
org.apache.cassandra.db.transform.BaseRows.applyOne(BaseRows.java:116)
at org.apache.cassandra.db.transform.BaseRows.add(BaseRows.java:107)
at 
org.apache.cassandra.db.transform.UnfilteredRows.add(UnfilteredRows.java:41)
at 
org.apache.cassandra.db.transform.Transformation.add(Transformation.java:156)
at 
org.apache.cassandra.db.transform.Transformation.apply(Transformation.java:122)
at 
org.apache.cassandra.io.sstable.format.big.BigTableWriter.append(BigTableWriter.java:147)
at 
org.apache.cassandra.io.sstable.SSTableRewriter.append(SSTableRewriter.java:125)
at 
org.apache.cassandra.db.compaction.writers.DefaultCompactionWriter.realAppend(DefaultCompactionWriter.java:57)
at 
org.apache.cassandra.db.compaction.writers.CompactionAwareWriter.append(CompactionAwareWriter.java:109)
at 
org.apache.cassandra.db.compaction.CompactionTask.runMayThrow(CompactionTask.java:195)
at 
org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28)
at 
org.apache.cassandra.db.compaction.CompactionTask.executeInternal(CompactionTask.java:89)
at 
org.apache.cassandra.db.compaction.AbstractCompactionTask.execute(AbstractCompactionTask.java:61)
at 
org.apache.cassandra.db.compaction.CompactionManager$5.execute(CompactionManager.java:415)
at 
org.apache.cassandra.db.compaction.CompactionManager$2.call(CompactionManager.java:307)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at 
org.apache.cassandra.concurrent.NamedThreadFactory.lambda$threadLocalDeallocator$0(NamedThreadFactory.java:79)
at java.lang.Thread.run(Thread.java:745)
{code}

The result of dumping the 2i sstable is as follows.
{code}
[
{"key": "a",
 "cells": [["61",1488961273,1488961269822817,"d"]]},
{"key": "b",
 "cells": [["61","",1488961273015759]]}
]
{code}

This problem is caused by the tombstone row. When this row is processed in 
{{LegacyLayout.java}}, it will be treated as a row marker.
https://github.com/apache/cassandra/blob/cassandra-3.0/src/java/org/apache/cassandra/db/LegacyLayout.java#L1195
Then the deletion info will be lost.

As a result, the row becomes an empty row, which causes the assertion error.

To avoid this, I added code that applies row deletion info when the row is a 
tombstone and *not* a row marker, and it works as I expect: {{upgradesstables}} 
succeeds and the row deletion info is retained.
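
A hedged, simplified sketch of the handling described above, using stand-in 
types rather than the real {{LegacyLayout}} conversion code: a legacy cell that 
carries a deletion but is not the row marker contributes a row deletion to the 
converted row instead of being dropped, so the row is never left empty.

{code}
// Simplified stand-in model, not the actual LegacyLayout conversion code.
final class LegacyCellModel
{
    final boolean isTombstone;   // cell carries a deletion
    final boolean isRowMarker;   // cell is only the CQL row marker
    final long deletionTime;     // deletion timestamp carried by the cell

    LegacyCellModel(boolean isTombstone, boolean isRowMarker, long deletionTime)
    {
        this.isTombstone = isTombstone;
        this.isRowMarker = isRowMarker;
        this.deletionTime = deletionTime;
    }

    // Row-level deletion time to apply to the converted row, or Long.MIN_VALUE for none.
    static long rowDeletionFor(LegacyCellModel cell)
    {
        if (cell.isTombstone && !cell.isRowMarker)
            return cell.deletionTime;   // preserve the deletion instead of dropping it
        return Long.MIN_VALUE;          // live cell or plain row marker: no row deletion
    }

    public static void main(String[] args)
    {
        LegacyCellModel tombstoneCell = new LegacyCellModel(true, false, 1488961269822817L);
        LegacyCellModel rowMarker = new LegacyCellModel(false, true, Long.MIN_VALUE);
        System.out.println(rowDeletionFor(tombstoneCell)); // deletion timestamp preserved
        System.out.println(rowDeletionFor(rowMarker));     // Long.MIN_VALUE -> no row deletion
    }
}
{code}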

However I don't