[jira] [Commented] (CASSANDRA-9884) Error on encrypted node communication upgrading from 2.1.6 to 2.2.0

2015-07-23 Thread Yuki Morishita (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9884?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14639887#comment-14639887
 ] 

Yuki Morishita commented on CASSANDRA-9884:
---

Can you check if the patch is correctly applied?

{code}
OutboundTcpConnection.connect(OutboundTcpConnection.java:404)
{code}

is the same line number as in your original report. The patch added a null 
check, so the NPE should no longer occur there.
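For readers following along: the NPE arises because a null stream reaches a checkNotNull-style precondition inside the BufferedDataOutputStreamPlus constructor. A minimal illustration of the failure mode and of the guard style such a fix uses (hypothetical names, not the actual Cassandra patch):

```java
import java.io.ByteArrayOutputStream;
import java.io.OutputStream;
import java.util.Objects;

// Illustration only: mimics how a precondition check in a stream wrapper's
// constructor turns a null stream into a NullPointerException, and the
// guard style that avoids constructing the wrapper at all.
public class NullGuardSketch {
    static class BufferedWrapper {
        final OutputStream out;
        BufferedWrapper(OutputStream out) {
            // Analogous to Guava's Preconditions.checkNotNull in the stack trace
            this.out = Objects.requireNonNull(out, "stream must not be null");
        }
    }

    /** Returns a wrapper, or null when the underlying stream is unavailable. */
    static BufferedWrapper connect(OutputStream maybeNull) {
        if (maybeNull == null)
            return null; // guard: skip wrapping instead of hitting the NPE
        return new BufferedWrapper(maybeNull);
    }

    public static void main(String[] args) {
        assert connect(null) == null;
        assert connect(new ByteArrayOutputStream()) != null;
        System.out.println("ok");
    }
}
```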

> Error on encrypted node communication upgrading from 2.1.6 to 2.2.0
> ---
>
> Key: CASSANDRA-9884
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9884
> Project: Cassandra
>  Issue Type: Bug
>  Components: Config, Core
> Environment: Ubuntu 14.04.2 LTS 64 bits.
> Java version "1.8.0_45"
> Java(TM) SE Runtime Environment (build 1.8.0_45-b14)
> Java HotSpot(TM) 64-Bit Server VM (build 25.45-b02, mixed mode)
>Reporter: Carlos Scheidecker
>Priority: Critical
>  Labels: security
> Fix For: 2.2.0
>
>
> After updating from Cassandra 2.1.6 to 2.2.0 I am having SSL issues.
> The configuration has not changed between versions and the JVM is the same, 
> yet 2.2.0 throws errors. I have not yet investigated the source code, but 
> for now this is the information I can share:
> My JVM is java version "1.8.0_45"
> Java(TM) SE Runtime Environment (build 1.8.0_45-b14)
> Java HotSpot(TM) 64-Bit Server VM (build 25.45-b02, mixed mode)
> All nodes run Ubuntu 14.04.2 LTS.
> Below are the encryption settings from cassandra.yaml on all nodes.
> I am using the same keystore and truststore as I did on 2.1.6.
> # Enable or disable inter-node encryption
> # Default settings are TLS v1, RSA 1024-bit keys (it is imperative that
> # users generate their own keys) TLS_RSA_WITH_AES_128_CBC_SHA as the cipher
> # suite for authentication, key exchange and encryption of the actual data 
> transfers.
> # Use the DHE/ECDHE ciphers if running in FIPS 140 compliant mode.
> # NOTE: No custom encryption options are enabled at the moment
> # The available internode options are : all, none, dc, rack
> #
> # If set to dc cassandra will encrypt the traffic between the DCs
> # If set to rack cassandra will encrypt the traffic between the racks
> #
> # The passwords used in these options must match the passwords used when 
> generating
> # the keystore and truststore.  For instructions on generating these files, 
> see:
> # 
> http://download.oracle.com/javase/6/docs/technotes/guides/security/jsse/JSSERefGuide.html#CreateKeystore
> #
> server_encryption_options:
>     internode_encryption: all
>     keystore: /etc/cassandra/certs/node.keystore
>     keystore_password: mypasswd
>     truststore: /etc/cassandra/certs/global.truststore
>     truststore_password: mypasswd
>     # More advanced defaults below:
>     # protocol: TLS
>     # algorithm: SunX509
>     # store_type: JKS
>     cipher_suites: [TLS_RSA_WITH_AES_128_CBC_SHA,TLS_RSA_WITH_AES_256_CBC_SHA,TLS_DHE_RSA_WITH_AES_128_CBC_SHA,TLS_DHE_RSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA]
>     require_client_auth: false
> # enable or disable client/server encryption.
> Nodes cannot talk to each other, per the SSL errors below.
> WARN  [MessagingService-Outgoing-/192.168.1.31] 2015-07-22 17:29:48,764 
> SSLFactory.java:163 - Filtering out 
> TLS_DHE_RSA_WITH_AES_256_CBC_SHA,TLS_RSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA
>  as it isnt supported by the socket
> ERROR [MessagingService-Outgoing-/192.168.1.31] 2015-07-22 17:29:48,764 
> OutboundTcpConnection.java:229 - error processing a message intended for 
> /192.168.1.31
> java.lang.NullPointerException: null
>   at 
> com.google.common.base.Preconditions.checkNotNull(Preconditions.java:213) 
> ~[guava-16.0.jar:na]
>   at 
> org.apache.cassandra.io.util.BufferedDataOutputStreamPlus.<init>(BufferedDataOutputStreamPlus.java:74)
>  ~[apache-cassandra-2.2.0.jar:2.2.0]
>   at 
> org.apache.cassandra.net.OutboundTcpConnection.connect(OutboundTcpConnection.java:404)
>  ~[apache-cassandra-2.2.0.jar:2.2.0]
>   at 
> org.apache.cassandra.net.OutboundTcpConnection.run(OutboundTcpConnection.java:218)
>  ~[apache-cassandra-2.2.0.jar:2.2.0]
> ERROR [MessagingService-Outgoing-/192.168.1.31] 2015-07-22 17:29:48,764 
> OutboundTcpConnection.java:316 - error writing to /192.168.1.31
> java.lang.NullPointerException: null
>   at 
> org.apache.cassandra.net.OutboundTcpConnection.writeInternal(OutboundTcpConnection.java:323)
>  [apache-cassandra-2.2.0.jar:2.2.0]
>   at 
> org.apache.cassandra.net.OutboundTcpConnection.writeConnected(OutboundTcpConnection.java:285)
>  [apache-cassandra-2.2.0.jar:2.2.0]
>   at 
> org.apache.cassandra.ne
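The "Filtering out … as it isnt supported by the socket" warning in the log above corresponds to intersecting the configured cipher_suites with the suites the SSL socket actually supports. Roughly (a sketch, not SSLFactory's actual code):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

// Sketch of cipher-suite filtering: keep only the configured suites the
// socket supports, preserving configuration order; the dropped suites are
// what the WARN line reports.
public class CipherFilterSketch {
    static List<String> filterSuites(List<String> configured, Set<String> supported) {
        List<String> kept = new ArrayList<>();
        for (String suite : configured)
            if (supported.contains(suite))
                kept.add(suite);
        return kept;
    }

    public static void main(String[] args) {
        List<String> configured = Arrays.asList(
            "TLS_RSA_WITH_AES_128_CBC_SHA",
            "TLS_RSA_WITH_AES_256_CBC_SHA");
        Set<String> supported =
            new HashSet<>(Arrays.asList("TLS_RSA_WITH_AES_128_CBC_SHA"));
        // The 256-bit suites get filtered out, as in the report: on Java 8
        // they required the JCE unlimited-strength policy files.
        assert filterSuites(configured, supported)
            .equals(Arrays.asList("TLS_RSA_WITH_AES_128_CBC_SHA"));
        System.out.println("ok");
    }
}
```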

[jira] [Commented] (CASSANDRA-9402) Implement proper sandboxing for UDFs

2015-07-23 Thread T Jake Luciani (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9402?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14639859#comment-14639859
 ] 

T Jake Luciani commented on CASSANDRA-9402:
---

+1

> Implement proper sandboxing for UDFs
> 
>
> Key: CASSANDRA-9402
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9402
> Project: Cassandra
>  Issue Type: Task
>Reporter: T Jake Luciani
>Assignee: Robert Stupp
>Priority: Critical
>  Labels: docs-impacting, security
> Fix For: 3.0 beta 1
>
> Attachments: 9402-warning.txt
>
>
> We want to avoid a security exploit for our users.  We need to make sure we 
> ship 2.2 UDFs with good defaults so that someone exposing them to the 
> internet doesn't accidentally open themselves up to having arbitrary code run.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-7066) Simplify (and unify) cleanup of compaction leftovers

2015-07-23 Thread Stefania (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7066?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14639822#comment-14639822
 ] 

Stefania commented on CASSANDRA-7066:
-

Your changes look good, I've rebased and pushed one [small 
commit|https://github.com/stef1927/cassandra/commit/9acb46ab1e2d7a470df88d9479a49fa0bf0ceb1e]
 fixing the test for {{maybeFail}} and cleaning up review comments. 

CI will be available here:

http://cassci.datastax.com/view/Dev/view/stef1927/job/stef1927-7066-dtest/
http://cassci.datastax.com/view/Dev/view/stef1927/job/stef1927-7066-testall/

> Simplify (and unify) cleanup of compaction leftovers
> 
>
> Key: CASSANDRA-7066
> URL: https://issues.apache.org/jira/browse/CASSANDRA-7066
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Core
>Reporter: Benedict
>Assignee: Stefania
>Priority: Minor
>  Labels: benedict-to-commit, compaction
> Fix For: 3.x
>
> Attachments: 7066.txt
>
>
> Currently we manage a list of in-progress compactions in a system table, 
> which we use to clean up incomplete compactions when we're done. The problem 
> with this is that 1) it's a bit clunky (and leaves us in positions where we 
> can unnecessarily clean up completed files, or conversely not clean up files 
> that have been superseded); and 2) it's only used for regular compactions - 
> no other compaction types are guarded in the same way, so we can end up with 
> duplication if we fail before deleting the replacements.
> I'd like to see each sstable store in its metadata its direct ancestors, and 
> on startup we simply delete any sstables that occur in the union of all 
> ancestor sets. This way as soon as we finish writing we're capable of 
> cleaning up any leftovers, so we never get duplication. It's also much easier 
> to reason about.
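The proposal above — store each sstable's direct ancestors in its metadata, then delete at startup any sstable appearing in the union of all ancestor sets — can be sketched as follows (illustrative types and names, not Cassandra's API):

```java
import java.util.Arrays;
import java.util.Collections;
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

// Sketch of ancestor-based leftover cleanup: each live sstable records the
// sstables it was compacted from; anything in the union of those ancestor
// sets is a superseded leftover and can be deleted on startup.
public class AncestorCleanupSketch {
    /**
     * @param onDisk      sstables found on disk at startup
     * @param ancestorsOf map from sstable name to its direct ancestors
     * @return sstables that are ancestors of a live sstable, i.e. leftovers
     */
    static Set<String> leftovers(Set<String> onDisk, Map<String, Set<String>> ancestorsOf) {
        Set<String> union = new HashSet<>();
        for (String live : onDisk)
            union.addAll(ancestorsOf.getOrDefault(live, Collections.emptySet()));
        union.retainAll(onDisk); // only delete files actually still present
        return union;
    }

    public static void main(String[] args) {
        // c was compacted from a and b, but a crash left a and b on disk
        Set<String> onDisk = new HashSet<>(Arrays.asList("a", "b", "c"));
        Map<String, Set<String>> ancestorsOf = new HashMap<>();
        ancestorsOf.put("c", new HashSet<>(Arrays.asList("a", "b")));
        assert leftovers(onDisk, ancestorsOf)
            .equals(new HashSet<>(Arrays.asList("a", "b")));
        System.out.println("ok");
    }
}
```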





[jira] [Commented] (CASSANDRA-9793) Log when messages are dropped due to cross_node_timeout

2015-07-23 Thread Stefania (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9793?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14639802#comment-14639802
 ] 

Stefania commented on CASSANDRA-9793:
-

Attached [2.1|https://github.com/stef1927/cassandra/commits/9793-2.1] and 
[2.2|https://github.com/stef1927/cassandra/commits/9793-2.2] patches and 
verified that the 2.2 patch applies to trunk. 

We still [log tcpstats on 
drop|https://github.com/stef1927/cassandra/commit/a00754d4dddb47bc4a4865131f282eb34fe8680b#diff-af09288f448c37a525e831ee90ea49f9L851]
 in 2.2.

> Log when messages are dropped due to cross_node_timeout
> ---
>
> Key: CASSANDRA-9793
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9793
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Core
>Reporter: Brandon Williams
>Assignee: Stefania
> Fix For: 2.1.x, 2.0.x
>
>
> When a node has clock skew and cross node timeouts are enabled, there's no 
> indication that the messages were dropped due to the cross timeout, just that 
> messages were dropped.  This can errantly lead you down a path of 
> troubleshooting a load shedding situation when really you just have clock 
> drift on one node.  This is also not simple to troubleshoot, since you have 
> to determine that this node will answer requests, but other nodes won't 
> answer requests from it.  If the problem goes away on a reboot (and the 
> machine does one-shot time sync, not continuous) it becomes even harder to 
> detect because you're left with a weird piece of evidence such as "it's fine 
> after a reboot, but comes back in about X days every time."
> It would help tremendously if there were a log message indicating how many 
> messages (don't need them broken down by type) were eagerly dropped due to 
> the cross node timeout.
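The requested logging — a count of messages eagerly dropped because of the cross-node timeout, kept separate from ordinary load-shedding drops — could be tallied as simply as this (a sketch with invented names, not the committed patch):

```java
import java.util.concurrent.atomic.AtomicLong;

// Sketch: keep two counters so the periodic "dropped messages" log line can
// distinguish cross-node-timeout drops (often just clock skew) from genuine
// load shedding.
public class DroppedMessageSketch {
    final AtomicLong droppedInternal = new AtomicLong();   // exceeded local timeout
    final AtomicLong droppedCrossNode = new AtomicLong();  // exceeded cross-node timeout

    void recordDrop(boolean crossNodeTimeout) {
        (crossNodeTimeout ? droppedCrossNode : droppedInternal).incrementAndGet();
    }

    String summary() {
        return String.format("dropped %d (internal) / %d (cross-node timeout)",
                             droppedInternal.get(), droppedCrossNode.get());
    }

    public static void main(String[] args) {
        DroppedMessageSketch m = new DroppedMessageSketch();
        m.recordDrop(true);
        m.recordDrop(true);
        m.recordDrop(false);
        assert m.summary().equals("dropped 1 (internal) / 2 (cross-node timeout)");
        System.out.println(m.summary());
    }
}
```

A lopsided cross-node count against a quiet internal count would point straight at clock drift rather than overload.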





[jira] [Commented] (CASSANDRA-9884) Error on encrypted node communication upgrading from 2.1.6 to 2.2.0

2015-07-23 Thread Carlos Scheidecker (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9884?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14639787#comment-14639787
 ] 

Carlos Scheidecker commented on CASSANDRA-9884:
---

Maybe I should get access to the repository and start contributing/helping 
more. Sorry about this, guys.


[jira] [Commented] (CASSANDRA-9884) Error on encrypted node communication upgrading from 2.1.6 to 2.2.0

2015-07-23 Thread Carlos Scheidecker (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9884?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14639781#comment-14639781
 ] 

Carlos Scheidecker commented on CASSANDRA-9884:
---

Still not fixed, Yuki, but thanks. I recompiled and tested on all 4 nodes and 
still get a NullPointerException, this time in the 
writeConnected(QueuedMessage qm, boolean flush) and run() methods.

ERROR [MessagingService-Outgoing-/192.168.1.34] 2015-07-23 19:18:34,686 
OutboundTcpConnection.java:229 - error processing a message intended for 
/192.168.1.34
java.lang.NullPointerException: null
at 
com.google.common.base.Preconditions.checkNotNull(Preconditions.java:213) 
~[guava-16.0.jar:na]
at 
org.apache.cassandra.io.util.BufferedDataOutputStreamPlus.<init>(BufferedDataOutputStreamPlus.java:74)
 ~[apache-cassandra-2.2.0-SNAPSHOT.jar:2.2.0-SNAPSHOT]
at 
org.apache.cassandra.net.OutboundTcpConnection.connect(OutboundTcpConnection.java:404)
 ~[apache-cassandra-2.2.0-SNAPSHOT.jar:2.2.0-SNAPSHOT]
at 
org.apache.cassandra.net.OutboundTcpConnection.run(OutboundTcpConnection.java:218)
 ~[apache-cassandra-2.2.0-SNAPSHOT.jar:2.2.0-SNAPSHOT]
ERROR [MessagingService-Outgoing-/192.168.1.34] 2015-07-23 19:18:35,683 
OutboundTcpConnection.java:316 - error writing to /192.168.1.34
java.lang.NullPointerException: null
at 
org.apache.cassandra.net.OutboundTcpConnection.writeInternal(OutboundTcpConnection.java:323)
 [apache-cassandra-2.2.0-SNAPSHOT.jar:2.2.0-SNAPSHOT]
at 
org.apache.cassandra.net.OutboundTcpConnection.writeConnected(OutboundTcpConnection.java:285)
 [apache-cassandra-2.2.0-SNAPSHOT.jar:2.2.0-SNAPSHOT]
at 
org.apache.cassandra.net.OutboundTcpConnection.run(OutboundTcpConnection.java:219)
 [apache-cassandra-2.2.0-SNAPSHOT.jar:2.2.0-SNAPSHOT]
ERROR [MessagingService-Outgoing-/192.168.1.33] 2015-07-23 19:18:35,683 
OutboundTcpConnection.java:316 - error writing to /192.168.1.33
java.lang.NullPointerException: null
at 
org.apache.cassandra.net.OutboundTcpConnection.writeInternal(OutboundTcpConnection.java:323)
 [apache-cassandra-2.2.0-SNAPSHOT.jar:2.2.0-SNAPSHOT]
at 
org.apache.cassandra.net.OutboundTcpConnection.writeConnected(OutboundTcpConnection.java:285)
 [apache-cassandra-2.2.0-SNAPSHOT.jar:2.2.0-SNAPSHOT]
at 
org.apache.cassandra.net.OutboundTcpConnection.run(OutboundTcpConnection.java:219)
 [apache-cassandra-2.2.0-SNAPSHOT.jar:2.2.0-SNAPSHOT]



[jira] [Updated] (CASSANDRA-9884) Error on encrypted node communication upgrading from 2.1.6 to 2.2.0

2015-07-23 Thread Ryan McGuire (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-9884?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ryan McGuire updated CASSANDRA-9884:

Tester: Russ Hatch


[jira] [Commented] (CASSANDRA-9446) Failure detector should ignore local pauses per endpoint

2015-07-23 Thread sankalp kohli (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9446?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14639666#comment-14639666
 ] 

sankalp kohli commented on CASSANDRA-9446:
--

The problem is that this will only help the first endpoint interpreted after 
the pause, since lastInterpret is reset after the pause. 
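A per-endpoint variant of the CASSANDRA-9183 guard — tracking the last interpreted time separately for each endpoint, so a local pause is discounted for every peer rather than only the first one examined afterwards — might look like this (hypothetical sketch, not the attached patch):

```java
import java.util.HashMap;
import java.util.Map;

// Sketch: track lastInterpret per endpoint so that after a local pause every
// endpoint's first interval is ignored, not just the first endpoint examined.
public class PerEndpointPauseSketch {
    static final long MAX_LOCAL_PAUSE_MS = 5000;
    final Map<String, Long> lastInterpret = new HashMap<>();

    /** Returns true when the heartbeat interval should feed the failure detector. */
    boolean interpret(String endpoint, long nowMs) {
        Long last = lastInterpret.put(endpoint, nowMs);
        if (last == null)
            return false; // first observation, nothing to compare against
        // A huge gap means *we* paused (GC, clock sync): skip this interval
        // for this endpoint only; other endpoints keep their own last times.
        return nowMs - last <= MAX_LOCAL_PAUSE_MS;
    }

    public static void main(String[] args) {
        PerEndpointPauseSketch fd = new PerEndpointPauseSketch();
        fd.interpret("n1", 0);
        fd.interpret("n2", 0);
        // a 10s local pause: both endpoints' next interval is ignored
        assert !fd.interpret("n1", 10_000);
        assert !fd.interpret("n2", 10_000);
        // normal cadence then resumes for both
        assert fd.interpret("n1", 11_000);
        System.out.println("ok");
    }
}
```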

> Failure detector should ignore local pauses per endpoint
> 
>
> Key: CASSANDRA-9446
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9446
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: sankalp kohli
>Assignee: Brandon Williams
>Priority: Minor
> Attachments: 9446.txt
>
>
> In CASSANDRA-9183, we added a feature to ignore local pauses. But it will 
> only not mark 2 endpoints as down. 
> We should do this per endpoint as suggested by Brandon in CASSANDRA-9183. 





[jira] [Updated] (CASSANDRA-9884) Error on encrypted node communication upgrading from 2.1.6 to 2.2.0

2015-07-23 Thread Jason Brown (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-9884?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Brown updated CASSANDRA-9884:
---
Reviewer: Jason Brown

+1


[jira] [Commented] (CASSANDRA-8014) NPE in Message.java line 324

2015-07-23 Thread Jiri Kremser (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8014?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14639639#comment-14639639
 ] 

Jiri Kremser commented on CASSANDRA-8014:
-

I can replicate the issue consistently on C* 2.1.6 and Titan 0.5.4.

> NPE in Message.java line 324
> 
>
> Key: CASSANDRA-8014
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8014
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
> Environment: Cassandra 2.0.9, Cassandra 2.0.11
>Reporter: Peter Haggerty
>Assignee: Pavel Yaskevich
> Attachments: NPE_Message.java_line-324.txt
>
>
> We received this when a server was rebooting and attempted to shut Cassandra 
> down while it was still quite busy. While it's normal for us to see a 
> handful of RejectedExecution exceptions on a sudden shutdown like this, 
> these NPEs in Message.java are new.
> The attached file includes the logs from "StorageServiceShutdownHook" to the 
> "Logging initialized" line after the server restarts and Cassandra comes back up.
> {code}ERROR [pool-10-thread-2] 2014-09-29 08:33:44,055 Message.java (line 
> 324) Unexpected throwable while invoking!
> java.lang.NullPointerException
> at com.thinkaurelius.thrift.util.mem.Buffer.size(Buffer.java:83)
> at 
> com.thinkaurelius.thrift.util.mem.FastMemoryOutputTransport.expand(FastMemoryOutputTransport.java:84)
> at 
> com.thinkaurelius.thrift.util.mem.FastMemoryOutputTransport.write(FastMemoryOutputTransport.java:167)
> at 
> org.apache.thrift.transport.TFramedTransport.flush(TFramedTransport.java:156)
> at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:55)
> at org.apache.thrift.TBaseProcessor.process(TBaseProcessor.java:39)
> at com.thinkaurelius.thrift.Message.invoke(Message.java:314)
> at 
> com.thinkaurelius.thrift.Message$Invocation.execute(Message.java:90)
> at 
> com.thinkaurelius.thrift.TDisruptorServer$InvocationHandler.onEvent(TDisruptorServer.java:638)
> at 
> com.thinkaurelius.thrift.TDisruptorServer$InvocationHandler.onEvent(TDisruptorServer.java:632)
> at com.lmax.disruptor.WorkProcessor.run(WorkProcessor.java:112)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> at java.lang.Thread.run(Thread.java:745)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-9446) Failure detector should ignore local pauses per endpoint

2015-07-23 Thread Brandon Williams (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-9446?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brandon Williams updated CASSANDRA-9446:

Attachment: 9446.txt

> Failure detector should ignore local pauses per endpoint
> 
>
> Key: CASSANDRA-9446
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9446
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: sankalp kohli
>Assignee: Brandon Williams
>Priority: Minor
> Attachments: 9446.txt
>
>
> In CASSANDRA-9183, we added a feature to ignore local pauses, but it only 
> avoids marking endpoints as down globally after a pause. 
> We should do this per endpoint, as suggested by Brandon in CASSANDRA-9183. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (CASSANDRA-9448) Metrics should use up to date nomenclature

2015-07-23 Thread Yuki Morishita (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9448?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14639622#comment-14639622
 ] 

Yuki Morishita edited comment on CASSANDRA-9448 at 7/23/15 10:35 PM:
-

Thanks, Stefania.
Committed the followup.


was (Author: yukim):
Thanks, Stefaina.
Committed the followup.

> Metrics should use up to date nomenclature
> --
>
> Key: CASSANDRA-9448
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9448
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Tools
>Reporter: Sam Tunnicliffe
>Assignee: Stefania
>  Labels: docs-impacting, jmx
> Fix For: 3.0 beta 1
>
>
> There are a number of exposed metrics that currently are named using the old 
> nomenclature of columnfamily and rows (meaning partitions).
> It would be good to audit all metrics and update any names to match what they 
> actually represent; we should probably do that in a single sweep to avoid a 
> confusing mixture of old and new terminology. 
> As we'd need to do this in a major release, I've initially set the fixver for 
> 3.0 beta1.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


cassandra git commit: Reverted partitionCache metric names to rowCache, CASSANDRA-9448

2015-07-23 Thread yukim
Repository: cassandra
Updated Branches:
  refs/heads/trunk 5786b3204 -> eae3b0264


Reverted partitionCache metric names to rowCache, CASSANDRA-9448


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/eae3b026
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/eae3b026
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/eae3b026

Branch: refs/heads/trunk
Commit: eae3b02649789f1993147d5580a7b20794212319
Parents: 5786b32
Author: Stefania Alborghetti 
Authored: Thu Jul 23 17:34:35 2015 -0500
Committer: Yuki Morishita 
Committed: Thu Jul 23 17:34:35 2015 -0500

--
 .../db/SinglePartitionReadCommand.java  |  8 +++---
 .../apache/cassandra/metrics/TableMetrics.java  | 18 ++--
 .../org/apache/cassandra/db/RowCacheTest.java   | 30 ++--
 3 files changed, 28 insertions(+), 28 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/eae3b026/src/java/org/apache/cassandra/db/SinglePartitionReadCommand.java
--
diff --git a/src/java/org/apache/cassandra/db/SinglePartitionReadCommand.java 
b/src/java/org/apache/cassandra/db/SinglePartitionReadCommand.java
index 80711d6..3d4e42e 100644
--- a/src/java/org/apache/cassandra/db/SinglePartitionReadCommand.java
+++ b/src/java/org/apache/cassandra/db/SinglePartitionReadCommand.java
@@ -256,24 +256,24 @@ public abstract class SinglePartitionReadCommand

http://git-wip-us.apache.org/repos/asf/cassandra/blob/eae3b026/src/java/org/apache/cassandra/metrics/TableMetrics.java
--
diff --git a/src/java/org/apache/cassandra/metrics/TableMetrics.java 
b/src/java/org/apache/cassandra/metrics/TableMetrics.java
index d708ac4..1b4293f 100644
--- a/src/java/org/apache/cassandra/metrics/TableMetrics.java
+++ b/src/java/org/apache/cassandra/metrics/TableMetrics.java
@@ -116,12 +116,12 @@ public class TableMetrics
 public final TableHistogram colUpdateTimeDeltaHistogram;
 /** Disk space used by snapshot files which */
 public final Gauge trueSnapshotsSize;
-/** Partition cache hits, but result out of range */
-public final Counter partitionCacheHitOutOfRange;
-/** Number of partition cache hits */
-public final Counter partitionCacheHit;
-/** Number of partition cache misses */
-public final Counter partitionCacheMiss;
+/** Row cache hits, but result out of range */
+public final Counter rowCacheHitOutOfRange;
+/** Number of row cache hits */
+public final Counter rowCacheHit;
+/** Number of row cache misses */
+public final Counter rowCacheMiss;
 /** CAS Prepare metrics */
 public final LatencyMetrics casPrepare;
 /** CAS Propose metrics */
@@ -620,9 +620,9 @@ public class TableMetrics
 return cfs.trueSnapshotsSize();
 }
 });
-partitionCacheHitOutOfRange = 
createTableCounter("PartitionCacheHitOutOfRange", "RowCacheHitOutOfRange");
-partitionCacheHit = createTableCounter("PartitionCacheHit", 
"RowCacheHit");
-partitionCacheMiss = createTableCounter("PartitionCacheMiss", 
"RowCacheMiss");
+rowCacheHitOutOfRange = createTableCounter("RowCacheHitOutOfRange");
+rowCacheHit = createTableCounter("RowCacheHit");
+rowCacheMiss = createTableCounter("RowCacheMiss");
 
 casPrepare = new LatencyMetrics(factory, "CasPrepare", 
cfs.keyspace.metric.casPrepare);
 casPropose = new LatencyMetrics(factory, "CasPropose", 
cfs.keyspace.metric.casPropose);

http://git-wip-us.apache.org/repos/asf/cassandra/blob/eae3b026/test/unit/org/apache/cassandra/db/RowCacheTest.java
--
diff --git a/test/unit/org/apache/cassandra/db/RowCacheTest.java 
b/test/unit/org/apache/cassandra/db/RowCacheTest.java
index 883149f..b53f62c 100644
--- a/test/unit/org/apache/cassandra/db/RowCacheTest.java
+++ b/test/unit/org/apache/cassandra/db/RowCacheTest.java
@@ -79,8 +79,8 @@ public class RowCacheTest
 Keyspace keyspace = Keyspace.open(KEYSPACE_CACHED);
 String cf = "CachedIntCF";
 ColumnFamilyStore cachedStore  = keyspace.getColumnFamilyStore(cf);
-long startRowCacheHits = 
cachedStore.metric.partitionCacheHit.getCount();
-long startRowCacheOutOfRange = 
cachedStore.metric.partitionCacheHitOutOfRange.getCount();
+long startRowCacheHits = cachedStore.metric.rowCacheHit.getCount();
+long startRowCacheOutOfRange = 
cachedStore.metric.rowCacheHitOutOfRange.getCount();
 // empty the row cache
 CacheService.instance.invalidateRowCache();
 
@@ -98,12 +98,12 @@ public class RowCacheTest
 
 // populate row cache, we should not get a 

[2/2] cassandra git commit: Serialize ClusteringPrefix in microbatches, using vint encoding

2015-07-23 Thread benedict
Serialize ClusteringPrefix in microbatches, using vint encoding

patch by benedict; reviewed by sylvain for CASSANDRA-9708
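The micro-batching idea in the diff below can be sketched as follows. This is a hypothetical illustration, not the actual ClusteringPrefix API: values are processed in batches of 32, each batch preceded by a 64-bit header (2 flag bits per value for null/empty) that is vint-encoded, so a typical batch of present values costs a single header byte.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch: one header per batch of 32 values, 2 bits each
// (null bit, empty bit); vint encoding keeps a typical header to 1 byte.
public class MicrobatchHeaders
{
    static long makeHeader(byte[][] values, int offset, int limit)
    {
        long header = 0;
        for (int i = offset; i < limit; i++)
        {
            int shift = (i - offset) * 2;
            if (values[i] == null)
                header |= 1L << shift;          // null bit
            else if (values[i].length == 0)
                header |= 1L << (shift + 1);    // empty bit
        }
        return header;
    }

    static List<Long> headers(byte[][] values)
    {
        List<Long> result = new ArrayList<>();
        int offset = 0;
        while (offset < values.length)
        {
            int limit = Math.min(values.length, offset + 32);
            result.add(makeHeader(values, offset, limit));
            offset = limit;
        }
        return result;
    }

    public static void main(String[] args)
    {
        byte[][] values = { {1}, null, {} };
        // value 0: present; value 1: null -> bit 2; value 2: empty -> bit 5
        System.out.println(headers(values)); // prints [36]
    }
}
```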


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/5786b320
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/5786b320
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/5786b320

Branch: refs/heads/trunk
Commit: 5786b3204d6da352124338c0130451e27dd056b0
Parents: c4c9eae
Author: Benedict Elliott Smith 
Authored: Wed Jun 17 09:58:41 2015 +0100
Committer: Benedict Elliott Smith 
Committed: Thu Jul 23 23:26:24 2015 +0100

--
 .../apache/cassandra/db/ClusteringPrefix.java   | 129 +--
 src/java/org/apache/cassandra/db/TypeSizes.java |   5 +
 .../cassandra/db/rows/UnfilteredSerializer.java |   2 +-
 .../cassandra/cql3/SerializationMirrorTest.java |  63 +
 4 files changed, 133 insertions(+), 66 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/5786b320/src/java/org/apache/cassandra/db/ClusteringPrefix.java
--
diff --git a/src/java/org/apache/cassandra/db/ClusteringPrefix.java 
b/src/java/org/apache/cassandra/db/ClusteringPrefix.java
index 7b9d582..713ad1b 100644
--- a/src/java/org/apache/cassandra/db/ClusteringPrefix.java
+++ b/src/java/org/apache/cassandra/db/ClusteringPrefix.java
@@ -286,109 +286,103 @@ public interface ClusteringPrefix extends 
IMeasurableMemory, Clusterable
 
 void serializeValuesWithoutSize(ClusteringPrefix clustering, 
DataOutputPlus out, int version, List> types) throws IOException
 {
-if (clustering.size() == 0)
-return;
-
-writeHeader(clustering, out);
-for (int i = 0; i < clustering.size(); i++)
+int offset = 0;
+int clusteringSize = clustering.size();
+// serialize in batches of 32, to avoid garbage when deserializing 
headers
+while (offset < clusteringSize)
 {
-ByteBuffer v = clustering.get(i);
-if (v == null || !v.hasRemaining())
-continue; // handled in the header
-
-types.get(i).writeValue(v, out);
+// we micro-batch the headers, so that we can incur fewer 
method calls,
+// and generate no garbage on deserialization;
+// we piggyback on vint encoding so that, typically, only 1 
byte is used per 32 clustering values,
+// i.e. more than we ever expect to see
+int limit = Math.min(clusteringSize, offset + 32);
+out.writeUnsignedVInt(makeHeader(clustering, offset, limit));
+while (offset < limit)
+{
+ByteBuffer v = clustering.get(offset);
+if (v != null && v.hasRemaining())
+types.get(offset).writeValue(v, out);
+offset++;
+}
 }
 }
 
 long valuesWithoutSizeSerializedSize(ClusteringPrefix clustering, int 
version, List> types)
 {
-if (clustering.size() == 0)
-return 0;
-
-long size = headerBytesCount(clustering.size());
-for (int i = 0; i < clustering.size(); i++)
+long result = 0;
+int offset = 0;
+int clusteringSize = clustering.size();
+while (offset < clusteringSize)
+{
+int limit = Math.min(clusteringSize, offset + 32);
+result += TypeSizes.sizeofUnsignedVInt(makeHeader(clustering, 
offset, limit));
+offset = limit;
+}
+for (int i = 0; i < clusteringSize; i++)
 {
 ByteBuffer v = clustering.get(i);
 if (v == null || !v.hasRemaining())
 continue; // handled in the header
 
-size += types.get(i).writtenLength(v);
+result += types.get(i).writtenLength(v);
 }
-return size;
+return result;
 }
 
 ByteBuffer[] deserializeValuesWithoutSize(DataInputPlus in, int size, 
int version, List> types) throws IOException
 {
 // Callers of this method should handle the case where size = 0 
(in all case we want to return a special value anyway).
 assert size > 0;
-
 ByteBuffer[] values = new ByteBuffer[size];
-int[] header = readHeader(size, in);
-for (int i = 0; i < size; i++)
+int offset = 0;
+while (offset < size)
 {
-values[i] = isNull(header, i)
-  ? null
-  : (isEmpty(header, i) ? 
ByteBuffer

[1/2] cassandra git commit: Fix NIODataInputStream varint decoding and EOF behavior

2015-07-23 Thread benedict
Repository: cassandra
Updated Branches:
  refs/heads/trunk 7b35e3e84 -> 5786b3204


Fix NIODataInputStream varint decoding and EOF behavior

patch by ariel; reviewed by benedict for CASSANDRA-9863


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/c4c9eaeb
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/c4c9eaeb
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/c4c9eaeb

Branch: refs/heads/trunk
Commit: c4c9eaeb131d4db2c4be3316611efb1ac2b17b23
Parents: 7b35e3e
Author: Ariel Weisberg 
Authored: Wed Jul 22 17:08:16 2015 -0400
Committer: Benedict Elliott Smith 
Committed: Thu Jul 23 23:23:16 2015 +0100

--
 .../org/apache/cassandra/cache/OHCProvider.java |   3 +-
 .../apache/cassandra/db/BatchlogManager.java|   6 +-
 .../cassandra/db/HintedHandOffManager.java  |   5 +-
 .../org/apache/cassandra/db/ReadResponse.java   |   3 +-
 .../org/apache/cassandra/db/SystemKeyspace.java |   3 +-
 .../db/commitlog/CommitLogReplayer.java |   4 +-
 .../db/partitions/PartitionUpdate.java  |   4 +-
 .../cassandra/io/util/DataInputBuffer.java  |  68 +
 .../cassandra/io/util/NIODataInputStream.java   | 102 +--
 .../db/commitlog/CommitLogStressTest.java   |   5 +-
 .../org/apache/cassandra/db/PartitionTest.java  |   6 +-
 .../apache/cassandra/db/ReadMessageTest.java|   4 +-
 .../db/commitlog/CommitLogTestReplayer.java |   3 +-
 .../apache/cassandra/gms/GossipDigestTest.java  |   4 +-
 .../io/util/NIODataInputStreamTest.java | 100 ++
 .../cassandra/utils/IntervalTreeTest.java   |   4 +-
 .../apache/cassandra/utils/MerkleTreeTest.java  |   3 +-
 .../cassandra/utils/StreamingHistogramTest.java |   6 +-
 18 files changed, 228 insertions(+), 105 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/c4c9eaeb/src/java/org/apache/cassandra/cache/OHCProvider.java
--
diff --git a/src/java/org/apache/cassandra/cache/OHCProvider.java 
b/src/java/org/apache/cassandra/cache/OHCProvider.java
index 21fc7c7..b0b4521 100644
--- a/src/java/org/apache/cassandra/cache/OHCProvider.java
+++ b/src/java/org/apache/cassandra/cache/OHCProvider.java
@@ -25,6 +25,7 @@ import java.util.UUID;
 import org.apache.cassandra.config.DatabaseDescriptor;
 import org.apache.cassandra.db.TypeSizes;
 import org.apache.cassandra.db.partitions.CachedPartition;
+import org.apache.cassandra.io.util.DataInputBuffer;
 import org.apache.cassandra.io.util.DataOutputBufferFixed;
 import org.apache.cassandra.io.util.NIODataInputStream;
 import org.caffinitas.ohc.OHCache;
@@ -171,7 +172,7 @@ public class OHCProvider implements 
CacheProvider
 {
 try
 {
-NIODataInputStream in = new NIODataInputStream(buf, false);
+NIODataInputStream in = new DataInputBuffer(buf, false);
 boolean isSentinel = in.readBoolean();
 if (isSentinel)
 return new RowCacheSentinel(in.readLong());

http://git-wip-us.apache.org/repos/asf/cassandra/blob/c4c9eaeb/src/java/org/apache/cassandra/db/BatchlogManager.java
--
diff --git a/src/java/org/apache/cassandra/db/BatchlogManager.java 
b/src/java/org/apache/cassandra/db/BatchlogManager.java
index b6c658b..e8b76be 100644
--- a/src/java/org/apache/cassandra/db/BatchlogManager.java
+++ b/src/java/org/apache/cassandra/db/BatchlogManager.java
@@ -17,7 +17,6 @@
  */
 package org.apache.cassandra.db;
 
-import java.io.DataInputStream;
 import java.io.IOException;
 import java.lang.management.ManagementFactory;
 import java.net.InetAddress;
@@ -47,16 +46,15 @@ import 
org.apache.cassandra.exceptions.WriteTimeoutException;
 import org.apache.cassandra.gms.FailureDetector;
 import org.apache.cassandra.io.sstable.Descriptor;
 import org.apache.cassandra.io.sstable.format.SSTableReader;
+import org.apache.cassandra.io.util.DataInputBuffer;
 import org.apache.cassandra.io.util.DataInputPlus;
 import org.apache.cassandra.io.util.DataOutputBuffer;
-import org.apache.cassandra.io.util.NIODataInputStream;
 import org.apache.cassandra.net.MessageIn;
 import org.apache.cassandra.net.MessageOut;
 import org.apache.cassandra.net.MessagingService;
 import org.apache.cassandra.service.StorageProxy;
 import org.apache.cassandra.service.StorageService;
 import org.apache.cassandra.service.WriteResponseHandler;
-import org.apache.cassandra.utils.ByteBufferUtil;
 import org.apache.cassandra.utils.FBUtilities;
 import org.apache.cassandra.utils.WrappedRunnable;
 
@@ -318,7 +316,7 @@ public class BatchlogManager implements BatchlogManagerMBean
 
 private List replayingMutations() throws IOExc

[jira] [Updated] (CASSANDRA-9884) Error on encrypted node communication upgrading from 2.1.6 to 2.2.0

2015-07-23 Thread Yuki Morishita (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-9884?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yuki Morishita updated CASSANDRA-9884:
--
Priority: Critical  (was: Major)

> Error on encrypted node communication upgrading from 2.1.6 to 2.2.0
> ---
>
> Key: CASSANDRA-9884
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9884
> Project: Cassandra
>  Issue Type: Bug
>  Components: Config, Core
> Environment: Ubuntu 14.04.2 LTS 64 bits.
> Java version "1.8.0_45"
> Java(TM) SE Runtime Environment (build 1.8.0_45-b14)
> Java HotSpot(TM) 64-Bit Server VM (build 25.45-b02, mixed mode)
>Reporter: Carlos Scheidecker
>Priority: Critical
>  Labels: security
> Fix For: 2.2.0
>
>
> After updating to Cassandra 2.2.0 from 2.1.6 I am having SSL issues.
> The configuration had not changed from one version to the other, the JVM is 
> still the same however on 2.2.0 it is erroring. I am yet to investigate the 
> source code for it. But for now, this is the information I have to share on 
> it:
> My JVM is java version "1.8.0_45"
> Java(TM) SE Runtime Environment (build 1.8.0_45-b14)
> Java HotSpot(TM) 64-Bit Server VM (build 25.45-b02, mixed mode)
> Ubuntu 14.04.2 LTS is on all nodes, they are the same.
> Below is the encryption settings from cassandra.yaml of all nodes.
> I am using the same keystore and trustore as I had used before on 2.1.6
> # Enable or disable inter-node encryption
> # Default settings are TLS v1, RSA 1024-bit keys (it is imperative that
> # users generate their own keys) TLS_RSA_WITH_AES_128_CBC_SHA as the cipher
> # suite for authentication, key exchange and encryption of the actual data 
> transfers.
> # Use the DHE/ECDHE ciphers if running in FIPS 140 compliant mode.
> # NOTE: No custom encryption options are enabled at the moment
> # The available internode options are : all, none, dc, rack
> #
> # If set to dc cassandra will encrypt the traffic between the DCs
> # If set to rack cassandra will encrypt the traffic between the racks
> #
> # The passwords used in these options must match the passwords used when 
> generating
> # the keystore and truststore.  For instructions on generating these files, 
> see:
> # 
> http://download.oracle.com/javase/6/docs/technotes/guides/security/jsse/JSSERefGuide.html#CreateKeystore
> #
> server_encryption_options:
> internode_encryption: all
> keystore: /etc/cassandra/certs/node.keystore
> keystore_password: mypasswd
> truststore: /etc/cassandra/certs/global.truststore
> truststore_password: mypasswd
> # More advanced defaults below:
> # protocol: TLS
> # algorithm: SunX509
> # store_type: JKS
> cipher_suites: 
> [TLS_RSA_WITH_AES_128_CBC_SHA,TLS_RSA_WITH_AES_256_CBC_SHA,TLS_DHE_RSA_WITH_AES_128_CBC_SHA,TLS_DHE_RSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA]
> require_client_auth: false
> # enable or disable client/server encryption.
> Nodes cannot talk to each other as per SSL errors bellow.
> WARN  [MessagingService-Outgoing-/192.168.1.31] 2015-07-22 17:29:48,764 
> SSLFactory.java:163 - Filtering out 
> TLS_DHE_RSA_WITH_AES_256_CBC_SHA,TLS_RSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA
>  as it isnt supported by the socket
> ERROR [MessagingService-Outgoing-/192.168.1.31] 2015-07-22 17:29:48,764 
> OutboundTcpConnection.java:229 - error processing a message intended for 
> /192.168.1.31
> java.lang.NullPointerException: null
>   at 
> com.google.common.base.Preconditions.checkNotNull(Preconditions.java:213) 
> ~[guava-16.0.jar:na]
>   at 
> org.apache.cassandra.io.util.BufferedDataOutputStreamPlus.(BufferedDataOutputStreamPlus.java:74)
>  ~[apache-cassandra-2.2.0.jar:2.2.0]
>   at 
> org.apache.cassandra.net.OutboundTcpConnection.connect(OutboundTcpConnection.java:404)
>  ~[apache-cassandra-2.2.0.jar:2.2.0]
>   at 
> org.apache.cassandra.net.OutboundTcpConnection.run(OutboundTcpConnection.java:218)
>  ~[apache-cassandra-2.2.0.jar:2.2.0]
> ERROR [MessagingService-Outgoing-/192.168.1.31] 2015-07-22 17:29:48,764 
> OutboundTcpConnection.java:316 - error writing to /192.168.1.31
> java.lang.NullPointerException: null
>   at 
> org.apache.cassandra.net.OutboundTcpConnection.writeInternal(OutboundTcpConnection.java:323)
>  [apache-cassandra-2.2.0.jar:2.2.0]
>   at 
> org.apache.cassandra.net.OutboundTcpConnection.writeConnected(OutboundTcpConnection.java:285)
>  [apache-cassandra-2.2.0.jar:2.2.0]
>   at 
> org.apache.cassandra.net.OutboundTcpConnection.run(OutboundTcpConnection.java:219)
>  [apache-cassandra-2.2.0.jar:2.2.0]
> WARN  [MessagingService-Outgoing-/192.168.1.33] 2015-07-22 17:29:49,764 
> SSLFactory.java:163 - Filtering out 
> TLS_DHE_RSA_WITH_AES_256_CBC_SHA,TLS_RSA_

[jira] [Commented] (CASSANDRA-9884) Error on encrypted node communication upgrading from 2.1.6 to 2.2.0

2015-07-23 Thread Yuki Morishita (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9884?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14639608#comment-14639608
 ] 

Yuki Morishita commented on CASSANDRA-9884:
---

Since SSLSocket is used, it does not have a SocketChannel associated with it.
We need to check for null and use a wrapped WritableByteChannel instead.

patch: https://github.com/yukim/cassandra/tree/9884
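The shape of the fix can be sketched like this (a minimal illustration of the null check described above, with a hypothetical helper name; the actual patch lives in OutboundTcpConnection):

```java
import java.io.IOException;
import java.io.OutputStream;
import java.nio.ByteBuffer;
import java.nio.channels.Channels;
import java.nio.channels.WritableByteChannel;

// Sketch: socket.getChannel() is null for an SSLSocket, so fall back to
// wrapping the socket's OutputStream in a WritableByteChannel.
public class ChannelFallback
{
    static WritableByteChannel channelFor(WritableByteChannel socketChannel, OutputStream out)
    {
        return socketChannel != null ? socketChannel : Channels.newChannel(out);
    }

    public static void main(String[] args) throws IOException
    {
        java.io.ByteArrayOutputStream bytes = new java.io.ByteArrayOutputStream();
        // Simulate the SSLSocket case: no channel available.
        WritableByteChannel ch = channelFor(null, bytes);
        ch.write(ByteBuffer.wrap("ok".getBytes("UTF-8")));
        System.out.println(bytes.toString("UTF-8")); // prints "ok"
    }
}
```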

> Error on encrypted node communication upgrading from 2.1.6 to 2.2.0
> ---
>
> Key: CASSANDRA-9884
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9884
> Project: Cassandra
>  Issue Type: Bug
>  Components: Config, Core
> Environment: Ubuntu 14.04.2 LTS 64 bits.
> Java version "1.8.0_45"
> Java(TM) SE Runtime Environment (build 1.8.0_45-b14)
> Java HotSpot(TM) 64-Bit Server VM (build 25.45-b02, mixed mode)
>Reporter: Carlos Scheidecker
>  Labels: security
> Fix For: 2.2.0
>
>
> After updating to Cassandra 2.2.0 from 2.1.6 I am having SSL issues.
> The configuration had not changed from one version to the other, the JVM is 
> still the same however on 2.2.0 it is erroring. I am yet to investigate the 
> source code for it. But for now, this is the information I have to share on 
> it:
> My JVM is java version "1.8.0_45"
> Java(TM) SE Runtime Environment (build 1.8.0_45-b14)
> Java HotSpot(TM) 64-Bit Server VM (build 25.45-b02, mixed mode)
> Ubuntu 14.04.2 LTS is on all nodes, they are the same.
> Below is the encryption settings from cassandra.yaml of all nodes.
> I am using the same keystore and trustore as I had used before on 2.1.6
> # Enable or disable inter-node encryption
> # Default settings are TLS v1, RSA 1024-bit keys (it is imperative that
> # users generate their own keys) TLS_RSA_WITH_AES_128_CBC_SHA as the cipher
> # suite for authentication, key exchange and encryption of the actual data 
> transfers.
> # Use the DHE/ECDHE ciphers if running in FIPS 140 compliant mode.
> # NOTE: No custom encryption options are enabled at the moment
> # The available internode options are : all, none, dc, rack
> #
> # If set to dc cassandra will encrypt the traffic between the DCs
> # If set to rack cassandra will encrypt the traffic between the racks
> #
> # The passwords used in these options must match the passwords used when 
> generating
> # the keystore and truststore.  For instructions on generating these files, 
> see:
> # 
> http://download.oracle.com/javase/6/docs/technotes/guides/security/jsse/JSSERefGuide.html#CreateKeystore
> #
> server_encryption_options:
> internode_encryption: all
> keystore: /etc/cassandra/certs/node.keystore
> keystore_password: mypasswd
> truststore: /etc/cassandra/certs/global.truststore
> truststore_password: mypasswd
> # More advanced defaults below:
> # protocol: TLS
> # algorithm: SunX509
> # store_type: JKS
> cipher_suites: 
> [TLS_RSA_WITH_AES_128_CBC_SHA,TLS_RSA_WITH_AES_256_CBC_SHA,TLS_DHE_RSA_WITH_AES_128_CBC_SHA,TLS_DHE_RSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA]
> require_client_auth: false
> # enable or disable client/server encryption.
> Nodes cannot talk to each other as per SSL errors bellow.
> WARN  [MessagingService-Outgoing-/192.168.1.31] 2015-07-22 17:29:48,764 
> SSLFactory.java:163 - Filtering out 
> TLS_DHE_RSA_WITH_AES_256_CBC_SHA,TLS_RSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA
>  as it isnt supported by the socket
> ERROR [MessagingService-Outgoing-/192.168.1.31] 2015-07-22 17:29:48,764 
> OutboundTcpConnection.java:229 - error processing a message intended for 
> /192.168.1.31
> java.lang.NullPointerException: null
>   at 
> com.google.common.base.Preconditions.checkNotNull(Preconditions.java:213) 
> ~[guava-16.0.jar:na]
>   at 
> org.apache.cassandra.io.util.BufferedDataOutputStreamPlus.(BufferedDataOutputStreamPlus.java:74)
>  ~[apache-cassandra-2.2.0.jar:2.2.0]
>   at 
> org.apache.cassandra.net.OutboundTcpConnection.connect(OutboundTcpConnection.java:404)
>  ~[apache-cassandra-2.2.0.jar:2.2.0]
>   at 
> org.apache.cassandra.net.OutboundTcpConnection.run(OutboundTcpConnection.java:218)
>  ~[apache-cassandra-2.2.0.jar:2.2.0]
> ERROR [MessagingService-Outgoing-/192.168.1.31] 2015-07-22 17:29:48,764 
> OutboundTcpConnection.java:316 - error writing to /192.168.1.31
> java.lang.NullPointerException: null
>   at 
> org.apache.cassandra.net.OutboundTcpConnection.writeInternal(OutboundTcpConnection.java:323)
>  [apache-cassandra-2.2.0.jar:2.2.0]
>   at 
> org.apache.cassandra.net.OutboundTcpConnection.writeConnected(OutboundTcpConnection.java:285)
>  [apache-cassandra-2.2.0.jar:2.2.0]
>   at 
> org.apache.cassandra.net.OutboundTcpConnection.run(OutboundTcpConnection.java:219)
>  [apache-cas

[jira] [Resolved] (CASSANDRA-9498) If more than 65K columns, sparse layout will break

2015-07-23 Thread Jonathan Ellis (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-9498?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis resolved CASSANDRA-9498.
---
   Resolution: Duplicate
 Assignee: (was: Benedict)
Fix Version/s: (was: 3.0 beta 1)

> If more than 65K columns, sparse layout will break
> --
>
> Key: CASSANDRA-9498
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9498
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
>Reporter: Benedict
>Priority: Minor
>
> Follow up to CASSANDRA-8099. It is a relatively small bug, since the exposed 
> population of users is likely to be very low, but fixing it in a good way is 
> a bit tricky. I'm filing a separate JIRA, because I would like us to address 
> this by introducing a writeVInt method to DataOutputStreamPlus, that we can 
> also exploit to improve the encoding of timestamps and deletion times, and 
> this JIRA will help to track the dependencies.
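For context, a minimal sketch of the kind of variable-length integer encoding discussed, small values cost one byte, so it helps with timestamps and deletion times. This is a generic LEB128-style encoder, not necessarily Cassandra's actual vint format or the DataOutputStreamPlus API:

```java
import java.io.ByteArrayOutputStream;

// LEB128-style unsigned varint: 7 payload bits per byte, high bit set
// on every byte except the last.
public class VIntSketch
{
    static byte[] writeUnsignedVInt(long value)
    {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        while ((value & ~0x7FL) != 0)
        {
            out.write((int) ((value & 0x7F) | 0x80));
            value >>>= 7;
        }
        out.write((int) value);
        return out.toByteArray();
    }

    public static void main(String[] args)
    {
        System.out.println(writeUnsignedVInt(127).length); // prints 1
        System.out.println(writeUnsignedVInt(128).length); // prints 2
    }
}
```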



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-9884) Error on encrypted node communication upgrading from 2.1.6 to 2.2.0

2015-07-23 Thread Carlos Scheidecker (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9884?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14639586#comment-14639586
 ] 

Carlos Scheidecker commented on CASSANDRA-9884:
---

The issue happens in the connect method of the OutboundTcpConnection class.

On 2.1.8 the outbound connection is initialized as:

{code}
out = new DataOutputStreamPlus(new BufferedOutputStream(socket.getOutputStream(), BUFFER_SIZE));
{code}

While on 2.2.0 it is:

{code}
out = new BufferedDataOutputStreamPlus(socket.getChannel(), BUFFER_SIZE);
{code}

This is mostly due to refactoring.

Possibly the issue is earlier, in Google's guava-16.0 library; I would need 
more time to investigate and might be able to do it after work.

> Error on encrypted node communication upgrading from 2.1.6 to 2.2.0
> ---
>
> Key: CASSANDRA-9884
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9884
> Project: Cassandra
>  Issue Type: Bug
>  Components: Config, Core
> Environment: Ubuntu 14.04.2 LTS 64 bits.
> Java version "1.8.0_45"
> Java(TM) SE Runtime Environment (build 1.8.0_45-b14)
> Java HotSpot(TM) 64-Bit Server VM (build 25.45-b02, mixed mode)
>Reporter: Carlos Scheidecker
>  Labels: security
> Fix For: 2.2.0
>
>
> After updating to Cassandra 2.2.0 from 2.1.6 I am having SSL issues.
> The configuration had not changed from one version to the other, the JVM is 
> still the same however on 2.2.0 it is erroring. I am yet to investigate the 
> source code for it. But for now, this is the information I have to share on 
> it:
> My JVM is java version "1.8.0_45"
> Java(TM) SE Runtime Environment (build 1.8.0_45-b14)
> Java HotSpot(TM) 64-Bit Server VM (build 25.45-b02, mixed mode)
> Ubuntu 14.04.2 LTS is on all nodes, they are the same.
> Below is the encryption settings from cassandra.yaml of all nodes.
> I am using the same keystore and trustore as I had used before on 2.1.6
> # Enable or disable inter-node encryption
> # Default settings are TLS v1, RSA 1024-bit keys (it is imperative that
> # users generate their own keys) TLS_RSA_WITH_AES_128_CBC_SHA as the cipher
> # suite for authentication, key exchange and encryption of the actual data 
> transfers.
> # Use the DHE/ECDHE ciphers if running in FIPS 140 compliant mode.
> # NOTE: No custom encryption options are enabled at the moment
> # The available internode options are : all, none, dc, rack
> #
> # If set to dc cassandra will encrypt the traffic between the DCs
> # If set to rack cassandra will encrypt the traffic between the racks
> #
> # The passwords used in these options must match the passwords used when 
> generating
> # the keystore and truststore.  For instructions on generating these files, 
> see:
> # 
> http://download.oracle.com/javase/6/docs/technotes/guides/security/jsse/JSSERefGuide.html#CreateKeystore
> #
> server_encryption_options:
> internode_encryption: all
> keystore: /etc/cassandra/certs/node.keystore
> keystore_password: mypasswd
> truststore: /etc/cassandra/certs/global.truststore
> truststore_password: mypasswd
> # More advanced defaults below:
> # protocol: TLS
> # algorithm: SunX509
> # store_type: JKS
> cipher_suites: 
> [TLS_RSA_WITH_AES_128_CBC_SHA,TLS_RSA_WITH_AES_256_CBC_SHA,TLS_DHE_RSA_WITH_AES_128_CBC_SHA,TLS_DHE_RSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA]
> require_client_auth: false
> # enable or disable client/server encryption.
> Nodes cannot talk to each other as per SSL errors bellow.
> WARN  [MessagingService-Outgoing-/192.168.1.31] 2015-07-22 17:29:48,764 
> SSLFactory.java:163 - Filtering out 
> TLS_DHE_RSA_WITH_AES_256_CBC_SHA,TLS_RSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA
>  as it isnt supported by the socket
> ERROR [MessagingService-Outgoing-/192.168.1.31] 2015-07-22 17:29:48,764 
> OutboundTcpConnection.java:229 - error processing a message intended for 
> /192.168.1.31
> java.lang.NullPointerException: null
>   at 
> com.google.common.base.Preconditions.checkNotNull(Preconditions.java:213) 
> ~[guava-16.0.jar:na]
>   at 
> org.apache.cassandra.io.util.BufferedDataOutputStreamPlus.(BufferedDataOutputStreamPlus.java:74)
>  ~[apache-cassandra-2.2.0.jar:2.2.0]
>   at 
> org.apache.cassandra.net.OutboundTcpConnection.connect(OutboundTcpConnection.java:404)
>  ~[apache-cassandra-2.2.0.jar:2.2.0]
>   at 
> org.apache.cassandra.net.OutboundTcpConnection.run(OutboundTcpConnection.java:218)
>  ~[apache-cassandra-2.2.0.jar:2.2.0]
> ERROR [MessagingService-Outgoing-/192.168.1.31] 2015-07-22 17:29:48,764 
> OutboundTcpConnection.java:316 - error writing to /192.168.1.31
> java.lang.NullPointerException: null
>   at 
> org.apache.cassandra.net.OutboundTcpConnection.writeInternal(Outbound

[jira] [Created] (CASSANDRA-9886) TIMESTAMP - allow USING TIMESTAMP at end of mutation CQL

2015-07-23 Thread Constance Eustace (JIRA)
Constance Eustace created CASSANDRA-9886:


 Summary: TIMESTAMP - allow USING TIMESTAMP at end of mutation CQL 
 Key: CASSANDRA-9886
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9886
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Reporter: Constance Eustace
 Fix For: 2.1.x


I was doing performance testing to move off batches for our persistence 
engine and instead use "async spray" with timestamps. 

First of all, it seems fairly insane that the USING TIMESTAMP clause sits in a 
different location for the INSERT (after VALUES), UPDATE (before SET), and 
DELETE (before WHERE) statements... so it is in the middle of the statement 
for no apparent good reason, although maybe there is some PostgreSQL 
compatibility motive. 

This means that if some code produces a large list of statements without 
USING TIMESTAMP already in them (because the actual method of execution, which 
may use batches if we were grouping by partition key, or not for single 
statements, may be determined later), then for single-statement updates the 
code needs to splice the USING TIMESTAMP clause into the proper place. It would 
be MUCH EASIER to allow a simple append of "USING TIMESTAMP xxx" at the end of 
the CQL statement.

BATCH is easier: you just wrap the statements. Pretty basic.

I have done performance testing with single-statement BATCH USING TIMESTAMP and 
its performance is awful, worse than "NEVER EVER DO THIS" sync batches with 
cross-partition updates.

Can we either allow a USING TIMESTAMP at the end of all the mutation 
statements, in the same place for each, or add a check in the BATCH statement 
processing that reduces a single-statement batch to non-batch execution?
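The placement inconsistency the ticket describes can be sketched as follows (a minimal Python sketch; `add_timestamp_current` and `add_timestamp_proposed` are hypothetical helpers, and the statement strings are illustrative only):

```python
# Sketch of where USING TIMESTAMP must go today per CQL verb, versus the
# uniform append the ticket proposes. Hypothetical helpers, not Cassandra code.

def add_timestamp_current(stmt: str, ts: int) -> str:
    """Splice USING TIMESTAMP into the position each verb requires today."""
    clause = "USING TIMESTAMP %d" % ts
    verb = stmt.split(None, 1)[0].upper()
    if verb == "INSERT":
        # INSERT: the clause already goes at the end, after VALUES.
        return "%s %s" % (stmt, clause)
    if verb == "UPDATE":
        # UPDATE tbl USING TIMESTAMP ts SET ... : clause goes before SET.
        head, _, tail = stmt.partition(" SET ")
        return "%s %s SET %s" % (head, clause, tail)
    if verb == "DELETE":
        # DELETE ... FROM tbl USING TIMESTAMP ts WHERE ... : before WHERE.
        head, _, tail = stmt.partition(" WHERE ")
        return "%s %s WHERE %s" % (head, clause, tail)
    raise ValueError("unsupported statement: " + verb)

def add_timestamp_proposed(stmt: str, ts: int) -> str:
    """The ticket's proposal: one uniform append for every mutation verb."""
    return "%s USING TIMESTAMP %d" % (stmt, ts)

stmt = "UPDATE t SET x = 1 WHERE k = 1"
assert add_timestamp_current(stmt, 42) == \
    "UPDATE t USING TIMESTAMP 42 SET x = 1 WHERE k = 1"
```

With the proposed form, a caller that only later decides between batched and single-statement execution could append the clause blindly instead of re-parsing each statement.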



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (CASSANDRA-9885) Changes for driver 2.2.0-rc2+ support

2015-07-23 Thread Andy Tolbert (JIRA)
Andy Tolbert created CASSANDRA-9885:
---

 Summary: Changes for driver 2.2.0-rc2+ support
 Key: CASSANDRA-9885
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9885
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: Andy Tolbert
Priority: Minor


I'm jumping the gun on this a bit: java-driver 2.2.0-rc2 adds support for custom 
codecs for column serialization/deserialization 
([#387|https://github.com/datastax/java-driver/pull/387]).  This introduces a 
few breaking API changes in serialization/deserialization and Row that require 
changes in CqlRecordReader and some UDF code.   These changes have no 
functional impact and are intended not to bring any breaking changes to the UDF 
code (currently validating this).

I have a 
[commit|https://github.com/tolbertam/cassandra/commit/2e2248b1c40ab15819dcd02642df4e2d7565e923]
 on a [personal 
branch|https://github.com/tolbertam/cassandra/tree/cassandra-2.2-driver22-rc2] 
that passes all tests.  I'll update it when 2.2.0-rc2 is released (update ant 
build to pull jar from maven central instead of local).

I figure that when the time comes for C* to be updated to a newer version of 
the driver, these changes could come in handy in making the upgrade less 
painful.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (CASSANDRA-9883) How can I reset admin password on Opscenter 5.1.2?

2015-07-23 Thread Robert Stupp (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-9883?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Stupp resolved CASSANDRA-9883.
-
Resolution: Invalid

[~gina.luc...@pearson.com] please ask this question in the user mailing list.
This is the issue tracking system for Apache Cassandra.

> How can I reset admin password on Opscenter 5.1.2?
> --
>
> Key: CASSANDRA-9883
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9883
> Project: Cassandra
>  Issue Type: Bug
>  Components: Tools
> Environment: Ubuntu with OpsCenter 5.1.2
>Reporter: Gina Lucero
>Priority: Minor
>  Labels: newbie
>
> I have taken over for another DBA that left and I have limited user access 
> but not admin access to OpsCenter.  I cannot log in with any privileged 
> account.  I have root on the VM hosting OpsCenter, is there a way to reset 
> the admin account without destroying the existing setup?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-9884) Error on encrypted node communication upgrading from 2.1.6 to 2.2.0

2015-07-23 Thread Carlos Scheidecker (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-9884?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Carlos Scheidecker updated CASSANDRA-9884:
--
Description: 
After updating to Cassandra 2.2.0 from 2.1.6 I am having SSL issues.

The configuration has not changed from one version to the other and the JVM is 
still the same; however, on 2.2.0 it is erroring. I have yet to investigate the 
source code, but for now this is the information I have to share:

My JVM is java version "1.8.0_45"
Java(TM) SE Runtime Environment (build 1.8.0_45-b14)
Java HotSpot(TM) 64-Bit Server VM (build 25.45-b02, mixed mode)

Ubuntu 14.04.2 LTS is on all nodes, they are the same.

Below are the encryption settings from cassandra.yaml on all nodes.

I am using the same keystore and truststore as I used on 2.1.6.


# Enable or disable inter-node encryption
# Default settings are TLS v1, RSA 1024-bit keys (it is imperative that
# users generate their own keys) TLS_RSA_WITH_AES_128_CBC_SHA as the cipher
# suite for authentication, key exchange and encryption of the actual data 
transfers.
# Use the DHE/ECDHE ciphers if running in FIPS 140 compliant mode.
# NOTE: No custom encryption options are enabled at the moment
# The available internode options are : all, none, dc, rack
#
# If set to dc cassandra will encrypt the traffic between the DCs
# If set to rack cassandra will encrypt the traffic between the racks
#
# The passwords used in these options must match the passwords used when 
generating
# the keystore and truststore.  For instructions on generating these files, see:
# 
http://download.oracle.com/javase/6/docs/technotes/guides/security/jsse/JSSERefGuide.html#CreateKeystore
#
server_encryption_options:
internode_encryption: all
keystore: /etc/cassandra/certs/node.keystore
keystore_password: mypasswd
truststore: /etc/cassandra/certs/global.truststore
truststore_password: mypasswd
# More advanced defaults below:
# protocol: TLS
# algorithm: SunX509
# store_type: JKS
cipher_suites: 
[TLS_RSA_WITH_AES_128_CBC_SHA,TLS_RSA_WITH_AES_256_CBC_SHA,TLS_DHE_RSA_WITH_AES_128_CBC_SHA,TLS_DHE_RSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA]
require_client_auth: false

# enable or disable client/server encryption.


Nodes cannot talk to each other as per SSL errors below.

WARN  [MessagingService-Outgoing-/192.168.1.31] 2015-07-22 17:29:48,764 
SSLFactory.java:163 - Filtering out 
TLS_DHE_RSA_WITH_AES_256_CBC_SHA,TLS_RSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA
 as it isnt supported by the socket
ERROR [MessagingService-Outgoing-/192.168.1.31] 2015-07-22 17:29:48,764 
OutboundTcpConnection.java:229 - error processing a message intended for 
/192.168.1.31
java.lang.NullPointerException: null
at 
com.google.common.base.Preconditions.checkNotNull(Preconditions.java:213) 
~[guava-16.0.jar:na]
at 
org.apache.cassandra.io.util.BufferedDataOutputStreamPlus.(BufferedDataOutputStreamPlus.java:74)
 ~[apache-cassandra-2.2.0.jar:2.2.0]
at 
org.apache.cassandra.net.OutboundTcpConnection.connect(OutboundTcpConnection.java:404)
 ~[apache-cassandra-2.2.0.jar:2.2.0]
at 
org.apache.cassandra.net.OutboundTcpConnection.run(OutboundTcpConnection.java:218)
 ~[apache-cassandra-2.2.0.jar:2.2.0]
ERROR [MessagingService-Outgoing-/192.168.1.31] 2015-07-22 17:29:48,764 
OutboundTcpConnection.java:316 - error writing to /192.168.1.31
java.lang.NullPointerException: null
at 
org.apache.cassandra.net.OutboundTcpConnection.writeInternal(OutboundTcpConnection.java:323)
 [apache-cassandra-2.2.0.jar:2.2.0]
at 
org.apache.cassandra.net.OutboundTcpConnection.writeConnected(OutboundTcpConnection.java:285)
 [apache-cassandra-2.2.0.jar:2.2.0]
at 
org.apache.cassandra.net.OutboundTcpConnection.run(OutboundTcpConnection.java:219)
 [apache-cassandra-2.2.0.jar:2.2.0]
WARN  [MessagingService-Outgoing-/192.168.1.33] 2015-07-22 17:29:49,764 
SSLFactory.java:163 - Filtering out 
TLS_DHE_RSA_WITH_AES_256_CBC_SHA,TLS_RSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA
 as it isnt supported by the socket
WARN  [MessagingService-Outgoing-/192.168.1.31] 2015-07-22 17:29:49,764 
SSLFactory.java:163 - Filtering out 
TLS_DHE_RSA_WITH_AES_256_CBC_SHA,TLS_RSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA
 as it isnt supported by the socket
ERROR [MessagingService-Outgoing-/192.168.1.33] 2015-07-22 17:29:49,764 
OutboundTcpConnection.java:229 - error processing a message intended for 
/192.168.1.33
java.lang.NullPointerException: null
at 
com.google.common.base.Preconditions.checkNotNull(Preconditions.java:213) 
~[guava-16.0.jar:na]
at 
org.apache.cassandra.io.util.BufferedDataOutputStreamPlus.(BufferedDataOutputStreamPlus.java:74)
 ~[apache-cassandra-2.2.0.jar:2.2.0]
at 
org.apache.cassandra.net.OutboundTcpCon

[jira] [Commented] (CASSANDRA-9883) How can I reset admin password on Opscenter 5.1.2?

2015-07-23 Thread Gina Lucero (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9883?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14639493#comment-14639493
 ] 

Gina Lucero commented on CASSANDRA-9883:


If I alter the opscenterd.conf file, changing:
[authentication]
enabled=True
to enabled=False

Does this delete all existing Roles and Users?  

> How can I reset admin password on Opscenter 5.1.2?
> --
>
> Key: CASSANDRA-9883
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9883
> Project: Cassandra
>  Issue Type: Bug
>  Components: Tools
> Environment: Ubuntu with OpsCenter 5.1.2
>Reporter: Gina Lucero
>Priority: Minor
>  Labels: newbie
>
> I have taken over for another DBA that left and I have limited user access 
> but not admin access to OpsCenter.  I cannot log in with any privileged 
> account.  I have root on the VM hosting OpsCenter, is there a way to reset 
> the admin account without destroying the existing setup?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-9402) Implement proper sandboxing for UDFs

2015-07-23 Thread Robert Stupp (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9402?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14639494#comment-14639494
 ] 

Robert Stupp commented on CASSANDRA-9402:
-

Pushed another commit that fixes the failing dtests.
These failed because they (thankfully) execute only a JavaScript UDF, and so 
unveiled a bug in this patch when a JavaScript UDF is executed first.
This commit also fixes the related issues and adds a separate utest covering 
that case (JavaScript UDFs only).
It also maintains restricted access to java.nio and java.net, but has to 
initialize the driver classes to ensure that.


> Implement proper sandboxing for UDFs
> 
>
> Key: CASSANDRA-9402
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9402
> Project: Cassandra
>  Issue Type: Task
>Reporter: T Jake Luciani
>Assignee: Robert Stupp
>Priority: Critical
>  Labels: docs-impacting, security
> Fix For: 3.0 beta 1
>
> Attachments: 9402-warning.txt
>
>
> We want to avoid a security exploit for our users.  We need to make sure we 
> ship 2.2 UDFs with good defaults so that someone accidentally exposing them 
> to the internet doesn't open themselves up to having arbitrary code run.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-7342) CAS writes does not have hint functionality.

2015-07-23 Thread sankalp kohli (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7342?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14639491#comment-14639491
 ] 

sankalp kohli commented on CASSANDRA-7342:
--

sure. 

> CAS writes does not have hint functionality. 
> -
>
> Key: CASSANDRA-7342
> URL: https://issues.apache.org/jira/browse/CASSANDRA-7342
> Project: Cassandra
>  Issue Type: Sub-task
>Reporter: sankalp kohli
>Assignee: sankalp kohli
> Attachments: 7342_2.0.txt, 7342_2.1.txt
>
>
> When a dead node comes up, it gets the last commit but not anything which it 
> has missed. 
> This reduces the durability of those writes compared to other writes. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (CASSANDRA-9884) Error on encrypted node communication upgrading from 2.1.6 to 2.2.0

2015-07-23 Thread Carlos Scheidecker (JIRA)
Carlos Scheidecker created CASSANDRA-9884:
-

 Summary: Error on encrypted node communication upgrading from 
2.1.6 to 2.2.0
 Key: CASSANDRA-9884
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9884
 Project: Cassandra
  Issue Type: Bug
  Components: Config, Core
 Environment: Ubuntu 14.04.2 LTS 64 bits.
Java version "1.8.0_45"
Java(TM) SE Runtime Environment (build 1.8.0_45-b14)
Java HotSpot(TM) 64-Bit Server VM (build 25.45-b02, mixed mode)
Reporter: Carlos Scheidecker
 Fix For: 2.2.0


After updating to Cassandra 2.2.0 from 2.1.6 I am having SSL issues.

The configuration has not changed from one version to the other and the JVM is 
still the same; however, on 2.2.0 it is erroring. I have yet to investigate the 
source code, but for now this is the information I have to share:

My JVM is java version "1.8.0_45"
Java(TM) SE Runtime Environment (build 1.8.0_45-b14)
Java HotSpot(TM) 64-Bit Server VM (build 25.45-b02, mixed mode)

Ubuntu 14.04.2 LTS is on all nodes, they are the same.

Below are the encryption settings from cassandra.yaml on all nodes.

I am using the same keystore and truststore as I used on 2.1.6.


# Enable or disable inter-node encryption
# Default settings are TLS v1, RSA 1024-bit keys (it is imperative that
# users generate their own keys) TLS_RSA_WITH_AES_128_CBC_SHA as the cipher
# suite for authentication, key exchange and encryption of the actual data 
transfers.
# Use the DHE/ECDHE ciphers if running in FIPS 140 compliant mode.
# NOTE: No custom encryption options are enabled at the moment
# The available internode options are : all, none, dc, rack
#
# If set to dc cassandra will encrypt the traffic between the DCs
# If set to rack cassandra will encrypt the traffic between the racks
#
# The passwords used in these options must match the passwords used when 
generating
# the keystore and truststore.  For instructions on generating these files, see:
# 
http://download.oracle.com/javase/6/docs/technotes/guides/security/jsse/JSSERefGuide.html#CreateKeystore
#
server_encryption_options:
internode_encryption: all
keystore: /etc/cassandra/certs/node.keystore
keystore_password: mypasswd
truststore: /etc/cassandra/certs/global.truststore
truststore_password: mypasswd
# More advanced defaults below:
# protocol: TLS
# algorithm: SunX509
# store_type: JKS
cipher_suites: 
[TLS_RSA_WITH_AES_128_CBC_SHA,TLS_RSA_WITH_AES_256_CBC_SHA,TLS_DHE_RSA_WITH_AES_128_CBC_SHA,TLS_DHE_RSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA]
require_client_auth: false

# enable or disable client/server encryption.


Nodes cannot talk to each other as per SSL errors below.

WARN  [MessagingService-Outgoing-/192.168.1.31] 2015-07-22 17:29:48,764 
SSLFactory.java:163 - Filtering out 
TLS_DHE_RSA_WITH_AES_256_CBC_SHA,TLS_RSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA
 as it isnt supported by the socket
ERROR [MessagingService-Outgoing-/192.168.1.31] 2015-07-22 17:29:48,764 
OutboundTcpConnection.java:229 - error processing a message intended for 
/192.168.1.31
java.lang.NullPointerException: null
at 
com.google.common.base.Preconditions.checkNotNull(Preconditions.java:213) 
~[guava-16.0.jar:na]
at 
org.apache.cassandra.io.util.BufferedDataOutputStreamPlus.(BufferedDataOutputStreamPlus.java:74)
 ~[apache-cassandra-2.2.0.jar:2.2.0]
at 
org.apache.cassandra.net.OutboundTcpConnection.connect(OutboundTcpConnection.java:404)
 ~[apache-cassandra-2.2.0.jar:2.2.0]
at 
org.apache.cassandra.net.OutboundTcpConnection.run(OutboundTcpConnection.java:218)
 ~[apache-cassandra-2.2.0.jar:2.2.0]
ERROR [MessagingService-Outgoing-/192.168.1.31] 2015-07-22 17:29:48,764 
OutboundTcpConnection.java:316 - error writing to /192.168.1.31
java.lang.NullPointerException: null
at 
org.apache.cassandra.net.OutboundTcpConnection.writeInternal(OutboundTcpConnection.java:323)
 [apache-cassandra-2.2.0.jar:2.2.0]
at 
org.apache.cassandra.net.OutboundTcpConnection.writeConnected(OutboundTcpConnection.java:285)
 [apache-cassandra-2.2.0.jar:2.2.0]
at 
org.apache.cassandra.net.OutboundTcpConnection.run(OutboundTcpConnection.java:219)
 [apache-cassandra-2.2.0.jar:2.2.0]
WARN  [MessagingService-Outgoing-/192.168.1.33] 2015-07-22 17:29:49,764 
SSLFactory.java:163 - Filtering out 
TLS_DHE_RSA_WITH_AES_256_CBC_SHA,TLS_RSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA
 as it isnt supported by the socket
WARN  [MessagingService-Outgoing-/192.168.1.31] 2015-07-22 17:29:49,764 
SSLFactory.java:163 - Filtering out 
TLS_DHE_RSA_WITH_AES_256_CBC_SHA,TLS_RSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA
 as it isnt supported by the socket
ERROR [MessagingService-Outgoing-/192.168.1.33] 2015-07-22 17:29:49,764 
OutboundTcpConnect

[jira] [Commented] (CASSANDRA-9873) Windows dtest: ignore_failure_policy_test fails

2015-07-23 Thread Joshua McKenzie (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9873?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14639481#comment-14639481
 ] 

Joshua McKenzie commented on CASSANDRA-9873:


[PR for dtest here|https://github.com/riptano/cassandra-dtest/pull/415]

os.chmod in Python doesn't work on Windows except to set a file as read-only 
via stat.S_IWRITE and stat.S_IREAD. Our test expected further mutations to 
time out due to CLS allocation failures, but Windows will happily continue 
allocating and swapping segments; we can't even delete the segments on Windows 
thanks to the whole "mmap'ed segments can't be deleted" thing.

Tweaked the test so that the flow on linux is unchanged, and on Windows the 
logical flow is 1) confirm error occurred in CL management, 2) confirm we 
didn't terminate the node, and 3) confirm we didn't stop the CL processing on 
the node.

I expect we may have some other failures where we use 
{{_provoke_commitlog_failure}} on Windows since, while it provokes the failure, 
it doesn't fail quite as hard as it does on linux.
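The os.chmod limitation described above can be demonstrated with a minimal sketch (plain Python against a throwaway temp file; not part of the dtest itself):

```python
# Sketch of the os.chmod limitation: on Windows the only effective permission
# bit is the write bit (stat.S_IWRITE / stat.S_IREAD); all other mode bits are
# ignored. The same call is portable, so a cross-platform test can only rely
# on the read-only transition.
import os
import stat
import tempfile

fd, path = tempfile.mkstemp()
os.close(fd)

# Make the file read-only; this is the one chmod effect Windows honors.
os.chmod(path, stat.S_IREAD)
assert (os.stat(path).st_mode & stat.S_IWRITE) == 0

# Restore write permission before cleanup: Windows refuses to delete
# read-only files via os.remove.
os.chmod(path, stat.S_IREAD | stat.S_IWRITE)
os.remove(path)
```

This is why the tweaked test checks log output and node liveness on Windows instead of expecting write timeouts.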

> Windows dtest: ignore_failure_policy_test fails
> ---
>
> Key: CASSANDRA-9873
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9873
> Project: Cassandra
>  Issue Type: Sub-task
>Reporter: Joshua McKenzie
>Assignee: Joshua McKenzie
>Priority: Minor
>  Labels: Windows
> Fix For: 2.2.x
>
>
> {noformat}
> ==
> FAIL: ignore_failure_policy_test (commitlog_test.TestCommitLog)
> --
> Traceback (most recent call last):
>   File "C:\src\cassandra-dtest\commitlog_test.py", line 251, in 
> ignore_failure_policy_test
> """)
> AssertionError: (,  'cassandra.WriteTimeout'>) not raised
>  >> begin captured logging << 
> dtest: DEBUG: cluster ccm directory: c:\temp\dtest-fzrrz1
> - >> end captured logging << -
> --
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (CASSANDRA-9883) How can I reset admin password on Opscenter 5.1.2?

2015-07-23 Thread Gina Lucero (JIRA)
Gina Lucero created CASSANDRA-9883:
--

 Summary: How can I reset admin password on Opscenter 5.1.2?
 Key: CASSANDRA-9883
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9883
 Project: Cassandra
  Issue Type: Bug
  Components: Tools
 Environment: Ubuntu with OpsCenter 5.1.2
Reporter: Gina Lucero
Priority: Minor


I have taken over for another DBA that left and I have limited user access but 
not admin access to OpsCenter.  I cannot log in with any privileged account.  I 
have root on the VM hosting OpsCenter, is there a way to reset the admin 
account without destroying the existing setup?




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-9498) If more than 65K columns, sparse layout will break

2015-07-23 Thread Sylvain Lebresne (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9498?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14639470#comment-14639470
 ] 

Sylvain Lebresne commented on CASSANDRA-9498:
-

Pretty sure the few places in question are already handled by the patch 
attached to CASSANDRA-9801 (which also handles {{Columns}} and more), so I 
think we can close this as a duplicate.

> If more than 65K columns, sparse layout will break
> --
>
> Key: CASSANDRA-9498
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9498
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
>Reporter: Benedict
>Assignee: Benedict
>Priority: Minor
> Fix For: 3.0 beta 1
>
>
> Follow up to CASSANDRA-8099. It is a relatively small bug, since the exposed 
> population of users is likely to be very low, but fixing it in a good way is 
> a bit tricky. I'm filing a separate JIRA, because I would like us to address 
> this by introducing a writeVInt method to DataOutputStreamPlus, that we can 
> also exploit to improve the encoding of timestamps and deletion times, and 
> this JIRA will help to track the dependencies.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-9498) If more than 65K columns, sparse layout will break

2015-07-23 Thread Jonathan Ellis (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-9498?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis updated CASSANDRA-9498:
--
Assignee: Benedict

Let's limit this to not imposing any new backwards compatibility challenges for 
b1.  We can do more in 3.x.

> If more than 65K columns, sparse layout will break
> --
>
> Key: CASSANDRA-9498
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9498
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
>Reporter: Benedict
>Assignee: Benedict
>Priority: Minor
> Fix For: 3.0 beta 1
>
>
> Follow up to CASSANDRA-8099. It is a relatively small bug, since the exposed 
> population of users is likely to be very low, but fixing it in a good way is 
> a bit tricky. I'm filing a separate JIRA, because I would like us to address 
> this by introducing a writeVInt method to DataOutputStreamPlus, that we can 
> also exploit to improve the encoding of timestamps and deletion times, and 
> this JIRA will help to track the dependencies.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-9881) Rows with negative-sized keys can't be skipped by sstablescrub

2015-07-23 Thread Jonathan Ellis (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-9881?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis updated CASSANDRA-9881:
--
Assignee: Stefania

> Rows with negative-sized keys can't be skipped by sstablescrub
> --
>
> Key: CASSANDRA-9881
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9881
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
>Reporter: Brandon Williams
>Assignee: Stefania
>Priority: Minor
> Fix For: 2.1.x
>
>
> It is possible to have corruption in such a way that scrub (on or offline) 
> can't skip the row, so you end up in a loop where this just keeps repeating:
> {noformat}
> WARNING: Row starting at position 2087453 is unreadable; skipping to next 
> Reading row at 2087453 
> row (unreadable key) is -1 bytes
> {noformat}
> The workaround is to just delete the problem sstable since you were going to 
> have to repair anyway, but it would still be nice to salvage the rest of the 
> sstable.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-7342) CAS writes does not have hint functionality.

2015-07-23 Thread Aleksey Yeschenko (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7342?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14639405#comment-14639405
 ] 

Aleksey Yeschenko commented on CASSANDRA-7342:
--

[~kohlisankalp] I'll review, but I'd rather not be adding it to 2.0 at this 
stage (2.1 too, but I can make an exception for 2.1).

> CAS writes does not have hint functionality. 
> -
>
> Key: CASSANDRA-7342
> URL: https://issues.apache.org/jira/browse/CASSANDRA-7342
> Project: Cassandra
>  Issue Type: Sub-task
>Reporter: sankalp kohli
>Assignee: sankalp kohli
> Attachments: 7342_2.0.txt, 7342_2.1.txt
>
>
> When a dead node comes up, it gets the last commit but not anything which it 
> has missed. 
> This reduces the durability of those writes compared to other writes. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-7342) CAS writes does not have hint functionality.

2015-07-23 Thread Aleksey Yeschenko (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-7342?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksey Yeschenko updated CASSANDRA-7342:
-
Reviewer: Aleksey Yeschenko

> CAS writes does not have hint functionality. 
> -
>
> Key: CASSANDRA-7342
> URL: https://issues.apache.org/jira/browse/CASSANDRA-7342
> Project: Cassandra
>  Issue Type: Sub-task
>Reporter: sankalp kohli
>Assignee: sankalp kohli
> Attachments: 7342_2.0.txt, 7342_2.1.txt
>
>
> When a dead node comes up, it gets the last commit but not anything which it 
> has missed. 
> This reduces the durability of those writes compared to other writes. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-9863) NIODataInputStream has problems on trunk

2015-07-23 Thread Benedict (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9863?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14639379#comment-14639379
 ] 

Benedict commented on CASSANDRA-9863:
-

Thanks. I've pushed a single line change to use {{DataInputBuffer}} in 
{{ReadResponse}}. I'll wait on CI before committing this.

> NIODataInputStream has problems on trunk
> 
>
> Key: CASSANDRA-9863
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9863
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Sylvain Lebresne
>Assignee: Ariel Weisberg
>Priority: Blocker
> Fix For: 3.0 beta 1
>
>
> As the title says, there are cases where method calls to NIODataInputStream, 
> at least {{readVInt}} calls, can loop forever. This is possibly only a problem 
> for vints, where the code tries to read a minimum of 8 bytes but fewer than 
> that are available. In that sense it is related to [~benedict]'s observation 
> in CASSANDRA-9708, but it is more serious than that observation because:
> # this can happen even if the buffer passed to NIODataInputStream ctor has 
> more than 8 bytes available, and hence I'm relatively confident [~benedict]'s 
> fix in CASSANDRA-9708 is not enough.
> # this doesn't necessarily fail cleanly by raising assertions; it can loop 
> forever (which is much harder to debug).
> Because of that, and because it is at least one cause of CASSANDRA-9764, I 
> think the problem warrants a specific ticket (separate from CASSANDRA-9708, 
> that is).
> Now, the exact reason for this looping is that {{readVInt}} is called while 
> the buffer has less than 8 bytes remaining (again, the buffer had more 
> initially). In that case, {{readMinimum(8, 1)}} is called and it calls 
> {{readNext()}} in a loop. Within {{readNext()}}, the buffer (which has 
> {{buf.position() == 0 && buf.hasRemaining()}}) is actually unchanged (through 
> a very weird dance of setting the position to the limit, then the limit to 
> the capacity, and then flipping the buffer, which resets everything to what 
> it was), and because {{rbc}} is the {{emptyReadableByteChannel}}, 
> {{rbc.read(buf)}} does nothing and always returns {{-1}}. Back in 
> {{readMinimum}}, {{read == -1}} but {{remaining >= require}} (and 
> {{remaining}} never changes), hence the looping forever.
> Now, not sure what the best fix is because I'm not fully familiar with that 
> code, but this does lead me to a 2nd point: {{NIODataInputStream}} could 
> IMHO use a bit of additional/better comments. I won't pretend I tried very 
> hard to understand the whole class, so there is probably some lack of effort 
> on my part, but at least a few things felt like they should be clarified:
> * Provided I understand {{readNext()}} correctly, it only makes sense when we 
> do have a {{ReadableByteChannel}} (and the fact that this isn't the case here 
> sounds like the bug). If so, this should be explicitly documented and 
> probably asserted. As an aside, I wonder if using {{rbc == null}} when we 
> don't have one wouldn't be better: if we don't have one, we shouldn't try to 
> use it, and having a {{null}} would make things fail loudly if we do.
> * I'm not exactly sure what {{readMinimum}} arguments do. I'd have expected 
> at least one to be called "minimum", and an explanation of the meaning of the 
> other one.
> * {{prepareReadPaddedPrimitive}} says that it "Adds padding if requested", 
> but there is seemingly no argument that triggers the "if requested" part. It 
> is also unclear what that padding is about in the first place.
> As a final point, the case where {{NIODataInputStream}} is constructed with 
> a {{ByteBuffer}} (rather than a {{ReadableByteChannel}}) appears to be 
> completely untested by the unit tests.
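The stuck loop described in the report can be modeled with a short sketch (a simplified Python model, not the actual Java code; `read_minimum` is a hypothetical function that loosely mirrors the {{readMinimum}}/{{readNext}} interplay):

```python
# Simplified model of the looping condition described above: the channel's
# read() always returns -1 and never adds bytes, yet the retry loop never
# exits because `remaining` stays at or above `require`. A real run would
# spin forever; this model caps iterations to make the hang observable.

def read_minimum(remaining, require, read_next, max_iters=1000):
    """Loop until at least 8 bytes are buffered, mimicking the reported bug."""
    iters = 0
    while remaining < 8:          # mimic "read 8 bytes minimum" for a vint
        read = read_next()        # returns -1: the empty channel adds nothing
        if read == -1 and remaining < require:
            raise EOFError("ran out of bytes")  # the exit that never fires
        remaining += max(read, 0) # -1 adds nothing, so remaining is unchanged
        iters += 1
        if iters >= max_iters:
            return "would loop forever"
    return "ok"

# Buffer holds 3 bytes (at least `require`, but below the 8-byte minimum) and
# the channel is empty: the EOF branch is skipped because remaining >= require,
# and remaining never grows, so the loop never terminates.
result = read_minimum(remaining=3, require=1, read_next=lambda: -1)
assert result == "would loop forever"
```

The model shows why the failure mode is an infinite loop rather than a clean exception: the EOF check requires `remaining < require`, which never becomes true.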



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-9878) Rolling Updates 2.0 to 2.1 "unable to gossip"

2015-07-23 Thread JIRA

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9878?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14639374#comment-14639374
 ] 

Axel Kämpfe commented on CASSANDRA-9878:


https://issues.apache.org/jira/browse/CASSANDRA-8768?focusedCommentId=14538131&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14538131

would have been nice, and would have saved a lot of work trying the impossible 
:-)

but thanks for pointing this out :-)

> Rolling Updates 2.0 to 2.1 "unable to gossip"
> -
>
> Key: CASSANDRA-9878
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9878
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Axel Kämpfe
> Fix For: 2.1.x
>
>
> Hi there,
> we are currently testing an upgrade of our servers from Cassandra 2.0.16 to 
> 2.1.8 on Amazon EC2.
> Usually, we launch a new server which gets the newest version and then joins 
> the existing ring, bootstrapping, and after some time we kill one of the old 
> nodes.
> But with the upgrade to 2.1 the new server will not join the existing ring.
> {code}
> INFO  [HANDSHAKE-/10.] 2015-07-23 11:06:14,997 OutboundTcpConnection.java:485 
> - Handshaking version with /10.xx.yy.zz
> INFO  [HANDSHAKE-/10.] 2015-07-23 11:06:14,999 OutboundTcpConnection.java:485 
> - Handshaking version with /10.yy.zz.aa
> INFO  [HANDSHAKE-/10.] 2015-07-23 11:06:15,000 OutboundTcpConnection.java:485 
> - Handshaking version with /10.aa.bb.cc
> ERROR [main] 2015-07-23 11:06:46,016 CassandraDaemon.java:541 - Exception 
> encountered during startup
> java.lang.RuntimeException: Unable to gossip with any seeds
> at 
> org.apache.cassandra.gms.Gossiper.doShadowRound(Gossiper.java:1307) 
> ~[apache-cassandra-2.1.8.jar:2.1.8]
> at 
> org.apache.cassandra.service.StorageService.checkForEndpointCollision(StorageService.java:533)
>  ~[apache-cassandra-2.1.8.jar:2.1.8]
> at 
> org.apache.cassandra.service.StorageService.prepareToJoin(StorageService.java:777)
>  ~[apache-cassandra-2.1.8.jar:2.1.8]
> at 
> org.apache.cassandra.service.StorageService.initServer(StorageService.java:714)
>  ~[apache-cassandra-2.1.8.jar:2.1.8]
> at 
> org.apache.cassandra.service.StorageService.initServer(StorageService.java:605)
>  ~[apache-cassandra-2.1.8.jar:2.1.8]
> at 
> org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:378) 
> [apache-cassandra-2.1.8.jar:2.1.8]
> at 
> org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:524)
>  [apache-cassandra-2.1.8.jar:2.1.8]
> at 
> org.apache.cassandra.service.CassandraDaemon.main(CassandraDaemon.java:613) 
> [apache-cassandra-2.1.8.jar:2.1.8]
> WARN  [StorageServiceShutdownHook] 2015-07-23 11:06:46,023 Gossiper.java:1418 
> - No local state or state is in silent shutdown, not announcing shutdown
> INFO  [StorageServiceShutdownHook] 2015-07-23 11:06:46,023 
> MessagingService.java:708 - Waiting for messaging service to quiesce
> INFO  [ACCEPT-/10.] 2015-07-23 11:06:46,045 MessagingService.java:958 - 
> MessagingService has terminated the accept() thread
> {code}
> our config uses the "RandomPartitioner", "SimpleSnitch" and the internal 
> nodes' IPs for communication.
> When I use the same config with ONLY 2.1.x servers, everything works 
> perfectly, but as soon as we start to "mix in" a new version into an "old" 
> ring, the new servers will not "gossip" (so it cannot be a firewall issue, 
> or such).
> If you need any more information from my side, please let me know.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (CASSANDRA-9841) trunk pig-test fails

2015-07-23 Thread Aleksey Yeschenko (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-9841?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksey Yeschenko resolved CASSANDRA-9841.
--
   Resolution: Fixed
Fix Version/s: (was: 3.0.0 rc1)
   3.0 beta 1

> trunk pig-test fails
> 
>
> Key: CASSANDRA-9841
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9841
> Project: Cassandra
>  Issue Type: Bug
>  Components: Hadoop, Tests
> Environment: trunk HEAD
> Debian Jessie 64-bit
> AWS m3-2xlarge
>Reporter: Michael Shuler
>Assignee: Aleksey Yeschenko
>  Labels: test-failure
> Fix For: 3.0 beta 1
>
>
> {noformat}
> pig-test:
> [mkdir] Created dir: 
> /var/lib/jenkins/jobs/trunk_pigtest/workspace/build/test/cassandra
> [mkdir] Created dir: 
> /var/lib/jenkins/jobs/trunk_pigtest/workspace/build/test/output
> [junit] WARNING: multiple versions of ant detected in path for junit 
> [junit]  
> jar:file:/usr/share/ant/lib/ant.jar!/org/apache/tools/ant/Project.class
> [junit]  and 
> jar:file:/var/lib/jenkins/jobs/trunk_pigtest/workspace/build/lib/jars/ant-1.8.3.jar!/org/apache/tools/ant/Project.class
> [junit] Testsuite: org.apache.cassandra.pig.CqlRecordReaderTest
> [junit] Testsuite: org.apache.cassandra.pig.CqlRecordReaderTest Tests 
> run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 37.799 sec
> [junit] 
> [junit] Testsuite: org.apache.cassandra.pig.CqlTableDataTypeTest
> [junit] Testsuite: org.apache.cassandra.pig.CqlTableDataTypeTest Tests 
> run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 44.627 sec
> [junit] 
> [junit] Testsuite: org.apache.cassandra.pig.CqlTableTest
> [junit] 
> [junit] Exception: java.lang.IllegalStateException thrown from the 
> UncaughtExceptionHandler in thread "cluster15357-connection-reaper-0"
> [junit] Testsuite: org.apache.cassandra.pig.CqlTableTest
> [junit] Testsuite: org.apache.cassandra.pig.CqlTableTest Tests run: 1, 
> Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 0 sec
> [junit] 
> [junit] Testcase: 
> org.apache.cassandra.pig.CqlTableTest:testCqlNativeStorageSingleKeyTable: 
>   Caused an ERROR
> [junit] Forked Java VM exited abnormally. Please note the time in the 
> report does not reflect the time until the VM exit.
> [junit] junit.framework.AssertionFailedError: Forked Java VM exited 
> abnormally. Please note the time in the report does not reflect the time 
> until the VM exit.
> [junit] 
> [junit] 
> [junit] Test org.apache.cassandra.pig.CqlTableTest FAILED (crashed)
> [junit] Testsuite: org.apache.cassandra.pig.ThriftColumnFamilyDataTypeTest
> [junit] Testsuite: 
> org.apache.cassandra.pig.ThriftColumnFamilyDataTypeTest Tests run: 1, 
> Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 19.889 sec
> [junit] 
> [junit] Testcase: 
> testCassandraStorageDataType(org.apache.cassandra.pig.ThriftColumnFamilyDataTypeTest):
>   Caused an ERROR
> [junit] Unable to open iterator for alias rows
> [junit] org.apache.pig.impl.logicalLayer.FrontendException: ERROR 1066: 
> Unable to open iterator for alias rows
> [junit]   at org.apache.pig.PigServer.openIterator(PigServer.java:882)
> [junit]   at 
> org.apache.cassandra.pig.ThriftColumnFamilyDataTypeTest.testCassandraStorageDataType(ThriftColumnFamilyDataTypeTest.java:81)
> [junit] Caused by: java.io.IOException: Job terminated with anomalous 
> status FAILED
> [junit]   at org.apache.pig.PigServer.openIterator(PigServer.java:874)
> [junit] 
> [junit] 
> [junit] Test org.apache.cassandra.pig.ThriftColumnFamilyDataTypeTest 
> FAILED
> [junit] Testsuite: org.apache.cassandra.pig.ThriftColumnFamilyTest
> [junit] Testsuite: org.apache.cassandra.pig.ThriftColumnFamilyTest
> [junit] Testsuite: org.apache.cassandra.pig.ThriftColumnFamilyTest Tests 
> run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 0 sec
> [junit] 
> [junit] Testcase: 
> org.apache.cassandra.pig.ThriftColumnFamilyTest:testCqlNativeStorageCompositeKeyCF:
>  Caused an ERROR
> [junit] Forked Java VM exited abnormally. Please note the time in the 
> report does not reflect the time until the VM exit.
> [junit] junit.framework.AssertionFailedError: Forked Java VM exited 
> abnormally. Please note the time in the report does not reflect the time 
> until the VM exit.
> [junit] 
> [junit] 
> [junit] Test org.apache.cassandra.pig.ThriftColumnFamilyTest FAILED 
> (crashed)
> [junitreport] Processing 
> /var/lib/jenkins/jobs/trunk_pigtest/workspace/build/test/TESTS-TestSuites.xml 
> to /tmp/null1591595172
> [junitreport] Loading stylesheet 
> jar:file:/usr/share/ant/lib/ant-junit.jar!/org/apache/tools/ant/taskdefs/optional/junit/xsl/junit-frames.xsl
> [junit

[jira] [Commented] (CASSANDRA-9841) trunk pig-test fails

2015-07-23 Thread Aleksey Yeschenko (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9841?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14639372#comment-14639372
 ] 

Aleksey Yeschenko commented on CASSANDRA-9841:
--

The updated driver fixes all the tests broken by CASSANDRA-8099 
{{STATIC}}/{{REGULAR}} schema changes. Committed to trunk as 
{{7b35e3e843bb3a8e1858051054e00a612e32774c}}.

> trunk pig-test fails
> 
>
> Key: CASSANDRA-9841
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9841
> Project: Cassandra
>  Issue Type: Bug
>  Components: Hadoop, Tests
> Environment: trunk HEAD
> Debian Jessie 64-bit
> AWS m3-2xlarge
>Reporter: Michael Shuler
>Assignee: Aleksey Yeschenko
>  Labels: test-failure
> Fix For: 3.0 beta 1
>
>
> {noformat}
> pig-test:
> [mkdir] Created dir: 
> /var/lib/jenkins/jobs/trunk_pigtest/workspace/build/test/cassandra
> [mkdir] Created dir: 
> /var/lib/jenkins/jobs/trunk_pigtest/workspace/build/test/output
> [junit] WARNING: multiple versions of ant detected in path for junit 
> [junit]  
> jar:file:/usr/share/ant/lib/ant.jar!/org/apache/tools/ant/Project.class
> [junit]  and 
> jar:file:/var/lib/jenkins/jobs/trunk_pigtest/workspace/build/lib/jars/ant-1.8.3.jar!/org/apache/tools/ant/Project.class
> [junit] Testsuite: org.apache.cassandra.pig.CqlRecordReaderTest
> [junit] Testsuite: org.apache.cassandra.pig.CqlRecordReaderTest Tests 
> run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 37.799 sec
> [junit] 
> [junit] Testsuite: org.apache.cassandra.pig.CqlTableDataTypeTest
> [junit] Testsuite: org.apache.cassandra.pig.CqlTableDataTypeTest Tests 
> run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 44.627 sec
> [junit] 
> [junit] Testsuite: org.apache.cassandra.pig.CqlTableTest
> [junit] 
> [junit] Exception: java.lang.IllegalStateException thrown from the 
> UncaughtExceptionHandler in thread "cluster15357-connection-reaper-0"
> [junit] Testsuite: org.apache.cassandra.pig.CqlTableTest
> [junit] Testsuite: org.apache.cassandra.pig.CqlTableTest Tests run: 1, 
> Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 0 sec
> [junit] 
> [junit] Testcase: 
> org.apache.cassandra.pig.CqlTableTest:testCqlNativeStorageSingleKeyTable: 
>   Caused an ERROR
> [junit] Forked Java VM exited abnormally. Please note the time in the 
> report does not reflect the time until the VM exit.
> [junit] junit.framework.AssertionFailedError: Forked Java VM exited 
> abnormally. Please note the time in the report does not reflect the time 
> until the VM exit.
> [junit] 
> [junit] 
> [junit] Test org.apache.cassandra.pig.CqlTableTest FAILED (crashed)
> [junit] Testsuite: org.apache.cassandra.pig.ThriftColumnFamilyDataTypeTest
> [junit] Testsuite: 
> org.apache.cassandra.pig.ThriftColumnFamilyDataTypeTest Tests run: 1, 
> Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 19.889 sec
> [junit] 
> [junit] Testcase: 
> testCassandraStorageDataType(org.apache.cassandra.pig.ThriftColumnFamilyDataTypeTest):
>   Caused an ERROR
> [junit] Unable to open iterator for alias rows
> [junit] org.apache.pig.impl.logicalLayer.FrontendException: ERROR 1066: 
> Unable to open iterator for alias rows
> [junit]   at org.apache.pig.PigServer.openIterator(PigServer.java:882)
> [junit]   at 
> org.apache.cassandra.pig.ThriftColumnFamilyDataTypeTest.testCassandraStorageDataType(ThriftColumnFamilyDataTypeTest.java:81)
> [junit] Caused by: java.io.IOException: Job terminated with anomalous 
> status FAILED
> [junit]   at org.apache.pig.PigServer.openIterator(PigServer.java:874)
> [junit] 
> [junit] 
> [junit] Test org.apache.cassandra.pig.ThriftColumnFamilyDataTypeTest 
> FAILED
> [junit] Testsuite: org.apache.cassandra.pig.ThriftColumnFamilyTest
> [junit] Testsuite: org.apache.cassandra.pig.ThriftColumnFamilyTest
> [junit] Testsuite: org.apache.cassandra.pig.ThriftColumnFamilyTest Tests 
> run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 0 sec
> [junit] 
> [junit] Testcase: 
> org.apache.cassandra.pig.ThriftColumnFamilyTest:testCqlNativeStorageCompositeKeyCF:
>  Caused an ERROR
> [junit] Forked Java VM exited abnormally. Please note the time in the 
> report does not reflect the time until the VM exit.
> [junit] junit.framework.AssertionFailedError: Forked Java VM exited 
> abnormally. Please note the time in the report does not reflect the time 
> until the VM exit.
> [junit] 
> [junit] 
> [junit] Test org.apache.cassandra.pig.ThriftColumnFamilyTest FAILED 
> (crashed)
> [junitreport] Processing 
> /var/lib/jenkins/jobs/trunk_pigtest/workspace/build/test/TESTS-TestSuites.xml 
> to /tmp/null1591595172
> [junitreport] Loadi

[jira] [Created] (CASSANDRA-9882) DTCS (maybe other strategies) can block flushing when there are lots of sstables

2015-07-23 Thread Jeremiah Jordan (JIRA)
Jeremiah Jordan created CASSANDRA-9882:
--

 Summary: DTCS (maybe other strategies) can block flushing when 
there are lots of sstables
 Key: CASSANDRA-9882
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9882
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Reporter: Jeremiah Jordan


MemtableFlushWriter tasks can get blocked by Compaction getNextBackgroundTask.  
This was observed in a wonky cluster with 200k sstables in the CF, but it seems 
bad for flushing to be blocked by getNextBackgroundTask when we are building 
new "smart" strategies that may take some time to decide what to do.

{noformat}
"MemtableFlushWriter:21" daemon prio=10 tid=0x7ff7ad965000 nid=0x6693 
waiting for monitor entry [0x7ff78a667000]
   java.lang.Thread.State: BLOCKED (on object monitor)
at 
org.apache.cassandra.db.compaction.WrappingCompactionStrategy.handleNotification(WrappingCompactionStrategy.java:237)
- waiting to lock <0x0006fcdbbf60> (a 
org.apache.cassandra.db.compaction.WrappingCompactionStrategy)
at org.apache.cassandra.db.DataTracker.notifyAdded(DataTracker.java:518)
at 
org.apache.cassandra.db.DataTracker.replaceFlushed(DataTracker.java:178)
at 
org.apache.cassandra.db.compaction.AbstractCompactionStrategy.replaceFlushed(AbstractCompactionStrategy.java:234)
at 
org.apache.cassandra.db.ColumnFamilyStore.replaceFlushed(ColumnFamilyStore.java:1475)
at 
org.apache.cassandra.db.Memtable$FlushRunnable.runMayThrow(Memtable.java:336)
at 
org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28)
at 
com.google.common.util.concurrent.MoreExecutors$SameThreadExecutorService.execute(MoreExecutors.java:297)
at 
org.apache.cassandra.db.ColumnFamilyStore$Flush.run(ColumnFamilyStore.java:1127)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)

   Locked ownable synchronizers:
- <0x000743b3ac38> (a 
java.util.concurrent.ThreadPoolExecutor$Worker)

"MemtableFlushWriter:19" daemon prio=10 tid=0x7ff7ac57a000 nid=0x649b 
waiting for monitor entry [0x7ff78b8ee000]
   java.lang.Thread.State: BLOCKED (on object monitor)
at 
org.apache.cassandra.db.compaction.WrappingCompactionStrategy.handleNotification(WrappingCompactionStrategy.java:237)
- waiting to lock <0x0006fcdbbf60> (a 
org.apache.cassandra.db.compaction.WrappingCompactionStrategy)
at org.apache.cassandra.db.DataTracker.notifyAdded(DataTracker.java:518)
at 
org.apache.cassandra.db.DataTracker.replaceFlushed(DataTracker.java:178)
at 
org.apache.cassandra.db.compaction.AbstractCompactionStrategy.replaceFlushed(AbstractCompactionStrategy.java:234)
at 
org.apache.cassandra.db.ColumnFamilyStore.replaceFlushed(ColumnFamilyStore.java:1475)
at 
org.apache.cassandra.db.Memtable$FlushRunnable.runMayThrow(Memtable.java:336)
at 
org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28)
at 
com.google.common.util.concurrent.MoreExecutors$SameThreadExecutorService.execute(MoreExecutors.java:297)
at 
org.apache.cassandra.db.ColumnFamilyStore$Flush.run(ColumnFamilyStore.java:1127)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)

"CompactionExecutor:14" daemon prio=10 tid=0x7ff7ad359800 nid=0x4d59 
runnable [0x7fecce3ea000]
   java.lang.Thread.State: RUNNABLE
at 
org.apache.cassandra.io.sstable.SSTableReader.equals(SSTableReader.java:628)
at 
com.google.common.collect.ImmutableSet.construct(ImmutableSet.java:206)
at 
com.google.common.collect.ImmutableSet.construct(ImmutableSet.java:220)
at 
com.google.common.collect.ImmutableSet.access$000(ImmutableSet.java:74)
at 
com.google.common.collect.ImmutableSet$Builder.build(ImmutableSet.java:531)
at com.google.common.collect.Sets$1.immutableCopy(Sets.java:606)
at 
org.apache.cassandra.db.ColumnFamilyStore.getOverlappingSSTables(ColumnFamilyStore.java:1352)
at 
org.apache.cassandra.db.compaction.DateTieredCompactionStrategy.getNextBackgroundSSTables(DateTieredCompactionStrategy.java:88)
at 
org.apache.cassandra.db.compaction.DateTieredCompactionStrategy.getNextBackgroundTask(DateTieredCompactionStrategy.java:65)
- locked <0x0006fcdbbf00> (a 
org.apache.cassandra.db.compaction.DateTieredCompactionStrategy)
at 
org.apache.cassandra.db.compaction.WrappingCompactionStrategy.getNextBackgroundTask(WrappingCompactionStrategy.java:72)
- l
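
The blocked/blocking pattern in the thread dump above can be sketched as follows (a minimal illustration with hypothetical names, not the real Cassandra classes): the flush-notification path and the background-compaction candidate search take the same monitor, so a slow candidate search stalls every flush thread behind it.

```java
// Sketch only: class and method names are simplified stand-ins for
// WrappingCompactionStrategy / DateTieredCompactionStrategy.
public class StrategyLockSketch {
    private final Object strategyLock = new Object();

    // Invoked from MemtableFlushWriter threads when a flushed sstable is added.
    void handleNotification() {
        synchronized (strategyLock) {
            // update tracked sstable lists (normally fast)
        }
    }

    // Invoked from CompactionExecutor threads.
    Runnable getNextBackgroundTask() {
        synchronized (strategyLock) {
            // With ~200k sstables, scanning for overlapping candidates here
            // can run long enough that every flush thread blocks above.
            return null; // no compaction candidate in this sketch
        }
    }
}
```

Moving the candidate search outside the strategy monitor, or bounding its cost, would keep flushes from queueing behind it.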

[jira] [Commented] (CASSANDRA-9498) If more than 65K columns, sparse layout will break

2015-07-23 Thread Benedict (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9498?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14639369#comment-14639369
 ] 

Benedict commented on CASSANDRA-9498:
-

We need to switch {{writeShort}} to {{writeUnsignedVInt}} in a few places. This 
actually affects more than sparse layout, with {{Columns.Serializer}} also 
needing updating.

We should probably reconsider all of our uses of {{writeShort}}, as they may 
all do better vint-encoded.
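
For comparison, a hedged sketch of an unsigned vint (a LEB128-style encoding; the exact scheme Cassandra settled on may differ): small counts shrink from {{writeShort}}'s fixed two bytes to one, and values above 65535, which {{writeShort}} would silently truncate, still round-trip.

```java
import java.io.ByteArrayOutputStream;

public class VIntSketch {
    // Encode 7 bits per byte, least-significant group first,
    // with the high bit set while more bytes follow.
    static byte[] writeUnsignedVInt(long value) {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        while ((value & ~0x7FL) != 0) {
            out.write((int) ((value & 0x7FL) | 0x80L));
            value >>>= 7;
        }
        out.write((int) value);
        return out.toByteArray();
    }
}
```

A column count of 3 encodes in one byte; 70000 (beyond writeShort's 65535 limit) encodes in three.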

> If more than 65K columns, sparse layout will break
> --
>
> Key: CASSANDRA-9498
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9498
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
>Reporter: Benedict
>Priority: Minor
> Fix For: 3.0 beta 1
>
>
> Follow up to CASSANDRA-8099. It is a relatively small bug, since the exposed 
> population of users is likely to be very low, but fixing it in a good way is 
> a bit tricky. I'm filing a separate JIRA, because I would like us to address 
> this by introducing a writeVInt method to DataOutputStreamPlus, that we can 
> also exploit to improve the encoding of timestamps and deletion times, and 
> this JIRA will help to track the dependencies.





[jira] [Commented] (CASSANDRA-9402) Implement proper sandboxing for UDFs

2015-07-23 Thread Robert Stupp (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9402?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14639364#comment-14639364
 ] 

Robert Stupp commented on CASSANDRA-9402:
-

Pushed two more commits:
* some more tests (against NIO classes)
* a separate package for each UDF, with a random component in the package 
name, plus a random part added to the class name
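
A hedged sketch of that naming scheme (hypothetical helper; the real generated names in {{JavaBasedUDFunction}} may be shaped differently): a random hex component in both the package and the class name makes it impractical for one generated UDF to guess and reference another.

```java
import java.security.SecureRandom;

public class UdfNamingSketch {
    private static final SecureRandom RANDOM = new SecureRandom();
    private static final String BASE_PACKAGE = "org.apache.cassandra.cql3.udf.gen";

    // Produces names such as org.apache.cassandra.cql3.udf.gen.x1a2b3c.C4d5e6f,
    // different on every call.
    static String generatedClassName() {
        return BASE_PACKAGE + ".x" + randomHex() + ".C" + randomHex();
    }

    private static String randomHex() {
        return Long.toHexString(RANDOM.nextLong() & Long.MAX_VALUE);
    }
}
```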


> Implement proper sandboxing for UDFs
> 
>
> Key: CASSANDRA-9402
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9402
> Project: Cassandra
>  Issue Type: Task
>Reporter: T Jake Luciani
>Assignee: Robert Stupp
>Priority: Critical
>  Labels: docs-impacting, security
> Fix For: 3.0 beta 1
>
> Attachments: 9402-warning.txt
>
>
> We want to avoid a security exploit for our users.  We need to make sure we 
> ship 2.2 UDFs with good defaults, so someone exposing them to the internet 
> doesn't accidentally open themselves up to arbitrary code execution.





cassandra git commit: Update the bundled java driver to fix failing pig tests

2015-07-23 Thread aleksey
Repository: cassandra
Updated Branches:
  refs/heads/trunk 1f5c4f7ba -> 7b35e3e84


Update the bundled java driver to fix failing pig tests

patch by Aleksey Yeschenko for CASSANDRA-9841


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/7b35e3e8
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/7b35e3e8
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/7b35e3e8

Branch: refs/heads/trunk
Commit: 7b35e3e843bb3a8e1858051054e00a612e32774c
Parents: 1f5c4f7
Author: Aleksey Yeschenko 
Authored: Thu Jul 23 21:59:25 2015 +0300
Committer: Aleksey Yeschenko 
Committed: Thu Jul 23 22:00:17 2015 +0300

--
 ...ra-driver-core-2.2.0-rc2-SNAPSHOT-shaded.jar | Bin 2163222 -> 2163939 bytes
 1 file changed, 0 insertions(+), 0 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/7b35e3e8/lib/cassandra-driver-core-2.2.0-rc2-SNAPSHOT-shaded.jar
--
diff --git a/lib/cassandra-driver-core-2.2.0-rc2-SNAPSHOT-shaded.jar 
b/lib/cassandra-driver-core-2.2.0-rc2-SNAPSHOT-shaded.jar
index 0d626f5..edb926d 100644
Binary files a/lib/cassandra-driver-core-2.2.0-rc2-SNAPSHOT-shaded.jar and 
b/lib/cassandra-driver-core-2.2.0-rc2-SNAPSHOT-shaded.jar differ



[jira] [Comment Edited] (CASSANDRA-9871) Cannot replace token does not exist - DN node removed as Fat Client

2015-07-23 Thread Jason Brown (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9871?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14639334#comment-14639334
 ] 

Jason Brown edited comment on CASSANDRA-9871 at 7/23/15 6:45 PM:
-

FWIW, I used [~Stefania]'s dtest on 2.1 and was able to reproduce the error 
(new node could not replace the dead one); however, 2.0 did not reproduce the 
error (the new node successfully replaced the dead one). UPDATE: This was done 
with the gentle shutdown option.


was (Author: jasobrown):
FWIW, I used [~Stefania]'s dtest on 2.1 and was able to reproduce the error 
(new node could not replace the dead one); however, 2.0 did not reproduce the 
error (the new node successfully replaced the dead one)

> Cannot replace token does not exist - DN node removed as Fat Client
> ---
>
> Key: CASSANDRA-9871
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9871
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Sebastian Estevez
>Assignee: Stefania
> Fix For: 2.1.x
>
>
> We lost a node due to disk failure, we tried to replace it via 
> -Dcassandra.replace_address per -- 
> http://docs.datastax.com/en/cassandra/2.1/cassandra/operations/opsReplaceNode.html
> The node would not come up with these errors in the system.log:
> {code}
> INFO  [main] 2015-07-22 03:20:06,722  StorageService.java:500 - Gathering 
> node replacement information for /10.171.115.233
> ...
> INFO  [SharedPool-Worker-1] 2015-07-22 03:22:34,281  Gossiper.java:954 - 
> InetAddress /10.111.183.101 is now UP
> INFO  [GossipTasks:1] 2015-07-22 03:22:59,300  Gossiper.java:735 - FatClient 
> /10.171.115.233 has been silent for 3ms, removing from gossip
> ERROR [main] 2015-07-22 03:23:28,485  CassandraDaemon.java:541 - Exception 
> encountered during startup
> java.lang.UnsupportedOperationException: Cannot replace token 
> -1013652079972151677 which does not exist!
> {code}
> It is not clear why Gossiper removed the node as a FatClient, given that it 
> was a full node before it died and it had tokens assigned to it (including 
> -1013652079972151677) in system.peers and nodetool ring. 





[jira] [Comment Edited] (CASSANDRA-9871) Cannot replace token does not exist - DN node removed as Fat Client

2015-07-23 Thread Jason Brown (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9871?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14639334#comment-14639334
 ] 

Jason Brown edited comment on CASSANDRA-9871 at 7/23/15 6:45 PM:
-

FWIW, I used [~Stefania]'s dtest on 2.1 and was able to reproduce the error 
(new node could not replace the dead one); however, 2.0 did not reproduce the 
error (the new node successfully replaced the dead one). UPDATE: This was done 
with the gently = true shutdown option.


was (Author: jasobrown):
FWIW, I used [~Stefania]'s dtest on 2.1 and was able to reproduce the error 
(new node could not replace the dead one); however, 2.0 did not reproduce the 
error (the new node successfully replaced the dead one). UPDATE: This was done 
with the gentle shutdown option.

> Cannot replace token does not exist - DN node removed as Fat Client
> ---
>
> Key: CASSANDRA-9871
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9871
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Sebastian Estevez
>Assignee: Stefania
> Fix For: 2.1.x
>
>
> We lost a node due to disk failure, we tried to replace it via 
> -Dcassandra.replace_address per -- 
> http://docs.datastax.com/en/cassandra/2.1/cassandra/operations/opsReplaceNode.html
> The node would not come up with these errors in the system.log:
> {code}
> INFO  [main] 2015-07-22 03:20:06,722  StorageService.java:500 - Gathering 
> node replacement information for /10.171.115.233
> ...
> INFO  [SharedPool-Worker-1] 2015-07-22 03:22:34,281  Gossiper.java:954 - 
> InetAddress /10.111.183.101 is now UP
> INFO  [GossipTasks:1] 2015-07-22 03:22:59,300  Gossiper.java:735 - FatClient 
> /10.171.115.233 has been silent for 3ms, removing from gossip
> ERROR [main] 2015-07-22 03:23:28,485  CassandraDaemon.java:541 - Exception 
> encountered during startup
> java.lang.UnsupportedOperationException: Cannot replace token 
> -1013652079972151677 which does not exist!
> {code}
> It is not clear why Gossiper removed the node as a FatClient, given that it 
> was a full node before it died and it had tokens assigned to it (including 
> -1013652079972151677) in system.peers and nodetool ring. 





[jira] [Created] (CASSANDRA-9881) Rows with negative-sized keys can't be skipped by sstablescrub

2015-07-23 Thread Brandon Williams (JIRA)
Brandon Williams created CASSANDRA-9881:
---

 Summary: Rows with negative-sized keys can't be skipped by 
sstablescrub
 Key: CASSANDRA-9881
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9881
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Reporter: Brandon Williams
Priority: Minor
 Fix For: 2.1.x


It is possible to have corruption in such a way that scrub (online or offline) 
can't skip the row, so you end up in a loop where this just keeps repeating:

{noformat}
WARNING: Row starting at position 2087453 is unreadable; skipping to next 
Reading row at 2087453 
row (unreadable key) is -1 bytes
{noformat}

The workaround is to just delete the problem sstable since you were going to 
have to repair anyway, but it would still be nice to salvage the rest of the 
sstable.
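
A minimal sketch of why the loop happens (hypothetical names; the real logic lives in the scrubber): skipping forward relies on the corrupt row's recorded length, so a negative length yields no forward progress.

```java
public class ScrubSkipSketch {
    // Returns the position of the next row, or -1 when the recorded key length
    // is corrupt and the row cannot be skipped by length alone.
    static long nextRowPosition(long rowStart, int recordedKeyLength) {
        if (recordedKeyLength < 0)
            // Matches "row (unreadable key) is -1 bytes": without a valid
            // length (or an index entry to jump to), the scrubber re-reads
            // the same position forever.
            return -1;
        return rowStart + recordedKeyLength;
    }
}
```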





[jira] [Commented] (CASSANDRA-9871) Cannot replace token does not exist - DN node removed as Fat Client

2015-07-23 Thread Jason Brown (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9871?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14639334#comment-14639334
 ] 

Jason Brown commented on CASSANDRA-9871:


FWIW, I used [~Stefania]'s dtest on 2.1 and was able to reproduce the error 
(new node could not replace the dead one); however, 2.0 did not reproduce the 
error (the new node successfully replaced the dead one)

> Cannot replace token does not exist - DN node removed as Fat Client
> ---
>
> Key: CASSANDRA-9871
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9871
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Sebastian Estevez
>Assignee: Stefania
> Fix For: 2.1.x
>
>
> We lost a node due to disk failure, we tried to replace it via 
> -Dcassandra.replace_address per -- 
> http://docs.datastax.com/en/cassandra/2.1/cassandra/operations/opsReplaceNode.html
> The node would not come up with these errors in the system.log:
> {code}
> INFO  [main] 2015-07-22 03:20:06,722  StorageService.java:500 - Gathering 
> node replacement information for /10.171.115.233
> ...
> INFO  [SharedPool-Worker-1] 2015-07-22 03:22:34,281  Gossiper.java:954 - 
> InetAddress /10.111.183.101 is now UP
> INFO  [GossipTasks:1] 2015-07-22 03:22:59,300  Gossiper.java:735 - FatClient 
> /10.171.115.233 has been silent for 3ms, removing from gossip
> ERROR [main] 2015-07-22 03:23:28,485  CassandraDaemon.java:541 - Exception 
> encountered during startup
> java.lang.UnsupportedOperationException: Cannot replace token 
> -1013652079972151677 which does not exist!
> {code}
> It is not clear why Gossiper removed the node as a FatClient, given that it 
> was a full node before it died and it had tokens assigned to it (including 
> -1013652079972151677) in system.peers and nodetool ring. 





[jira] [Commented] (CASSANDRA-9871) Cannot replace token does not exist - DN node removed as Fat Client

2015-07-23 Thread Jason Brown (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9871?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14639322#comment-14639322
 ] 

Jason Brown commented on CASSANDRA-9871:


As part of understanding this: when we do the shadow round from 
{{SS.prepareReplacementInfo}}, Gossiper will not update TMD (via {{SS.onJoin}}, 
which calls {{SS.onChange}} and so on), as there are no registered subscribers 
yet. So I can see how we would not have any previous entry in TMD for the node 
being replaced, thus causing the failure in {{SS.joinTokenRing}}. Still digging 
deeper...
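
The subscriber gap can be sketched like this (hypothetical names standing in for Gossiper / IEndpointStateChangeSubscriber): with no subscribers registered during the shadow round, state changes are delivered to nobody, so token metadata never learns the dead node's tokens.

```java
import java.util.ArrayList;
import java.util.List;

public class GossiperSketch {
    interface StateChangeSubscriber { void onJoin(String endpoint); }

    private final List<StateChangeSubscriber> subscribers = new ArrayList<>();

    void register(StateChangeSubscriber subscriber) { subscribers.add(subscriber); }

    // During the shadow round `subscribers` is still empty, so this loop is a
    // no-op and nothing ever reaches TokenMetadata for the replaced node.
    void notifyJoin(String endpoint) {
        for (StateChangeSubscriber s : subscribers)
            s.onJoin(endpoint);
    }
}
```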

> Cannot replace token does not exist - DN node removed as Fat Client
> ---
>
> Key: CASSANDRA-9871
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9871
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Sebastian Estevez
>Assignee: Stefania
> Fix For: 2.1.x
>
>
> We lost a node due to disk failure, we tried to replace it via 
> -Dcassandra.replace_address per -- 
> http://docs.datastax.com/en/cassandra/2.1/cassandra/operations/opsReplaceNode.html
> The node would not come up with these errors in the system.log:
> {code}
> INFO  [main] 2015-07-22 03:20:06,722  StorageService.java:500 - Gathering 
> node replacement information for /10.171.115.233
> ...
> INFO  [SharedPool-Worker-1] 2015-07-22 03:22:34,281  Gossiper.java:954 - 
> InetAddress /10.111.183.101 is now UP
> INFO  [GossipTasks:1] 2015-07-22 03:22:59,300  Gossiper.java:735 - FatClient 
> /10.171.115.233 has been silent for 3ms, removing from gossip
> ERROR [main] 2015-07-22 03:23:28,485  CassandraDaemon.java:541 - Exception 
> encountered during startup
> java.lang.UnsupportedOperationException: Cannot replace token 
> -1013652079972151677 which does not exist!
> {code}
> It is not clear why Gossiper removed the node as a FatClient, given that it 
> was a full node before it died and it had tokens assigned to it (including 
> -1013652079972151677) in system.peers and nodetool ring. 





[jira] [Commented] (CASSANDRA-9880) ScrubTest.testScrubOutOfOrder should generate test file on the fly

2015-07-23 Thread Yuki Morishita (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9880?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14639324#comment-14639324
 ] 

Yuki Morishita commented on CASSANDRA-9880:
---

testall: http://cassci.datastax.com/job/yukim-9880-testall/lastBuild/testReport/

ScrubTest passed.

> ScrubTest.testScrubOutOfOrder should generate test file on the fly
> --
>
> Key: CASSANDRA-9880
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9880
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Yuki Morishita
>Assignee: Yuki Morishita
>Priority: Blocker
>  Labels: test-failure
> Fix For: 3.0 beta 1
>
>
> ScrubTest#testScrubOutOfOrder is failing on trunk due to the serialization 
> format change from the pre-generated out-of-order SSTable.
> We should change it to generate the out-of-order SSTable on the fly, so that 
> we don't need to bother generating SSTables by hand again.





[jira] [Comment Edited] (CASSANDRA-9871) Cannot replace token does not exist - DN node removed as Fat Client

2015-07-23 Thread Jason Brown (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9871?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14639322#comment-14639322
 ] 

Jason Brown edited comment on CASSANDRA-9871 at 7/23/15 6:30 PM:
-

As part of understanding this: when we do the shadow round from 
{{SS.prepareReplacementInfo}}, Gossiper will not update TMD (via {{SS.onJoin}}, 
which calls {{SS.onChange}} and so on down to {{TMD.updateNormalTokens}}), as 
there are no registered subscribers yet. So I can see how we would not have any 
previous entry in TMD for the node being replaced, thus causing the failure in 
{{SS.joinTokenRing}}. Still digging deeper...


was (Author: jasobrown):
As part of understanding this, when we do the shadow round from 
{{SS.prepareReplacementInfo}}, Gossiper will not update TMD (via {{SS.onJoin}}, 
which calls {{SS.onChange}} and so) as there are no registered subscriber yet. 
So I can see how we would not have any previous entry in TMD for the node being 
replaced, thus causing the failure in {{SS.joinTokenRing}}. Still digging in 
deeper...

> Cannot replace token does not exist - DN node removed as Fat Client
> ---
>
> Key: CASSANDRA-9871
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9871
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Sebastian Estevez
>Assignee: Stefania
> Fix For: 2.1.x
>
>
> We lost a node due to disk failure, we tried to replace it via 
> -Dcassandra.replace_address per -- 
> http://docs.datastax.com/en/cassandra/2.1/cassandra/operations/opsReplaceNode.html
> The node would not come up with these errors in the system.log:
> {code}
> INFO  [main] 2015-07-22 03:20:06,722  StorageService.java:500 - Gathering 
> node replacement information for /10.171.115.233
> ...
> INFO  [SharedPool-Worker-1] 2015-07-22 03:22:34,281  Gossiper.java:954 - 
> InetAddress /10.111.183.101 is now UP
> INFO  [GossipTasks:1] 2015-07-22 03:22:59,300  Gossiper.java:735 - FatClient 
> /10.171.115.233 has been silent for 3ms, removing from gossip
> ERROR [main] 2015-07-22 03:23:28,485  CassandraDaemon.java:541 - Exception 
> encountered during startup
> java.lang.UnsupportedOperationException: Cannot replace token 
> -1013652079972151677 which does not exist!
> {code}
> It is not clear why Gossiper removed the node as a FatClient, given that it 
> was a full node before it died and it had tokens assigned to it (including 
> -1013652079972151677) in system.peers and nodetool ring. 





[jira] [Commented] (CASSANDRA-9402) Implement proper sandboxing for UDFs

2015-07-23 Thread Robert Stupp (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9402?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14639316#comment-14639316
 ] 

Robert Stupp commented on CASSANDRA-9402:
-

All Java UDFs land in 
{{org.apache.cassandra.cql3.functions.JavaBasedUDFunction#GENERATED_PACKAGE}} 
({{org.apache.cassandra.cql3.udf.gen}}), so it's not a big problem.
But I'll cross-check whether Java UDFs can access each other.

> Implement proper sandboxing for UDFs
> 
>
> Key: CASSANDRA-9402
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9402
> Project: Cassandra
>  Issue Type: Task
>Reporter: T Jake Luciani
>Assignee: Robert Stupp
>Priority: Critical
>  Labels: docs-impacting, security
> Fix For: 3.0 beta 1
>
> Attachments: 9402-warning.txt
>
>
> We want to avoid a security exploit for our users.  We need to make sure we 
> ship 2.2 UDFs with good defaults so someone exposing it to the internet 
> doesn't accidentally open themselves up to having arbitrary code run.





[jira] [Commented] (CASSANDRA-9402) Implement proper sandboxing for UDFs

2015-07-23 Thread T Jake Luciani (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9402?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14639310#comment-14639310
 ] 

T Jake Luciani commented on CASSANDRA-9402:
---

One other issue is the fact that we don't seal our jars, so someone could 
implement a bad method in a whitelisted package name.

http://docs.oracle.com/javase/tutorial/deployment/jar/sealman.html
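For concreteness, sealing is declared per package in the jar's MANIFEST.MF, and a sealed package rejects classes for that package loaded from any other jar. A minimal illustrative entry (the package name below is just an example, not a proposed change):

```
Name: org/apache/cassandra/cql3/functions/
Sealed: true
```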

> Implement proper sandboxing for UDFs
> 
>
> Key: CASSANDRA-9402
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9402
> Project: Cassandra
>  Issue Type: Task
>Reporter: T Jake Luciani
>Assignee: Robert Stupp
>Priority: Critical
>  Labels: docs-impacting, security
> Fix For: 3.0 beta 1
>
> Attachments: 9402-warning.txt
>
>
> We want to avoid a security exploit for our users.  We need to make sure we 
> ship 2.2 UDFs with good defaults so someone exposing it to the internet 
> doesn't accidentally open themselves up to having arbitrary code run.





[jira] [Updated] (CASSANDRA-9828) Minor improvements to RowStats

2015-07-23 Thread Jonathan Ellis (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-9828?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis updated CASSANDRA-9828:
--
Reviewer: Joshua McKenzie

> Minor improvements to RowStats
> --
>
> Key: CASSANDRA-9828
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9828
> Project: Cassandra
>  Issue Type: Sub-task
>  Components: Core
>Reporter: Sylvain Lebresne
>Assignee: Sylvain Lebresne
>Priority: Minor
> Fix For: 3.0 beta 1
>
>
> There are some small improvements/refactors I'd like to do for {{RowStats}}. 
> More specifically, I'm attaching 3 commits:
> # the first one merely renames {{RowStats}} to {{EncodingStats}}. {{RowStats}} 
> was not a terribly helpful name, while {{EncodingStats}} at least gives a sense 
> of why the thing exists.
> # the 2nd one improves the serialization of those {{EncodingStats}}. 
> {{EncodingStats}} holds both a {{minTimestamp}} and a 
> {{minLocalDeletionTime}}, both of which are unix timestamps (or at least 
> should be almost all the time for the timestamp, by convention) and so are 
> fairly big numbers that don't get much love (if any) from vint encoding. So 
> the patch introduces hard-coded epoch numbers for both that roughly 
> correspond to now, and subtracts them from the actual {{EncodingStats}} numbers 
> to make them more ripe for vint encoding. It does mean the exact encoding size 
> will deteriorate over time, but it'll take a while before it becomes useless, 
> and we'll probably have made more changes to the encodings by then anyway 
> (and/or we can change the epoch number regularly with new versions of the 
> messaging protocol if we so wish).
> # the last patch is just a small simple cleanup.
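To illustrate why subtracting an epoch helps: a vint spends roughly one byte per 7 bits of magnitude, so shaving the high bits off a microsecond timestamp saves a byte or more per value. A self-contained sketch (the epoch constant, timestamp, and byte counting below are illustrative assumptions, not Cassandra's actual encoding):

```java
// Sketch of epoch-delta encoding: subtracting a hard-coded epoch close to
// "now" before vint-encoding shrinks the serialized size of timestamps.
public class EpochVIntSketch
{
    // Hypothetical epoch: 2015-01-01 00:00:00 UTC, in microseconds.
    static final long TIMESTAMP_EPOCH_MICROS = 1420070400000000L;

    // Count how many bytes a standard 7-bits-per-byte unsigned vint needs.
    static int vintSize(long value)
    {
        int size = 1;
        while ((value >>>= 7) != 0)
            size++;
        return size;
    }

    public static void main(String[] args)
    {
        long timestampMicros = 1437674400000000L; // a timestamp in July 2015
        int raw = vintSize(timestampMicros);
        int delta = vintSize(timestampMicros - TIMESTAMP_EPOCH_MICROS);
        System.out.println("raw vint bytes: " + raw + ", delta vint bytes: " + delta);
    }
}
```

The saving shrinks as real timestamps drift away from the hard-coded epoch, which is exactly the "encoding size will deteriorate over time" trade-off described above.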





[jira] [Updated] (CASSANDRA-9847) Don't serialize CFMetaData in read responses

2015-07-23 Thread Jonathan Ellis (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-9847?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis updated CASSANDRA-9847:
--
Reviewer: Joshua McKenzie

> Don't serialize CFMetaData in read responses
> 
>
> Key: CASSANDRA-9847
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9847
> Project: Cassandra
>  Issue Type: Sub-task
>  Components: Core
>Reporter: Sylvain Lebresne
>Assignee: Sylvain Lebresne
> Fix For: 3.0 beta 1
>
>
> Our CFMetaData ids are 16 bytes long, which for small messages is a 
> non-trivial part of the size (further, we currently unnecessarily serialize it 
> with every partition). At least for read responses, we don't really need to 
> serialize it at all, since we always know which query this is a response to.





[jira] [Updated] (CASSANDRA-9799) RangeTombstonListTest sometimes fails on trunk

2015-07-23 Thread Jonathan Ellis (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-9799?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis updated CASSANDRA-9799:
--
Reviewer: Joshua McKenzie

> RangeTombstonListTest sometimes fails on trunk
> --
>
> Key: CASSANDRA-9799
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9799
> Project: Cassandra
>  Issue Type: Test
>Reporter: Sylvain Lebresne
>Assignee: Sylvain Lebresne
>  Labels: test
> Fix For: 3.0 beta 1
>
>
> I've seen random failures with {{RangeTombstoneList.addAllRandomTest}}. The 
> problem is 2 inequalities in {{RangeTombstoneList.insertFrom}} that should be 
> inclusive rather than strict when we deal with boundaries between ranges. In 
> practice, that makes us consider ranges like {{[3, 3)}} during addition, which 
> is nonsensical.
> Attaching a patch as well as a test that reproduces the issue (extracted from 
> {{addAllRandomTest}} with a failing seed).
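To see why the strict inequality is the problem, here is a hypothetical simplification (not the actual {{insertFrom}} code): when a new range ends exactly where an existing one does, only the inclusive comparison recognizes the leftover piece as empty.

```java
// Hypothetical simplification of the boundary handling described above.
// When a new range ends at newEnd and the covered existing range ends at
// existingEnd, the leftover piece is [existingEnd, newEnd). With a strict
// comparison, existingEnd == newEnd slips through and emits the empty
// range [3, 3); the inclusive comparison correctly skips it.
import java.util.ArrayList;
import java.util.List;

public class EmptyRangeSketch
{
    static List<int[]> leftover(int existingEnd, int newEnd, boolean inclusive)
    {
        List<int[]> out = new ArrayList<>();
        // an empty leftover has newEnd <= existingEnd; the strict version only
        // rejects newEnd < existingEnd, letting [existingEnd, existingEnd) through
        boolean empty = inclusive ? newEnd <= existingEnd : newEnd < existingEnd;
        if (!empty)
            out.add(new int[]{ existingEnd, newEnd });
        return out;
    }

    public static void main(String[] args)
    {
        System.out.println(leftover(3, 3, false).size()); // strict: emits the nonsensical [3, 3)
        System.out.println(leftover(3, 3, true).size());  // inclusive: empty range skipped
    }
}
```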





[jira] [Updated] (CASSANDRA-9717) TestCommitLog segment size dtests fail on trunk

2015-07-23 Thread Jonathan Ellis (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-9717?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis updated CASSANDRA-9717:
--
Assignee: Jim Witschey  (was: Branimir Lambov)
Reviewer: Ariel Weisberg

> TestCommitLog segment size dtests fail on trunk
> ---
>
> Key: CASSANDRA-9717
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9717
> Project: Cassandra
>  Issue Type: Sub-task
>Reporter: Jim Witschey
>Assignee: Jim Witschey
>Priority: Blocker
> Fix For: 3.0 beta 1
>
>
> The test for the commit log segment size fails when the specified size is 
> 32MB. It fails for me locally and on cassci. ([cassci 
> link|http://cassci.datastax.com/view/trunk/job/trunk_dtest/305/testReport/commitlog_test/TestCommitLog/default_segment_size_test/])
> The command to run the test by itself is {{CASSANDRA_VERSION=git:trunk 
> nosetests commitlog_test.py:TestCommitLog.default_segment_size_test}}.
> EDIT: a similar test, 
> {{commitlog_test.py:TestCommitLog.small_segment_size_test}}, also fails with 
> a similar error.
> The solution here may just be to change the expected size or the acceptable 
> error -- the result isn't far off. I'm happy to make the dtest change if 
> that's the solution.





[jira] [Commented] (CASSANDRA-9841) trunk pig-test fails

2015-07-23 Thread Aleksey Yeschenko (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9841?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14639237#comment-14639237
 ] 

Aleksey Yeschenko commented on CASSANDRA-9841:
--

Assigning to myself, this being too cassci-annoying.

> trunk pig-test fails
> 
>
> Key: CASSANDRA-9841
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9841
> Project: Cassandra
>  Issue Type: Bug
>  Components: Hadoop, Tests
> Environment: trunk HEAD
> Debian Jessie 64-bit
> AWS m3-2xlarge
>Reporter: Michael Shuler
>Assignee: Aleksey Yeschenko
>  Labels: test-failure
> Fix For: 3.0.0 rc1
>
>
> {noformat}
> pig-test:
> [mkdir] Created dir: 
> /var/lib/jenkins/jobs/trunk_pigtest/workspace/build/test/cassandra
> [mkdir] Created dir: 
> /var/lib/jenkins/jobs/trunk_pigtest/workspace/build/test/output
> [junit] WARNING: multiple versions of ant detected in path for junit 
> [junit]  
> jar:file:/usr/share/ant/lib/ant.jar!/org/apache/tools/ant/Project.class
> [junit]  and 
> jar:file:/var/lib/jenkins/jobs/trunk_pigtest/workspace/build/lib/jars/ant-1.8.3.jar!/org/apache/tools/ant/Project.class
> [junit] Testsuite: org.apache.cassandra.pig.CqlRecordReaderTest
> [junit] Testsuite: org.apache.cassandra.pig.CqlRecordReaderTest Tests 
> run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 37.799 sec
> [junit] 
> [junit] Testsuite: org.apache.cassandra.pig.CqlTableDataTypeTest
> [junit] Testsuite: org.apache.cassandra.pig.CqlTableDataTypeTest Tests 
> run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 44.627 sec
> [junit] 
> [junit] Testsuite: org.apache.cassandra.pig.CqlTableTest
> [junit] 
> [junit] Exception: java.lang.IllegalStateException thrown from the 
> UncaughtExceptionHandler in thread "cluster15357-connection-reaper-0"
> [junit] Testsuite: org.apache.cassandra.pig.CqlTableTest
> [junit] Testsuite: org.apache.cassandra.pig.CqlTableTest Tests run: 1, 
> Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 0 sec
> [junit] 
> [junit] Testcase: 
> org.apache.cassandra.pig.CqlTableTest:testCqlNativeStorageSingleKeyTable: 
>   Caused an ERROR
> [junit] Forked Java VM exited abnormally. Please note the time in the 
> report does not reflect the time until the VM exit.
> [junit] junit.framework.AssertionFailedError: Forked Java VM exited 
> abnormally. Please note the time in the report does not reflect the time 
> until the VM exit.
> [junit] 
> [junit] 
> [junit] Test org.apache.cassandra.pig.CqlTableTest FAILED (crashed)
> [junit] Testsuite: org.apache.cassandra.pig.ThriftColumnFamilyDataTypeTest
> [junit] Testsuite: 
> org.apache.cassandra.pig.ThriftColumnFamilyDataTypeTest Tests run: 1, 
> Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 19.889 sec
> [junit] 
> [junit] Testcase: 
> testCassandraStorageDataType(org.apache.cassandra.pig.ThriftColumnFamilyDataTypeTest):
>   Caused an ERROR
> [junit] Unable to open iterator for alias rows
> [junit] org.apache.pig.impl.logicalLayer.FrontendException: ERROR 1066: 
> Unable to open iterator for alias rows
> [junit]   at org.apache.pig.PigServer.openIterator(PigServer.java:882)
> [junit]   at 
> org.apache.cassandra.pig.ThriftColumnFamilyDataTypeTest.testCassandraStorageDataType(ThriftColumnFamilyDataTypeTest.java:81)
> [junit] Caused by: java.io.IOException: Job terminated with anomalous 
> status FAILED
> [junit]   at org.apache.pig.PigServer.openIterator(PigServer.java:874)
> [junit] 
> [junit] 
> [junit] Test org.apache.cassandra.pig.ThriftColumnFamilyDataTypeTest 
> FAILED
> [junit] Testsuite: org.apache.cassandra.pig.ThriftColumnFamilyTest
> [junit] Testsuite: org.apache.cassandra.pig.ThriftColumnFamilyTest
> [junit] Testsuite: org.apache.cassandra.pig.ThriftColumnFamilyTest Tests 
> run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 0 sec
> [junit] 
> [junit] Testcase: 
> org.apache.cassandra.pig.ThriftColumnFamilyTest:testCqlNativeStorageCompositeKeyCF:
>  Caused an ERROR
> [junit] Forked Java VM exited abnormally. Please note the time in the 
> report does not reflect the time until the VM exit.
> [junit] junit.framework.AssertionFailedError: Forked Java VM exited 
> abnormally. Please note the time in the report does not reflect the time 
> until the VM exit.
> [junit] 
> [junit] 
> [junit] Test org.apache.cassandra.pig.ThriftColumnFamilyTest FAILED 
> (crashed)
> [junitreport] Processing 
> /var/lib/jenkins/jobs/trunk_pigtest/workspace/build/test/TESTS-TestSuites.xml 
> to /tmp/null1591595172
> [junitreport] Loading stylesheet 
> jar:file:/usr/share/ant/lib/ant-junit.jar!/org/apache/tools/ant/taskdefs/optional/junit/xsl/junit-frames

[jira] [Assigned] (CASSANDRA-9841) trunk pig-test fails

2015-07-23 Thread Aleksey Yeschenko (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-9841?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksey Yeschenko reassigned CASSANDRA-9841:


Assignee: Aleksey Yeschenko

> trunk pig-test fails
> 
>
> Key: CASSANDRA-9841
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9841
> Project: Cassandra
>  Issue Type: Bug
>  Components: Hadoop, Tests
> Environment: trunk HEAD
> Debian Jessie 64-bit
> AWS m3-2xlarge
>Reporter: Michael Shuler
>Assignee: Aleksey Yeschenko
>  Labels: test-failure
> Fix For: 3.0.0 rc1
>
>
> {noformat}
> pig-test:
> [mkdir] Created dir: 
> /var/lib/jenkins/jobs/trunk_pigtest/workspace/build/test/cassandra
> [mkdir] Created dir: 
> /var/lib/jenkins/jobs/trunk_pigtest/workspace/build/test/output
> [junit] WARNING: multiple versions of ant detected in path for junit 
> [junit]  
> jar:file:/usr/share/ant/lib/ant.jar!/org/apache/tools/ant/Project.class
> [junit]  and 
> jar:file:/var/lib/jenkins/jobs/trunk_pigtest/workspace/build/lib/jars/ant-1.8.3.jar!/org/apache/tools/ant/Project.class
> [junit] Testsuite: org.apache.cassandra.pig.CqlRecordReaderTest
> [junit] Testsuite: org.apache.cassandra.pig.CqlRecordReaderTest Tests 
> run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 37.799 sec
> [junit] 
> [junit] Testsuite: org.apache.cassandra.pig.CqlTableDataTypeTest
> [junit] Testsuite: org.apache.cassandra.pig.CqlTableDataTypeTest Tests 
> run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 44.627 sec
> [junit] 
> [junit] Testsuite: org.apache.cassandra.pig.CqlTableTest
> [junit] 
> [junit] Exception: java.lang.IllegalStateException thrown from the 
> UncaughtExceptionHandler in thread "cluster15357-connection-reaper-0"
> [junit] Testsuite: org.apache.cassandra.pig.CqlTableTest
> [junit] Testsuite: org.apache.cassandra.pig.CqlTableTest Tests run: 1, 
> Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 0 sec
> [junit] 
> [junit] Testcase: 
> org.apache.cassandra.pig.CqlTableTest:testCqlNativeStorageSingleKeyTable: 
>   Caused an ERROR
> [junit] Forked Java VM exited abnormally. Please note the time in the 
> report does not reflect the time until the VM exit.
> [junit] junit.framework.AssertionFailedError: Forked Java VM exited 
> abnormally. Please note the time in the report does not reflect the time 
> until the VM exit.
> [junit] 
> [junit] 
> [junit] Test org.apache.cassandra.pig.CqlTableTest FAILED (crashed)
> [junit] Testsuite: org.apache.cassandra.pig.ThriftColumnFamilyDataTypeTest
> [junit] Testsuite: 
> org.apache.cassandra.pig.ThriftColumnFamilyDataTypeTest Tests run: 1, 
> Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 19.889 sec
> [junit] 
> [junit] Testcase: 
> testCassandraStorageDataType(org.apache.cassandra.pig.ThriftColumnFamilyDataTypeTest):
>   Caused an ERROR
> [junit] Unable to open iterator for alias rows
> [junit] org.apache.pig.impl.logicalLayer.FrontendException: ERROR 1066: 
> Unable to open iterator for alias rows
> [junit]   at org.apache.pig.PigServer.openIterator(PigServer.java:882)
> [junit]   at 
> org.apache.cassandra.pig.ThriftColumnFamilyDataTypeTest.testCassandraStorageDataType(ThriftColumnFamilyDataTypeTest.java:81)
> [junit] Caused by: java.io.IOException: Job terminated with anomalous 
> status FAILED
> [junit]   at org.apache.pig.PigServer.openIterator(PigServer.java:874)
> [junit] 
> [junit] 
> [junit] Test org.apache.cassandra.pig.ThriftColumnFamilyDataTypeTest 
> FAILED
> [junit] Testsuite: org.apache.cassandra.pig.ThriftColumnFamilyTest
> [junit] Testsuite: org.apache.cassandra.pig.ThriftColumnFamilyTest
> [junit] Testsuite: org.apache.cassandra.pig.ThriftColumnFamilyTest Tests 
> run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 0 sec
> [junit] 
> [junit] Testcase: 
> org.apache.cassandra.pig.ThriftColumnFamilyTest:testCqlNativeStorageCompositeKeyCF:
>  Caused an ERROR
> [junit] Forked Java VM exited abnormally. Please note the time in the 
> report does not reflect the time until the VM exit.
> [junit] junit.framework.AssertionFailedError: Forked Java VM exited 
> abnormally. Please note the time in the report does not reflect the time 
> until the VM exit.
> [junit] 
> [junit] 
> [junit] Test org.apache.cassandra.pig.ThriftColumnFamilyTest FAILED 
> (crashed)
> [junitreport] Processing 
> /var/lib/jenkins/jobs/trunk_pigtest/workspace/build/test/TESTS-TestSuites.xml 
> to /tmp/null1591595172
> [junitreport] Loading stylesheet 
> jar:file:/usr/share/ant/lib/ant-junit.jar!/org/apache/tools/ant/taskdefs/optional/junit/xsl/junit-frames.xsl
> [junitreport] Transform time: 1048ms
> [junitreport] Deleting: /t

[jira] [Commented] (CASSANDRA-9498) If more than 65K columns, sparse layout will break

2015-07-23 Thread Jonathan Ellis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9498?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14639216#comment-14639216
 ] 

Jonathan Ellis commented on CASSANDRA-9498:
---

With 9499 finished, what is left here?

> If more than 65K columns, sparse layout will break
> --
>
> Key: CASSANDRA-9498
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9498
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
>Reporter: Benedict
>Priority: Minor
> Fix For: 3.0 beta 1
>
>
> Follow up to CASSANDRA-8099. It is a relatively small bug, since the exposed 
> population of users is likely to be very low, but fixing it in a good way is 
> a bit tricky. I'm filing a separate JIRA, because I would like us to address 
> this by introducing a writeVInt method to DataOutputStreamPlus, that we can 
> also exploit to improve the encoding of timestamps and deletion times, and 
> this JIRA will help to track the dependencies.





[jira] [Commented] (CASSANDRA-8844) Change Data Capture (CDC)

2015-07-23 Thread Amit Khare (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8844?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14639209#comment-14639209
 ] 

Amit Khare commented on CASSANDRA-8844:
---

Something similar can also be achieved via the custom secondary index path, 
streaming the changes to Kafka. https://github.com/adkhare/CassandraKafkaIndex

> Change Data Capture (CDC)
> -
>
> Key: CASSANDRA-8844
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8844
> Project: Cassandra
>  Issue Type: New Feature
>  Components: Core
>Reporter: Tupshin Harper
> Fix For: 3.x
>
>
> "In databases, change data capture (CDC) is a set of software design patterns 
> used to determine (and track) the data that has changed so that action can be 
> taken using the changed data. Also, Change data capture (CDC) is an approach 
> to data integration that is based on the identification, capture and delivery 
> of the changes made to enterprise data sources."
> -Wikipedia
> As Cassandra is increasingly being used as the Source of Record (SoR) for 
> mission critical data in large enterprises, it is increasingly being called 
> upon to act as the central hub of traffic and data flow to other systems. In 
> order to try to address the general need, we (cc [~brianmhess]), propose 
> implementing a simple data logging mechanism to enable per-table CDC patterns.
> h2. The goals:
> # Use CQL as the primary ingestion mechanism, in order to leverage its 
> Consistency Level semantics, and in order to treat it as the single 
> reliable/durable SoR for the data.
> # To provide a mechanism for implementing good and reliable 
> (deliver-at-least-once with possible mechanisms for deliver-exactly-once ) 
> continuous semi-realtime feeds of mutations going into a Cassandra cluster.
> # To eliminate the developmental and operational burden of users so that they 
> don't have to do dual writes to other systems.
> # For users that are currently doing batch export from a Cassandra system, 
> give them the opportunity to make that realtime with a minimum of coding.
> h2. The mechanism:
> We propose a durable logging mechanism that functions similar to a commitlog, 
> with the following nuances:
> - Takes place on every node, not just the coordinator, so RF number of copies 
> are logged.
> - Separate log per table.
> - Per-table configuration. Only tables that are specified as CDC_LOG would do 
> any logging.
> - Per DC. We are trying to keep the complexity to a minimum to make this an 
> easy enhancement, but most likely use cases would prefer to only implement 
> CDC logging in one (or a subset) of the DCs that are being replicated to.
> - In the critical path of ConsistencyLevel acknowledgment. Just as with the 
> commitlog, failure to write to the CDC log should fail that node's write. If 
> that means the requested consistency level was not met, then clients *should* 
> experience UnavailableExceptions.
> - Be written in a Row-centric manner such that it is easy for consumers to 
> reconstitute rows atomically.
> - Written in a simple format designed to be consumed *directly* by daemons 
> written in non-JVM languages.
> h2. Nice-to-haves
> I strongly suspect that the following features will be asked for, but I also 
> believe that they can be deferred to a subsequent release, to gauge 
> actual interest.
> - Multiple logs per table. This would make it easy to have multiple 
> "subscribers" to a single table's changes. A workaround would be to create a 
> forking daemon listener, but that's not a great answer.
> - Log filtering. Being able to apply filters, including UDF-based filters 
> would make Cassandra a much more versatile feeder into other systems, and 
> again, reduce complexity that would otherwise need to be built into the 
> daemons.
> h2. Format and Consumption
> - Cassandra would only write to the CDC log, and never delete from it. 
> - Cleaning up consumed logfiles would be the client daemon's responsibility.
> - Logfile size should probably be configurable.
> - Logfiles should be named with a predictable naming schema, making it 
> trivial to process them in order.
> - Daemons should be able to checkpoint their work, and resume from where they 
> left off. This means they would have to leave some file artifact in the CDC 
> log's directory.
> - A sophisticated daemon should be able to be written that could 
> -- Catch up, in written-order, even when it is multiple logfiles behind in 
> processing
> -- Be able to continuously "tail" the most recent logfile and get 
> low-latency(ms?) access to the data as it is written.
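The checkpoint-and-resume idea above can be sketched as a consumer that processes logfiles in name order and records its progress in a file artifact inside the log directory. All names and the file layout here are assumptions for illustration, not a proposed Cassandra format:

```java
// Hypothetical CDC consumer checkpointing sketch: logfiles are named so that
// lexicographic order equals write order, and a checkpoint file in the log
// directory records the last fully consumed logfile so the daemon can resume.
import java.io.IOException;
import java.nio.file.*;
import java.util.*;

public class CdcCheckpointSketch
{
    static final String CHECKPOINT = "consumer.checkpoint";

    // pure ordering rule: a logfile is pending if it sorts after the checkpoint
    static boolean isPending(String logName, String lastConsumed)
    {
        return logName.compareTo(lastConsumed) > 0;
    }

    // returns the logfiles not yet fully processed, in predictable name order
    static List<Path> pendingLogs(Path logDir) throws IOException
    {
        Path ckpt = logDir.resolve(CHECKPOINT);
        String lastConsumed = Files.exists(ckpt)
                            ? new String(Files.readAllBytes(ckpt)).trim()
                            : "";
        List<Path> pending = new ArrayList<>();
        try (DirectoryStream<Path> dir = Files.newDirectoryStream(logDir, "CDCLog-*"))
        {
            for (Path p : dir)
                if (isPending(p.getFileName().toString(), lastConsumed))
                    pending.add(p);
        }
        pending.sort(Comparator.comparing(p -> p.getFileName().toString()));
        return pending;
    }

    // record that a logfile has been fully consumed
    static void checkpoint(Path logDir, Path consumed) throws IOException
    {
        Files.write(logDir.resolve(CHECKPOINT), consumed.getFileName().toString().getBytes());
    }

    public static void main(String[] args) throws IOException
    {
        Path dir = Files.createTempDirectory("cdc");
        Files.createFile(dir.resolve("CDCLog-1.log"));
        Files.createFile(dir.resolve("CDCLog-2.log"));
        System.out.println(pendingLogs(dir).size()); // nothing consumed yet
        checkpoint(dir, dir.resolve("CDCLog-1.log"));
        System.out.println(pendingLogs(dir).size()); // only CDCLog-2.log remains
    }
}
```

A daemon that crashes mid-file would need a finer-grained checkpoint (filename plus byte offset), but the directory-artifact mechanism is the same.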
> h2. Alternate approach
> In order to make consuming a change log easy and efficient to do with low 
> latency, the following could supplement the approach outlined above
> - Instead of writing to a logfile, by default, Cassandra could expose a 
> sock

[jira] [Updated] (CASSANDRA-9416) 3.x should refuse to start on JVM_VERSION < 1.8

2015-07-23 Thread Jonathan Ellis (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-9416?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis updated CASSANDRA-9416:
--
Assignee: Philip Thompson

> 3.x should refuse to start on JVM_VERSION < 1.8
> ---
>
> Key: CASSANDRA-9416
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9416
> Project: Cassandra
>  Issue Type: Task
>Reporter: Michael Shuler
>Assignee: Philip Thompson
>Priority: Minor
>  Labels: lhf
> Fix For: 3.0 beta 1
>
> Attachments: trunk-9416.patch
>
>
> When I was looking at CASSANDRA-9408, I noticed that 
> {{conf/cassandra-env.sh}} and {{conf/cassandra-env.ps1}} do JVM version 
> checking and should get updated for 3.x to refuse to start with JVM_VERSION < 
> 1.8.
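The actual check belongs in the shell/PowerShell env scripts named above; for illustration only, the same version gate could be expressed in Java against {{java.specification.version}} (a sketch under that assumption, not the attached patch):

```java
// Illustrative JVM version gate: refuse to start when the running JVM's
// specification version is below 1.8.
public class JvmVersionCheck
{
    static boolean isAtLeastJava8(String specVersion)
    {
        String[] parts = specVersion.split("\\.");
        int major = Integer.parseInt(parts[0]);
        if (major > 1)
            return true; // post-1.x numbering, e.g. "9" or "11"
        return parts.length > 1 && Integer.parseInt(parts[1]) >= 8; // e.g. "1.8"
    }

    public static void main(String[] args)
    {
        String version = System.getProperty("java.specification.version");
        if (!isAtLeastJava8(version))
        {
            System.err.println("Cassandra 3.x requires Java 8 or later; found " + version);
            System.exit(1);
        }
        System.out.println("JVM version OK: " + version);
    }
}
```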





[jira] [Updated] (CASSANDRA-9483) Document incompatibilities with -XX:+PerfDisableSharedMem

2015-07-23 Thread Jonathan Ellis (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-9483?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis updated CASSANDRA-9483:
--
Assignee: T Jake Luciani  (was: Tyler Hobbs)

> Document incompatibilities with -XX:+PerfDisableSharedMem
> -
>
> Key: CASSANDRA-9483
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9483
> Project: Cassandra
>  Issue Type: Task
>  Components: Config, Documentation & website
>Reporter: Tyler Hobbs
>Assignee: T Jake Luciani
>Priority: Minor
> Fix For: 3.0 beta 1
>
>
> We recently discovered that [the Jolokia agent is incompatible with  the 
> -XX:+PerfDisableSharedMem JVM 
> option|https://github.com/rhuss/jolokia/issues/198].  I assume that this may 
> affect other monitoring tools as well.
> If we are going to leave this enabled by default, we should document the 
> potential problems with it.  A combination of a comment in 
> {{cassandra-env.sh}} (and the Windows equivalent) and a comment in NEWS.txt 
> should suffice, I think.
> If possible, it would be good to figure out what other tools are affected and 
> also mention them.





[jira] [Commented] (CASSANDRA-9418) Fix dtests on Windows

2015-07-23 Thread Joshua McKenzie (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9418?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14639184#comment-14639184
 ] 

Joshua McKenzie commented on CASSANDRA-9418:


Rebased and committed the %z JSON formatting patch to 2.2 and trunk - it had 
slipped through the cracks.

> Fix dtests on Windows
> -
>
> Key: CASSANDRA-9418
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9418
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Joshua McKenzie
>Assignee: Joshua McKenzie
>  Labels: Windows, docs-impacting
> Fix For: 2.2.x
>
> Attachments: 9418_tz_formatting.txt
>
>
> There are a variety of infrastructure failures within dtest with regard to 
> Windows that are causing tests to fail and those failures to cascade.
> Error: failure to delete commit log after a test / ccm cluster is stopped:
> {noformat}
> Traceback (most recent call last):
>   File "C:\src\cassandra-dtest\dtest.py", line 452, in tearDown
> self._cleanup_cluster()
>   File "C:\src\cassandra-dtest\dtest.py", line 172, in _cleanup_cluster
> self.cluster.remove()
>   File "build\bdist.win-amd64\egg\ccmlib\cluster.py", line 212, in remove
> shutil.rmtree(self.get_path())
>   File "C:\Python27\lib\shutil.py", line 247, in rmtree
> rmtree(fullname, ignore_errors, onerror)
>   File "C:\Python27\lib\shutil.py", line 247, in rmtree
> rmtree(fullname, ignore_errors, onerror)
>   File "C:\Python27\lib\shutil.py", line 252, in rmtree
> onerror(os.remove, fullname, sys.exc_info())
>   File "C:\Python27\lib\shutil.py", line 250, in rmtree
> os.remove(fullname)
> WindowsError: [Error 5] Access is denied: 
> 'c:\\temp\\dtest-4rxq2i\\test\\node1\\commitlogs\\CommitLog-5-1431969131917.log'
> {noformat}
> Cascading error: implication is that tests aren't shutting down correctly and 
> subsequent tests cannot start:
> {noformat}
> 06:00:20 ERROR: test_incr_decr_super_remove (thrift_tests.TestMutations)
> 06:00:20 
> --
> 06:00:20 Traceback (most recent call last):
> 06:00:20   File 
> "D:\jenkins\workspace\trunk_dtest_win32\cassandra-dtest\thrift_tests.py", 
> line 55, in setUp
> 06:00:20 cluster.start()
> 06:00:20   File "build\bdist.win-amd64\egg\ccmlib\cluster.py", line 249, in 
> start
> 06:00:20 p = node.start(update_pid=False, jvm_args=jvm_args, 
> profile_options=profile_options)
> 06:00:20   File "build\bdist.win-amd64\egg\ccmlib\node.py", line 457, in start
> 06:00:20 common.check_socket_available(itf)
> 06:00:20   File "build\bdist.win-amd64\egg\ccmlib\common.py", line 341, in 
> check_socket_available
> 06:00:20 raise UnavailableSocketError("Inet address %s:%s is not 
> available: %s" % (addr, port, msg))
> 06:00:20 UnavailableSocketError: Inet address 127.0.0.1:9042 is not 
> available: [Errno 10013] An attempt was made to access a socket in a way 
> forbidden by its access permissions
> 06:00:20  >> begin captured logging << 
> 
> 06:00:20 dtest: DEBUG: removing ccm cluster test at: d:\temp\dtest-a5iny5
> 06:00:20 dtest: DEBUG: cluster ccm directory: d:\temp\dtest-dalzcy
> 06:00:20 - >> end captured logging << 
> -
> {noformat}
> I've also seen (and am debugging) an error where a node just fails to start 
> via ccm.
> I'll update this ticket with PR's to dtest or other observations of interest.





[jira] [Updated] (CASSANDRA-9318) Bound the number of in-flight requests at the coordinator

2015-07-23 Thread Jonathan Ellis (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-9318?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis updated CASSANDRA-9318:
--
Assignee: Jacek Lewandowski  (was: Ariel Weisberg)

> Bound the number of in-flight requests at the coordinator
> -
>
> Key: CASSANDRA-9318
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9318
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Ariel Weisberg
>Assignee: Jacek Lewandowski
> Fix For: 2.1.x, 2.2.x
>
>
> It's possible to somewhat bound the amount of load accepted into the cluster 
> by bounding the number of in-flight requests and request bytes.
> An implementation might do something like track the number of outstanding 
> bytes and requests and, if it reaches a high watermark, disable reads on client 
> connections until it goes back below some low watermark.
> Need to make sure that disabling read on the client connection won't 
> introduce other issues.
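A minimal sketch of the watermark idea (all names hypothetical; a real change would hook into the native transport's channel read-interest handling rather than a boolean flag):

```java
// Sketch: track outstanding request bytes at the coordinator; pause reading
// from client connections above a high watermark, resume below a low one.
import java.util.concurrent.atomic.AtomicLong;

public class InflightLimiter
{
    private final long highWatermark;
    private final long lowWatermark;
    private final AtomicLong inflightBytes = new AtomicLong();
    private volatile boolean readPaused;

    InflightLimiter(long high, long low)
    {
        this.highWatermark = high;
        this.lowWatermark = low;
    }

    // called when a request is accepted from a client
    void onRequestStart(long bytes)
    {
        if (inflightBytes.addAndGet(bytes) >= highWatermark)
            readPaused = true; // would disable read interest on client channels
    }

    // called when the coordinator finishes a request
    void onRequestDone(long bytes)
    {
        if (inflightBytes.addAndGet(-bytes) <= lowWatermark)
            readPaused = false; // re-enable reads
    }

    boolean isReadPaused() { return readPaused; }

    public static void main(String[] args)
    {
        InflightLimiter limiter = new InflightLimiter(100, 50);
        limiter.onRequestStart(120);
        System.out.println(limiter.isReadPaused()); // above high watermark
        limiter.onRequestDone(80);
        System.out.println(limiter.isReadPaused()); // back below low watermark
    }
}
```

The gap between the two watermarks provides hysteresis, so reads don't flap on and off around a single threshold.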





[jira] [Comment Edited] (CASSANDRA-9863) NIODataInputStream has problems on trunk

2015-07-23 Thread Ariel Weisberg (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9863?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14639160#comment-14639160
 ] 

Ariel Weisberg edited comment on CASSANDRA-9863 at 7/23/15 5:24 PM:


-I don't think I introduced new non-idiomatic lines. Which files do you want me 
to refactor to make them idiomatic?-
Nevermind found what you are talking about.


was (Author: aweisberg):
I don't think I introduced new non-idiomatic lines. Which files do you want me 
to refactor to make them idiomatic?

> NIODataInputStream has problems on trunk
> 
>
> Key: CASSANDRA-9863
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9863
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Sylvain Lebresne
>Assignee: Ariel Weisberg
>Priority: Blocker
> Fix For: 3.0 beta 1
>
>
> As the title says, there are cases where method calls to NIODataInputStream, 
> at least {{readVInt}} calls, can loop forever. This is possibly only a problem 
> for vints where the code tries to read 8 bytes minimum but there is less than 
> that available, and in that sense is related to [~benedict]'s observation in 
> CASSANDRA-9708, but it is more serious than said observation because:
> # this can happen even if the buffer passed to NIODataInputStream ctor has 
> more than 8 bytes available, and hence I'm relatively confident [~benedict]'s 
> fix in CASSANDRA-9708 is not enough.
> # this doesn't necessarily fail cleanly by raising assertions, this can loop 
> forever (which is much harder to debug).
> Due of that, and because that is at least one of the cause of CASSANDRA-9764, 
> I think the problem warrants a specific ticket (from CASSANDRA-9708 that is).
> Now, the exact reason of this is looping is if {{readVInt}} is called but the 
> buffer has less than 8 byte remaining (again, the buffer had more initially). 
> In that case, {{readMinimum(8, 1)}} is called and it calls {{readNext()}} in 
> a loop. Within {{readNext()}}, the buffer (which has {{buf.position() == 0 && 
> buf.hasRemaining()}}) is actually unchanged (through a very weird dance of 
> setting the position to the limit, then the limit to the capacity, and then 
> flipping the buffer which resets everything to what it was), and because 
> {{rbc}} is the {{emptyReadableByteChannel}}, {{rbc.read(buf)}} does nothing 
> and always return {{-1}}. Back in {{readMinimum}}, {{read == -1}} but 
> {{remaining >= require}} (and {{remaining}} never changes), and hence the 
> forever looping.
> Now, not sure what the best fix is because I'm not fully familiar with that 
> code, but that does leads me to a 2nd point: {{NIODataInputSttream}} can IMHO 
> use a bit of additional/better comments. I won't pretend having tried very 
> hard to understand the whole class, so there is probably some lack of effort, 
> but at least a few things felt like they should clarified:
> * Provided I understand {{readNext()}} correctly, it only make sense when we 
> do have a {{ReadableByteChannel}} (and the fact that it's not the case sounds 
> like the bug). If that's the case, this should be explicitly documented and 
> probably asserted. As as an aside, I wonder if using {{rbc == null}} when we 
> don't have wouldn't be better: if we don't have one, we shouldn't try to use 
> it, and having a {{null}} would make things fail loudly if we do.
> * I'm not exactly sure what {{readMinimum}} arguments do. I'd have expected 
> at least one to be called "minimum", and an explanation of the meaning of the 
> other one.
> * {{prepareReadPaddedPrimitive}} says that it "Add padding if requested" but 
> there is seemingly no argument that trigger the "if requested part". Also 
> unclear what that padding is about in the first place.
> As a final point, it looks like the case where {{NIODataInputStream}} is 
> constructed with a {{ByteBuffer}} (rather than a {{ReadableByteChannel}}) 
> seems to be completely untested by the unit tests.
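
The looping pattern described above can be modeled loosely in a few lines. This is an illustrative sketch, not the actual Java code; the guard shown (bailing out when the channel reports end-of-stream before the minimum is satisfied) is one obvious option, not necessarily the fix that was committed:

```python
def read_minimum(channel_read, remaining, require):
    """Loose model of readMinimum: keep reading from the channel until
    at least `require` bytes are buffered.

    channel_read() returns the number of bytes read, or -1 at end of
    stream (like ReadableByteChannel.read)."""
    while remaining < require:
        read = channel_read()
        if read == -1:
            # Without a check like this, a channel that always returns -1
            # (e.g. an "empty" channel) leaves `remaining` unchanged and
            # the loop spins forever, as described in the ticket.
            raise EOFError("end of stream before %d required bytes" % require)
        remaining += read
    return remaining
```

With the guard in place, the degenerate case fails loudly with an EOFError instead of hanging, which is exactly the "fail loudly" behavior the reporter asks for.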



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[1/3] cassandra git commit: Fix handling of incorrect %z cqlshlib output on Windows

2015-07-23 Thread jmckenzie
Repository: cassandra
Updated Branches:
  refs/heads/trunk 15265689f -> 1f5c4f7ba


Fix handling of incorrect %z cqlshlib output on Windows

Patch by jmckenzie; reviewed by aweisberg for CASSANDRA-9418


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/99decd8e
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/99decd8e
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/99decd8e

Branch: refs/heads/trunk
Commit: 99decd8eface9cd38ddd70542aa28a2773810526
Parents: 51ff499
Author: Joshua McKenzie 
Authored: Thu Jul 23 13:21:10 2015 -0400
Committer: Joshua McKenzie 
Committed: Thu Jul 23 13:21:10 2015 -0400

--
 bin/cqlsh.py |  2 +-
 pylib/cqlshlib/formatting.py | 17 +++--
 2 files changed, 16 insertions(+), 3 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/99decd8e/bin/cqlsh.py
--
diff --git a/bin/cqlsh.py b/bin/cqlsh.py
index 999ddc4..6df1d75 100644
--- a/bin/cqlsh.py
+++ b/bin/cqlsh.py
@@ -945,7 +945,7 @@ class Shell(cmd.Cmd):
 try:
 import readline
 except ImportError:
-if platform.system() == 'Windows':
+if myplatform == 'Windows':
 print "WARNING: pyreadline dependency missing.  Install to enable tab completion."
 pass
 else:

http://git-wip-us.apache.org/repos/asf/cassandra/blob/99decd8e/pylib/cqlshlib/formatting.py
--
diff --git a/pylib/cqlshlib/formatting.py b/pylib/cqlshlib/formatting.py
index ff5b118..00d5b40 100644
--- a/pylib/cqlshlib/formatting.py
+++ b/pylib/cqlshlib/formatting.py
@@ -16,16 +16,20 @@
 
 import calendar
 import math
+import platform
 import re
 import sys
 import platform
 import time
 from collections import defaultdict
+
 from . import wcwidth
 from .displaying import colorme, FormattedValue, DEFAULT_VALUE_COLORS
 from datetime import datetime, timedelta
 from cassandra.cqltypes import EMPTY
 
+is_win = platform.system() == 'Windows'
+
 unicode_controlchars_re = re.compile(r'[\x00-\x31\x7f-\xa0]')
 controlchars_re = re.compile(r'[\x00-\x31\x7f-\xff]')
 
@@ -193,15 +197,24 @@ def strftime(time_format, seconds):
 offset = -time.altzone
 else:
 offset = -time.timezone
-if formatted[-4:] != '' or time_format[-2:] != '%z' or offset == 0:
+if not is_win and (formatted[-4:] != '' or time_format[-2:] != '%z' or offset == 0):
 return formatted
+elif is_win and time_format[-2:] != '%z':
+return formatted
+
 # deal with %z on platforms where it isn't supported. see CASSANDRA-4746.
 if offset < 0:
 sign = '-'
 else:
 sign = '+'
 hours, minutes = divmod(abs(offset) / 60, 60)
-return formatted[:-5] + sign + '{0:0=2}{1:0=2}'.format(hours, minutes)
+# Need to strip out invalid %z output on Windows. C libs give us 'Eastern Standard Time' instead of +/- GMT
+if is_win and time_format[-2:] == '%z':
+# Remove chars and strip trailing spaces left behind
+formatted = re.sub('[A-Za-z]', '', formatted).rstrip()
+return formatted + sign + '{0:0=2}{1:0=2}'.format(hours, minutes)
+else:
+return formatted[:-5] + sign + '{0:0=2}{1:0=2}'.format(hours, minutes)
 
 @formatter_for('Date')
 def format_value_date(val, colormap, **_):
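
The core of the patch above can be exercised in isolation. The helper below is a hypothetical extraction of the Windows branch (the function name and arguments are illustrative; in the real patch this logic is inline in cqlshlib's strftime):

```python
import re


def fix_windows_z(formatted, sign, hours, minutes):
    """On Windows, strftime('%z') yields a spelled-out zone name such as
    'Eastern Standard Time' instead of a numeric offset. Strip the
    alphabetic characters, trim the leftover spaces, and append a
    +/-HHMM offset computed separately (e.g. from time.timezone)."""
    stripped = re.sub('[A-Za-z]', '', formatted).rstrip()
    return stripped + sign + '{0:0=2}{1:0=2}'.format(hours, minutes)
```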



[2/2] cassandra git commit: Add logdir and storagedir to nodetool.bat

2015-07-23 Thread jmckenzie
Add logdir and storagedir to nodetool.bat

Patch by jmckenzie; reviewed by pthompson for CASSANDRA-9696


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/9ad13309
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/9ad13309
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/9ad13309

Branch: refs/heads/cassandra-2.2
Commit: 9ad133097b5120fc6838b62f649c4d058639215e
Parents: 99decd8
Author: Joshua McKenzie 
Authored: Thu Jul 23 13:23:55 2015 -0400
Committer: Joshua McKenzie 
Committed: Thu Jul 23 13:23:55 2015 -0400

--
 bin/nodetool.bat | 5 -
 1 file changed, 4 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/9ad13309/bin/nodetool.bat
--
diff --git a/bin/nodetool.bat b/bin/nodetool.bat
index ec64db0..92d5c05 100644
--- a/bin/nodetool.bat
+++ b/bin/nodetool.bat
@@ -23,8 +23,11 @@ call cassandra.in.bat
 if NOT DEFINED CASSANDRA_HOME set CASSANDRA_HOME=%~dp0..
 if NOT DEFINED JAVA_HOME goto :err
 
+set CASSANDRA_PARAMS=%CASSANDRA_PARAMS% -Dcassandra.logdir="%CASSANDRA_HOME%\logs"
+set CASSANDRA_PARAMS=%CASSANDRA_PARAMS% -Dcassandra.storagedir="%CASSANDRA_HOME%\data"
+
 echo Starting NodeTool
-"%JAVA_HOME%\bin\java" -cp %CASSANDRA_CLASSPATH% -Dlogback.configurationFile=logback-tools.xml org.apache.cassandra.tools.NodeTool %*
+"%JAVA_HOME%\bin\java" -cp %CASSANDRA_CLASSPATH% %CASSANDRA_PARAMS% -Dlogback.configurationFile=logback-tools.xml org.apache.cassandra.tools.NodeTool %*
 goto finally
 
 :err



[2/3] cassandra git commit: Add logdir and storagedir to nodetool.bat

2015-07-23 Thread jmckenzie
Add logdir and storagedir to nodetool.bat

Patch by jmckenzie; reviewed by pthompson for CASSANDRA-9696


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/9ad13309
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/9ad13309
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/9ad13309

Branch: refs/heads/trunk
Commit: 9ad133097b5120fc6838b62f649c4d058639215e
Parents: 99decd8
Author: Joshua McKenzie 
Authored: Thu Jul 23 13:23:55 2015 -0400
Committer: Joshua McKenzie 
Committed: Thu Jul 23 13:23:55 2015 -0400

--
 bin/nodetool.bat | 5 -
 1 file changed, 4 insertions(+), 1 deletion(-)
--





[3/3] cassandra git commit: Merge branch 'cassandra-2.2' into trunk

2015-07-23 Thread jmckenzie
Merge branch 'cassandra-2.2' into trunk


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/1f5c4f7b
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/1f5c4f7b
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/1f5c4f7b

Branch: refs/heads/trunk
Commit: 1f5c4f7ba55cbb8fada610d1882a616b4000e741
Parents: 1526568 9ad1330
Author: Joshua McKenzie 
Authored: Thu Jul 23 13:24:53 2015 -0400
Committer: Joshua McKenzie 
Committed: Thu Jul 23 13:24:53 2015 -0400

--
 bin/cqlsh.py |  2 +-
 bin/nodetool.bat |  5 -
 pylib/cqlshlib/formatting.py | 17 +++--
 3 files changed, 20 insertions(+), 4 deletions(-)
--




[jira] [Resolved] (CASSANDRA-7937) Apply backpressure gently when overloaded with writes

2015-07-23 Thread Jonathan Ellis (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-7937?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis resolved CASSANDRA-7937.
---
Resolution: Later
  Assignee: (was: Jacek Lewandowski)

Marking as Later; we can reopen if 9318 proves insufficient.

> Apply backpressure gently when overloaded with writes
> -
>
> Key: CASSANDRA-7937
> URL: https://issues.apache.org/jira/browse/CASSANDRA-7937
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Core
> Environment: Cassandra 2.0
>Reporter: Piotr Kołaczkowski
>  Labels: performance
>
> When writing huge amounts of data into a C* cluster from analytic tools like 
> Hadoop or Apache Spark, we can see that often C* can't keep up with the load. 
> This is because analytic tools typically write data "as fast as they can", in 
> parallel, from many nodes, and they are not artificially rate-limited, so C* 
> is the bottleneck here. Also, increasing the number of nodes doesn't really 
> help, because in a co-located setup this also increases the number of 
> Hadoop/Spark nodes (writers), and although the possible write performance is 
> higher, the problem still remains.
> We observe the following behavior:
> 1. data is ingested at an extremely fast pace into memtables and the flush queue 
> fills up
> 2. the available memory limit for memtables is reached and writes are no 
> longer accepted
> 3. the application gets hit by "write timeout" and retries repeatedly, in 
> vain
> 4. after several failed attempts to write, the job gets aborted
> Desired behaviour:
> 1. data is ingested at an extremely fast pace into memtables and the flush queue 
> fills up
> 2. after exceeding some memtable "fill threshold", C* applies adaptive rate 
> limiting to writes - the more the buffers are filled up, the fewer writes/s 
> are accepted; however, writes still complete within the write timeout.
> 3. thanks to the slowed-down data ingestion, the flush can now finish before all the 
> memory gets used
> Of course, the details of how rate limiting could be done are up for discussion.
> It may also be worth considering putting such logic into the driver, not the C* 
> core, but then C* needs to expose at least the following information to the 
> driver, so we could calculate the desired maximum data rate:
> 1. the current amount of memory available for writes before they would completely 
> block
> 2. the total amount of data queued to be flushed, and flush progress (the amount of 
> data to flush remaining for the memtable currently being flushed)
> 3. the average flush write speed
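
The adaptive rate limiting described in the ticket can be sketched as a simple function of buffer fill. The threshold, maximum rate, and the linear ramp below are all illustrative choices for exposition, not anything Cassandra actually implements:

```python
def admitted_write_rate(memory_used, memory_limit,
                        fill_threshold=0.5, max_rate=100_000):
    """Return the allowed writes/s given current memtable memory usage.

    Below the fill threshold there is no limiting; between the threshold
    and a completely full buffer, the admitted rate ramps down linearly
    to zero, so ingestion slows instead of hitting write timeouts."""
    fill = memory_used / float(memory_limit)
    if fill <= fill_threshold:
        return max_rate            # below threshold: accept writes freely
    if fill >= 1.0:
        return 0                   # buffers full: block writes entirely
    # Linearly reduce the rate between the threshold and full.
    headroom = (1.0 - fill) / (1.0 - fill_threshold)
    return int(max_rate * headroom)
```

A driver-side limiter could compute the same function from the three pieces of information the reporter asks C* to expose (available write memory, queued flush data, and flush speed).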



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[1/2] cassandra git commit: Fix handling of incorrect %z cqlshlib output on Windows

2015-07-23 Thread jmckenzie
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-2.2 51ff49975 -> 9ad133097


Fix handling of incorrect %z cqlshlib output on Windows

Patch by jmckenzie; reviewed by aweisberg for CASSANDRA-9418


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/99decd8e
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/99decd8e
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/99decd8e

Branch: refs/heads/cassandra-2.2
Commit: 99decd8eface9cd38ddd70542aa28a2773810526
Parents: 51ff499
Author: Joshua McKenzie 
Authored: Thu Jul 23 13:21:10 2015 -0400
Committer: Joshua McKenzie 
Committed: Thu Jul 23 13:21:10 2015 -0400

--
 bin/cqlsh.py |  2 +-
 pylib/cqlshlib/formatting.py | 17 +++--
 2 files changed, 16 insertions(+), 3 deletions(-)
--





[jira] [Commented] (CASSANDRA-9863) NIODataInputStream has problems on trunk

2015-07-23 Thread Ariel Weisberg (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9863?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14639160#comment-14639160
 ] 

Ariel Weisberg commented on CASSANDRA-9863:
---

I don't think I introduced new non-idiomatic lines. Which files do you want me 
to refactor to make them idiomatic?

> NIODataInputStream has problems on trunk
> 
>
> Key: CASSANDRA-9863
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9863
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Sylvain Lebresne
>Assignee: Ariel Weisberg
>Priority: Blocker
> Fix For: 3.0 beta 1
>
>



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-9878) Rolling Updates 2.0 to 2.1 "unable to gossip"

2015-07-23 Thread Brandon Williams (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9878?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14639152#comment-14639152
 ] 

Brandon Williams commented on CASSANDRA-9878:
-

See CASSANDRA-8768

> Rolling Updates 2.0 to 2.1 "unable to gossip"
> -
>
> Key: CASSANDRA-9878
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9878
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Axel Kämpfe
> Fix For: 2.1.x
>
>
> Hi there,
> We are currently testing an upgrade of our servers from Cassandra 2.0.16 to 
> 2.1.8 on Amazon's EC2 service.
> Usually, we launch a new server that gets the newest version and then joins 
> the existing ring, bootstrapping, and after some time we kill one of the old 
> nodes.
> But with the upgrade to 2.1 the new server will not join the existing ring.
> {code}
> INFO  [HANDSHAKE-/10.] 2015-07-23 11:06:14,997 OutboundTcpConnection.java:485 
> - Handshaking version with /10.xx.yy.zz
> INFO  [HANDSHAKE-/10.] 2015-07-23 11:06:14,999 OutboundTcpConnection.java:485 
> - Handshaking version with /10.yy.zz.aa
> INFO  [HANDSHAKE-/10.] 2015-07-23 11:06:15,000 OutboundTcpConnection.java:485 
> - Handshaking version with /10.aa.bb.cc
> ERROR [main] 2015-07-23 11:06:46,016 CassandraDaemon.java:541 - Exception 
> encountered during startup
> java.lang.RuntimeException: Unable to gossip with any seeds
> at 
> org.apache.cassandra.gms.Gossiper.doShadowRound(Gossiper.java:1307) 
> ~[apache-cassandra-2.1.8.jar:2.1.8]
> at 
> org.apache.cassandra.service.StorageService.checkForEndpointCollision(StorageService.java:533)
>  ~[apache-cassandra-2.1.8.jar:2.1.8]
> at 
> org.apache.cassandra.service.StorageService.prepareToJoin(StorageService.java:777)
>  ~[apache-cassandra-2.1.8.jar:2.1.8]
> at 
> org.apache.cassandra.service.StorageService.initServer(StorageService.java:714)
>  ~[apache-cassandra-2.1.8.jar:2.1.8]
> at 
> org.apache.cassandra.service.StorageService.initServer(StorageService.java:605)
>  ~[apache-cassandra-2.1.8.jar:2.1.8]
> at 
> org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:378) 
> [apache-cassandra-2.1.8.jar:2.1.8]
> at 
> org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:524)
>  [apache-cassandra-2.1.8.jar:2.1.8]
> at 
> org.apache.cassandra.service.CassandraDaemon.main(CassandraDaemon.java:613) 
> [apache-cassandra-2.1.8.jar:2.1.8]
> WARN  [StorageServiceShutdownHook] 2015-07-23 11:06:46,023 Gossiper.java:1418 
> - No local state or state is in silent shutdown, not announcing shutdown
> INFO  [StorageServiceShutdownHook] 2015-07-23 11:06:46,023 
> MessagingService.java:708 - Waiting for messaging service to quiesce
> INFO  [ACCEPT-/10.] 2015-07-23 11:06:46,045 MessagingService.java:958 - 
> MessagingService has terminated the accept() thread
> {code}
> Our config uses the "RandomPartitioner", the "SimpleSnitch", and the internal 
> node IPs for communication.
> When I use the same config but ONLY 2.1.x servers, everything works perfectly, 
> but as soon as we start to "mix in" a new version into an "old" ring, the new 
> servers will not "gossip"... (so it cannot be a firewall issue or the like)
> If you need any more information from my side, please let me know.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-9402) Implement proper sandboxing for UDFs

2015-07-23 Thread Jonathan Ellis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9402?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14639147#comment-14639147
 ] 

Jonathan Ellis commented on CASSANDRA-9402:
---

nio is whitelisted, but my understanding is that it's only checked *if* the 
SecurityManager approves.  All I/O (file, socket) is prohibited there.
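
The two-layer check described in the comment can be modeled loosely: the class whitelist is only consulted after the security manager has allowed the operation, so whitelisting java.nio does not bypass the I/O ban. This is an entirely illustrative model, not Cassandra's actual sandbox code; the operation and package names are made up:

```python
# Hypothetical deny-list and whitelist for illustration only.
FORBIDDEN_OPERATIONS = {"file-read", "file-write", "socket-open"}
WHITELISTED_PACKAGES = {"java.nio", "java.util", "java.lang"}


def udf_may_call(package, operation):
    # Layer 1: the "security manager" vetoes all file/socket I/O,
    # regardless of which package the call comes from.
    if operation in FORBIDDEN_OPERATIONS:
        return False
    # Layer 2: only then is the package whitelist consulted.
    return package in WHITELISTED_PACKAGES
```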

> Implement proper sandboxing for UDFs
> 
>
> Key: CASSANDRA-9402
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9402
> Project: Cassandra
>  Issue Type: Task
>Reporter: T Jake Luciani
>Assignee: Robert Stupp
>Priority: Critical
>  Labels: docs-impacting, security
> Fix For: 3.0 beta 1
>
> Attachments: 9402-warning.txt
>
>
> We want to avoid a security exploit for our users.  We need to make sure we 
> ship 2.2 UDFs with good defaults so someone accidentally exposing them to the 
> internet doesn't open themselves up to having arbitrary code run.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-9863) NIODataInputStream has problems on trunk

2015-07-23 Thread Benedict (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9863?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14639090#comment-14639090
 ] 

Benedict commented on CASSANDRA-9863:
-

Mostly LGTM, however we can now call buf.get() directly (instead of readByte), 
and you're using non-idiomatic array declarations.

> NIODataInputStream has problems on trunk
> 
>
> Key: CASSANDRA-9863
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9863
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Sylvain Lebresne
>Assignee: Ariel Weisberg
>Priority: Blocker
> Fix For: 3.0 beta 1
>
>



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (CASSANDRA-9880) ScrubTest.testScrubOutOfOrder should generate test file on the fly

2015-07-23 Thread Yuki Morishita (JIRA)
Yuki Morishita created CASSANDRA-9880:
-

 Summary: ScrubTest.testScrubOutOfOrder should generate test file 
on the fly
 Key: CASSANDRA-9880
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9880
 Project: Cassandra
  Issue Type: Bug
Reporter: Yuki Morishita
Assignee: Yuki Morishita
Priority: Blocker
 Fix For: 3.0 beta 1


ScrubTest#testScrubOutOfOrder is failing on trunk due to the serialization 
format change, since it relies on a pre-generated out-of-order SSTable.

We should change it to generate the out-of-order SSTable on the fly, so that we 
don't need to bother generating an SSTable by hand again.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-9790) CommitLogUpgradeTest.test{20,21} failure

2015-07-23 Thread Sylvain Lebresne (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9790?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14639062#comment-14639062
 ] 

Sylvain Lebresne commented on CASSANDRA-9790:
-

A first problem was that the test hadn't been properly upgraded post-8099. 
Basically, it's trying to replay a commit log that was generated in 2.0 and 
2.1, but it wasn't using the proper table definition. I've pushed the simple 
fix for that as commit 
[15265689|http://git-wip-us.apache.org/repos/asf/cassandra/diff/15265689].

That said, the test is still failing, but with a more meaningful error.

Now, I'll continue looking, but I'm not all that familiar with the commit log, 
nor with this test (I don't know, in particular, how the commit logs we're trying 
to recover were generated, making it a bit harder to be sure of what exactly we're 
expecting). What I can tell is that the commit log does seem to be 
replaying properly, and some upgrade tests show that commit logs from 2.1 can 
be replayed properly, at least to some extent. What fails in this case is that 
more cells seem to be replayed than expected. If someone more familiar 
with the commit log wants to have a look, in particular to make sure things aren't 
replayed more than once by mistake, I would certainly appreciate it.

> CommitLogUpgradeTest.test{20,21} failure
> 
>
> Key: CASSANDRA-9790
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9790
> Project: Cassandra
>  Issue Type: Sub-task
>Reporter: Michael Shuler
>Assignee: Sylvain Lebresne
>Priority: Blocker
>  Labels: test-failure
> Fix For: 3.0 beta 1
>
>
> These test failures started with the 8099 commit.
> {noformat}
> Stacktrace
> java.lang.IllegalArgumentException
>   at java.nio.Buffer.limit(Buffer.java:275)
>   at 
> org.apache.cassandra.utils.ByteBufferUtil.readBytes(ByteBufferUtil.java:583)
>   at 
> org.apache.cassandra.utils.ByteBufferUtil.readBytesWithShortLength(ByteBufferUtil.java:592)
>   at 
> org.apache.cassandra.db.marshal.CompositeType.splitName(CompositeType.java:197)
>   at 
> org.apache.cassandra.db.LegacyLayout.decodeClustering(LegacyLayout.java:235)
>   at 
> org.apache.cassandra.db.LegacyLayout.decodeCellName(LegacyLayout.java:127)
>   at 
> org.apache.cassandra.db.LegacyLayout.readLegacyCellBody(LegacyLayout.java:672)
>   at 
> org.apache.cassandra.db.LegacyLayout.readLegacyCell(LegacyLayout.java:643)
>   at 
> org.apache.cassandra.db.LegacyLayout$8.computeNext(LegacyLayout.java:713)
>   at 
> org.apache.cassandra.db.LegacyLayout$8.computeNext(LegacyLayout.java:702)
>   at 
> com.google.common.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:143)
>   at 
> com.google.common.collect.AbstractIterator.hasNext(AbstractIterator.java:138)
>   at 
> com.google.common.collect.Iterators$PeekingImpl.hasNext(Iterators.java:1149)
>   at 
> org.apache.cassandra.db.LegacyLayout.toUnfilteredRowIterator(LegacyLayout.java:310)
>   at 
> org.apache.cassandra.db.LegacyLayout.onWireCellstoUnfilteredRowIterator(LegacyLayout.java:298)
>   at 
> org.apache.cassandra.db.partitions.PartitionUpdate$PartitionUpdateSerializer.deserialize(PartitionUpdate.java:670)
>   at 
> org.apache.cassandra.db.Mutation$MutationSerializer.deserialize(Mutation.java:276)
>   at 
> org.apache.cassandra.db.commitlog.CommitLogTestReplayer.replayMutation(CommitLogTestReplayer.java:66)
>   at 
> org.apache.cassandra.db.commitlog.CommitLogReplayer.replaySyncSection(CommitLogReplayer.java:464)
>   at 
> org.apache.cassandra.db.commitlog.CommitLogReplayer.recover(CommitLogReplayer.java:370)
>   at 
> org.apache.cassandra.db.commitlog.CommitLogReplayer.recover(CommitLogReplayer.java:145)
>   at 
> org.apache.cassandra.db.commitlog.CommitLogUpgradeTest.testRestore(CommitLogUpgradeTest.java:105)
>   at 
> org.apache.cassandra.db.commitlog.CommitLogUpgradeTest.test21(CommitLogUpgradeTest.java:66)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-9793) Log when messages are dropped due to cross_node_timeout

2015-07-23 Thread Brandon Williams (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9793?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14639057#comment-14639057
 ] 

Brandon Williams commented on CASSANDRA-9793:
-

Also it looks like we won't log tpstats on drop anymore, unless I'm mistaken.

> Log when messages are dropped due to cross_node_timeout
> ---
>
> Key: CASSANDRA-9793
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9793
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Core
>Reporter: Brandon Williams
>Assignee: Stefania
> Fix For: 2.1.x, 2.0.x
>
>
> When a node has clock skew and cross node timeouts are enabled, there's no 
> indication that the messages were dropped due to the cross timeout, just that 
> messages were dropped.  This can errantly lead you down a path of 
> troubleshooting a load shedding situation when really you just have clock 
> drift on one node.  This is also not simple to troubleshoot, since you have 
> to determine that this node will answer requests, but other nodes won't 
> answer requests from it.  If the problem goes away on a reboot (and the 
> machine does one-shot time sync, not continuous) it becomes even harder to 
> detect because you're left with a weird piece of evidence such as "it's fine 
> after a reboot, but comes back in about X days every time."
> It would help tremendously if there were a log message indicating how many 
> messages (don't need them broken down by type) were eagerly dropped due to 
> the cross node timeout.
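
A minimal sketch of the requested accounting (hypothetical names, not the actual patch): keep a separate counter for messages dropped because the cross-node timeout was exceeded, and include it in the periodic drop log line.

```java
import java.util.concurrent.atomic.AtomicLong;

// Hypothetical tracker: distinguishes drops caused by cross_node_timeout
// from ordinary load-shedding drops, so clock drift is visible in the log.
class DroppedMessageTracker
{
    private final AtomicLong dropped = new AtomicLong();
    private final AtomicLong droppedCrossNode = new AtomicLong();

    void recordDrop(boolean crossNodeTimeout)
    {
        dropped.incrementAndGet();
        if (crossNodeTimeout)
            droppedCrossNode.incrementAndGet();
    }

    // what a periodic log statement could emit for a given verb
    String logLine(String verb)
    {
        return String.format("%s messages were dropped in the last interval: %d total, %d due to cross_node_timeout",
                             verb, dropped.get(), droppedCrossNode.get());
    }

    long total() { return dropped.get(); }

    long crossNode() { return droppedCrossNode.get(); }
}
```

A count like this (no per-type breakdown needed, per the description) would immediately separate "node is shedding load" from "node has clock drift".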



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-9793) Log when messages are dropped due to cross_node_timeout

2015-07-23 Thread Brandon Williams (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9793?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14639054#comment-14639054
 ] 

Brandon Williams commented on CASSANDRA-9793:
-

Can you rebase for 2.2?  I'm getting a lot of conflicts there.

> Log when messages are dropped due to cross_node_timeout
> ---
>
> Key: CASSANDRA-9793
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9793
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Core
>Reporter: Brandon Williams
>Assignee: Stefania
> Fix For: 2.1.x, 2.0.x
>
>
> When a node has clock skew and cross node timeouts are enabled, there's no 
> indication that the messages were dropped due to the cross timeout, just that 
> messages were dropped.  This can errantly lead you down a path of 
> troubleshooting a load shedding situation when really you just have clock 
> drift on one node.  This is also not simple to troubleshoot, since you have 
> to determine that this node will answer requests, but other nodes won't 
> answer requests from it.  If the problem goes away on a reboot (and the 
> machine does one-shot time sync, not continuous) it becomes even harder to 
> detect because you're left with a weird piece of evidence such as "it's fine 
> after a reboot, but comes back in about X days every time."
> It would help tremendously if there were a log message indicating how many 
> messages (don't need them broken down by type) were eagerly dropped due to 
> the cross node timeout.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


cassandra git commit: Upgrade CommitLogUpgradeTest post-8099

2015-07-23 Thread slebresne
Repository: cassandra
Updated Branches:
  refs/heads/trunk 2d55c1e84 -> 15265689f


Upgrade CommitLogUpgradeTest post-8099


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/15265689
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/15265689
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/15265689

Branch: refs/heads/trunk
Commit: 15265689f529f05fd3f4065b3f098864de686edb
Parents: 2d55c1e
Author: Sylvain Lebresne 
Authored: Thu Jul 23 17:55:37 2015 +0200
Committer: Sylvain Lebresne 
Committed: Thu Jul 23 17:56:21 2015 +0200

--
 .../cassandra/db/commitlog/CommitLogUpgradeTest.java   | 13 ++++++++++++-
 1 file changed, 12 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/15265689/test/unit/org/apache/cassandra/db/commitlog/CommitLogUpgradeTest.java
--
diff --git 
a/test/unit/org/apache/cassandra/db/commitlog/CommitLogUpgradeTest.java 
b/test/unit/org/apache/cassandra/db/commitlog/CommitLogUpgradeTest.java
index d443b8c..ce4e605 100644
--- a/test/unit/org/apache/cassandra/db/commitlog/CommitLogUpgradeTest.java
+++ b/test/unit/org/apache/cassandra/db/commitlog/CommitLogUpgradeTest.java
@@ -41,8 +41,11 @@ import org.apache.cassandra.config.Schema;
 import org.apache.cassandra.db.Mutation;
 import org.apache.cassandra.db.rows.Cell;
 import org.apache.cassandra.db.rows.Row;
+import org.apache.cassandra.db.marshal.AsciiType;
 import org.apache.cassandra.db.marshal.UTF8Type;
+import org.apache.cassandra.db.marshal.BytesType;
 import org.apache.cassandra.db.partitions.PartitionUpdate;
+import org.apache.cassandra.schema.KeyspaceParams;
 
 public class CommitLogUpgradeTest
 {
@@ -70,8 +73,16 @@ public class CommitLogUpgradeTest
 @BeforeClass
static public void initialize() throws FileNotFoundException, IOException, InterruptedException
 {
+CFMetaData metadata = CFMetaData.Builder.createDense(KEYSPACE, TABLE, false, false)
+.addPartitionKey("key", AsciiType.instance)
+.addClusteringColumn("col", BytesType.instance)
+.addRegularColumn("val", BytesType.instance)
+.build()
+.compressionParameters(SchemaLoader.getCompressionParameters());
 SchemaLoader.loadSchema();
-SchemaLoader.schemaDefinition("");
+SchemaLoader.createKeyspace(KEYSPACE,
+KeyspaceParams.simple(1),
+metadata);
 }
 
public void testRestore(String location) throws IOException, InterruptedException



[jira] [Commented] (CASSANDRA-9704) On-wire backward compatibility for 8099

2015-07-23 Thread Aleksey Yeschenko (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9704?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14639037#comment-14639037
 ] 

Aleksey Yeschenko commented on CASSANDRA-9704:
--

[~thobbs] Can you unrequire {{only_pk_test}} now that CASSANDRA-9874 has fixed 
the issue?

> On-wire backward compatibility for 8099
> ---
>
> Key: CASSANDRA-9704
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9704
> Project: Cassandra
>  Issue Type: Sub-task
>Reporter: Sylvain Lebresne
>Assignee: Tyler Hobbs
> Fix For: 3.0 beta 1
>
> Attachments: 9704-2.1.txt
>
>
> The currently committed patch for CASSANDRA-8099 has left backward 
> compatibility on the wire as a TODO. This ticket is to track the actual doing 
> (of which I know [~thobbs] has already done a good chunk).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-9874) Compact value columns aren't being migrated properly in 3.0

2015-07-23 Thread Aleksey Yeschenko (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9874?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14639034#comment-14639034
 ] 

Aleksey Yeschenko commented on CASSANDRA-9874:
--

Committed to trunk as {{2d55c1e8465015fd18cc71a1228489aaf5c6eea8}}. Cassci 
[testall|http://cassci.datastax.com/view/Dev/view/iamaleksey/job/iamaleksey-9874-testall/]
 and 
[dtest|http://cassci.datastax.com/view/Dev/view/iamaleksey/job/iamaleksey-9874-dtest/].

> Compact value columns aren't being migrated properly in 3.0
> ---
>
> Key: CASSANDRA-9874
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9874
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Aleksey Yeschenko
>Assignee: Aleksey Yeschenko
> Fix For: 3.0 beta 1
>
>
> To quote 
> [Tyler|https://issues.apache.org/jira/browse/CASSANDRA-6717?focusedCommentId=14626965&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14626965]:
> 2.1 and 3.0 currently have different behavior around default compact value 
> columns. When you create a table like this:
> {code}
> CREATE TABLE foo (
> k int,
> c int,
> PRIMARY KEY (k, c)
> ) WITH COMPACT STORAGE;
> {code}
> 2.1 will add a {{compact_value}} column to {{system.schema_columns}} with an 
> empty {{column_name}} and a {{BytesType}} validator.
> In 3.0, we instead add a {{regular}} column with the default compact value 
> name ({{value}}) and an {{EmptyType}} validator.
> The logic in 3.0 depends on having an {{EmptyType}} column (see 
> {{CompactTables.hasEmptyCompactValue()}}) but current trunk doesn't migrate 
> the column. {{LegacySchemaMigrator.addDefinitionForUpgrade()}} almost does 
> what we want, but doesn't add the {{EmptyType}} column because it sees the 
> existing {{compact_value}} column.
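
To illustrate the decision the migrator has to make (plain maps standing in for the {{system.schema_columns}} rows; the real code works on an {{UntypedResultSet}}, so names here are only illustrative):

```java
import java.util.List;
import java.util.Map;

// Sketch of the migration check described above: a 2.1 pk-only COMPACT
// STORAGE table stores its value column with type 'compact_value' and an
// empty column_name, and that row must force an upgrade so the EmptyType
// regular column gets added.
class CompactValueCheck
{
    static boolean isEmptyCompactValueColumn(Map<String, String> row)
    {
        return "compact_value".equals(row.get("type")) && row.get("column_name").isEmpty();
    }

    static boolean needsUpgrade(List<Map<String, String>> rows)
    {
        for (Map<String, String> row : rows)
        {
            if (isEmptyCompactValueColumn(row))
                return true;   // force the EmptyType column to be added
            if ("regular".equals(row.get("type")))
                return false;  // a proper regular column already exists
        }
        return true;           // no regular column at all: upgrade needed
    }
}
```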



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


cassandra git commit: Fix migration of pk-only compact storage tables

2015-07-23 Thread aleksey
Repository: cassandra
Updated Branches:
  refs/heads/trunk d9dfbddbe -> 2d55c1e84


Fix migration of pk-only compact storage tables

patch by Aleksey Yeschenko; reviewed by Sylvain Lebresne for
CASSANDRA-9874


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/2d55c1e8
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/2d55c1e8
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/2d55c1e8

Branch: refs/heads/trunk
Commit: 2d55c1e8465015fd18cc71a1228489aaf5c6eea8
Parents: d9dfbdd
Author: Aleksey Yeschenko 
Authored: Thu Jul 23 16:06:02 2015 +0300
Committer: Aleksey Yeschenko 
Committed: Thu Jul 23 18:44:47 2015 +0300

--
 CHANGES.txt |   2 +-
 .../cassandra/schema/LegacySchemaMigrator.java  | 131 ---
 .../schema/LegacySchemaMigratorTest.java|  15 ++-
 3 files changed, 95 insertions(+), 53 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/2d55c1e8/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index d405a4d..7f061c1 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -4,7 +4,7 @@
  * Change CREATE/ALTER TABLE syntax for compression (CASSANDRA-8384)
  * Cleanup crc and adler code for java 8 (CASSANDRA-9650)
  * Storage engine refactor (CASSANDRA-8099, 9743, 9746, 9759, 9781, 9808, 
9825, 9848,
-   9705, 9859, 9867)
+   9705, 9859, 9867, 9874)
  * Update Guava to 18.0 (CASSANDRA-9653)
  * Bloom filter false positive ratio is not honoured (CASSANDRA-8413)
  * New option for cassandra-stress to leave a ratio of columns null 
(CASSANDRA-9522)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/2d55c1e8/src/java/org/apache/cassandra/schema/LegacySchemaMigrator.java
--
diff --git a/src/java/org/apache/cassandra/schema/LegacySchemaMigrator.java 
b/src/java/org/apache/cassandra/schema/LegacySchemaMigrator.java
index f554ffb..7326fa9 100644
--- a/src/java/org/apache/cassandra/schema/LegacySchemaMigrator.java
+++ b/src/java/org/apache/cassandra/schema/LegacySchemaMigrator.java
@@ -296,7 +296,16 @@ public final class LegacySchemaMigrator
 
needsUpgrade);
 
 if (needsUpgrade)
-addDefinitionForUpgrade(columnDefs, ksName, cfName, isStaticCompactTable, isSuper, rawComparator, subComparator, defaultValidator);
+{
+addDefinitionForUpgrade(columnDefs,
+ksName,
+cfName,
+isStaticCompactTable,
+isSuper,
+rawComparator,
+subComparator,
+defaultValidator);
+}
 
CFMetaData cfm = CFMetaData.create(ksName, cfName, cfId, isDense, isCompound, isSuper, isCounter, columnDefs);
 
@@ -355,7 +364,33 @@ public final class LegacySchemaMigrator
 return !hasKind(defs, ColumnDefinition.Kind.STATIC);
 
// For dense compact tables, we need to upgrade if we don't have a compact value definition
-return !hasKind(defs, ColumnDefinition.Kind.REGULAR);
+return !hasRegularColumns(defs);
+}
+
+private static boolean hasRegularColumns(UntypedResultSet columnRows)
+{
+for (UntypedResultSet.Row row : columnRows)
+{
+/*
+ * We need to special case and ignore the empty compact column (pre-3.0, COMPACT STORAGE, primary-key only tables),
+ * since deserializeKind() will otherwise just return a REGULAR.
+ * We want the proper EmptyType regular column to be added by addDefinitionForUpgrade(), so we need
+ * checkNeedsUpgrade() to return true in this case.
+ * See CASSANDRA-9874.
+ */
+if (isEmptyCompactValueColumn(row))
+return false;
+
+if (deserializeKind(row.getString("type")) == ColumnDefinition.Kind.REGULAR)
+return true;
+}
+
+return false;
+}
+
+private static boolean isEmptyCompactValueColumn(UntypedResultSet.Row row)
+{
+return "compact_value".equals(row.getString("type")) && row.getString("column_name").isEmpty();
 }
 
 private static void addDefinitionForUpgrade(List defs,
@@ -389,10 +424,9 @@ public final class LegacySchemaMigrator
private static boolean hasKind(UntypedResultSet defs, ColumnDefinition.Kind kind)
 {
 for (UntypedResultSet.Row row : defs)
-{
 if (deserializeKind(row.getString("type")) == kind)
 return true;
-

[jira] [Comment Edited] (CASSANDRA-9873) Windows dtest: ignore_failure_policy_test fails

2015-07-23 Thread Joshua McKenzie (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9873?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14638993#comment-14638993
 ] 

Joshua McKenzie edited comment on CASSANDRA-9873 at 7/23/15 3:38 PM:
-

Slightly more information about the failure:
{noformat}
ERROR [COMMIT-LOG-ALLOCATOR] 2015-07-22 17:01:51,424 CommitLog.java:467 - 
Failed managing commit log segments
org.apache.cassandra.io.FSWriteError: java.nio.file.AccessDeniedException: 
c:\temp\dtest-fzrrz1\test\node1\commitlogs\CommitLog-5-1437598876044.log
at 
org.apache.cassandra.io.util.FileUtils.deleteWithConfirm(FileUtils.java:132) 
~[main/:na]
at 
org.apache.cassandra.io.util.FileUtils.deleteWithConfirm(FileUtils.java:149) 
~[main/:na]
at 
org.apache.cassandra.db.commitlog.CommitLogSegment.discard(CommitLogSegment.java:314)
 ~[main/:na]
at 
org.apache.cassandra.db.commitlog.CommitLogSegmentManager$2.run(CommitLogSegmentManager.java:375)
 ~[main/:na]
at 
org.apache.cassandra.db.commitlog.CommitLogSegmentManager$1.runMayThrow(CommitLogSegmentManager.java:156)
 ~[main/:na]
at 
org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28) 
[main/:na]
at java.lang.Thread.run(Thread.java:745) [na:1.8.0_45]
Caused by: java.nio.file.AccessDeniedException: 
c:\temp\dtest-fzrrz1\test\node1\commitlogs\CommitLog-5-1437598876044.log
at 
sun.nio.fs.WindowsException.translateToIOException(WindowsException.java:83) 
~[na:1.8.0_45]
at 
sun.nio.fs.WindowsException.rethrowAsIOException(WindowsException.java:97) 
~[na:1.8.0_45]
at 
sun.nio.fs.WindowsException.rethrowAsIOException(WindowsException.java:102) 
~[na:1.8.0_45]
at 
sun.nio.fs.WindowsFileSystemProvider.implDelete(WindowsFileSystemProvider.java:269)
 ~[na:1.8.0_45]
at 
sun.nio.fs.AbstractFileSystemProvider.delete(AbstractFileSystemProvider.java:103)
 ~[na:1.8.0_45]
at java.nio.file.Files.delete(Files.java:1126) ~[na:1.8.0_45]
at 
org.apache.cassandra.io.util.FileUtils.deleteWithConfirm(FileUtils.java:126) 
~[main/:na]
... 6 common frames omitted
{noformat}

Not surprised to see the CLSM showing up w/access violations again after all 
the headaches it gave us w/RecoveryManager tests.

(edit; false alarm, purpose of the test is to cause errors in CL. Ignore my 
being salty about CL handling and carry on.)


was (Author: joshuamckenzie):
Slightly more information about the failure:
{noformat}
ERROR [COMMIT-LOG-ALLOCATOR] 2015-07-22 17:01:51,424 CommitLog.java:467 - 
Failed managing commit log segments
org.apache.cassandra.io.FSWriteError: java.nio.file.AccessDeniedException: 
c:\temp\dtest-fzrrz1\test\node1\commitlogs\CommitLog-5-1437598876044.log
at 
org.apache.cassandra.io.util.FileUtils.deleteWithConfirm(FileUtils.java:132) 
~[main/:na]
at 
org.apache.cassandra.io.util.FileUtils.deleteWithConfirm(FileUtils.java:149) 
~[main/:na]
at 
org.apache.cassandra.db.commitlog.CommitLogSegment.discard(CommitLogSegment.java:314)
 ~[main/:na]
at 
org.apache.cassandra.db.commitlog.CommitLogSegmentManager$2.run(CommitLogSegmentManager.java:375)
 ~[main/:na]
at 
org.apache.cassandra.db.commitlog.CommitLogSegmentManager$1.runMayThrow(CommitLogSegmentManager.java:156)
 ~[main/:na]
at 
org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28) 
[main/:na]
at java.lang.Thread.run(Thread.java:745) [na:1.8.0_45]
Caused by: java.nio.file.AccessDeniedException: 
c:\temp\dtest-fzrrz1\test\node1\commitlogs\CommitLog-5-1437598876044.log
at 
sun.nio.fs.WindowsException.translateToIOException(WindowsException.java:83) 
~[na:1.8.0_45]
at 
sun.nio.fs.WindowsException.rethrowAsIOException(WindowsException.java:97) 
~[na:1.8.0_45]
at 
sun.nio.fs.WindowsException.rethrowAsIOException(WindowsException.java:102) 
~[na:1.8.0_45]
at 
sun.nio.fs.WindowsFileSystemProvider.implDelete(WindowsFileSystemProvider.java:269)
 ~[na:1.8.0_45]
at 
sun.nio.fs.AbstractFileSystemProvider.delete(AbstractFileSystemProvider.java:103)
 ~[na:1.8.0_45]
at java.nio.file.Files.delete(Files.java:1126) ~[na:1.8.0_45]
at 
org.apache.cassandra.io.util.FileUtils.deleteWithConfirm(FileUtils.java:126) 
~[main/:na]
... 6 common frames omitted
{noformat}

Not surprised to see the CLSM showing up w/access violations again after all 
the headaches it gave us w/RecoveryManager tests.

> Windows dtest: ignore_failure_policy_test fails
> ---
>
> Key: CASSANDRA-9873
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9873
> Project: Cassandra
>  Issue Type: Sub-task
>Reporter: Joshua McKenzie
>Assignee: Joshua McKenzie
>Priority: Minor
>  Labels: Windows
>

[jira] [Commented] (CASSANDRA-9873) Windows dtest: ignore_failure_policy_test fails

2015-07-23 Thread Joshua McKenzie (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9873?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14638993#comment-14638993
 ] 

Joshua McKenzie commented on CASSANDRA-9873:


Slightly more information about the failure:
{noformat}
ERROR [COMMIT-LOG-ALLOCATOR] 2015-07-22 17:01:51,424 CommitLog.java:467 - 
Failed managing commit log segments
org.apache.cassandra.io.FSWriteError: java.nio.file.AccessDeniedException: 
c:\temp\dtest-fzrrz1\test\node1\commitlogs\CommitLog-5-1437598876044.log
at 
org.apache.cassandra.io.util.FileUtils.deleteWithConfirm(FileUtils.java:132) 
~[main/:na]
at 
org.apache.cassandra.io.util.FileUtils.deleteWithConfirm(FileUtils.java:149) 
~[main/:na]
at 
org.apache.cassandra.db.commitlog.CommitLogSegment.discard(CommitLogSegment.java:314)
 ~[main/:na]
at 
org.apache.cassandra.db.commitlog.CommitLogSegmentManager$2.run(CommitLogSegmentManager.java:375)
 ~[main/:na]
at 
org.apache.cassandra.db.commitlog.CommitLogSegmentManager$1.runMayThrow(CommitLogSegmentManager.java:156)
 ~[main/:na]
at 
org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28) 
[main/:na]
at java.lang.Thread.run(Thread.java:745) [na:1.8.0_45]
Caused by: java.nio.file.AccessDeniedException: 
c:\temp\dtest-fzrrz1\test\node1\commitlogs\CommitLog-5-1437598876044.log
at 
sun.nio.fs.WindowsException.translateToIOException(WindowsException.java:83) 
~[na:1.8.0_45]
at 
sun.nio.fs.WindowsException.rethrowAsIOException(WindowsException.java:97) 
~[na:1.8.0_45]
at 
sun.nio.fs.WindowsException.rethrowAsIOException(WindowsException.java:102) 
~[na:1.8.0_45]
at 
sun.nio.fs.WindowsFileSystemProvider.implDelete(WindowsFileSystemProvider.java:269)
 ~[na:1.8.0_45]
at 
sun.nio.fs.AbstractFileSystemProvider.delete(AbstractFileSystemProvider.java:103)
 ~[na:1.8.0_45]
at java.nio.file.Files.delete(Files.java:1126) ~[na:1.8.0_45]
at 
org.apache.cassandra.io.util.FileUtils.deleteWithConfirm(FileUtils.java:126) 
~[main/:na]
... 6 common frames omitted
{noformat}

Not surprised to see the CLSM showing up w/access violations again after all 
the headaches it gave us w/RecoveryManager tests.

> Windows dtest: ignore_failure_policy_test fails
> ---
>
> Key: CASSANDRA-9873
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9873
> Project: Cassandra
>  Issue Type: Sub-task
>Reporter: Joshua McKenzie
>Assignee: Joshua McKenzie
>Priority: Minor
>  Labels: Windows
> Fix For: 2.2.x
>
>
> {noformat}
> ==
> FAIL: ignore_failure_policy_test (commitlog_test.TestCommitLog)
> --
> Traceback (most recent call last):
>   File "C:\src\cassandra-dtest\commitlog_test.py", line 251, in 
> ignore_failure_policy_test
> """)
> AssertionError: (,  'cassandra.WriteTimeout'>) not raised
>  >> begin captured logging << 
> dtest: DEBUG: cluster ccm directory: c:\temp\dtest-fzrrz1
> - >> end captured logging << -
> --
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-9696) nodetool stopdaemon exception

2015-07-23 Thread Philip Thompson (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9696?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14639006#comment-14639006
 ] 

Philip Thompson commented on CASSANDRA-9696:


Looks good to me, +1. Sorry for the delay.

> nodetool stopdaemon exception
> -
>
> Key: CASSANDRA-9696
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9696
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
> Environment: Windows-7-32 bit, 3.2GB RAM, Java 1.7.0_55
>Reporter: Andreas Schnitzerling
>Assignee: Joshua McKenzie
>Priority: Minor
> Fix For: 2.2.x
>
> Attachments: 9696_v1.txt, cassandra.bat, cassandra.yaml
>
>
> nodetool stopdaemon produces an exception. I use the default-locations 
> (uncommented in config) for that dirs like explained in cassandra.yaml. 
> Anyway - C* is stopping.
> {code:title=win-console}
> %CASSANDRA_HOME%\bin>nodetool stopdaemon
> Starting NodeTool
> error: commitlog_directory is missing and -Dcassandra.storagedir is not set
> -- StackTrace --
> org.apache.cassandra.exceptions.ConfigurationException: commitlog_directory 
> is missing and -Dcassandra.storagedir is not set
> at 
> org.apache.cassandra.config.DatabaseDescriptor.applyConfig(DatabaseDescriptor.java:494)
> at 
> org.apache.cassandra.config.DatabaseDescriptor.<clinit>(DatabaseDescriptor.java:111)
> at 
> org.apache.cassandra.utils.JVMStabilityInspector.inspectThrowable(JVMStabilityInspector.java:54)
> at 
> org.apache.cassandra.tools.nodetool.StopDaemon.execute(StopDaemon.java:37)
> at 
> org.apache.cassandra.tools.NodeTool$NodeToolCmd.run(NodeTool.java:239)
> at org.apache.cassandra.tools.NodeTool.main(NodeTool.java:153)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-9402) Implement proper sandboxing for UDFs

2015-07-23 Thread Robert Stupp (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9402?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14639005#comment-14639005
 ] 

Robert Stupp commented on CASSANDRA-9402:
-

bq. access to Keyspace, Schema, and related classes?

no access to these classes (and thus no way to bypass authz perms)
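
To illustrate the idea only (this is NOT the actual 9402 implementation, just a hedged sketch of class-access filtering with made-up names and an assumed allow-list): a sandbox can refuse to resolve any class outside a small set of safe packages, which is what keeps {{Keyspace}}, {{Schema}}, and friends out of reach.

```java
import java.util.Set;

// Hypothetical illustration of package-based class filtering in a UDF
// sandbox; the allowed set here is an assumption for the example.
class UdfClassFilter
{
    private static final Set<String> ALLOWED_PACKAGES =
        Set.of("java.lang", "java.util", "java.math");

    // true only if the class lives in an explicitly allowed package
    static boolean isAllowed(String className)
    {
        int dot = className.lastIndexOf('.');
        String pkg = dot < 0 ? "" : className.substring(0, dot);
        return ALLOWED_PACKAGES.contains(pkg);
    }
}
```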

> Implement proper sandboxing for UDFs
> 
>
> Key: CASSANDRA-9402
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9402
> Project: Cassandra
>  Issue Type: Task
>Reporter: T Jake Luciani
>Assignee: Robert Stupp
>Priority: Critical
>  Labels: docs-impacting, security
> Fix For: 3.0 beta 1
>
> Attachments: 9402-warning.txt
>
>
> We want to avoid a security exploit for our users.  We need to make sure we 
> ship 2.2 UDFs with good defaults so someone exposing it to the internet 
> accidentally doesn't open themselves up to having arbitrary code run.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[5/6] cassandra git commit: Merge branch 'cassandra-2.1' into cassandra-2.2

2015-07-23 Thread yukim
Merge branch 'cassandra-2.1' into cassandra-2.2


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/51ff4997
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/51ff4997
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/51ff4997

Branch: refs/heads/cassandra-2.2
Commit: 51ff499754b2668ee2425ca72733506f875b2c64
Parents: 1657639 1c80b04
Author: Yuki Morishita 
Authored: Thu Jul 23 10:25:48 2015 -0500
Committer: Yuki Morishita 
Committed: Thu Jul 23 10:25:48 2015 -0500

--
 CHANGES.txt | 1 +
 src/java/org/apache/cassandra/db/BatchlogManager.java   | 4 +++-
 .../cassandra/service/BatchlogEndpointFilterTest.java   | 9 +++++----
 3 files changed, 9 insertions(+), 5 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/51ff4997/CHANGES.txt
--
diff --cc CHANGES.txt
index 0fb392a,69a7b31..b8593c0
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -1,13 -1,4 +1,14 @@@
 -2.1.9
 +2.2.1
 + * UDF / UDA execution time in trace (CASSANDRA-9723)
 + * Remove repair snapshot leftover on startup (CASSANDRA-7357)
++ * Use random nodes for batch log when only 2 racks (CASSANDRA-8735)
 +
 +2.2.0
 + * Fix cqlsh copy methods and other windows specific issues (CASSANDRA-9795) 
 + * Don't wrap byte arrays in SequentialWriter (CASSANDRA-9797)
 + * sum() and avg() functions missing for smallint and tinyint types 
(CASSANDRA-9671)
 + * Revert CASSANDRA-9542 (allow native functions in UDA) (CASSANDRA-9771)
 +Merged from 2.1:
   * Fix MarshalException when upgrading superColumn family (CASSANDRA-9582)
   * Fix broken logging for "empty" flushes in Memtable (CASSANDRA-9837)
   * Handle corrupt files on startup (CASSANDRA-9686)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/51ff4997/src/java/org/apache/cassandra/db/BatchlogManager.java
--



[3/6] cassandra git commit: Use random nodes for batch log when only 2 racks

2015-07-23 Thread yukim
Use random nodes for batch log when only 2 racks

patch by Mihai Suteu and yukim; reviewed by Jeremiah Jordan for CASSANDRA-8735


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/1c80b04b
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/1c80b04b
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/1c80b04b

Branch: refs/heads/trunk
Commit: 1c80b04be1d47d03bbde888cea960f5ff8a95d58
Parents: c2142e6
Author: Yuki Morishita 
Authored: Thu Jul 23 10:24:23 2015 -0500
Committer: Yuki Morishita 
Committed: Thu Jul 23 10:24:23 2015 -0500

--
 CHANGES.txt | 1 +
 src/java/org/apache/cassandra/db/BatchlogManager.java   | 4 +++-
 .../cassandra/service/BatchlogEndpointFilterTest.java   | 9 +++++----
 3 files changed, 9 insertions(+), 5 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/1c80b04b/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 5d142cc..69a7b31 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -6,6 +6,7 @@
  * (cqlsh) Allow the SSL protocol version to be specified through the
config file or environment variables (CASSANDRA-9544)
  * Remove repair snapshot leftover on startup (CASSANDRA-7357)
+ * Use random nodes for batch log when only 2 racks (CASSANDRA-8735)
 Merged from 2.0:
  * checkForEndpointCollision fails for legitimate collisions (CASSANDRA-9765)
  * Complete CASSANDRA-8448 fix (CASSANDRA-9519)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/1c80b04b/src/java/org/apache/cassandra/db/BatchlogManager.java
--
diff --git a/src/java/org/apache/cassandra/db/BatchlogManager.java 
b/src/java/org/apache/cassandra/db/BatchlogManager.java
index 20f134d..4588156 100644
--- a/src/java/org/apache/cassandra/db/BatchlogManager.java
+++ b/src/java/org/apache/cassandra/db/BatchlogManager.java
@@ -495,7 +495,9 @@ public class BatchlogManager implements BatchlogManagerMBean
 if (validated.keySet().size() == 1)
 {
 // we have only 1 `other` rack
-Collection<InetAddress> otherRack = Iterables.getOnlyElement(validated.asMap().values());
+// pick up to two random nodes from there
+List<InetAddress> otherRack = validated.get(validated.keySet().iterator().next());
+Collections.shuffle(otherRack);
 return Lists.newArrayList(Iterables.limit(otherRack, 2));
 }
 

http://git-wip-us.apache.org/repos/asf/cassandra/blob/1c80b04b/test/unit/org/apache/cassandra/service/BatchlogEndpointFilterTest.java
--
diff --git 
a/test/unit/org/apache/cassandra/service/BatchlogEndpointFilterTest.java 
b/test/unit/org/apache/cassandra/service/BatchlogEndpointFilterTest.java
index 72e8df5..3a19b75 100644
--- a/test/unit/org/apache/cassandra/service/BatchlogEndpointFilterTest.java
+++ b/test/unit/org/apache/cassandra/service/BatchlogEndpointFilterTest.java
@@ -20,6 +20,8 @@ package org.apache.cassandra.service;
 import java.net.InetAddress;
 import java.net.UnknownHostException;
 import java.util.Collection;
+import java.util.HashSet;
+
 import org.junit.Test;
 import org.junit.matchers.JUnitMatchers;
 
@@ -78,7 +80,7 @@ public class BatchlogEndpointFilterTest
 }
 
 @Test
-public void shouldSelectTwoFirstHostsFromSingleOtherRack() throws UnknownHostException
+public void shouldSelectTwoRandomHostsFromSingleOtherRack() throws UnknownHostException
 {
Multimap<String, InetAddress> endpoints = ImmutableMultimap.<String, InetAddress> builder()
 .put(LOCAL, InetAddress.getByName("0"))
@@ -88,9 +90,8 @@ public class BatchlogEndpointFilterTest
 .put("1", InetAddress.getByName("111"))
 .build();
Collection<InetAddress> result = new TestEndpointFilter(LOCAL, endpoints).filter();
-assertThat(result.size(), is(2));
-assertThat(result, JUnitMatchers.hasItem(InetAddress.getByName("1")));
-assertThat(result, JUnitMatchers.hasItem(InetAddress.getByName("11")));
+// result should contain random two distinct values
+assertThat(new HashSet<>(result).size(), is(2));
 }
 
private static class TestEndpointFilter extends BatchlogManager.EndpointFilter



[1/6] cassandra git commit: Use random nodes for batch log when only 2 racks

2015-07-23 Thread yukim
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-2.1 c2142e654 -> 1c80b04be
  refs/heads/cassandra-2.2 165763903 -> 51ff49975
  refs/heads/trunk c91266878 -> d9dfbddbe


Use random nodes for batch log when only 2 racks

patch by Mihai Suteu and yukim; reviewed by Jeremiah Jordan for CASSANDRA-8735


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/1c80b04b
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/1c80b04b
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/1c80b04b

Branch: refs/heads/cassandra-2.1
Commit: 1c80b04be1d47d03bbde888cea960f5ff8a95d58
Parents: c2142e6
Author: Yuki Morishita 
Authored: Thu Jul 23 10:24:23 2015 -0500
Committer: Yuki Morishita 
Committed: Thu Jul 23 10:24:23 2015 -0500

--
 CHANGES.txt | 1 +
 src/java/org/apache/cassandra/db/BatchlogManager.java   | 4 +++-
 .../cassandra/service/BatchlogEndpointFilterTest.java   | 9 +
 3 files changed, 9 insertions(+), 5 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/1c80b04b/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 5d142cc..69a7b31 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -6,6 +6,7 @@
 * (cqlsh) Allow the SSL protocol version to be specified through the config file or environment variables (CASSANDRA-9544)
  * Remove repair snapshot leftover on startup (CASSANDRA-7357)
+ * Use random nodes for batch log when only 2 racks (CASSANDRA-8735)
 Merged from 2.0:
  * checkForEndpointCollision fails for legitimate collisions (CASSANDRA-9765)
  * Complete CASSANDRA-8448 fix (CASSANDRA-9519)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/1c80b04b/src/java/org/apache/cassandra/db/BatchlogManager.java
--
diff --git a/src/java/org/apache/cassandra/db/BatchlogManager.java b/src/java/org/apache/cassandra/db/BatchlogManager.java
index 20f134d..4588156 100644
--- a/src/java/org/apache/cassandra/db/BatchlogManager.java
+++ b/src/java/org/apache/cassandra/db/BatchlogManager.java
@@ -495,7 +495,9 @@ public class BatchlogManager implements BatchlogManagerMBean
 if (validated.keySet().size() == 1)
 {
 // we have only 1 `other` rack
-Collection<InetAddress> otherRack = Iterables.getOnlyElement(validated.asMap().values());
+// pick up to two random nodes from there
+List<InetAddress> otherRack = validated.get(validated.keySet().iterator().next());
+Collections.shuffle(otherRack);
 return Lists.newArrayList(Iterables.limit(otherRack, 2));
 }
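The fix's shuffle-then-limit idea can be sketched outside Cassandra as a standalone snippet (class and method names here are hypothetical, with plain strings standing in for `InetAddress`):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collections;
import java.util.List;

public class BatchlogPickSketch {
    // Mirrors the patch: shuffle a mutable copy of the only "other" rack,
    // then keep at most two nodes, so the two batchlog replicas are random
    // rather than always the first two nodes returned for that rack.
    static List<String> pickUpToTwo(List<String> otherRack) {
        List<String> copy = new ArrayList<>(otherRack);
        Collections.shuffle(copy);
        return copy.subList(0, Math.min(2, copy.size()));
    }

    public static void main(String[] args) {
        List<String> picked = pickUpToTwo(Arrays.asList("1", "11", "111"));
        System.out.println(picked.size());                        // prints 2
        System.out.println(picked.get(0).equals(picked.get(1)));  // prints false
    }
}
```

Note the `Iterables.limit(otherRack, 2)` in the real patch serves the same purpose as `subList(0, min(2, size))` here: it copes with racks that have fewer than two candidates.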
 

http://git-wip-us.apache.org/repos/asf/cassandra/blob/1c80b04b/test/unit/org/apache/cassandra/service/BatchlogEndpointFilterTest.java
--
diff --git a/test/unit/org/apache/cassandra/service/BatchlogEndpointFilterTest.java b/test/unit/org/apache/cassandra/service/BatchlogEndpointFilterTest.java
index 72e8df5..3a19b75 100644
--- a/test/unit/org/apache/cassandra/service/BatchlogEndpointFilterTest.java
+++ b/test/unit/org/apache/cassandra/service/BatchlogEndpointFilterTest.java
@@ -20,6 +20,8 @@ package org.apache.cassandra.service;
 import java.net.InetAddress;
 import java.net.UnknownHostException;
 import java.util.Collection;
+import java.util.HashSet;
+
 import org.junit.Test;
 import org.junit.matchers.JUnitMatchers;
 
@@ -78,7 +80,7 @@ public class BatchlogEndpointFilterTest
 }

 @Test
-public void shouldSelectTwoFirstHostsFromSingleOtherRack() throws UnknownHostException
+public void shouldSelectTwoRandomHostsFromSingleOtherRack() throws UnknownHostException
 {
 Multimap<String, InetAddress> endpoints = ImmutableMultimap.<String, InetAddress> builder()
 .put(LOCAL, InetAddress.getByName("0"))
@@ -88,9 +90,8 @@ public class BatchlogEndpointFilterTest
 .put("1", InetAddress.getByName("111"))
 .build();
 Collection<InetAddress> result = new TestEndpointFilter(LOCAL, endpoints).filter();
-assertThat(result.size(), is(2));
-assertThat(result, JUnitMatchers.hasItem(InetAddress.getByName("1")));
-assertThat(result, JUnitMatchers.hasItem(InetAddress.getByName("11")));
+// result should contain random two distinct values
+assertThat(new HashSet<>(result).size(), is(2));
 }

 private static class TestEndpointFilter extends BatchlogManager.EndpointFilter



[6/6] cassandra git commit: Merge branch 'cassandra-2.2' into trunk

2015-07-23 Thread yukim
Merge branch 'cassandra-2.2' into trunk


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/d9dfbddb
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/d9dfbddb
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/d9dfbddb

Branch: refs/heads/trunk
Commit: d9dfbddbeefa6f5591b8609d4cd796e77d010a0c
Parents: c912668 51ff499
Author: Yuki Morishita 
Authored: Thu Jul 23 10:27:43 2015 -0500
Committer: Yuki Morishita 
Committed: Thu Jul 23 10:27:43 2015 -0500

--
 CHANGES.txt  | 1 +
 src/java/org/apache/cassandra/db/BatchlogManager.java| 4 +++-
 .../apache/cassandra/service/BatchlogEndpointFilterTest.java | 8 
 3 files changed, 8 insertions(+), 5 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/d9dfbddb/CHANGES.txt
--
diff --cc CHANGES.txt
index 67566fb,b8593c0..d405a4d
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -24,8 -1,8 +24,9 @@@
  2.2.1
   * UDF / UDA execution time in trace (CASSANDRA-9723)
   * Remove repair snapshot leftover on startup (CASSANDRA-7357)
+  * Use random nodes for batch log when only 2 racks (CASSANDRA-8735)
  
 +
  2.2.0
   * Fix cqlsh copy methods and other windows specific issues (CASSANDRA-9795) 
   * Don't wrap byte arrays in SequentialWriter (CASSANDRA-9797)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/d9dfbddb/src/java/org/apache/cassandra/db/BatchlogManager.java
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/d9dfbddb/test/unit/org/apache/cassandra/service/BatchlogEndpointFilterTest.java
--
diff --cc test/unit/org/apache/cassandra/service/BatchlogEndpointFilterTest.java
index 186cc41,3a19b75..be33e3f
--- a/test/unit/org/apache/cassandra/service/BatchlogEndpointFilterTest.java
+++ b/test/unit/org/apache/cassandra/service/BatchlogEndpointFilterTest.java
@@@ -20,11 -20,13 +20,12 @@@ package org.apache.cassandra.service
  import java.net.InetAddress;
  import java.net.UnknownHostException;
  import java.util.Collection;
+ import java.util.HashSet;
  
 -import org.junit.Test;
 -import org.junit.matchers.JUnitMatchers;
 -
  import com.google.common.collect.ImmutableMultimap;
  import com.google.common.collect.Multimap;
 +import org.junit.Test;
 +import org.junit.matchers.JUnitMatchers;
  
  import org.apache.cassandra.db.BatchlogManager;
  



[2/6] cassandra git commit: Use random nodes for batch log when only 2 racks

2015-07-23 Thread yukim
Use random nodes for batch log when only 2 racks

patch by Mihai Suteu and yukim; reviewed by Jeremiah Jordan for CASSANDRA-8735


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/1c80b04b
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/1c80b04b
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/1c80b04b

Branch: refs/heads/cassandra-2.2
Commit: 1c80b04be1d47d03bbde888cea960f5ff8a95d58
Parents: c2142e6
Author: Yuki Morishita 
Authored: Thu Jul 23 10:24:23 2015 -0500
Committer: Yuki Morishita 
Committed: Thu Jul 23 10:24:23 2015 -0500

--
 CHANGES.txt | 1 +
 src/java/org/apache/cassandra/db/BatchlogManager.java   | 4 +++-
 .../cassandra/service/BatchlogEndpointFilterTest.java   | 9 +
 3 files changed, 9 insertions(+), 5 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/1c80b04b/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 5d142cc..69a7b31 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -6,6 +6,7 @@
 * (cqlsh) Allow the SSL protocol version to be specified through the config file or environment variables (CASSANDRA-9544)
  * Remove repair snapshot leftover on startup (CASSANDRA-7357)
+ * Use random nodes for batch log when only 2 racks (CASSANDRA-8735)
 Merged from 2.0:
  * checkForEndpointCollision fails for legitimate collisions (CASSANDRA-9765)
  * Complete CASSANDRA-8448 fix (CASSANDRA-9519)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/1c80b04b/src/java/org/apache/cassandra/db/BatchlogManager.java
--
diff --git a/src/java/org/apache/cassandra/db/BatchlogManager.java b/src/java/org/apache/cassandra/db/BatchlogManager.java
index 20f134d..4588156 100644
--- a/src/java/org/apache/cassandra/db/BatchlogManager.java
+++ b/src/java/org/apache/cassandra/db/BatchlogManager.java
@@ -495,7 +495,9 @@ public class BatchlogManager implements BatchlogManagerMBean
 if (validated.keySet().size() == 1)
 {
 // we have only 1 `other` rack
-Collection<InetAddress> otherRack = Iterables.getOnlyElement(validated.asMap().values());
+// pick up to two random nodes from there
+List<InetAddress> otherRack = validated.get(validated.keySet().iterator().next());
+Collections.shuffle(otherRack);
 return Lists.newArrayList(Iterables.limit(otherRack, 2));
 }
 

http://git-wip-us.apache.org/repos/asf/cassandra/blob/1c80b04b/test/unit/org/apache/cassandra/service/BatchlogEndpointFilterTest.java
--
diff --git a/test/unit/org/apache/cassandra/service/BatchlogEndpointFilterTest.java b/test/unit/org/apache/cassandra/service/BatchlogEndpointFilterTest.java
index 72e8df5..3a19b75 100644
--- a/test/unit/org/apache/cassandra/service/BatchlogEndpointFilterTest.java
+++ b/test/unit/org/apache/cassandra/service/BatchlogEndpointFilterTest.java
@@ -20,6 +20,8 @@ package org.apache.cassandra.service;
 import java.net.InetAddress;
 import java.net.UnknownHostException;
 import java.util.Collection;
+import java.util.HashSet;
+
 import org.junit.Test;
 import org.junit.matchers.JUnitMatchers;
 
@@ -78,7 +80,7 @@ public class BatchlogEndpointFilterTest
 }

 @Test
-public void shouldSelectTwoFirstHostsFromSingleOtherRack() throws UnknownHostException
+public void shouldSelectTwoRandomHostsFromSingleOtherRack() throws UnknownHostException
 {
 Multimap<String, InetAddress> endpoints = ImmutableMultimap.<String, InetAddress> builder()
 .put(LOCAL, InetAddress.getByName("0"))
@@ -88,9 +90,8 @@ public class BatchlogEndpointFilterTest
 .put("1", InetAddress.getByName("111"))
 .build();
 Collection<InetAddress> result = new TestEndpointFilter(LOCAL, endpoints).filter();
-assertThat(result.size(), is(2));
-assertThat(result, JUnitMatchers.hasItem(InetAddress.getByName("1")));
-assertThat(result, JUnitMatchers.hasItem(InetAddress.getByName("11")));
+// result should contain random two distinct values
+assertThat(new HashSet<>(result).size(), is(2));
 }

 private static class TestEndpointFilter extends BatchlogManager.EndpointFilter



[4/6] cassandra git commit: Merge branch 'cassandra-2.1' into cassandra-2.2

2015-07-23 Thread yukim
Merge branch 'cassandra-2.1' into cassandra-2.2


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/51ff4997
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/51ff4997
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/51ff4997

Branch: refs/heads/trunk
Commit: 51ff499754b2668ee2425ca72733506f875b2c64
Parents: 1657639 1c80b04
Author: Yuki Morishita 
Authored: Thu Jul 23 10:25:48 2015 -0500
Committer: Yuki Morishita 
Committed: Thu Jul 23 10:25:48 2015 -0500

--
 CHANGES.txt | 1 +
 src/java/org/apache/cassandra/db/BatchlogManager.java   | 4 +++-
 .../cassandra/service/BatchlogEndpointFilterTest.java   | 9 +
 3 files changed, 9 insertions(+), 5 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/51ff4997/CHANGES.txt
--
diff --cc CHANGES.txt
index 0fb392a,69a7b31..b8593c0
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -1,13 -1,4 +1,14 @@@
 -2.1.9
 +2.2.1
 + * UDF / UDA execution time in trace (CASSANDRA-9723)
 + * Remove repair snapshot leftover on startup (CASSANDRA-7357)
++ * Use random nodes for batch log when only 2 racks (CASSANDRA-8735)
 +
 +2.2.0
 + * Fix cqlsh copy methods and other windows specific issues (CASSANDRA-9795) 
 + * Don't wrap byte arrays in SequentialWriter (CASSANDRA-9797)
 + * sum() and avg() functions missing for smallint and tinyint types (CASSANDRA-9671)
 + * Revert CASSANDRA-9542 (allow native functions in UDA) (CASSANDRA-9771)
 +Merged from 2.1:
   * Fix MarshalException when upgrading superColumn family (CASSANDRA-9582)
   * Fix broken logging for "empty" flushes in Memtable (CASSANDRA-9837)
   * Handle corrupt files on startup (CASSANDRA-9686)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/51ff4997/src/java/org/apache/cassandra/db/BatchlogManager.java
--



[jira] [Commented] (CASSANDRA-9402) Implement proper sandboxing for UDFs

2015-07-23 Thread Robert Stupp (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9402?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14638983#comment-14638983
 ] 

Robert Stupp commented on CASSANDRA-9402:
-

Already pushed a commit to solve the long GC issue.

Re-checked what happens if java.net and java.nio are not available to scripted UDFs - and it now works, probably due to changes in the Java Driver.
Anyway - pushed another commit that removes these packages.

And +10 to get more opinions about this patch (for Java + JavaScript).

> Implement proper sandboxing for UDFs
> 
>
> Key: CASSANDRA-9402
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9402
> Project: Cassandra
>  Issue Type: Task
>Reporter: T Jake Luciani
>Assignee: Robert Stupp
>Priority: Critical
>  Labels: docs-impacting, security
> Fix For: 3.0 beta 1
>
> Attachments: 9402-warning.txt
>
>
> We want to avoid a security exploit for our users.  We need to make sure we 
> ship 2.2 UDFs with good defaults so someone exposing it to the internet 
> accidentally doesn't open themselves up to having arbitrary code run.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[2/2] cassandra git commit: Skip testClearEphemeralSnapshots() on Windows

2015-07-23 Thread jmckenzie
Skip testClearEphemeralSnapshots() on Windows

Patch by jmckenzie; reviewed by tjake for CASSANDRA-9869


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/16576390
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/16576390
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/16576390

Branch: refs/heads/cassandra-2.2
Commit: 16576390351abe987c236825523608ce79e6e91a
Parents: 53b64a4
Author: Joshua McKenzie 
Authored: Thu Jul 23 11:12:28 2015 -0400
Committer: Joshua McKenzie 
Committed: Thu Jul 23 11:12:28 2015 -0400

--
 test/unit/org/apache/cassandra/db/ColumnFamilyStoreTest.java | 6 ++
 1 file changed, 6 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/16576390/test/unit/org/apache/cassandra/db/ColumnFamilyStoreTest.java
--
diff --git a/test/unit/org/apache/cassandra/db/ColumnFamilyStoreTest.java b/test/unit/org/apache/cassandra/db/ColumnFamilyStoreTest.java
index b5e62b3..5419ef5 100644
--- a/test/unit/org/apache/cassandra/db/ColumnFamilyStoreTest.java
+++ b/test/unit/org/apache/cassandra/db/ColumnFamilyStoreTest.java
@@ -50,6 +50,7 @@ import org.apache.cassandra.io.sstable.format.SSTableReader;
 import org.apache.cassandra.io.sstable.format.SSTableWriter;
 import org.apache.commons.lang3.ArrayUtils;
 import org.apache.commons.lang3.StringUtils;
+import org.junit.Assume;
 import org.junit.BeforeClass;
 import org.junit.Test;
 import org.junit.runner.RunWith;
@@ -1530,6 +1531,11 @@ public class ColumnFamilyStoreTest
 @Test
 public void testClearEphemeralSnapshots() throws Throwable
 {
+// We don't do snapshot-based repair on Windows so we don't have ephemeral snapshots from repair that need clearing.
+// This test will fail as we'll revert to the WindowsFailedSnapshotTracker and counts will be off, but since we
+// don't do snapshot-based repair on Windows, we just skip this test.
+Assume.assumeTrue(!FBUtilities.isWindows());
+
 ColumnFamilyStore cfs = Keyspace.open(KEYSPACE1).getColumnFamilyStore(CF_INDEX1);

 //cleanup any previous test garbage
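The `Assume.assumeTrue(!FBUtilities.isWindows())` guard makes JUnit report the test as skipped rather than failed on Windows. The platform check itself reduces to the `os.name` system property; a minimal stdlib-only sketch (the `PlatformCheck` class is hypothetical, not Cassandra's actual `FBUtilities`):

```java
public class PlatformCheck {
    // Rough equivalent of what an isWindows() helper checks:
    // the os.name system property set by the JVM.
    static boolean isWindows() {
        return System.getProperty("os.name", "").toLowerCase().contains("windows");
    }

    public static void main(String[] args) {
        // Mirrors the Assume-style guard: bail out early on Windows
        // instead of letting the platform-specific assertions fail.
        if (isWindows()) {
            System.out.println("skipped: no snapshot-based repair on Windows");
            return;
        }
        System.out.println("running snapshot test body");
    }
}
```

In a real JUnit test the early return is replaced by `Assume.assumeTrue(...)`, which throws an `AssumptionViolatedException` that the runner reports as "ignored" rather than "failed".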



[3/3] cassandra git commit: Merge branch 'cassandra-2.2' into trunk

2015-07-23 Thread jmckenzie
Merge branch 'cassandra-2.2' into trunk

Conflicts:
test/unit/org/apache/cassandra/db/ColumnFamilyStoreTest.java


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/c9126687
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/c9126687
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/c9126687

Branch: refs/heads/trunk
Commit: c9126687828402a2ce860f1a47d78fb18b54a419
Parents: f713be4 1657639
Author: Joshua McKenzie 
Authored: Thu Jul 23 11:13:20 2015 -0400
Committer: Joshua McKenzie 
Committed: Thu Jul 23 11:13:20 2015 -0400

--
 conf/cassandra-env.ps1  | 20 
 .../cassandra/db/ColumnFamilyStoreTest.java |  6 ++
 2 files changed, 26 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/c9126687/conf/cassandra-env.ps1
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/c9126687/test/unit/org/apache/cassandra/db/ColumnFamilyStoreTest.java
--
diff --cc test/unit/org/apache/cassandra/db/ColumnFamilyStoreTest.java
index 065479b,5419ef5..9da4876
--- a/test/unit/org/apache/cassandra/db/ColumnFamilyStoreTest.java
+++ b/test/unit/org/apache/cassandra/db/ColumnFamilyStoreTest.java
@@@ -21,9 -21,36 +21,10 @@@ package org.apache.cassandra.db
  import java.io.File;
  import java.io.IOException;
  import java.nio.ByteBuffer;
 -import java.nio.charset.CharacterCodingException;
 -import java.util.Arrays;
 -import java.util.Collection;
 -import java.util.Collections;
 -import java.util.HashMap;
 -import java.util.HashSet;
 -import java.util.Iterator;
 -import java.util.LinkedList;
 -import java.util.List;
 -import java.util.Map;
 -import java.util.Random;
 -import java.util.Set;
 -import java.util.SortedSet;
 -import java.util.TreeSet;
 -import java.util.UUID;
 -import java.util.concurrent.ExecutionException;
 -import java.util.concurrent.Future;
 -import java.util.concurrent.TimeUnit;
 +import java.util.*;
  
 -import com.google.common.base.Function;
 -import com.google.common.collect.Iterables;
 -import com.google.common.collect.Sets;
 -
 -import org.apache.cassandra.db.index.PerRowSecondaryIndexTest;
 -import org.apache.cassandra.io.sstable.*;
 -import org.apache.cassandra.io.sstable.format.SSTableReader;
 -import org.apache.cassandra.io.sstable.format.SSTableWriter;
 -import org.apache.commons.lang3.ArrayUtils;
 -import org.apache.commons.lang3.StringUtils;
 +import org.junit.Before;
+ import org.junit.Assume;
  import org.junit.BeforeClass;
  import org.junit.Test;
  import org.junit.runner.RunWith;
@@@ -294,42 -1035,24 +295,47 @@@ public class ColumnFamilyStoreTes
  
 // and it remains so after flush. (this wasn't failing before, but it's good to check.)
 cfs.forceBlockingFlush();
 -assertRowAndColCount(1, 2, true, cfs.getRangeSlice(Util.range("f", "g"), null, ThriftValidation.asIFilter(sp, cfs.metadata, null), 100));
 +assertRangeCount(cfs, col, val, 4);
  }
  
 -
 -private ColumnFamilyStore insertKey1Key2()
 +@Test
 +public void testClearEphemeralSnapshots() throws Throwable
  {
 -ColumnFamilyStore cfs = Keyspace.open(KEYSPACE2).getColumnFamilyStore(CF_STANDARD1);
 -List<Mutation> rms = new LinkedList<>();
 -Mutation rm;
 -rm = new Mutation(KEYSPACE2, ByteBufferUtil.bytes("key1"));
 -rm.add(CF_STANDARD1, cellname("Column1"), ByteBufferUtil.bytes("asdf"), 0);
 -rms.add(rm);
 -Util.writeColumnFamily(rms);
++// We don't do snapshot-based repair on Windows so we don't have ephemeral snapshots from repair that need clearing.
++// This test will fail as we'll revert to the WindowsFailedSnapshotTracker and counts will be off, but since we
++// don't do snapshot-based repair on Windows, we just skip this test.
++Assume.assumeTrue(!FBUtilities.isWindows());
+ 
 -rm = new Mutation(KEYSPACE2, ByteBufferUtil.bytes("key2"));
 -rm.add(CF_STANDARD1, cellname("Column1"), ByteBufferUtil.bytes("asdf"), 0);
 -rms.add(rm);
 -return Util.writeColumnFamily(rms);
 +ColumnFamilyStore cfs = Keyspace.open(KEYSPACE1).getColumnFamilyStore(CF_INDEX1);
 +
 +//cleanup any previous test garbage
 +cfs.clearSnapshot("");
 +
 +int numRows = 1000;
 +long[] colValues = new long [numRows * 2]; // each row has two columns
 +for (int i = 0; i < colValues.length; i+=2)
 +{
 +colValues[i] = (i % 4 == 0 ? 1L : 2L); // index column
 +colValues[i+1] = 3L; //other column
 +}
 +ScrubTest.fillIndexCF(cfs, false, colValues);
 +
 +cfs.snapshot("nonEph

[2/3] cassandra git commit: Skip testClearEphemeralSnapshots() on Windows

2015-07-23 Thread jmckenzie
Skip testClearEphemeralSnapshots() on Windows

Patch by jmckenzie; reviewed by tjake for CASSANDRA-9869


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/16576390
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/16576390
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/16576390

Branch: refs/heads/trunk
Commit: 16576390351abe987c236825523608ce79e6e91a
Parents: 53b64a4
Author: Joshua McKenzie 
Authored: Thu Jul 23 11:12:28 2015 -0400
Committer: Joshua McKenzie 
Committed: Thu Jul 23 11:12:28 2015 -0400

--
 test/unit/org/apache/cassandra/db/ColumnFamilyStoreTest.java | 6 ++
 1 file changed, 6 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/16576390/test/unit/org/apache/cassandra/db/ColumnFamilyStoreTest.java
--
diff --git a/test/unit/org/apache/cassandra/db/ColumnFamilyStoreTest.java b/test/unit/org/apache/cassandra/db/ColumnFamilyStoreTest.java
index b5e62b3..5419ef5 100644
--- a/test/unit/org/apache/cassandra/db/ColumnFamilyStoreTest.java
+++ b/test/unit/org/apache/cassandra/db/ColumnFamilyStoreTest.java
@@ -50,6 +50,7 @@ import org.apache.cassandra.io.sstable.format.SSTableReader;
 import org.apache.cassandra.io.sstable.format.SSTableWriter;
 import org.apache.commons.lang3.ArrayUtils;
 import org.apache.commons.lang3.StringUtils;
+import org.junit.Assume;
 import org.junit.BeforeClass;
 import org.junit.Test;
 import org.junit.runner.RunWith;
@@ -1530,6 +1531,11 @@ public class ColumnFamilyStoreTest
 @Test
 public void testClearEphemeralSnapshots() throws Throwable
 {
+// We don't do snapshot-based repair on Windows so we don't have ephemeral snapshots from repair that need clearing.
+// This test will fail as we'll revert to the WindowsFailedSnapshotTracker and counts will be off, but since we
+// don't do snapshot-based repair on Windows, we just skip this test.
+Assume.assumeTrue(!FBUtilities.isWindows());
+
 ColumnFamilyStore cfs = Keyspace.open(KEYSPACE1).getColumnFamilyStore(CF_INDEX1);

 //cleanup any previous test garbage



[1/3] cassandra git commit: Warn on non-high perf power profile on Windows

2015-07-23 Thread jmckenzie
Repository: cassandra
Updated Branches:
  refs/heads/trunk f713be4aa -> c91266878


Warn on non-high perf power profile on Windows

Patch by jmckenzie; reviewed by pthompson for CASSANDRA-9648


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/53b64a40
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/53b64a40
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/53b64a40

Branch: refs/heads/trunk
Commit: 53b64a406b02518fc7d88124de705c8cf2d7bd46
Parents: 11ac938
Author: Joshua McKenzie 
Authored: Thu Jul 23 11:11:39 2015 -0400
Committer: Joshua McKenzie 
Committed: Thu Jul 23 11:11:39 2015 -0400

--
 conf/cassandra-env.ps1 | 20 
 1 file changed, 20 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/53b64a40/conf/cassandra-env.ps1
--
diff --git a/conf/cassandra-env.ps1 b/conf/cassandra-env.ps1
index 8dddc2d..8b0b775 100644
--- a/conf/cassandra-env.ps1
+++ b/conf/cassandra-env.ps1
@@ -337,6 +337,26 @@ Function SetCassandraEnvironment
 # Add sigar env - see Cassandra-7838
 $env:JVM_OPTS = "$env:JVM_OPTS -Djava.library.path=$env:CASSANDRA_HOME\lib\sigar-bin"

+# Confirm we're on high performance power plan, warn if not
+# Change to $true to suppress this warning
+$suppressPowerWarning = $false
+if (!$suppressPowerWarning)
+{
+$currentProfile = powercfg /GETACTIVESCHEME
+if (!$currentProfile.Contains("High performance"))
+{
+echo "*-*"
+echo "*-*"
+echo ""
+echo "WARNING! Detected a power profile other than High Performance."
+echo "Performance of this node will suffer."
+echo "Modify conf\cassandra.env.ps1 to suppress this warning."
+echo ""
+echo "*-*"
+echo "*-*"
+}
+}
+
 # add the jamm javaagent
 if (($env:JVM_VENDOR -ne "OpenJDK") -or ($env:JVM_VERSION.CompareTo("1.6.0") -eq 1) -or
 (($env:JVM_VERSION -eq "1.6.0") -and ($env:JVM_PATCH_VERSION.CompareTo("22") -eq 1)))



[jira] [Commented] (CASSANDRA-9402) Implement proper sandboxing for UDFs

2015-07-23 Thread Aleksey Yeschenko (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9402?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14638972#comment-14638972
 ] 

Aleksey Yeschenko commented on CASSANDRA-9402:
--

Is it giving the user access to {{Keyspace}}, {{Schema}}, and related classes? 
Is it still possible with this sandbox to bypass our authz permissions this way?






[1/2] cassandra git commit: Warn on non-high perf power profile on Windows

2015-07-23 Thread jmckenzie
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-2.2 11ac93887 -> 165763903


Warn on non-high perf power profile on Windows

Patch by jmckenzie; reviewed by pthompson for CASSANDRA-9648


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/53b64a40
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/53b64a40
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/53b64a40

Branch: refs/heads/cassandra-2.2
Commit: 53b64a406b02518fc7d88124de705c8cf2d7bd46
Parents: 11ac938
Author: Joshua McKenzie 
Authored: Thu Jul 23 11:11:39 2015 -0400
Committer: Joshua McKenzie 
Committed: Thu Jul 23 11:11:39 2015 -0400

--
 conf/cassandra-env.ps1 | 20 
 1 file changed, 20 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/53b64a40/conf/cassandra-env.ps1
--
diff --git a/conf/cassandra-env.ps1 b/conf/cassandra-env.ps1
index 8dddc2d..8b0b775 100644
--- a/conf/cassandra-env.ps1
+++ b/conf/cassandra-env.ps1
@@ -337,6 +337,26 @@ Function SetCassandraEnvironment
 # Add sigar env - see Cassandra-7838
 $env:JVM_OPTS = "$env:JVM_OPTS -Djava.library.path=$env:CASSANDRA_HOME\lib\sigar-bin"

+# Confirm we're on high performance power plan, warn if not
+# Change to $true to suppress this warning
+$suppressPowerWarning = $false
+if (!$suppressPowerWarning)
+{
+$currentProfile = powercfg /GETACTIVESCHEME
+if (!$currentProfile.Contains("High performance"))
+{
+echo "*-*"
+echo "*-*"
+echo ""
+echo "WARNING! Detected a power profile other than High Performance."
+echo "Performance of this node will suffer."
+echo "Modify conf\cassandra.env.ps1 to suppress this warning."
+echo ""
+echo "*-*"
+echo "*-*"
+}
+}
+
 # add the jamm javaagent
 if (($env:JVM_VENDOR -ne "OpenJDK") -or ($env:JVM_VERSION.CompareTo("1.6.0") -eq 1) -or
 (($env:JVM_VERSION -eq "1.6.0") -and ($env:JVM_PATCH_VERSION.CompareTo("22") -eq 1)))



[jira] [Comment Edited] (CASSANDRA-9402) Implement proper sandboxing for UDFs

2015-07-23 Thread T Jake Luciani (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9402?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14638968#comment-14638968
 ] 

T Jake Luciani edited comment on CASSANDRA-9402 at 7/23/15 3:11 PM:


Overall, this is an improvement. We spoke offline and addressed a potential issue with user_function_timeout_policy, since a stop-the-world GC could happen during execution of the UDF.

I'd like to get a professional opinion on this work, since I'm not convinced 
you couldn't, for example, access "/etc/passwd" via Nashorn (since nio is 
whitelisted).


was (Author: tjake):
Overall, This is an improvement.  We spoke offline and addressed a potential 
issue with user_function_timeout_policy.  Since a Stop-the-world GC could 
happen during execution of the UDF.

I'd like to get a professional opinion on this work, since I'm not convinced 
you couldn't, for example, access "/etc/password" via Nashorn (since nio is 
whitelisted).





