[jira] [Created] (CASSANDRA-11005) Split consistent range movement flag

2016-01-12 Thread sankalp kohli (JIRA)
sankalp kohli created CASSANDRA-11005:
-

 Summary: Split consistent range movement flag
 Key: CASSANDRA-11005
 URL: https://issues.apache.org/jira/browse/CASSANDRA-11005
 Project: Cassandra
  Issue Type: Bug
Reporter: sankalp kohli
Priority: Trivial


CASSANDRA-7069 added a flag that disallows multiple simultaneous range movements 
in the ring. We want to be able to turn this off, because moving tokens that are 
far apart in the ring speeds up the moves. The problem is that this flag also 
turns off the strict source check. We want to split it into two flags so that we 
can keep strict source selection without preventing parallel moves.
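A rough sketch of the intended split (the flag names and decision structure here are hypothetical illustrations, not the actual proposal or Cassandra code): today a single setting gates both parallel-move prevention and strict source selection, while two independent settings would let an operator keep strict sources and still move many tokens at once.

```python
def plan_move(consistent_range_movement: bool,
              strict_source_check: bool,
              moves_in_flight: int) -> dict:
    """Decide whether a token move may proceed and how stream sources are picked,
    with the two concerns controlled independently (the proposed split)."""
    if consistent_range_movement and moves_in_flight > 0:
        return {"allowed": False, "reason": "another range movement is in progress"}
    return {"allowed": True,
            "source_selection": "strict" if strict_source_check else "any-replica"}

def plan_move_legacy(consistent_range_movement: bool, moves_in_flight: int) -> dict:
    """Current behavior: one flag drives both decisions, so disabling
    consistent range movement also disables the strict source check."""
    return plan_move(consistent_range_movement, consistent_range_movement,
                     moves_in_flight)
```

With the split, `plan_move(False, True, n)` allows parallel moves yet keeps strict sources; the legacy single flag cannot express that combination.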



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (CASSANDRA-11005) Split consistent range movement flag

2016-01-12 Thread sankalp kohli (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11005?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

sankalp kohli reassigned CASSANDRA-11005:
-

Assignee: sankalp kohli

> Split consistent range movement flag
> 
>
> Key: CASSANDRA-11005
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11005
> Project: Cassandra
>  Issue Type: Bug
>  Components: Configuration
>Reporter: sankalp kohli
>Assignee: sankalp kohli
>Priority: Trivial
> Fix For: 2.1.12, 2.2.x, 3.x
>
>
> CASSANDRA-7069 added a flag which does not allow multiple range movements in 
> the ring. We want to turn this off as we want to move tokens far apart in the 
> ring to speed up the moves. The problem is that this flag also turns off 
> strict source check. We want to split this flag so that we can keep strict 
> source but not stop parallel moves.





[jira] [Comment Edited] (CASSANDRA-10425) Autoselect GC settings depending on system memory

2016-01-12 Thread Jonathan Shook (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10425?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15095515#comment-15095515
 ] 

Jonathan Shook edited comment on CASSANDRA-10425 at 1/13/16 3:05 AM:
-

I think we should try to come up with a way of handling settings which one 
would choose differently for a new install. Settings like this will live 
forever without a better approach. I agree entirely with the principle of least 
surprise. However, according to this default, there will be new systems 
deployed in 2020 with CMS. There has to be a better way.

If we were able to have an install mode which would honor previous settings or 
take new defaults that are more desirable for current code and systems, perhaps 
we can avoid the CMS in 2020 problem. Installers may require a user to specify 
a mode in order to make this truly unsurprising. If I were installing a new 
cluster in 2020, I would be quite surprised to find it running CMS.

Also, the point of having the settings be size-specific is to avoid surprising 
performance deficiencies. This is the kind of change that I would expect to go 
with a major version upgrade. 

So, to follow the principle of least surprise, perhaps we need to consider 
making this possible for those who expect to be able to use more than 32GB with 
G1 to address GC bandwidth and pause issues for heavy workloads, as we've come 
to expect through field experience. Otherwise, we'll be manually rewiring this 
from now on for all but historic pizza-boxen.



was (Author: jshook):
I think we should try to come up with a way of handling settings which one 
would choose differently for a new install. Settings like this will live 
forever without a better approach. I agree entirely with the principle of least 
surprise. However, according to this default, there will be new systems 
deployed in 2020 with CMS. There has to be a better way.

If we were able to have an install mode which would honor previous settings or 
take new defaults that are more desirable for current code and systems, perhaps 
we can avoid the CMS in 2020 problem. Installers may require a user to specify 
a mode in order to make this truly unsurprising. If I were installing a new 
cluster in 2020, I would be quite surprised to find it running CMS.

Also, the point of having the settings be size-specific is to avoid surprising 
performance deficiencies. This is the kind of change that I would expect to go 
with a major version. 

So, to follow the principle of least surprise, perhaps we need to consider 
making this possible for those who expect to be able to use more than 32GB with 
G1 to address GC bandwidth and pause issues for heavy workloads, as we've come 
to expect through field experience. Otherwise, we'll be manually rewiring this 
from now on for all but historic pizza-boxen.


> Autoselect GC settings depending on system memory
> -
>
> Key: CASSANDRA-10425
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10425
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Configuration
>Reporter: Jonathan Shook
>
> 1) Make GC modular within cassandra-env
> 2) For systems with 32GB or less of ram, use the classic CMS with the 
> established default settings.
> 3) For systems with 48GB or more of ram, use 1/2 or up to 32GB of heap with 
> G1, whichever is lower.
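The selection rule in the description can be sketched as follows. Note the ticket says nothing about the 32-48 GB gap; falling back to CMS there is this sketch's assumption, not part of the proposal, and the function name is illustrative rather than anything in cassandra-env.

```python
def pick_gc_settings(ram_gb: float) -> dict:
    """Choose collector and heap size from system RAM per the proposal:
    CMS with established defaults at or below 32 GB, G1 at or above 48 GB
    with heap = min(RAM/2, 32 GB). The 32-48 GB gap defaults to CMS here
    (an assumption)."""
    if ram_gb >= 48:
        heap_gb = min(ram_gb / 2, 32)  # half of RAM, capped at 32 GB
        return {"collector": "G1", "heap_gb": heap_gb}
    return {"collector": "CMS", "heap_gb": None}  # keep established CMS defaults
```

For example, a 128 GB box would get G1 with a 32 GB heap, while a 48 GB box would get G1 with a 24 GB heap.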





[jira] [Updated] (CASSANDRA-11005) Split consistent range movement flag

2016-01-12 Thread sankalp kohli (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11005?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

sankalp kohli updated CASSANDRA-11005:
--
Attachment: CASSANDRA_11005_2.2.diff

> Split consistent range movement flag
> 
>
> Key: CASSANDRA-11005
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11005
> Project: Cassandra
>  Issue Type: Bug
>  Components: Configuration
>Reporter: sankalp kohli
>Assignee: sankalp kohli
>Priority: Trivial
> Fix For: 2.1.12, 2.2.x, 3.x
>
> Attachments: CASSANDRA_11005_2.1.diff, CASSANDRA_11005_2.2.diff
>
>
> CASSANDRA-7069 added a flag which does not allow multiple range movements in 
> the ring. We want to turn this off as we want to move tokens far apart in the 
> ring to speed up the moves. The problem is that this flag also turns off 
> strict source check. We want to split this flag so that we can keep strict 
> source but not stop parallel moves.





[jira] [Commented] (CASSANDRA-11005) Split consistent range movement flag

2016-01-12 Thread sankalp kohli (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11005?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15095496#comment-15095496
 ] 

sankalp kohli commented on CASSANDRA-11005:
---

cc [~brandon.williams] Can you please review this?

> Split consistent range movement flag
> 
>
> Key: CASSANDRA-11005
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11005
> Project: Cassandra
>  Issue Type: Bug
>  Components: Configuration
>Reporter: sankalp kohli
>Assignee: sankalp kohli
>Priority: Trivial
> Fix For: 2.1.12, 2.2.x, 3.x
>
> Attachments: CASSANDRA_11005_2.1.diff
>
>
> CASSANDRA-7069 added a flag which does not allow multiple range movements in 
> the ring. We want to turn this off as we want to move tokens far apart in the 
> ring to speed up the moves. The problem is that this flag also turns off 
> strict source check. We want to split this flag so that we can keep strict 
> source but not stop parallel moves.





[jira] [Updated] (CASSANDRA-11005) Split consistent range movement flag

2016-01-12 Thread sankalp kohli (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11005?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

sankalp kohli updated CASSANDRA-11005:
--
Fix Version/s: 2.1.12
   3.x
   2.2.x
  Component/s: Configuration

> Split consistent range movement flag
> 
>
> Key: CASSANDRA-11005
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11005
> Project: Cassandra
>  Issue Type: Bug
>  Components: Configuration
>Reporter: sankalp kohli
>Assignee: sankalp kohli
>Priority: Trivial
> Fix For: 2.1.12, 2.2.x, 3.x
>
>
> CASSANDRA-7069 added a flag which does not allow multiple range movements in 
> the ring. We want to turn this off as we want to move tokens far apart in the 
> ring to speed up the moves. The problem is that this flag also turns off 
> strict source check. We want to split this flag so that we can keep strict 
> source but not stop parallel moves.





[jira] [Updated] (CASSANDRA-11005) Split consistent range movement flag

2016-01-12 Thread sankalp kohli (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11005?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

sankalp kohli updated CASSANDRA-11005:
--
Attachment: CASSANDRA_11005_2.1.diff

> Split consistent range movement flag
> 
>
> Key: CASSANDRA-11005
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11005
> Project: Cassandra
>  Issue Type: Bug
>  Components: Configuration
>Reporter: sankalp kohli
>Assignee: sankalp kohli
>Priority: Trivial
> Fix For: 2.1.12, 2.2.x, 3.x
>
> Attachments: CASSANDRA_11005_2.1.diff
>
>
> CASSANDRA-7069 added a flag which does not allow multiple range movements in 
> the ring. We want to turn this off as we want to move tokens far apart in the 
> ring to speed up the moves. The problem is that this flag also turns off 
> strict source check. We want to split this flag so that we can keep strict 
> source but not stop parallel moves.





[jira] [Created] (CASSANDRA-11006) Allow upgrades and installs to take modern defaults

2016-01-12 Thread Jonathan Shook (JIRA)
Jonathan Shook created CASSANDRA-11006:
--

 Summary: Allow upgrades and installs to take modern defaults
 Key: CASSANDRA-11006
 URL: https://issues.apache.org/jira/browse/CASSANDRA-11006
 Project: Cassandra
  Issue Type: Improvement
  Components: Configuration, Lifecycle, Packaging, Tools
Reporter: Jonathan Shook


See CASSANDRA-10425 for background.

We simply need to provide a way to install or upgrade C* on a system with 
modern settings. Keeping the previous defaults has been the standard rule of 
thumb to avoid surprises. That is a reasonable approach, but we haven't yet 
provided an alternative for full upgrades with new defaults, nor for more 
appropriate installs of new systems. The number of previous defaults that may 
need to be modified for a sane deployment has become a form of technical 
baggage. Often, users have to micro-manage basic settings to more reasonable 
values for every single deployment, upgrade or not. This is surprising.

For newer settings that would be more appropriate, we could force the user to 
make a choice. If you are installing a new cluster or node, you may want the 
modern defaults. If you are upgrading an existing node, you may still want the 
modern defaults. If you are upgrading an existing node and have some very 
carefully selected tunings for your hardware, then you may want to keep them. 
Even then, they may be worse than the modern defaults, given version changes.





[jira] [Commented] (CASSANDRA-8595) Emit timeouts per endpoint

2016-01-12 Thread sankalp kohli (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8595?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15095668#comment-15095668
 ] 

sankalp kohli commented on CASSANDRA-8595:
--

Looks like this is already being emitted, so closing.

> Emit timeouts per endpoint
> --
>
> Key: CASSANDRA-8595
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8595
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: sankalp kohli
>Priority: Minor
>
> We currently emit number of timeouts experienced by a co-ordinator while 
> doing reads and writes. This does not tell us which replica or endpoint is 
> responsible for the timeouts. 
> We can keep a map of endpoint to number of timeouts which could be emitted 
> via JMX.





[jira] [Resolved] (CASSANDRA-8595) Emit timeouts per endpoint

2016-01-12 Thread sankalp kohli (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8595?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

sankalp kohli resolved CASSANDRA-8595.
--
Resolution: Not A Problem

> Emit timeouts per endpoint
> --
>
> Key: CASSANDRA-8595
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8595
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: sankalp kohli
>Priority: Minor
>
> We currently emit number of timeouts experienced by a co-ordinator while 
> doing reads and writes. This does not tell us which replica or endpoint is 
> responsible for the timeouts. 
> We can keep a map of endpoint to number of timeouts which could be emitted 
> via JMX.





[jira] [Comment Edited] (CASSANDRA-11006) Allow upgrades and installs to take modern defaults

2016-01-12 Thread Jonathan Shook (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11006?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15095532#comment-15095532
 ] 

Jonathan Shook edited comment on CASSANDRA-11006 at 1/13/16 3:34 AM:
-

The difference in the original ticket CASSANDRA-10425 was not that we were 
opting into auto-tuning. The difference was simply that we could take into 
consideration more contemporary hardware that is being deployed, including the 
trending size of RAM. I would generally expect that auto-tuning settings like 
this could be adapted for major versions, and added to the release notes like 
other potentially surprising, yet generally useful changes. If this is not the 
case for GC settings, then how do we allow for the change for CMS to G1 as 
average RAM sizing continues to change?



was (Author: jshook):
The difference in the original ticket CASSANDRA-10425 was not that we were 
opting into auto-tuning. The difference was simply that we could take account 
of more contemporary hardware that is being deployed presently, including the 
trending size of RAM. I would generally expect that auto-tuning settings like 
this could be adapted for major versions, and added to the release notes like 
other potentially surprising, yet generally useful changes. If this is not the 
case for GC settings, then how do we allow for the change for CMS to G1 as 
average RAM sizing continues to change?


> Allow upgrades and installs to take modern defaults
> ---
>
> Key: CASSANDRA-11006
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11006
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Configuration, Lifecycle, Packaging, Tools
>Reporter: Jonathan Shook
>
> See CASSANDRA-10425 for background.
> We simply need to provide a way to install or upgrade C* on a system with 
> modern settings. Keeping the previous defaults has been the standard rule of 
> thumb to avoid surprises. This is a reasonable approach, but we haven't yet 
> provided an alternative for full upgrades with new default nor for more 
> appropriate installs of new systems. The number of previous defaults which 
> may need to be modified for a saner deployment has become a form of technical 
> baggage. Often, users will have to micro-manage basic settings to more 
> reasonable defaults for every single deployment, upgrade or not. This is 
> surprising.
> For newer settings that would be more appropriate, we could force the user to 
> make a choice. If you are installing a new cluster or node, you may want the 
> modern defaults. If you are upgrading an existing node, you may still want 
> the modern defaults. If you are upgrading an existing node and have some very 
> carefully selected tunings for your hardware, then you may want to keep them. 
> Even then, they may be worse than the modern defaults, given version changes.





[jira] [Commented] (CASSANDRA-4650) RangeStreamer should be smarter when picking endpoints for streaming in case of N >=3 in each DC.

2016-01-12 Thread sankalp kohli (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-4650?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15095658#comment-15095658
 ] 

sankalp kohli commented on CASSANDRA-4650:
--

I don't think CASSANDRA-2434 solves this. If you are replacing a dead machine 
with replace args, it could stream from 2 nodes in some cases where the ideal 
would be 3. 

> RangeStreamer should be smarter when picking endpoints for streaming in case 
> of N >=3 in each DC.  
> ---
>
> Key: CASSANDRA-4650
> URL: https://issues.apache.org/jira/browse/CASSANDRA-4650
> Project: Cassandra
>  Issue Type: Improvement
>Affects Versions: 1.1.5
>Reporter: sankalp kohli
>Assignee: sankalp kohli
>Priority: Minor
>  Labels: streaming
> Attachments: photo-1.JPG
>
>   Original Estimate: 24h
>  Remaining Estimate: 24h
>
> getRangeFetchMap method in RangeStreamer should pick unique nodes to stream 
> data from when number of replicas in each DC is three or more. 
> When N>=3 in a DC, there are two options for streaming a range. Consider an 
> example of 4 nodes in one datacenter and replication factor of 3. 
> If a node goes down, it needs to recover 3 ranges of data. With current code, 
> two nodes could get selected as it orders the node by proximity. 
> We ideally will want to select 3 nodes for streaming the data. We can do this 
> by selecting unique nodes for each range.  
> Advantages:
> This will increase the performance of bootstrapping a node and will also put 
> less pressure on nodes serving the data. 
> Note: This does not apply if N < 3 in each DC, since then data is streamed 
> from only 2 nodes. 
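The unique-source idea can be sketched as a greedy pass over ranges: prefer a replica not yet chosen for another range, falling back to the closest candidate when all have already been used. Function and input names are illustrative, not the actual RangeStreamer API; candidate lists are assumed to already be ordered by proximity, as the description says the current code orders them.

```python
def pick_sources(range_candidates: dict) -> dict:
    """Map each range to a stream source, spreading load across replicas.

    range_candidates: range id -> list of candidate endpoints, closest first.
    """
    chosen, used = {}, set()
    for rng, candidates in range_candidates.items():
        # Prefer an endpoint not already serving another range.
        fresh = [c for c in candidates if c not in used]
        pick = fresh[0] if fresh else candidates[0]
        chosen[rng] = pick
        used.add(pick)
    return chosen
```

In the ticket's example (4 nodes, RF=3, one node recovering 3 ranges), this picks 3 distinct sources instead of concentrating the streaming on the 2 closest nodes.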





[jira] [Commented] (CASSANDRA-10425) Autoselect GC settings depending on system memory

2016-01-12 Thread Jonathan Shook (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10425?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15095515#comment-15095515
 ] 

Jonathan Shook commented on CASSANDRA-10425:


I think we should try to come up with a way of handling settings which one 
would choose differently for a new install. Settings like this will live 
forever without a better approach. I agree entirely with the principle of least 
surprise. However, according to this default, there will be new systems 
deployed in 2020 with CMS. There has to be a better way.

If we were able to have an install mode which would honor previous settings or 
take new defaults that are more desirable for current code and systems, perhaps 
we can avoid the CMS in 2020 problem. Installers may require a user to specify 
a mode in order to make this truly unsurprising. If I were installing a new 
cluster in 2020, I would be quite surprised to find it running CMS.

Also, the point of having the settings be size-specific is to avoid surprising 
performance deficiencies. This is the kind of change that I would expect to go 
with a major version. 

So, to follow the principle of least surprise, perhaps we need to consider 
making this possible for those who expect to be able to use more than 32GB with 
G1 to address GC bandwidth and pause issues for heavy workloads, as we've come 
to expect through field experience. Otherwise, we'll be manually rewiring this 
from now on for all but historic pizza-boxen.


> Autoselect GC settings depending on system memory
> -
>
> Key: CASSANDRA-10425
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10425
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Configuration
>Reporter: Jonathan Shook
>
> 1) Make GC modular within cassandra-env
> 2) For systems with 32GB or less of ram, use the classic CMS with the 
> established default settings.
> 3) For systems with 48GB or more of ram, use 1/2 or up to 32GB of heap with 
> G1, whichever is lower.





[jira] [Commented] (CASSANDRA-11006) Allow upgrades and installs to take modern defaults

2016-01-12 Thread Jonathan Shook (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11006?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15095532#comment-15095532
 ] 

Jonathan Shook commented on CASSANDRA-11006:


The difference in the original ticket CASSANDRA-10425 was not that we were 
opting into auto-tuning. The difference was simply that we could take account 
of more contemporary hardware that is being deployed presently, including the 
trending size of RAM. I would generally expect that auto-tuning settings like 
this could be adapted for major versions, and added to the release notes like 
other potentially surprising, yet generally useful changes. If this is not the 
case for GC settings, then how do we allow for the change for CMS to G1 as 
average RAM sizing continues to change?


> Allow upgrades and installs to take modern defaults
> ---
>
> Key: CASSANDRA-11006
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11006
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Configuration, Lifecycle, Packaging, Tools
>Reporter: Jonathan Shook
>
> See CASSANDRA-10425 for background.
> We simply need to provide a way to install or upgrade C* on a system with 
> modern settings. Keeping the previous defaults has been the standard rule of 
> thumb to avoid surprises. This is a reasonable approach, but we haven't yet 
> provided an alternative for full upgrades with new default nor for more 
> appropriate installs of new systems. The number of previous defaults which 
> may need to be modified for a saner deployment has become a form of technical 
> baggage. Often, users will have to micro-manage basic settings to more 
> reasonable defaults for every single deployment, upgrade or not. This is 
> surprising.
> For newer settings that would be more appropriate, we could force the user to 
> make a choice. If you are installing a new cluster or node, you may want the 
> modern defaults. If you are upgrading an existing node, you may still want 
> the modern defaults. If you are upgrading an existing node and have some very 
> carefully selected tunings for your hardware, then you may want to keep them. 
> Even then, they may be worse than the modern defaults, given version changes.





[jira] [Commented] (CASSANDRA-10425) Autoselect GC settings depending on system memory

2016-01-12 Thread Jonathan Shook (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10425?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15095534#comment-15095534
 ] 

Jonathan Shook commented on CASSANDRA-10425:


CASSANDRA-11006 was created to discuss possible ways of handling this.


> Autoselect GC settings depending on system memory
> -
>
> Key: CASSANDRA-10425
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10425
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Configuration
>Reporter: Jonathan Shook
>
> 1) Make GC modular within cassandra-env
> 2) For systems with 32GB or less of ram, use the classic CMS with the 
> established default settings.
> 3) For systems with 48GB or more of ram, use 1/2 or up to 32GB of heap with 
> G1, whichever is lower.





[jira] [Commented] (CASSANDRA-10928) SSTableExportTest.testExportColumnsWithMetadata randomly fails

2016-01-12 Thread sankalp kohli (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10928?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15095647#comment-15095647
 ] 

sankalp kohli commented on CASSANDRA-10928:
---

cc [~brandon.williams] Can you please review this?

> SSTableExportTest.testExportColumnsWithMetadata randomly fails
> --
>
> Key: CASSANDRA-10928
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10928
> Project: Cassandra
>  Issue Type: Bug
>  Components: Testing
>Reporter: Michael Kjellman
>Assignee: sankalp kohli
> Fix For: 2.1.12
>
> Attachments: CASSANDRA_10928_2.1.diff
>
>
> The SSTableExportTest.testExportColumnsWithMetadata test will randomly fail 
> (bogusly). Currently, the string check used won’t work if the JSON generated 
> happened to order the elements in the array differently.
> {code}
> assertEquals(
> "unexpected serialization format for topLevelDeletion",
> "{\"markedForDeleteAt\":0,\"localDeletionTime\":0}",
> serializedDeletionInfo.toJSONString());
> {code}
> {noformat}
> [junit] Testcase: 
> testExportColumnsWithMetadata(org.apache.cassandra.tools.SSTableExportTest):  
>   FAILED
> [junit] unexpected serialization format for topLevelDeletion 
> expected:<{"[markedForDeleteAt":0,"localDeletionTime]":0}> but 
> was:<{"[localDeletionTime":0,"markedForDeleteAt]":0}>
> [junit] junit.framework.AssertionFailedError: unexpected serialization 
> format for topLevelDeletion 
> expected:<{"[markedForDeleteAt":0,"localDeletionTime]":0}> but 
> was:<{"[localDeletionTime":0,"markedForDeleteAt]":0}>
> [junit]   at 
> org.apache.cassandra.tools.SSTableExportTest.testExportColumnsWithMetadata(SSTableExportTest.java:299)
> [junit]
> {noformat}
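The usual fix for this class of flakiness is to compare parsed JSON values rather than raw strings, so key order no longer matters. A minimal sketch of that approach (not the actual patch):

```python
import json

def assert_same_json(expected: str, actual: str) -> None:
    """Compare two JSON documents by parsed value, not by string,
    so differing key order cannot fail the check."""
    assert json.loads(expected) == json.loads(actual), \
        f"JSON mismatch: {expected!r} != {actual!r}"

# Both orderings of the deletion-info object now compare equal:
assert_same_json('{"markedForDeleteAt":0,"localDeletionTime":0}',
                 '{"localDeletionTime":0,"markedForDeleteAt":0}')
```

The same idea applies in the Java test, e.g. by parsing both sides with the JSON library the test already uses before asserting equality.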





[jira] [Commented] (CASSANDRA-10477) java.lang.AssertionError in StorageProxy.submitHint

2016-01-12 Thread Jacques-Henri Berthemet (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10477?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15093552#comment-15093552
 ] 

Jacques-Henri Berthemet commented on CASSANDRA-10477:
-

Will it be fixed in 2.2.x too?

> java.lang.AssertionError in StorageProxy.submitHint
> ---
>
> Key: CASSANDRA-10477
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10477
> Project: Cassandra
>  Issue Type: Bug
>  Components: Local Write-Read Paths
> Environment: CentOS 6, Oracle JVM 1.8.45
>Reporter: Severin Leonhardt
>Assignee: Ariel Weisberg
> Fix For: 2.1.x, 2.2.x, 3.0.x, 3.x
>
>
> A few days after updating from 2.0.15 to 2.1.9 we have the following log 
> entry on 2 of 5 machines:
> {noformat}
> ERROR [EXPIRING-MAP-REAPER:1] 2015-10-07 17:01:08,041 
> CassandraDaemon.java:223 - Exception in thread 
> Thread[EXPIRING-MAP-REAPER:1,5,main]
> java.lang.AssertionError: /192.168.11.88
> at 
> org.apache.cassandra.service.StorageProxy.submitHint(StorageProxy.java:949) 
> ~[apache-cassandra-2.1.9.jar:2.1.9]
> at 
> org.apache.cassandra.net.MessagingService$5.apply(MessagingService.java:383) 
> ~[apache-cassandra-2.1.9.jar:2.1.9]
> at 
> org.apache.cassandra.net.MessagingService$5.apply(MessagingService.java:363) 
> ~[apache-cassandra-2.1.9.jar:2.1.9]
> at org.apache.cassandra.utils.ExpiringMap$1.run(ExpiringMap.java:98) 
> ~[apache-cassandra-2.1.9.jar:2.1.9]
> at 
> org.apache.cassandra.concurrent.DebuggableScheduledThreadPoolExecutor$UncomplainingRunnable.run(DebuggableScheduledThreadPoolExecutor.java:118)
>  ~[apache-cassandra-2.1.9.jar:2.1.9]
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
> [na:1.8.0_45]
> at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308) 
> [na:1.8.0_45]
> at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
>  [na:1.8.0_45]
> at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
>  [na:1.8.0_45]
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>  [na:1.8.0_45]
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>  [na:1.8.0_45]
> at java.lang.Thread.run(Thread.java:745) [na:1.8.0_45]
> {noformat}
> 192.168.11.88 is the broadcast address of the local machine.
> When this is logged the read request latency of the whole cluster becomes 
> very bad, from 6 ms/op to more than 100 ms/op according to OpsCenter. Clients 
> get a lot of timeouts. We need to restart the affected Cassandra node to get 
> back normal read latencies. It seems write latency is not affected.
> Disabling hinted handoff using {{nodetool disablehandoff}} only prevents the 
> assert from being logged. At some point the read latency becomes bad again. 
> Restarting the node where hinted handoff was disabled results in the read 
> latency being better again.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (CASSANDRA-11001) Hadoop integration is incompatible with Cassandra Driver 3.0.0

2016-01-12 Thread Jacek Lewandowski (JIRA)
Jacek Lewandowski created CASSANDRA-11001:
-

 Summary: Hadoop integration is incompatible with Cassandra Driver 
3.0.0
 Key: CASSANDRA-11001
 URL: https://issues.apache.org/jira/browse/CASSANDRA-11001
 Project: Cassandra
  Issue Type: Bug
Reporter: Jacek Lewandowski
Assignee: Jacek Lewandowski


When using Hadoop input format with SSL and Cassandra Driver 3.0.0-beta1, we 
hit the following exception:

{noformat}
Exception in thread "main" java.lang.NoSuchFieldError: DEFAULT_SSL_CIPHER_SUITES
at 
org.apache.cassandra.hadoop.cql3.CqlConfigHelper.getSSLOptions(CqlConfigHelper.java:548)
at 
org.apache.cassandra.hadoop.cql3.CqlConfigHelper.getCluster(CqlConfigHelper.java:315)
at 
org.apache.cassandra.hadoop.cql3.CqlConfigHelper.getInputCluster(CqlConfigHelper.java:298)
at 
org.apache.cassandra.hadoop.cql3.CqlInputFormat.getSplits(CqlInputFormat.java:131)
{noformat}

Should this be fixed with reflection so that Hadoop input/output formats are 
compatible with both old and new driver?

[~jjordan], [~alexliu68] ?
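A reflective fix would look the field up at runtime and fall back when it is absent. As a language-agnostic sketch of that shape (Python's `getattr` standing in for Java reflection; the fallback suite names are assumed placeholders, not the driver's real defaults):

```python
# Assumed placeholder, not the actual driver default list.
FALLBACK_CIPHER_SUITES = ("TLS_RSA_WITH_AES_128_CBC_SHA",
                          "TLS_RSA_WITH_AES_256_CBC_SHA")

def get_default_cipher_suites(ssl_options_cls) -> tuple:
    """Read DEFAULT_SSL_CIPHER_SUITES dynamically, falling back when the
    field was removed (as in driver 3.0.0). Mirrors, in spirit, what a
    reflective Java fix in CqlConfigHelper would do."""
    return getattr(ssl_options_cls, "DEFAULT_SSL_CIPHER_SUITES",
                   FALLBACK_CIPHER_SUITES)

class OldDriverSSLOptions:       # models a pre-3.0.0 driver class
    DEFAULT_SSL_CIPHER_SUITES = ("TLS_RSA_WITH_AES_128_CBC_SHA",)

class NewDriverSSLOptions:       # models 3.0.0, where the field is gone
    pass
```

The dynamic lookup replaces the compile-time field reference that currently throws `NoSuchFieldError`, so one build of the Hadoop code can run against either driver.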






[jira] [Updated] (CASSANDRA-10909) NPE in ActiveRepairService

2016-01-12 Thread Marcus Eriksson (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10909?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marcus Eriksson updated CASSANDRA-10909:

Fix Version/s: 3.3
   3.0.3
   2.2.5
   2.1.13

> NPE in ActiveRepairService 
> ---
>
> Key: CASSANDRA-10909
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10909
> Project: Cassandra
>  Issue Type: Bug
> Environment: cassandra-3.0.1.777
>Reporter: Eduard Tudenhoefner
>Assignee: Marcus Eriksson
> Fix For: 2.1.13, 2.2.5, 3.0.3, 3.3
>
>
> NPE after one started multiple incremental repairs
> {code}
> INFO  [Thread-62] 2015-12-21 11:40:53,742  RepairRunnable.java:125 - Starting 
> repair command #1, repairing keyspace keyspace1 with repair options 
> (parallelism: parallel, primary range: false, incremental: true, job threads: 
> 1, ColumnFamilies: [], dataCenters: [], hosts: [], # of ranges: 2)
> INFO  [Thread-62] 2015-12-21 11:40:53,813  RepairSession.java:237 - [repair 
> #b13e3740-a7d7-11e5-b568-f565b837eb0d] new session: will sync /10.200.177.32, 
> /10.200.177.33 on range [(10,-9223372036854775808]] for keyspace1.[counter1, 
> standard1]
> INFO  [Repair#1:1] 2015-12-21 11:40:53,853  RepairJob.java:100 - [repair 
> #b13e3740-a7d7-11e5-b568-f565b837eb0d] requesting merkle trees for counter1 
> (to [/10.200.177.33, /10.200.177.32])
> INFO  [Repair#1:1] 2015-12-21 11:40:53,853  RepairJob.java:174 - [repair 
> #b13e3740-a7d7-11e5-b568-f565b837eb0d] Requesting merkle trees for counter1 
> (to [/10.200.177.33, /10.200.177.32])
> INFO  [Thread-62] 2015-12-21 11:40:53,854  RepairSession.java:237 - [repair 
> #b1449fe0-a7d7-11e5-b568-f565b837eb0d] new session: will sync /10.200.177.32, 
> /10.200.177.31 on range [(0,10]] for keyspace1.[counter1, standard1]
> INFO  [AntiEntropyStage:1] 2015-12-21 11:40:53,896  RepairSession.java:181 - 
> [repair #b13e3740-a7d7-11e5-b568-f565b837eb0d] Received merkle tree for 
> counter1 from /10.200.177.32
> INFO  [AntiEntropyStage:1] 2015-12-21 11:40:53,906  RepairSession.java:181 - 
> [repair #b13e3740-a7d7-11e5-b568-f565b837eb0d] Received merkle tree for 
> counter1 from /10.200.177.33
> INFO  [Repair#1:1] 2015-12-21 11:40:53,906  RepairJob.java:100 - [repair 
> #b13e3740-a7d7-11e5-b568-f565b837eb0d] requesting merkle trees for standard1 
> (to [/10.200.177.33, /10.200.177.32])
> INFO  [Repair#1:1] 2015-12-21 11:40:53,906  RepairJob.java:174 - [repair 
> #b13e3740-a7d7-11e5-b568-f565b837eb0d] Requesting merkle trees for standard1 
> (to [/10.200.177.33, /10.200.177.32])
> INFO  [RepairJobTask:2] 2015-12-21 11:40:53,910  SyncTask.java:66 - [repair 
> #b13e3740-a7d7-11e5-b568-f565b837eb0d] Endpoints /10.200.177.33 and 
> /10.200.177.32 are consistent for counter1
> INFO  [RepairJobTask:1] 2015-12-21 11:40:53,910  RepairJob.java:145 - [repair 
> #b13e3740-a7d7-11e5-b568-f565b837eb0d] counter1 is fully synced
> INFO  [AntiEntropyStage:1] 2015-12-21 11:40:54,823  Validator.java:272 - 
> [repair #b17a2ed0-a7d7-11e5-ada8-8304f5629908] Sending completed merkle tree 
> to /10.200.177.33 for keyspace1.counter1
> ERROR [ValidationExecutor:3] 2015-12-21 11:40:55,104  
> CompactionManager.java:1065 - Cannot start multiple repair sessions over the 
> same sstables
> ERROR [ValidationExecutor:3] 2015-12-21 11:40:55,105  Validator.java:259 - 
> Failed creating a merkle tree for [repair 
> #b17a2ed0-a7d7-11e5-ada8-8304f5629908 on keyspace1/standard1, 
> [(10,-9223372036854775808]]], /10.200.177.33 (see log for details)
> ERROR [ValidationExecutor:3] 2015-12-21 11:40:55,110  
> CassandraDaemon.java:195 - Exception in thread 
> Thread[ValidationExecutor:3,1,main]
> java.lang.RuntimeException: Cannot start multiple repair sessions over the 
> same sstables
>   at 
> org.apache.cassandra.db.compaction.CompactionManager.doValidationCompaction(CompactionManager.java:1066)
>  ~[cassandra-all-3.0.1.777.jar:3.0.1.777]
>   at 
> org.apache.cassandra.db.compaction.CompactionManager.access$700(CompactionManager.java:80)
>  ~[cassandra-all-3.0.1.777.jar:3.0.1.777]
>   at 
> org.apache.cassandra.db.compaction.CompactionManager$10.call(CompactionManager.java:679)
>  ~[cassandra-all-3.0.1.777.jar:3.0.1.777]
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266) 
> ~[na:1.8.0_40]
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>  ~[na:1.8.0_40]
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>  [na:1.8.0_40]
>   at java.lang.Thread.run(Thread.java:745) [na:1.8.0_40]
> ERROR [AntiEntropyStage:1] 2015-12-21 11:40:55,174  
> RepairMessageVerbHandler.java:161 - Got error, removing parent repair session
> INFO  [CompactionExecutor:3] 2015-12-21 11:40:55,175  
> CompactionManager.java:489 - Starting 

[jira] [Commented] (CASSANDRA-9778) CQL support for time series aggregation

2016-01-12 Thread Benjamin Lerer (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9778?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15093569#comment-15093569
 ] 

Benjamin Lerer commented on CASSANDRA-9778:
---

While working on CASSANDRA-10707, I started to think about this windowed 
aggregates problem.

I really think that it is a useful piece of functionality that we should have. 
I have used it for years, with MySQL, for analysing data.

Regarding the implementation, even though in general I also prefer to follow 
the SQL syntax, I do not believe it is a good fit for Cassandra.

If we have a table like:
{code}
CREATE TABLE trades
{
symbol text,
date date,
time time,
priceMantissa int,
priceExponent tinyint,
volume int,
PRIMARY KEY ((symbol, date), time)
};
{code}
The trades will be inserted with increasing time values and sorted in the same 
order, which is really the use case targeted by this ticket and by 
CASSANDRA-10707. As we may have to process a large amount of data, we want to 
limit ourselves to the cases where we can build the groups on the fly 
(which is not a requirement in the SQL world).

If we want to get the number of trades per minute with the SQL syntax, we would 
have to write:
{{SELECT hour(time), minute(time), count(*) FROM Trades WHERE symbol = 'AAPL' 
AND date = '2016-01-11' GROUP BY hour(time), minute(time);}}
which is fine. The problem is that if the user inverts the functions by 
mistake, like this:
{{SELECT hour(time), minute(time), count(*) FROM Trades WHERE symbol = 'AAPL' 
AND date = '2016-01-11' GROUP BY minute(time), hour(time);}}
the query will return weird results if it is a normal SELECT and will be pretty 
inefficient within a MV.
The only way to prevent that would be to check the function order and make sure 
that we do not allow skipping functions (e.g. {{GROUP BY hour(time), 
second(time)}}).

In my opinion a function like {{floor(<time>, <duration>)}} will be 
much better, as it does not allow for this type of mistake and is much more 
flexible (you can create 5-minute buckets if you want to):
{{SELECT floor(time, m), count(*) FROM Trades WHERE symbol = 'AAPL' AND date 
= '2016-01-11' GROUP BY floor(time, m);}}
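Since the input is clustered by time, the grouping above can indeed be built on the fly. A small Python sketch (illustrative only, not the proposed CQL implementation; the function names here are hypothetical) of bucketing sorted trade times with a floor-style function and counting per bucket in a single pass:

```python
from datetime import datetime, timedelta

def floor_time(ts: datetime, bucket: timedelta) -> datetime:
    """Round ts down to the start of its bucket (the floor(<time>, <duration>) idea)."""
    epoch = datetime(1970, 1, 1)
    return ts - (ts - epoch) % bucket

def streaming_group_count(sorted_times, bucket):
    """Single-pass GROUP BY over time-sorted input: emit (bucket_start, count)
    as soon as a later bucket begins. Relies on the input being sorted, as the
    clustering order of the trades table guarantees."""
    current, count = None, 0
    for ts in sorted_times:
        b = floor_time(ts, bucket)
        if b != current:
            if current is not None:
                yield current, count
            current, count = b, 0
        count += 1
    if current is not None:
        yield current, count

trades = [datetime(2016, 1, 11, 9, 30, s) for s in (0, 10, 50)]
trades += [datetime(2016, 1, 11, 9, 31, 5), datetime(2016, 1, 11, 9, 36, 0)]
result = list(streaming_group_count(trades, timedelta(minutes=1)))
# three one-minute buckets: 09:30 -> 3 trades, 09:31 -> 1, 09:36 -> 1
```

Because each group is emitted as soon as a later bucket starts, memory use stays constant regardless of how many rows are scanned, which is what makes this feasible over large partitions.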
 

  

> CQL support for time series aggregation
> ---
>
> Key: CASSANDRA-9778
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9778
> Project: Cassandra
>  Issue Type: New Feature
>  Components: CQL
>Reporter: Jonathan Ellis
>Assignee: Benjamin Lerer
> Fix For: 3.x
>
>
> Along with MV (CASSANDRA-6477), time series aggregation or "rollups" are a 
> common design pattern in cassandra applications.  I'd like to add CQL support 
> for this along these lines:
> {code}
> CREATE MATERIALIZED VIEW stocks_by_hour AS
> SELECT exchange, day, day_time(1h) AS hour, symbol, avg(price), sum(volume)
> FROM stocks
> GROUP BY exchange, day, symbol, hour
> PRIMARY KEY  ((exchange, day), hour, symbol);
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[Cassandra Wiki] Update of "CompatibilityGuarantees" by SylvainLebresne

2016-01-12 Thread Apache Wiki
Dear Wiki user,

You have subscribed to a wiki page or wiki category on "Cassandra Wiki" for 
change notification.

The "CompatibilityGuarantees" page has been changed by SylvainLebresne:
https://wiki.apache.org/cassandra/CompatibilityGuarantees

New page:
= Compatibility guarantees =

The following document describes the compatibility guarantees offered during 
upgrades of Apache Cassandra. When a version is mentioned, this document 
assumes “tick-tock” versioning, so in X.Y, X is the major version and Y the 
minor one.

== General Definition ==

When we say that upgrading from version X to version Y is supported, we always 
at least mean that there is a path (documented in the NEWS file if any 
specifics are required) for upgrading all the nodes of a cluster from X to Y in 
a rolling fashion, i.e. without incurring unavailability of the database as a 
whole (that is, without loss of service).

Note however that during major upgrades (3.x to 4.y) ALTER, repair, bootstrap, 
and decommission might be temporarily unavailable until the upgrade completes.  
Starting with 4.y, we plan to remove this limitation.

It is also always strongly discouraged to upgrade to any version without 
testing the upgrade in a staging environment and without having at least a 
snapshot of the sstables around. Skipping these precautions is particularly 
ill-advised for major upgrades.

== Stable vs Experimental ==

Everything is considered either experimental or stable. No guarantee of any 
sort is provided for something experimental, beyond a gentleman's agreement not 
to completely change or remove a feature in a minor release without serious 
reason.

== Minor upgrades ==

Upgrading a node to a later minor version of the same major should be, from a 
user's point of view, virtually indistinguishable from simply restarting the 
node (without upgrading it). This means in particular:
 * No removal or modification of any configuration option, startup option, 
exposed metric, or general behavior of the Cassandra process.
 * No removal of, nor syntactic/semantic change to, CQL, authentication, any 
existing version of the binary protocol, or Thrift. 

Those guarantees should be enforced as strongly as possible. In the real world 
however, despite our efforts to avoid it, unfortunate backward-incompatible 
changes might end up in a release due to:
 * an error: if such a change were to pass our vigilance and testing and make 
it into a release, we'll fix that break as soon as possible (in a “patch” 
release).
 * a bug fix: on rare occasions, fixing a bug might take the form of a breaking 
change. In the hopefully very rare case where preserving the bug is considered 
a lot worse than breaking compatibility, we might make such a change in a minor 
release.
In both cases, we will communicate any such breaking change to the mailing list 
as soon as it is found.

While no features will be removed in a minor upgrade, some features may be 
deprecated in a minor from time to time. See the section on deprecation for 
more details.

New features will be added, however, though they will be limited to feature 
releases (even-numbered ones). Those new features should not be used until the 
full cluster has been upgraded to support them. 

The corollary of this is that, provided you accept being limited to the 
features supported by the lowest version in the cluster, clusters with mixed 
versions _within_ a major are supported.

== Major upgrades ==

Major upgrades are only supported to the very next major version. That is, 
upgrading from any 3.x release to 4.x will be supported, but upgrading from 2.x 
to 4.x is not guaranteed. While efforts will be made to allow upgrading from 
any minor to the next major, there may be restrictions: for instance, upgrading 
to 4.0 may only be supported from 3.4. If such restrictions exist, they will be 
clearly documented in the NEWS file of the new major release.

== Deprecation ==

We may deprecate some features/options over time. Typically, this could be 
because an option no longer does anything useful following some internal 
change, or because it has been superseded by another, better option. 
Deprecation means that use of the feature/option is discouraged and that it is 
likely to be removed in the next major release. We will remove a deprecated 
feature in the next major release provided it has been deprecated for at least 
6 months. Given the monthly cadence of tick-tock, this means that a feature 
deprecated in 3.3 will (likely) be removed in 4.0, while a feature deprecated 
in 3.10 will only be removed in 5.0.
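The deprecation timeline above can be sketched as a small calculation (a Python sketch; it assumes the yearly major cadence of tick-tock, i.e. twelve monthly minors per major line, which is an assumption this page does not state explicitly):

```python
MINORS_PER_MAJOR = 12       # assumed: one tick-tock release per month, a new major every year
MIN_DEPRECATION_MONTHS = 6  # a feature must be deprecated for at least 6 months before removal

def removal_version(deprecated_in: str) -> str:
    """Earliest major in which a feature deprecated in `deprecated_in`
    (e.g. "3.3") can be removed, under the 6-month rule above."""
    major, minor = (int(part) for part in deprecated_in.split("."))
    months_until_next_major = MINORS_PER_MAJOR - minor
    if months_until_next_major >= MIN_DEPRECATION_MONTHS:
        return f"{major + 1}.0"
    # too close to the next major: the removal slips a full major further
    return f"{major + 2}.0"

# matches the examples in the text: deprecated in 3.3 -> removable in 4.0,
# deprecated in 3.10 -> only removable in 5.0
```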

The deprecation of an option will always be indicated in the NEWS file for the 
release on which it is first deprecated, and a warning will be issued in the 
log file if a deprecated feature is used.

Note that we may sometimes remove an option in a major release without having 
deprecated it first, when prior deprecation wasn't justified. For instance, 
options can be removed from the YAML file that way 

[jira] [Created] (CASSANDRA-11003) cqlsh.py: Shell instance has no attribute 'parse_for_table_meta'

2016-01-12 Thread Eduard Tudenhoefner (JIRA)
Eduard Tudenhoefner created CASSANDRA-11003:
---

 Summary: cqlsh.py: Shell instance has no attribute 
'parse_for_table_meta'
 Key: CASSANDRA-11003
 URL: https://issues.apache.org/jira/browse/CASSANDRA-11003
 Project: Cassandra
  Issue Type: Bug
  Components: Tools
Reporter: Eduard Tudenhoefner
Assignee: Eduard Tudenhoefner


{code}
$ cqlsh -u cassandra -p cassandra
Connected to abc at 127.0.0.1:9042.
[cqlsh 5.0.1 | Cassandra 3.0.1.816 | DSE 5.0.0 | CQL spec 3.4.0 | Native 
protocol v4]
Use HELP for help.
cassandra@cqlsh> CALL EndpointStateTracker.getWorkload('127.0.0.1');

Shell instance has no attribute 'parse_for_table_meta'
{code}

I think this is happening because of a bad merge 
(https://github.com/apache/cassandra/commit/2800bf1082e773daf0af29516b61c711acda626b#diff-1cce67f7d76864f07aaf4d986d6fc051).
 We just need to rename *parse_for_update_meta* to *parse_for_table_meta*





[jira] [Commented] (CASSANDRA-6018) Add option to encrypt commitlog

2016-01-12 Thread Branimir Lambov (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6018?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15093714#comment-15093714
 ] 

Branimir Lambov commented on CASSANDRA-6018:


Thank you for the update.


[{{EncryptionUtils}} (all 
methods)|https://github.com/apache/cassandra/compare/trunk...jasobrown:6018#diff-7289930937dea70ad2fb73f66006d5d7R61]:
 Could add comment about the expected usage to the JavaDoc. It's not obvious 
that user should update its outputBuffer with the resulting value.

[encrypt|https://github.com/apache/cassandra/compare/trunk...jasobrown:6018#diff-7289930937dea70ad2fb73f66006d5d7R83]:
 Could we not reserve the header bytes, i.e. provide a method to prepare 
buffers for caller that take header and cipher output size into account?
Otherwise, I think it should be renamed to {{encryptAndWrite}}.

[{{addSize}} and {{maybeSwap}} in 
{{EncryptedSegment.write}}|https://github.com/apache/cassandra/compare/trunk...jasobrown:6018#diff-a3015c78b233e027651f8b0be8ae22c8R130]
 can be taken out of the loop.

[{{SegmentIterator}}|https://github.com/apache/cassandra/compare/trunk...jasobrown:6018#diff-4c3a8240a441cef90e680246ee64R105]:
 For uncompressed <=2.1 replay we need to tolerate errors for the whole of the 
last segment, as the segment could be reused and only partially overwritten and 
we can't really identify where the last section is. Also found in 
[{{CommitLogReplayer}}|https://github.com/apache/cassandra/compare/trunk...jasobrown:6018#diff-348a1347dacf897385fb0a97116a1b5eR390].
I realize we don't seem to have a test for this particular scenario.

I don't think the {{SegmentReadException}} can escape to 
{{CommitLogReplayer.recover}} which tries to catch and act on it.


> Add option to encrypt commitlog 
> 
>
> Key: CASSANDRA-6018
> URL: https://issues.apache.org/jira/browse/CASSANDRA-6018
> Project: Cassandra
>  Issue Type: New Feature
>Reporter: Jason Brown
>Assignee: Jason Brown
>  Labels: commit_log, encryption, security
> Fix For: 3.x
>
>
> We are going to start using cassandra for a billing system, and while I can 
> encrypt sstables at rest (via Datastax Enterprise), commit logs are more or 
> less plain text. Thus, an attacker would be able to easily read, for example, 
> credit card numbers in the clear text commit log (if the calling app does not 
> encrypt the data itself before sending it to cassandra).
> I want to allow the option of encrypting the commit logs, most likely 
> controlled by a property in the yaml.





[jira] [Created] (CASSANDRA-11002) com.datastax.driver.core.exceptions.NoHostAvailableException

2016-01-12 Thread sangshenghong (JIRA)
sangshenghong created CASSANDRA-11002:
-

 Summary: 
com.datastax.driver.core.exceptions.NoHostAvailableException
 Key: CASSANDRA-11002
 URL: https://issues.apache.org/jira/browse/CASSANDRA-11002
 Project: Cassandra
  Issue Type: Bug
 Environment: Apache Cassandra 3.0.2 and 3.1.1
Reporter: sangshenghong
 Fix For: 3.1.1
 Attachments: error.png

I created issue CASSANDRA-10996, but the owner suggested I use the DataStax 
Java driver to get the KeyspaceMetadata, so I downloaded the driver (version 
"3.0.0-rc1") and used the following code to connect:
Cluster cluster = Cluster.builder()
        .addContactPoint("192.168.56.11")
        .build();
KeyspaceMetadata keySpaceMetaData =
        cluster.getMetadata().getKeyspace(this.keyspace);

But got the following exception:
Caused by: com.datastax.driver.core.exceptions.NoHostAvailableException: All 
host(s) tried for query failed (tried: hwtest1.localdomain/192.168.56.11:9042 
(com.datastax.driver.core.exceptions.TransportException: 
[hwtest1.localdomain/192.168.56.11] Cannot connect))
at 
com.datastax.driver.core.ControlConnection.reconnectInternal(ControlConnection.java:231)
at 
com.datastax.driver.core.ControlConnection.connect(ControlConnection.java:77)
at com.datastax.driver.core.Cluster$Manager.init(Cluster.java:1382)
at com.datastax.driver.core.Cluster.getMetadata(Cluster.java:393)


I also changed cassandra.yaml based on 
https://github.com/datastax/java-driver/wiki/Connection-requirements

I also tried DataStax DevCenter; it can connect to 
hwtest1.localdomain/192.168.56.11 successfully.





[jira] [Updated] (CASSANDRA-11003) cqlsh.py: Shell instance has no attribute 'parse_for_table_meta'

2016-01-12 Thread Eduard Tudenhoefner (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11003?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eduard Tudenhoefner updated CASSANDRA-11003:

Description: 
{code}
$ cqlsh -u cassandra -p cassandra
Connected to abc at 127.0.0.1:9042.
[cqlsh 5.0.1 | Cassandra 3.0.1.816 | DSE 5.0.0 | CQL spec 3.4.0 | Native 
protocol v4]
Use HELP for help.
cassandra@cqlsh> SOME COMMAND;

Shell instance has no attribute 'parse_for_table_meta'
{code}

I think this is happening because of a bad merge 
(https://github.com/apache/cassandra/commit/2800bf1082e773daf0af29516b61c711acda626b#diff-1cce67f7d76864f07aaf4d986d6fc051).
 We just need to rename *parse_for_update_meta* to *parse_for_table_meta*

  was:
{code}
$ cqlsh -u cassandra -p cassandra
Connected to abc at 127.0.0.1:9042.
[cqlsh 5.0.1 | Cassandra 3.0.1.816 | DSE 5.0.0 | CQL spec 3.4.0 | Native 
protocol v4]
Use HELP for help.
cassandra@cqlsh> CALL EndpointStateTracker.getWorkload('127.0.0.1');

Shell instance has no attribute 'parse_for_table_meta'
{code}

I think this is happening because of a bad merge 
(https://github.com/apache/cassandra/commit/2800bf1082e773daf0af29516b61c711acda626b#diff-1cce67f7d76864f07aaf4d986d6fc051).
 We just need to rename *parse_for_update_meta* to *parse_for_table_meta*


> cqlsh.py: Shell instance has no attribute 'parse_for_table_meta'
> 
>
> Key: CASSANDRA-11003
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11003
> Project: Cassandra
>  Issue Type: Bug
>  Components: Tools
>Reporter: Eduard Tudenhoefner
>Assignee: Eduard Tudenhoefner
>
> {code}
> $ cqlsh -u cassandra -p cassandra
> Connected to abc at 127.0.0.1:9042.
> [cqlsh 5.0.1 | Cassandra 3.0.1.816 | DSE 5.0.0 | CQL spec 3.4.0 | Native 
> protocol v4]
> Use HELP for help.
> cassandra@cqlsh> SOME COMMAND;
> Shell instance has no attribute 'parse_for_table_meta'
> {code}
> I think this is happening because of a bad merge 
> (https://github.com/apache/cassandra/commit/2800bf1082e773daf0af29516b61c711acda626b#diff-1cce67f7d76864f07aaf4d986d6fc051).
>  We just need to rename *parse_for_update_meta* to *parse_for_table_meta*





[2/3] cassandra git commit: Remove duplication from NEWS.txt

2016-01-12 Thread samt
Remove duplication from NEWS.txt


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/663f7653
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/663f7653
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/663f7653

Branch: refs/heads/trunk
Commit: 663f7653e04b9c8620cc3ed06f9c97bd7dc62e35
Parents: 4d9a0a1
Author: Sam Tunnicliffe 
Authored: Tue Jan 12 12:51:37 2016 +
Committer: Sam Tunnicliffe 
Committed: Tue Jan 12 12:51:37 2016 +

--
 NEWS.txt | 6 --
 1 file changed, 6 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/663f7653/NEWS.txt
--
diff --git a/NEWS.txt b/NEWS.txt
index b6b9e92..9dd4e25 100644
--- a/NEWS.txt
+++ b/NEWS.txt
@@ -31,12 +31,6 @@ New features
 
 Upgrading
 -
-- Nothing specific to 3.2 but please see previous versions upgrading 
section,
-  especially if you are upgrading from 2.2.
-
-
-Upgrading
--
- The compression ratio metrics computation has been modified to be more 
accurate.
- Running Cassandra as root is prevented by default.
- JVM options are moved from cassandra-env.(sh|ps1) to jvm.options file



[1/3] cassandra git commit: Remove duplication from NEWS.txt

2016-01-12 Thread samt
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-3.3 4d9a0a1e6 -> 663f7653e
  refs/heads/trunk 0bb63d285 -> dfeb8fe82


Remove duplication from NEWS.txt


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/663f7653
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/663f7653
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/663f7653

Branch: refs/heads/cassandra-3.3
Commit: 663f7653e04b9c8620cc3ed06f9c97bd7dc62e35
Parents: 4d9a0a1
Author: Sam Tunnicliffe 
Authored: Tue Jan 12 12:51:37 2016 +
Committer: Sam Tunnicliffe 
Committed: Tue Jan 12 12:51:37 2016 +

--
 NEWS.txt | 6 --
 1 file changed, 6 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/663f7653/NEWS.txt
--
diff --git a/NEWS.txt b/NEWS.txt
index b6b9e92..9dd4e25 100644
--- a/NEWS.txt
+++ b/NEWS.txt
@@ -31,12 +31,6 @@ New features
 
 Upgrading
 -
-- Nothing specific to 3.2 but please see previous versions upgrading 
section,
-  especially if you are upgrading from 2.2.
-
-
-Upgrading
--
- The compression ratio metrics computation has been modified to be more 
accurate.
- Running Cassandra as root is prevented by default.
- JVM options are moved from cassandra-env.(sh|ps1) to jvm.options file



[3/3] cassandra git commit: Merge branch 'cassandra-3.3' into trunk

2016-01-12 Thread samt
Merge branch 'cassandra-3.3' into trunk


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/dfeb8fe8
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/dfeb8fe8
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/dfeb8fe8

Branch: refs/heads/trunk
Commit: dfeb8fe824e46d6a972668d283afd35d42e7
Parents: 0bb63d2 663f765
Author: Sam Tunnicliffe 
Authored: Tue Jan 12 12:51:56 2016 +
Committer: Sam Tunnicliffe 
Committed: Tue Jan 12 12:51:56 2016 +

--
 NEWS.txt | 6 --
 1 file changed, 6 deletions(-)
--




[jira] [Commented] (CASSANDRA-10997) cqlsh_copy_tests failing en mass when vnodes are disabled

2016-01-12 Thread Stefania (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10997?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15093847#comment-15093847
 ] 

Stefania commented on CASSANDRA-10997:
--

Here are the patches and dtests:

||2.1||2.2||3.0||3.3||trunk||
|[patch|https://github.com/stef1927/cassandra/commits/10997-2.1]|[patch|https://github.com/stef1927/cassandra/commits/10997-2.2]|[patch|https://github.com/stef1927/cassandra/commits/10997-3.0]|[patch|https://github.com/stef1927/cassandra/commits/10997-3.3]|[patch|https://github.com/stef1927/cassandra/commits/10997]|
|[dtest|http://cassci.datastax.com/view/Dev/view/stef1927/job/stef1927-10997-2.1-dtest/]|[dtest|http://cassci.datastax.com/view/Dev/view/stef1927/job/stef1927-10997-2.2-dtest/]|[dtest|http://cassci.datastax.com/view/Dev/view/stef1927/job/stef1927-10997-3.0-dtest/]|[dtest|http://cassci.datastax.com/view/Dev/view/stef1927/job/stef1927-10997-3.3-dtest/]|[dtest|http://cassci.datastax.com/view/Dev/view/stef1927/job/stef1927-10997-dtest/]|

I've run the cqlsh copy dtests with vnodes disabled locally on 2.1. The patch 
applies to 2.2 onwards. [~philipthompson] is there a way to run dtests on 
Jenkins on dev branches with vnodes disabled?

[~pauloricardomg] are you able to take this for review? It shouldn't take long.


> cqlsh_copy_tests failing en mass when vnodes are disabled
> -
>
> Key: CASSANDRA-10997
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10997
> Project: Cassandra
>  Issue Type: Bug
>  Components: Tools
>Reporter: Philip Thompson
>Assignee: Stefania
>  Labels: dtest
> Fix For: 2.1.x, 2.2.x, 3.0.x, 3.x
>
>
> Check out [an example cassci 
> failure|http://cassci.datastax.com/job/cassandra-2.1_novnode_dtest/186/testReport/cqlsh_tests.cqlsh_copy_tests/CqlshCopyTest/test_list_data/]
>  as well as the [full novnode report 
> page|http://cassci.datastax.com/userContent/cstar_report/index.html?jobs=cassandra-2.1_novnode_dtest,cassandra-3.0_novnode_dtest,cassandra-2.2_novnode_dtest_known=true].
> Many COPY TO tests are failing when the cluster only has one token. The 
> message {{Found no ranges to query, check begin and end tokens: None - None}} 
> is printed, and it appears to be coming from cqlsh, specifically in 
> pylib/cqlshlib/copyutil.py
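A simplified model of why a single token breaks naive range handling (a Python sketch of the ring math only, not the actual copyutil.py code; the names are illustrative):

```python
def ring_ranges(tokens):
    """Split a token ring into (begin, end] ranges: with n sorted tokens there
    are n ranges, and the one ending at the smallest token wraps around."""
    tokens = sorted(tokens)
    return [(tokens[i - 1], tokens[i]) for i in range(len(tokens))]

def non_wrapping(ranges):
    """Keep only the ranges a naive begin < end check can express."""
    return [(begin, end) for (begin, end) in ranges if begin < end]

multi = ring_ranges([-9000, 0, 9000])  # three ranges, one of them wrapping
single = ring_ranges([0])              # one range that covers the whole ring
# a plain begin < end filter keeps 2 of the 3 ranges for `multi`, but none
# for `single` -- matching the "Found no ranges to query" symptom above
```

With vnodes there are hundreds of tokens per node and the single wrapping range is lost in the noise; with one token per node it is the only range, so dropping it leaves nothing to query.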





[3/6] cassandra git commit: Support multiple addComplexDeletion() call in BTreeRow.Builder

2016-01-12 Thread slebresne
Support multiple addComplexDeletion() call in BTreeRow.Builder

patch by slebresne; reviewed by benedict for CASSANDRA-10743

When reading a legacy sstable that has an index block stopping in the
middle of a collection range tombstone, we end up calling
BTreeRow.Builder.addComplexDeletion() twice for the same column, so
we need to handle this.


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/f4037f9b
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/f4037f9b
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/f4037f9b

Branch: refs/heads/trunk
Commit: f4037f9b3b20071e66298d4a7d228c1e46bb5206
Parents: 6fdcaef
Author: Sylvain Lebresne 
Authored: Fri Jan 8 14:41:00 2016 +0100
Committer: Sylvain Lebresne 
Committed: Tue Jan 12 14:15:41 2016 +0100

--
 CHANGES.txt |  2 ++
 src/java/org/apache/cassandra/db/rows/BTreeRow.java | 15 +++
 2 files changed, 13 insertions(+), 4 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/f4037f9b/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 36a6e43..da5ed26 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,6 @@
 3.0.3
+ * Fix UnsupportedOperationException when reading old sstable with range
+   tombstone (CASSANDRA-10743)
  * MV should use the maximum timestamp of the primary key (CASSANDRA-10910)
  * Fix potential assertion error during compaction (CASSANDRA-10944)
  * Fix counting of received sstables in streaming (CASSANDRA-10949)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/f4037f9b/src/java/org/apache/cassandra/db/rows/BTreeRow.java
--
diff --git a/src/java/org/apache/cassandra/db/rows/BTreeRow.java 
b/src/java/org/apache/cassandra/db/rows/BTreeRow.java
index 4bd11da..e8667e0 100644
--- a/src/java/org/apache/cassandra/db/rows/BTreeRow.java
+++ b/src/java/org/apache/cassandra/db/rows/BTreeRow.java
@@ -549,12 +549,19 @@ public class BTreeRow extends AbstractRow
 // TODO: relax this in the case our outer provider is sorted 
(want to delay until remaining changes are
 // bedded in, as less important; galloping makes it pretty 
cheap anyway)
 Arrays.sort(cells, lb, ub, (Comparator) 
column.cellComparator());
-cell = (Cell) cells[lb];
 DeletionTime deletion = DeletionTime.LIVE;
-if (cell instanceof ComplexColumnDeletion)
+// Deal with complex deletion (for which we've used "fake" 
ComplexColumnDeletion cells that we need to remove).
+// Note that in almost all cases we'll have at most one of those 
fake cells, but the contract of {{Row.Builder.addComplexDeletion}}
+// does not forbid it being called twice (especially in the 
unsorted case) and this can actually happen when reading
+// legacy sstables (see #10743).
+while (lb < ub)
 {
-// TODO: do we need to be robust to multiple of these 
being provided?
-deletion = new DeletionTime(cell.timestamp(), 
cell.localDeletionTime());
+cell = (Cell) cells[lb];
+if (!(cell instanceof ComplexColumnDeletion))
+break;
+
+if (cell.timestamp() > deletion.markedForDeleteAt())
+deletion = new DeletionTime(cell.timestamp(), 
cell.localDeletionTime());
 lb++;
 }
 



[5/6] cassandra git commit: Merge branch 'cassandra-3.0' into cassandra-3.3

2016-01-12 Thread slebresne
Merge branch 'cassandra-3.0' into cassandra-3.3


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/2d0863c6
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/2d0863c6
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/2d0863c6

Branch: refs/heads/cassandra-3.3
Commit: 2d0863c6d4192c263e8c303ba9e581b1bc5780ed
Parents: 663f765 f4037f9
Author: Sylvain Lebresne 
Authored: Tue Jan 12 14:18:30 2016 +0100
Committer: Sylvain Lebresne 
Committed: Tue Jan 12 14:18:30 2016 +0100

--
 CHANGES.txt |  2 ++
 src/java/org/apache/cassandra/db/rows/BTreeRow.java | 15 +++
 2 files changed, 13 insertions(+), 4 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/2d0863c6/CHANGES.txt
--
diff --cc CHANGES.txt
index a650448,da5ed26..2a13ef6
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -1,15 -1,8 +1,17 @@@
 -3.0.3
 +3.3
 +Merged from 3.0:
+  * Fix UnsupportedOperationException when reading old sstable with range
+tombstone (CASSANDRA-10743)
   * MV should use the maximum timestamp of the primary key (CASSANDRA-10910)
   * Fix potential assertion error during compaction (CASSANDRA-10944)
 +
 +3.2
 + * Make sure tokens don't exist in several data directories (CASSANDRA-6696)
 + * Add requireAuthorization method to IAuthorizer (CASSANDRA-10852)
 + * Move static JVM options to conf/jvm.options file (CASSANDRA-10494)
 + * Fix CassandraVersion to accept x.y version string (CASSANDRA-10931)
 + * Add forceUserDefinedCleanup to allow more flexible cleanup 
(CASSANDRA-10708)
 + * (cqlsh) allow setting TTL with COPY (CASSANDRA-9494)
   * Fix counting of received sstables in streaming (CASSANDRA-10949)
   * Implement hints compression (CASSANDRA-9428)
   * Fix potential assertion error when reading static columns (CASSANDRA-10903)



[4/6] cassandra git commit: Merge branch 'cassandra-3.0' into cassandra-3.3

2016-01-12 Thread slebresne
Merge branch 'cassandra-3.0' into cassandra-3.3


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/2d0863c6
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/2d0863c6
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/2d0863c6

Branch: refs/heads/trunk
Commit: 2d0863c6d4192c263e8c303ba9e581b1bc5780ed
Parents: 663f765 f4037f9
Author: Sylvain Lebresne 
Authored: Tue Jan 12 14:18:30 2016 +0100
Committer: Sylvain Lebresne 
Committed: Tue Jan 12 14:18:30 2016 +0100

--
 CHANGES.txt |  2 ++
 src/java/org/apache/cassandra/db/rows/BTreeRow.java | 15 +++
 2 files changed, 13 insertions(+), 4 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/2d0863c6/CHANGES.txt
--
diff --cc CHANGES.txt
index a650448,da5ed26..2a13ef6
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -1,15 -1,8 +1,17 @@@
 -3.0.3
 +3.3
 +Merged from 3.0:
+  * Fix UnsupportedOperationException when reading old sstable with range
+tombstone (CASSANDRA-10743)
   * MV should use the maximum timestamp of the primary key (CASSANDRA-10910)
   * Fix potential assertion error during compaction (CASSANDRA-10944)
 +
 +3.2
 + * Make sure tokens don't exist in several data directories (CASSANDRA-6696)
 + * Add requireAuthorization method to IAuthorizer (CASSANDRA-10852)
 + * Move static JVM options to conf/jvm.options file (CASSANDRA-10494)
 + * Fix CassandraVersion to accept x.y version string (CASSANDRA-10931)
 + * Add forceUserDefinedCleanup to allow more flexible cleanup 
(CASSANDRA-10708)
 + * (cqlsh) allow setting TTL with COPY (CASSANDRA-9494)
   * Fix counting of received sstables in streaming (CASSANDRA-10949)
   * Implement hints compression (CASSANDRA-9428)
   * Fix potential assertion error when reading static columns (CASSANDRA-10903)



[jira] [Updated] (CASSANDRA-10924) Pass base table's metadata to Index.validateOptions

2016-01-12 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10924?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrés de la Peña updated CASSANDRA-10924:
--
Attachment: CASSANDRA-10924-v1.diff

> Pass base table's metadata to Index.validateOptions
> ---
>
> Key: CASSANDRA-10924
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10924
> Project: Cassandra
>  Issue Type: Improvement
>  Components: CQL, Local Write-Read Paths
>Reporter: Andrés de la Peña
>Assignee: Andrés de la Peña
>Priority: Minor
>  Labels: 2i, index, validation
> Fix For: 3.0.x, 3.x
>
> Attachments: CASSANDRA-10924-v0.diff, CASSANDRA-10924-v1.diff
>
>
> Some custom index implementations require the base table's metadata to 
> validate their creation options. For example, the options of these 
> implementations can contain information about which base table's columns are 
> going to be indexed and how, so the implementation needs to know the 
> existence and the type of the columns to be indexed to properly validate.
> The attached patch proposes to add base table's {{CFMetaData}} to Index' 
> optional static method to validate the custom index options:
> {{public static Map validateOptions(CFMetaData cfm, 
> Map options);}}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


svn commit: r1724233 - in /cassandra/site: publish/download/index.html publish/index.html src/settings.py

2016-01-12 Thread jake
Author: jake
Date: Tue Jan 12 13:43:17 2016
New Revision: 1724233

URL: http://svn.apache.org/viewvc?rev=1724233&view=rev
Log:
3.2

Modified:
cassandra/site/publish/download/index.html
cassandra/site/publish/index.html
cassandra/site/src/settings.py

Modified: cassandra/site/publish/download/index.html
URL: 
http://svn.apache.org/viewvc/cassandra/site/publish/download/index.html?rev=1724233&r1=1724232&r2=1724233&view=diff
==
--- cassandra/site/publish/download/index.html (original)
+++ cassandra/site/publish/download/index.html Tue Jan 12 13:43:17 2016
@@ -49,16 +49,16 @@
 
Cassandra is moving to a new release process called <a href="http://www.planetcassandra.org/blog/cassandra-2-2-3-0-and-beyond/">Tick-Tock</a>.
 
-The latest tick-tock release is 3.1.1 (released on
-2015-12-21).
+The latest tick-tock release is 3.2 (released on
+2016-01-11).
 
 
 
   
-  <a href="http://www.apache.org/dyn/closer.lua/cassandra/3.1.1/apache-cassandra-3.1.1-bin.tar.gz">apache-cassandra-3.1.1-bin.tar.gz</a>
-  [<a href="http://www.apache.org/dist/cassandra/3.1.1/apache-cassandra-3.1.1-bin.tar.gz.asc">PGP</a>]
-  [<a href="http://www.apache.org/dist/cassandra/3.1.1/apache-cassandra-3.1.1-bin.tar.gz.md5">MD5</a>]
-  [<a href="http://www.apache.org/dist/cassandra/3.1.1/apache-cassandra-3.1.1-bin.tar.gz.sha1">SHA1</a>]
+  <a href="http://www.apache.org/dyn/closer.lua/cassandra/3.2/apache-cassandra-3.2-bin.tar.gz">apache-cassandra-3.2-bin.tar.gz</a>
+  [<a href="http://www.apache.org/dist/cassandra/3.2/apache-cassandra-3.2-bin.tar.gz.asc">PGP</a>]
+  [<a href="http://www.apache.org/dist/cassandra/3.2/apache-cassandra-3.2-bin.tar.gz.md5">MD5</a>]
+  [<a href="http://www.apache.org/dist/cassandra/3.2/apache-cassandra-3.2-bin.tar.gz.sha1">SHA1</a>]
   
 
 

Modified: cassandra/site/publish/index.html
URL: 
http://svn.apache.org/viewvc/cassandra/site/publish/index.html?rev=1724233&r1=1724232&r2=1724233&view=diff
==
--- cassandra/site/publish/index.html (original)
+++ cassandra/site/publish/index.html Tue Jan 12 13:43:17 2016
@@ -77,7 +77,7 @@
   
   
   
-  <a href="http://www.planetcassandra.org/blog/cassandra-2-2-3-0-and-beyond/">Tick-Tock</a>
 release 3.1.1 (<a href="http://git-wip-us.apache.org/repos/asf?p=cassandra.git;a=blob_plain;f=CHANGES.txt;hb=refs/tags/cassandra-3.1.1">Changes</a>)
+  <a href="http://www.planetcassandra.org/blog/cassandra-2-2-3-0-and-beyond/">Tick-Tock</a>
 release 3.2 (<a href="http://git-wip-us.apache.org/repos/asf?p=cassandra.git;a=blob_plain;f=CHANGES.txt;hb=refs/tags/cassandra-3.2">Changes</a>)
   
   
 

Modified: cassandra/site/src/settings.py
URL: 
http://svn.apache.org/viewvc/cassandra/site/src/settings.py?rev=1724233&r1=1724232&r2=1724233&view=diff
==
--- cassandra/site/src/settings.py (original)
+++ cassandra/site/src/settings.py Tue Jan 12 13:43:17 2016
@@ -92,8 +92,8 @@ SITE_POST_PROCESSORS = {
 }
 
 class CassandraDef(object):
-ticktock_version = '3.1.1'
-ticktock_version_date = '2015-12-21'
+ticktock_version = '3.2'
+ticktock_version_date = '2016-01-11'
 stable_version = '3.0.2'
 stable_release_date = '2015-12-21'
 is_stable_prod_ready = False




[jira] [Updated] (CASSANDRA-10997) cqlsh_copy_tests failing en mass when vnodes are disabled

2016-01-12 Thread Paulo Motta (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10997?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Paulo Motta updated CASSANDRA-10997:

Reviewer: Paulo Motta

> cqlsh_copy_tests failing en mass when vnodes are disabled
> -
>
> Key: CASSANDRA-10997
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10997
> Project: Cassandra
>  Issue Type: Bug
>  Components: Tools
>Reporter: Philip Thompson
>Assignee: Stefania
>  Labels: dtest
> Fix For: 2.1.x, 2.2.x, 3.0.x, 3.x
>
>
> Check out [an example cassci 
> failure|http://cassci.datastax.com/job/cassandra-2.1_novnode_dtest/186/testReport/cqlsh_tests.cqlsh_copy_tests/CqlshCopyTest/test_list_data/]
>  as well as the [full novnode report 
> page|http://cassci.datastax.com/userContent/cstar_report/index.html?jobs=cassandra-2.1_novnode_dtest,cassandra-3.0_novnode_dtest,cassandra-2.2_novnode_dtest_known=true].
> Many COPY TO tests are failing when the cluster only has one token. The 
> message {{Found no ranges to query, check begin and end tokens: None - None}} 
> is printed, and it appears to be coming from cqlsh, specifically in 
> pylib/cqlshlib/copyutil.py
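One plausible model of the failure: range computation that only emits ranges between consecutive tokens produces nothing when the ring has a single token, since there are no consecutive pairs. A rough Python sketch of that failure mode and one way to handle it (illustrative only, not the actual copyutil.py code):

```python
def ring_ranges(tokens):
    """Return the (begin, end] token ranges covering a ring.

    With vnodes there are many tokens, so pairwise splitting between
    consecutive tokens works; with a single token the only range is the
    wrap-around one, which naive pairwise splitting misses entirely
    (leaving "no ranges to query"). Illustrative model only.
    """
    tokens = sorted(tokens)
    if not tokens:
        return []
    if len(tokens) == 1:
        # One token owns the entire ring: a single wrap-around range.
        return [(tokens[0], tokens[0])]
    ranges = list(zip(tokens, tokens[1:]))
    ranges.append((tokens[-1], tokens[0]))  # wrap-around range
    return ranges
```

With `[1, 5, 9]` this yields three ranges including the wrap-around; with a single token it yields one whole-ring range instead of none.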



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[1/6] cassandra git commit: Support multiple addComplexDeletion() call in BTreeRow.Builder

2016-01-12 Thread slebresne
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-3.0 6fdcaef20 -> f4037f9b3
  refs/heads/cassandra-3.3 663f7653e -> 2d0863c6d
  refs/heads/trunk dfeb8fe82 -> 4e209d9d3


Support multiple addComplexDeletion() call in BTreeRow.Builder

patch by slebresne; reviewed by benedict for CASSANDRA-10743

When reading a legacy sstable that has an index block stopping in the
middle of a collection range tombstone, we end up calling
BTreeRow.Builder.addComplexDeletion() twice for the same column, so
we need to handle this.


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/f4037f9b
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/f4037f9b
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/f4037f9b

Branch: refs/heads/cassandra-3.0
Commit: f4037f9b3b20071e66298d4a7d228c1e46bb5206
Parents: 6fdcaef
Author: Sylvain Lebresne 
Authored: Fri Jan 8 14:41:00 2016 +0100
Committer: Sylvain Lebresne 
Committed: Tue Jan 12 14:15:41 2016 +0100

--
 CHANGES.txt |  2 ++
 src/java/org/apache/cassandra/db/rows/BTreeRow.java | 15 +++
 2 files changed, 13 insertions(+), 4 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/f4037f9b/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 36a6e43..da5ed26 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,6 @@
 3.0.3
+ * Fix UnsupportedOperationException when reading old sstable with range
+   tombstone (CASSANDRA-10743)
  * MV should use the maximum timestamp of the primary key (CASSANDRA-10910)
  * Fix potential assertion error during compaction (CASSANDRA-10944)
  * Fix counting of received sstables in streaming (CASSANDRA-10949)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/f4037f9b/src/java/org/apache/cassandra/db/rows/BTreeRow.java
--
diff --git a/src/java/org/apache/cassandra/db/rows/BTreeRow.java 
b/src/java/org/apache/cassandra/db/rows/BTreeRow.java
index 4bd11da..e8667e0 100644
--- a/src/java/org/apache/cassandra/db/rows/BTreeRow.java
+++ b/src/java/org/apache/cassandra/db/rows/BTreeRow.java
@@ -549,12 +549,19 @@ public class BTreeRow extends AbstractRow
 // TODO: relax this in the case our outer provider is sorted 
(want to delay until remaining changes are
 // bedded in, as less important; galloping makes it pretty 
cheap anyway)
 Arrays.sort(cells, lb, ub, (Comparator) 
column.cellComparator());
-cell = (Cell) cells[lb];
 DeletionTime deletion = DeletionTime.LIVE;
-if (cell instanceof ComplexColumnDeletion)
+// Deal with complex deletion (for which we've used "fake" 
ComplexColumnDeletion cells that we need to remove).
+// Note that in almost all cases we'll have at most one of those 
fake cells, but the contract of {{Row.Builder.addComplexDeletion}}
+// does not forbid it being called twice (especially in the 
unsorted case) and this can actually happen when reading
+// legacy sstables (see #10743).
+while (lb < ub)
 {
-// TODO: do we need to be robust to multiple of these 
being provided?
-deletion = new DeletionTime(cell.timestamp(), 
cell.localDeletionTime());
+cell = (Cell) cells[lb];
+if (!(cell instanceof ComplexColumnDeletion))
+break;
+
+if (cell.timestamp() > deletion.markedForDeleteAt())
+deletion = new DeletionTime(cell.timestamp(), 
cell.localDeletionTime());
 lb++;
 }
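The deletion-merging loop in this patch can be sketched in Python (a simplified model of the Java logic, not the actual implementation; `DeletionTime` below is a hypothetical stand-in):

```python
from dataclasses import dataclass

# Hypothetical stand-in for Cassandra's DeletionTime: a deletion
# timestamp plus the local (server-side) deletion time.
@dataclass(frozen=True)
class DeletionTime:
    marked_for_delete_at: int
    local_deletion_time: int

# "LIVE" means no deletion: the minimum possible timestamp.
LIVE = DeletionTime(marked_for_delete_at=-2**63, local_deletion_time=2**31 - 1)

def merge_complex_deletions(deletions):
    """Collapse possibly-repeated complex deletions for one column into a
    single DeletionTime, keeping the highest timestamp seen -- the same
    idea as the patched loop, which tolerates addComplexDeletion() being
    called more than once for the same column."""
    result = LIVE
    for d in deletions:
        if d.marked_for_delete_at > result.marked_for_delete_at:
            result = d
    return result
```

Given two deletions with timestamps 100 and 200, the merge keeps the one with timestamp 200 regardless of order; an empty input stays LIVE.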
 



[6/6] cassandra git commit: Merge branch 'cassandra-3.3' into trunk

2016-01-12 Thread slebresne
Merge branch 'cassandra-3.3' into trunk


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/4e209d9d
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/4e209d9d
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/4e209d9d

Branch: refs/heads/trunk
Commit: 4e209d9d33be63c9a557b032dacb42c5105e9873
Parents: dfeb8fe 2d0863c
Author: Sylvain Lebresne 
Authored: Tue Jan 12 14:18:45 2016 +0100
Committer: Sylvain Lebresne 
Committed: Tue Jan 12 14:18:45 2016 +0100

--
 CHANGES.txt |  2 ++
 src/java/org/apache/cassandra/db/rows/BTreeRow.java | 15 +++
 2 files changed, 13 insertions(+), 4 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/4e209d9d/CHANGES.txt
--
diff --cc CHANGES.txt
index c97eae6,2a13ef6..68ea8b7
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -1,11 -1,7 +1,13 @@@
 +3.4
 + * Stripe view locks by key and table ID to reduce contention 
(CASSANDRA-10981)
 + * Add nodetool gettimeout and settimeout commands (CASSANDRA-10953)
 + * Add 3.0 metadata to sstablemetadata output (CASSANDRA-10838)
 +
 +
  3.3
  Merged from 3.0:
+  * Fix UnsupportedOperationException when reading old sstable with range
+tombstone (CASSANDRA-10743)
   * MV should use the maximum timestamp of the primary key (CASSANDRA-10910)
   * Fix potential assertion error during compaction (CASSANDRA-10944)
  



[2/6] cassandra git commit: Support multiple addComplexDeletion() call in BTreeRow.Builder

2016-01-12 Thread slebresne
Support multiple addComplexDeletion() call in BTreeRow.Builder

patch by slebresne; reviewed by benedict for CASSANDRA-10743

When reading a legacy sstable that has an index block stopping in the
middle of a collection range tombstone, we end up calling
BTreeRow.Builder.addComplexDeletion() twice for the same column, so
we need to handle this.


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/f4037f9b
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/f4037f9b
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/f4037f9b

Branch: refs/heads/cassandra-3.3
Commit: f4037f9b3b20071e66298d4a7d228c1e46bb5206
Parents: 6fdcaef
Author: Sylvain Lebresne 
Authored: Fri Jan 8 14:41:00 2016 +0100
Committer: Sylvain Lebresne 
Committed: Tue Jan 12 14:15:41 2016 +0100

--
 CHANGES.txt |  2 ++
 src/java/org/apache/cassandra/db/rows/BTreeRow.java | 15 +++
 2 files changed, 13 insertions(+), 4 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/f4037f9b/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 36a6e43..da5ed26 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,6 @@
 3.0.3
+ * Fix UnsupportedOperationException when reading old sstable with range
+   tombstone (CASSANDRA-10743)
  * MV should use the maximum timestamp of the primary key (CASSANDRA-10910)
  * Fix potential assertion error during compaction (CASSANDRA-10944)
  * Fix counting of received sstables in streaming (CASSANDRA-10949)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/f4037f9b/src/java/org/apache/cassandra/db/rows/BTreeRow.java
--
diff --git a/src/java/org/apache/cassandra/db/rows/BTreeRow.java 
b/src/java/org/apache/cassandra/db/rows/BTreeRow.java
index 4bd11da..e8667e0 100644
--- a/src/java/org/apache/cassandra/db/rows/BTreeRow.java
+++ b/src/java/org/apache/cassandra/db/rows/BTreeRow.java
@@ -549,12 +549,19 @@ public class BTreeRow extends AbstractRow
 // TODO: relax this in the case our outer provider is sorted 
(want to delay until remaining changes are
 // bedded in, as less important; galloping makes it pretty 
cheap anyway)
 Arrays.sort(cells, lb, ub, (Comparator) 
column.cellComparator());
-cell = (Cell) cells[lb];
 DeletionTime deletion = DeletionTime.LIVE;
-if (cell instanceof ComplexColumnDeletion)
+// Deal with complex deletion (for which we've used "fake" 
ComplexColumnDeletion cells that we need to remove).
+// Note that in almost all cases we'll have at most one of those 
fake cells, but the contract of {{Row.Builder.addComplexDeletion}}
+// does not forbid it being called twice (especially in the 
unsorted case) and this can actually happen when reading
+// legacy sstables (see #10743).
+while (lb < ub)
 {
-// TODO: do we need to be robust to multiple of these 
being provided?
-deletion = new DeletionTime(cell.timestamp(), 
cell.localDeletionTime());
+cell = (Cell) cells[lb];
+if (!(cell instanceof ComplexColumnDeletion))
+break;
+
+if (cell.timestamp() > deletion.markedForDeleteAt())
+deletion = new DeletionTime(cell.timestamp(), 
cell.localDeletionTime());
 lb++;
 }
 



[jira] [Commented] (CASSANDRA-10726) Read repair inserts should not be blocking

2016-01-12 Thread Sylvain Lebresne (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10726?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15093872#comment-15093872
 ] 

Sylvain Lebresne commented on CASSANDRA-10726:
--

bq. Would a reasonable half way house be to keep the write as blocking but 
return success in the case of a write timeout?

That would still break the "monotonic quorum reads" guarantee: unless you get 
positive acks from the read-repair, you can't guarantee a quorum of replicas is 
now up to date. Granted, it will work more often if we do that (than if we don't 
block at all), but guarantees are not about "most of the time" :)

And just to recap my personal position on this: I do feel we should keep the 
guarantee, at least by default, and I still feel the right way to deal with the 
scenario you're complaining about would be a better way to handle nodes backing 
up on writes. But we all know it's easier said than fixed, and while I'd rather 
we spend time on that better way to deal with the two scenarios [~jbellis] 
mentioned above, I'm not too strongly opposed to a -D stopgap for 
advanced users.
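The guarantee at stake can be illustrated with a toy model: with RF=3 and a quorum of 2, if the read-repair write is not acknowledged before the read returns and the repair is then dropped, a later quorum read can return an older value than an earlier one. A minimal Python sketch (illustrative only, not Cassandra code):

```python
QUORUM = 2  # quorum size for RF=3

def quorum_read(replicas, contacted):
    """Return the newest (timestamp, value) seen by the contacted replicas."""
    assert len(contacted) >= QUORUM
    return max((replicas[i] for i in contacted), key=lambda tv: tv[0])

# Replica 0 already has the newer write; replicas 1 and 2 are behind.
replicas = {0: (2, "new"), 1: (1, "old"), 2: (1, "old")}

# First quorum read contacts {0, 1}: digest mismatch, so the coordinator
# fires a read-repair at replica 1. If we do NOT block on the repair ack
# and the write is dropped, replica 1 stays stale.
first = quorum_read(replicas, {0, 1})   # sees the new value

# A later quorum read contacting {1, 2} then returns an older value than
# the first read did: the monotonic quorum read guarantee is broken.
second = quorum_read(replicas, {1, 2})
```

Blocking on the repair ack (and failing the read otherwise) is what prevents the second read from regressing.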

> Read repair inserts should not be blocking
> --
>
> Key: CASSANDRA-10726
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10726
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Coordination
>Reporter: Richard Low
>
> Today, if there’s a digest mismatch in a foreground read repair, the insert 
> to update out of date replicas is blocking. This means, if it fails, the read 
> fails with a timeout. If a node is dropping writes (maybe it is overloaded or 
> the mutation stage is backed up for some other reason), all reads to a 
> replica set could fail. Further, replicas dropping writes get more out of 
> sync so will require more read repair.
> The comment on the code for why the writes are blocking is:
> {code}
> // wait for the repair writes to be acknowledged, to minimize impact on any 
> replica that's
> // behind on writes in case the out-of-sync row is read multiple times in 
> quick succession
> {code}
> but the bad side effect is that reads timeout. Either the writes should not 
> be blocking or we should return success for the read even if the write times 
> out.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-10477) java.lang.AssertionError in StorageProxy.submitHint

2016-01-12 Thread Sylvain Lebresne (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10477?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15093874#comment-15093874
 ] 

Sylvain Lebresne commented on CASSANDRA-10477:
--

[~aweisberg] I think you have a bad merge on 3.0 (though strangely the 3.3 and 
trunk branches seem fine); the test run failed at compilation time.

bq. Will it be fixed in 2.2.x too?

It will.

> java.lang.AssertionError in StorageProxy.submitHint
> ---
>
> Key: CASSANDRA-10477
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10477
> Project: Cassandra
>  Issue Type: Bug
>  Components: Local Write-Read Paths
> Environment: CentOS 6, Oracle JVM 1.8.45
>Reporter: Severin Leonhardt
>Assignee: Ariel Weisberg
> Fix For: 2.1.x, 2.2.x, 3.0.x, 3.x
>
>
> A few days after updating from 2.0.15 to 2.1.9 we have the following log 
> entry on 2 of 5 machines:
> {noformat}
> ERROR [EXPIRING-MAP-REAPER:1] 2015-10-07 17:01:08,041 
> CassandraDaemon.java:223 - Exception in thread 
> Thread[EXPIRING-MAP-REAPER:1,5,main]
> java.lang.AssertionError: /192.168.11.88
> at 
> org.apache.cassandra.service.StorageProxy.submitHint(StorageProxy.java:949) 
> ~[apache-cassandra-2.1.9.jar:2.1.9]
> at 
> org.apache.cassandra.net.MessagingService$5.apply(MessagingService.java:383) 
> ~[apache-cassandra-2.1.9.jar:2.1.9]
> at 
> org.apache.cassandra.net.MessagingService$5.apply(MessagingService.java:363) 
> ~[apache-cassandra-2.1.9.jar:2.1.9]
> at org.apache.cassandra.utils.ExpiringMap$1.run(ExpiringMap.java:98) 
> ~[apache-cassandra-2.1.9.jar:2.1.9]
> at 
> org.apache.cassandra.concurrent.DebuggableScheduledThreadPoolExecutor$UncomplainingRunnable.run(DebuggableScheduledThreadPoolExecutor.java:118)
>  ~[apache-cassandra-2.1.9.jar:2.1.9]
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
> [na:1.8.0_45]
> at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308) 
> [na:1.8.0_45]
> at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
>  [na:1.8.0_45]
> at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
>  [na:1.8.0_45]
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>  [na:1.8.0_45]
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>  [na:1.8.0_45]
> at java.lang.Thread.run(Thread.java:745) [na:1.8.0_45]
> {noformat}
> 192.168.11.88 is the broadcast address of the local machine.
> When this is logged the read request latency of the whole cluster becomes 
> very bad, from 6 ms/op to more than 100 ms/op according to OpsCenter. Clients 
> get a lot of timeouts. We need to restart the affected Cassandra node to get 
> back normal read latencies. It seems write latency is not affected.
> Disabling hinted handoff using {{nodetool disablehandoff}} only prevents the 
> assert from being logged. At some point the read latency becomes bad again. 
> Restarting the node where hinted handoff was disabled results in the read 
> latency being better again.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-10924) Pass base table's metadata to Index.validateOptions

2016-01-12 Thread JIRA

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10924?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15093890#comment-15093890
 ] 

Andrés de la Peña commented on CASSANDRA-10924:
---

I'm attaching a second version of the patch with the suggested changes. There 
are two overloaded methods to validate the index options, the original and a 
new one including the base table's metadata in its signature. {IndexMetadata} 
tries to invoke the new method and, if there is no such method, it tries to 
invoke the old one. I hope you find it OK.
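The dispatch described (try the metadata-aware overload first, fall back to the legacy signature) is done via reflection in the Java patch; it can be sketched in Python as follows. All class and helper names here are illustrative, not the actual patch code:

```python
import inspect

def call_validate_options(index_cls, table_metadata, options):
    """Prefer a validate_options hook that takes the base table's
    metadata; fall back to the legacy one-argument form. Returns a dict
    of rejected options (empty means everything validated)."""
    validate = getattr(index_cls, "validate_options", None)
    if validate is None:
        return {}  # no validation hook defined on this implementation
    if len(inspect.signature(validate).parameters) >= 2:
        return validate(table_metadata, options)   # new-style overload
    return validate(options)                       # legacy overload

class LegacyIndex:
    @staticmethod
    def validate_options(options):
        # Legacy form: can only inspect the options themselves.
        return {k: v for k, v in options.items() if k.startswith("bad_")}

class MetadataAwareIndex:
    @staticmethod
    def validate_options(table_metadata, options):
        # New form: reject options naming columns the table doesn't have.
        return {k: v for k, v in options.items()
                if k == "target" and v not in table_metadata["columns"]}
```

Both implementations are callable through the same entry point; only the metadata-aware one can check that an indexed column actually exists.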

> Pass base table's metadata to Index.validateOptions
> ---
>
> Key: CASSANDRA-10924
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10924
> Project: Cassandra
>  Issue Type: Improvement
>  Components: CQL, Local Write-Read Paths
>Reporter: Andrés de la Peña
>Assignee: Andrés de la Peña
>Priority: Minor
>  Labels: 2i, index, validation
> Fix For: 3.0.x, 3.x
>
> Attachments: CASSANDRA-10924-v0.diff, CASSANDRA-10924-v1.diff
>
>
> Some custom index implementations require the base table's metadata to 
> validate their creation options. For example, the options of these 
> implementations can contain information about which base table's columns are 
> going to be indexed and how, so the implementation needs to know the 
> existence and the type of the columns to be indexed to properly validate.
> The attached patch proposes to add base table's {{CFMetaData}} to Index' 
> optional static method to validate the custom index options:
> {{public static Map validateOptions(CFMetaData cfm, 
> Map options);}}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (CASSANDRA-10924) Pass base table's metadata to Index.validateOptions

2016-01-12 Thread JIRA

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10924?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15093890#comment-15093890
 ] 

Andrés de la Peña edited comment on CASSANDRA-10924 at 1/12/16 1:45 PM:


I'm attaching a second version of the patch with the suggested changes. There 
are two overloaded methods to validate the index options, the original and a 
new one including the base table's metadata in its signature. {{IndexMetadata}} 
tries to invoke the new method and, if there is no such method, it tries to 
invoke the old one. I hope you find it OK.


was (Author: adelapena):
I'm attaching a second version of the patch with the suggested changes. There 
are two overloaded methods to validate the index options, the original and a 
new one including the base table's metadata in its signature. {IndexMetadata} 
tries to invoke the new method and, if there is no such method, it tries to 
invoke the old one. I hope you find it OK.

> Pass base table's metadata to Index.validateOptions
> ---
>
> Key: CASSANDRA-10924
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10924
> Project: Cassandra
>  Issue Type: Improvement
>  Components: CQL, Local Write-Read Paths
>Reporter: Andrés de la Peña
>Assignee: Andrés de la Peña
>Priority: Minor
>  Labels: 2i, index, validation
> Fix For: 3.0.x, 3.x
>
> Attachments: CASSANDRA-10924-v0.diff, CASSANDRA-10924-v1.diff
>
>
> Some custom index implementations require the base table's metadata to 
> validate their creation options. For example, the options of these 
> implementations can contain information about which base table's columns are 
> going to be indexed and how, so the implementation needs to know the 
> existence and the type of the columns to be indexed to properly validate.
> The attached patch proposes to add base table's {{CFMetaData}} to Index' 
> optional static method to validate the custom index options:
> {{public static Map validateOptions(CFMetaData cfm, 
> Map options);}}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-10996) The system table system.schema_columnfamilies does not exist

2016-01-12 Thread Sylvain Lebresne (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10996?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15093894#comment-15093894
 ] 

Sylvain Lebresne commented on CASSANDRA-10996:
--

bq. you suggest I should use system_schema instead of system,but I tried to use 
system_schema, just get the same exception

To clarify, it's not enough to change {{system}} to {{system_schema}}. The names 
of the tables and their content have changed to present the schema in a way that 
is more natural to how it looks on the CQL side.

bq. So now how to get the column value like 
key_aliases,column_aliases,comparator? Is there any example code ?

It's not trivial. You'll have to understand how a thrift schema translates to 
CQL and vice versa. You can start by understanding the translation between the 
two paradigms in general by reading for instance 
[this|http://www.datastax.com/dev/blog/thrift-to-cql3] and 
[that|http://www.datastax.com/dev/blog/does-cql-support-dynamic-columns-wide-rows],
 but after that I'm afraid you'll have to dig into the code. 

But really, you may not have to do that. It's probably a better idea to adapt 
Hive to be more CQL-aware instead of dealing with raw comparators and column 
aliases. But I can't really help you much more with that, as I know next to 
nothing about Hive.

> The system table  system.schema_columnfamilies does not exist
> -
>
> Key: CASSANDRA-10996
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10996
> Project: Cassandra
>  Issue Type: Bug
>  Components: CQL
>Reporter: sangshenghong
>Priority: Critical
> Fix For: 3.1.1
>
> Attachments: error.png
>
>
> In the 2.1.6 version,there is one system table named 
> "system.schema_columnfamilies", but in the latest version 3.1.1, when I 
> execute select * from system.schema_columnfamilies, it throw "unconfigured 
> table schema_columnfamilies" in cqlsh.
> But in the system.log file, it show 
> ColumnFamilyStore.java:381 - Initializing system.schema_columnfamilies
> I checked the doc and found some tables and schemas have been change, so I 
> want to know if there any change for this?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (CASSANDRA-11002) com.datastax.driver.core.exceptions.NoHostAvailableException

2016-01-12 Thread Sylvain Lebresne (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11002?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sylvain Lebresne resolved CASSANDRA-11002.
--
Resolution: Not A Problem

I'm sorry, but nothing in there indicates a server-side bug. It just appears you 
have either a network/system configuration problem or a driver one. I suggest 
trying the mailing list (this probably belongs on the java driver list) to get 
some help, but this JIRA is only for tracking bugs and improvements to the 
server.

> com.datastax.driver.core.exceptions.NoHostAvailableException
> 
>
> Key: CASSANDRA-11002
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11002
> Project: Cassandra
>  Issue Type: Bug
> Environment: Apache Cassandra 3.0.2 and 3.1.1
>Reporter: sangshenghong
> Fix For: 3.1.1
>
> Attachments: error.png
>
>
> I have created one issue CASSANDRA-10996, but the owner suggest I use 
> datastax java driver to get KeySpaceMetaData, so I downloaded this driver 
> which version is "3.0.0-rc1", I use the following code to connect :
>Cluster cluster = Cluster.builder()
> .addContactPoint("192.168.56.11")
> .build();
>   KeyspaceMetadata keySpaceMetaData = 
> cluster.getMetadata().getKeyspace(this.keyspace);
> But got the following exception:
> Caused by: com.datastax.driver.core.exceptions.NoHostAvailableException: All 
> host(s) tried for query failed (tried: hwtest1.localdomain/192.168.56.11:9042 
> (com.datastax.driver.core.exceptions.TransportException: 
> [hwtest1.localdomain/192.168.56.11] Cannot connect))
>   at 
> com.datastax.driver.core.ControlConnection.reconnectInternal(ControlConnection.java:231)
>   at 
> com.datastax.driver.core.ControlConnection.connect(ControlConnection.java:77)
>   at com.datastax.driver.core.Cluster$Manager.init(Cluster.java:1382)
>   at com.datastax.driver.core.Cluster.getMetadata(Cluster.java:393)
> I also change the cassandra.yaml based on 
> https://github.com/datastax/java-driver/wiki/Connection-requirements
> I also tried using DataStax DevCenter; it can connect to 
> hwtest1.localdomain/192.168.56.11 successfully.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[2/2] cassandra git commit: Merge branch 'cassandra-2.2' into cassandra-3.0

2016-01-12 Thread tylerhobbs
Merge branch 'cassandra-2.2' into cassandra-3.0

Conflicts:
CHANGES.txt


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/2569fbd1
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/2569fbd1
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/2569fbd1

Branch: refs/heads/cassandra-3.0
Commit: 2569fbd1429681f80aaf47b8668e9bc15cf0445d
Parents: a942b2c de946ae
Author: Tyler Hobbs 
Authored: Tue Jan 12 11:52:24 2016 -0600
Committer: Tyler Hobbs 
Committed: Tue Jan 12 11:52:24 2016 -0600

--
 CHANGES.txt  | 2 ++
 bin/cqlsh.py | 5 +++--
 2 files changed, 5 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/2569fbd1/CHANGES.txt
--
diff --cc CHANGES.txt
index b916fa6,f895139..3ec5346
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -1,20 -1,6 +1,22 @@@
 -2.2.5
 +3.0.3
 + * Fix AssertionError when removing from list using UPDATE (CASSANDRA-10954)
 + * Fix UnsupportedOperationException when reading old sstable with range
 +   tombstone (CASSANDRA-10743)
 + * MV should use the maximum timestamp of the primary key (CASSANDRA-10910)
 + * Fix potential assertion error during compaction (CASSANDRA-10944)
 + * Fix counting of received sstables in streaming (CASSANDRA-10949)
 + * Implement hints compression (CASSANDRA-9428)
 + * Fix potential assertion error when reading static columns (CASSANDRA-10903)
 + * Avoid NoSuchElementException when executing empty batch (CASSANDRA-10711)
 + * Avoid building PartitionUpdate in toString (CASSANDRA-10897)
 + * Reduce heap spent when receiving many SSTables (CASSANDRA-10797)
 + * Add back support for 3rd party auth providers to bulk loader 
(CASSANDRA-10873)
 + * Eliminate the dependency on jgrapht for UDT resolution (CASSANDRA-10653)
 + * (Hadoop) Close Clusters and Sessions in Hadoop Input/Output classes 
(CASSANDRA-10837)
 + * Fix sstableloader not working with upper case keyspace name 
(CASSANDRA-10806)
 +Merged from 2.2:
+  * (cqlsh) Also apply --connect-timeout to control connection
+timeout (CASSANDRA-10959)
   * Histogram buckets exposed in jmx are sorted incorrectly (CASSANDRA-10975)
   * Enable GC logging by default (CASSANDRA-10140)
   * Optimize pending range computation (CASSANDRA-9258)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/2569fbd1/bin/cqlsh.py
--



cassandra git commit: cqlsh: Apply --connect-timeout to control conn

2016-01-12 Thread tylerhobbs
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-2.2 a6e5cdaef -> de946ae45


cqlsh: Apply --connect-timeout to control conn

Patch by Julien Blondeau; reviewed by Tyler Hobbs for CASSANDRA-10959


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/de946ae4
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/de946ae4
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/de946ae4

Branch: refs/heads/cassandra-2.2
Commit: de946ae45ad8af3718d4159e885e6700230d4818
Parents: a6e5cda
Author: Julien Blondeau 
Authored: Tue Jan 12 11:51:20 2016 -0600
Committer: Tyler Hobbs 
Committed: Tue Jan 12 11:51:20 2016 -0600

--
 CHANGES.txt  | 2 ++
 bin/cqlsh.py | 5 +++--
 2 files changed, 5 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/de946ae4/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 477a104..f895139 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,6 @@
 2.2.5
+ * (cqlsh) Also apply --connect-timeout to control connection
+   timeout (CASSANDRA-10959)
  * Histogram buckets exposed in jmx are sorted incorrectly (CASSANDRA-10975)
  * Enable GC logging by default (CASSANDRA-10140)
  * Optimize pending range computation (CASSANDRA-9258)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/de946ae4/bin/cqlsh.py
--
diff --git a/bin/cqlsh.py b/bin/cqlsh.py
index c38bc2e..be2ad46 100644
--- a/bin/cqlsh.py
+++ b/bin/cqlsh.py
@@ -141,7 +141,6 @@ from cassandra.cluster import Cluster
 from cassandra.metadata import (ColumnMetadata, KeyspaceMetadata,
 TableMetadata, protect_name, protect_names)
 from cassandra.policies import WhiteListRoundRobinPolicy
-from cassandra.protocol import ResultMessage
 from cassandra.query import SimpleStatement, ordered_dict_factory, 
TraceUnavailable
 
 # cqlsh should run correctly when run out of a Cassandra source tree,
@@ -683,6 +682,7 @@ class Shell(cmd.Cmd):
 auth_provider=self.auth_provider,
 ssl_options=sslhandling.ssl_settings(hostname, 
CONFIG_FILE) if ssl else None,
 
load_balancing_policy=WhiteListRoundRobinPolicy([self.hostname]),
+control_connection_timeout=connect_timeout,
 connect_timeout=connect_timeout)
 self.owns_connection = not use_conn
 self.set_expanded_cql_version(cqlver)
@@ -1201,7 +1201,7 @@ class Shell(cmd.Cmd):
 def perform_simple_statement(self, statement):
 if not statement:
 return False, None
-rows = None
+
 while True:
 try:
 future = self.session.execute_async(statement, 
trace=self.tracing_enabled)
@@ -2047,6 +2047,7 @@ class Shell(cmd.Cmd):
auth_provider=auth_provider,
ssl_options=self.conn.ssl_options,

load_balancing_policy=WhiteListRoundRobinPolicy([self.hostname]),
+   control_connection_timeout=self.conn.connect_timeout,
connect_timeout=self.conn.connect_timeout)
 
 if self.current_keyspace:
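The patch above routes the single `--connect-timeout` value into two different `Cluster` keyword arguments (`connect_timeout` and `control_connection_timeout`, both shown in the diff). As a hedged sketch, the shape of the change amounts to building both keyword arguments from one CLI value:

```python
def cluster_timeout_kwargs(connect_timeout):
    # Mirrors the patch: the one --connect-timeout value now covers both
    # the socket connect phase and the control-connection setup, instead
    # of leaving the control connection on the driver's default timeout.
    return {
        "connect_timeout": connect_timeout,
        "control_connection_timeout": connect_timeout,
    }

# e.g. Cluster(contact_points, **cluster_timeout_kwargs(10)) in cqlsh's
# connect path (illustrative helper; cqlsh passes the kwargs inline)
print(cluster_timeout_kwargs(10))
```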



[jira] [Commented] (CASSANDRA-10997) cqlsh_copy_tests failing en mass when vnodes are disabled

2016-01-12 Thread Philip Thompson (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10997?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15094415#comment-15094415
 ] 

Philip Thompson commented on CASSANDRA-10997:
-

http://cassci.datastax.com/view/Dev/view/stef1927/job/stef1927-10997-2.1-novnode-dtest/
http://cassci.datastax.com/view/Dev/view/stef1927/job/stef1927-10997-2.2-novnode-dtest/
http://cassci.datastax.com/view/Dev/view/stef1927/job/stef1927-10997-3.0-novnode-dtest/
http://cassci.datastax.com/view/Dev/view/stef1927/job/stef1927-10997-3.3-novnode-dtest/
http://cassci.datastax.com/view/Dev/view/stef1927/job/stef1927-10997-novnode-dtest/

I'll be expanding CI so we catch this problem earlier next time.

> cqlsh_copy_tests failing en mass when vnodes are disabled
> -
>
> Key: CASSANDRA-10997
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10997
> Project: Cassandra
>  Issue Type: Bug
>  Components: Tools
>Reporter: Philip Thompson
>Assignee: Stefania
>  Labels: dtest
> Fix For: 2.1.x, 2.2.x, 3.0.x, 3.x
>
>
> Check out [an example cassci 
> failure|http://cassci.datastax.com/job/cassandra-2.1_novnode_dtest/186/testReport/cqlsh_tests.cqlsh_copy_tests/CqlshCopyTest/test_list_data/]
>  as well as the [full novnode report 
> page|http://cassci.datastax.com/userContent/cstar_report/index.html?jobs=cassandra-2.1_novnode_dtest,cassandra-3.0_novnode_dtest,cassandra-2.2_novnode_dtest_known=true].
> Many COPY TO tests are failing when the cluster only has one token. The 
> message {{Found no ranges to query, check begin and end tokens: None - None}} 
> is printed, and it appears to be coming from cqlsh, specifically in 
> pylib/cqlshlib/copyutil.py
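For context, the single-token failure mode can be sketched with a toy range computation (illustrative only, not copyutil.py's actual code): pairing each ring token with its predecessor naturally yields the wraparound range, so a one-token ring should still produce one full-ring range rather than "no ranges".

```python
def token_ranges(tokens):
    # Pair each token with its predecessor; index 0 pairs with the last
    # token, giving the wraparound range. A single-token ring therefore
    # yields one (t, t] range covering the whole ring instead of none.
    tokens = sorted(tokens)
    return [(tokens[i - 1], tokens[i]) for i in range(len(tokens))]

print(token_ranges([42]))    # [(42, 42)] -- one full-ring range
print(token_ranges([1, 4]))  # [(4, 1), (1, 4)]
```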



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (CASSANDRA-10926) Improve error message when removenode called on nonmember node

2016-01-12 Thread Joshua McKenzie (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10926?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joshua McKenzie reassigned CASSANDRA-10926:
---

Assignee: Joshua McKenzie  (was: Joel Knighton)

> Improve error message when removenode called on nonmember node
> --
>
> Key: CASSANDRA-10926
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10926
> Project: Cassandra
>  Issue Type: Improvement
> Environment: CentOS 7 x64, Java 1.8.0.65
>Reporter: Kai Wang
>Assignee: Joshua McKenzie
>Priority: Trivial
>
> {noformat}
> [root@centos-2 ~]# nodetool -u xxx -pw  removenode 
> 97a9042d-ea14-49a9-9f22-2dab4b762673
> error: null
> -- StackTrace --
> java.lang.AssertionError
> at 
> org.apache.cassandra.locator.TokenMetadata.getTokens(TokenMetadata.java:474)
> at 
> org.apache.cassandra.service.StorageService.removeNode(StorageService.java:3793)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source)
> at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
> at java.lang.reflect.Method.invoke(Unknown Source)
> at sun.reflect.misc.Trampoline.invoke(Unknown Source)
> at sun.reflect.GeneratedMethodAccessor6.invoke(Unknown Source)
> at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
> at java.lang.reflect.Method.invoke(Unknown Source)
> at sun.reflect.misc.MethodUtil.invoke(Unknown Source)
> at com.sun.jmx.mbeanserver.StandardMBeanIntrospector.invokeM2(Unknown 
> Source)
> at com.sun.jmx.mbeanserver.StandardMBeanIntrospector.invokeM2(Unknown 
> Source)
> at com.sun.jmx.mbeanserver.MBeanIntrospector.invokeM(Unknown Source)
> at com.sun.jmx.mbeanserver.PerInterface.invoke(Unknown Source)
> at com.sun.jmx.mbeanserver.MBeanSupport.invoke(Unknown Source)
> at 
> com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.invoke(Unknown Source)
> at com.sun.jmx.mbeanserver.JmxMBeanServer.invoke(Unknown Source)
> at 
> com.sun.jmx.remote.security.MBeanServerAccessController.invoke(Unknown Source)
> at javax.management.remote.rmi.RMIConnectionImpl.doOperation(Unknown 
> Source)
> at javax.management.remote.rmi.RMIConnectionImpl.access$300(Unknown 
> Source)
> at 
> javax.management.remote.rmi.RMIConnectionImpl$PrivilegedOperation.run(Unknown 
> Source)
> at java.security.AccessController.doPrivileged(Native Method)
> at 
> javax.management.remote.rmi.RMIConnectionImpl.doPrivilegedOperation(Unknown 
> Source)
> at javax.management.remote.rmi.RMIConnectionImpl.invoke(Unknown 
> Source)
> at sun.reflect.GeneratedMethodAccessor7.invoke(Unknown Source)
> at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
> at java.lang.reflect.Method.invoke(Unknown Source)
> at sun.rmi.server.UnicastServerRef.dispatch(Unknown Source)
> at sun.rmi.transport.Transport$1.run(Unknown Source)
> at sun.rmi.transport.Transport$1.run(Unknown Source)
> at java.security.AccessController.doPrivileged(Native Method)
> at sun.rmi.transport.Transport.serviceCall(Unknown Source)
> at sun.rmi.transport.tcp.TCPTransport.handleMessages(Unknown Source)
> at sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run0(Unknown 
> Source)
> at 
> sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.lambda$run$81(Unknown 
> Source)
> at java.security.AccessController.doPrivileged(Native Method)
> at sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run(Unknown 
> Source)
> at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
> at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
> at java.lang.Thread.run(Unknown Source)
> {noformat}





[jira] [Assigned] (CASSANDRA-10928) SSTableExportTest.testExportColumnsWithMetadata randomly fails

2016-01-12 Thread Joshua McKenzie (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10928?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joshua McKenzie reassigned CASSANDRA-10928:
---

Assignee: Joshua McKenzie  (was: sankalp kohli)

> SSTableExportTest.testExportColumnsWithMetadata randomly fails
> --
>
> Key: CASSANDRA-10928
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10928
> Project: Cassandra
>  Issue Type: Bug
>  Components: Testing
>Reporter: Michael Kjellman
>Assignee: Joshua McKenzie
> Fix For: 2.1.12
>
> Attachments: CASSANDRA_10928_2.1.diff
>
>
> The SSTableExportTest.testExportColumnsWithMetadata test will randomly fail 
> (bogusly). Currently, the string check used won’t work if the JSON generated 
> happens to order the keys in the object differently.
> {code}
> assertEquals(
> "unexpected serialization format for topLevelDeletion",
> "{\"markedForDeleteAt\":0,\"localDeletionTime\":0}",
> serializedDeletionInfo.toJSONString());
> {code}
> {noformat}
> [junit] Testcase: 
> testExportColumnsWithMetadata(org.apache.cassandra.tools.SSTableExportTest):  
>   FAILED
> [junit] unexpected serialization format for topLevelDeletion 
> expected:<{"[markedForDeleteAt":0,"localDeletionTime]":0}> but 
> was:<{"[localDeletionTime":0,"markedForDeleteAt]":0}>
> [junit] junit.framework.AssertionFailedError: unexpected serialization 
> format for topLevelDeletion 
> expected:<{"[markedForDeleteAt":0,"localDeletionTime]":0}> but 
> was:<{"[localDeletionTime":0,"markedForDeleteAt]":0}>
> [junit]   at 
> org.apache.cassandra.tools.SSTableExportTest.testExportColumnsWithMetadata(SSTableExportTest.java:299)
> [junit]
> {noformat}
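The flakiness comes from comparing serialized JSON as raw strings, while key order in a JSON object is not guaranteed. A hedged sketch of an order-insensitive check (in Python for brevity; the actual test is JUnit) is to parse both sides before comparing:

```python
import json

def same_json(expected, actual):
    # Parse both sides so key order no longer matters, instead of
    # comparing raw serialized strings the way the flaky assert does.
    return json.loads(expected) == json.loads(actual)

print(same_json('{"markedForDeleteAt":0,"localDeletionTime":0}',
                '{"localDeletionTime":0,"markedForDeleteAt":0}'))  # True
```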





[4/4] cassandra git commit: Merge branch 'cassandra-3.3' into trunk

2016-01-12 Thread tylerhobbs
Merge branch 'cassandra-3.3' into trunk


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/a883ff5f
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/a883ff5f
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/a883ff5f

Branch: refs/heads/trunk
Commit: a883ff5f31ce47d03306a0a322109c04d01ed534
Parents: 22a1bbb f4ba752
Author: Tyler Hobbs 
Authored: Tue Jan 12 11:53:49 2016 -0600
Committer: Tyler Hobbs 
Committed: Tue Jan 12 11:53:49 2016 -0600

--
 CHANGES.txt  | 2 ++
 bin/cqlsh.py | 5 +++--
 2 files changed, 5 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/a883ff5f/CHANGES.txt
--
diff --cc CHANGES.txt
index e03b6a1,a9202ce..36f0a8a
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -1,11 -1,7 +1,13 @@@
 +3.4
 + * Stripe view locks by key and table ID to reduce contention 
(CASSANDRA-10981)
 + * Add nodetool gettimeout and settimeout commands (CASSANDRA-10953)
 + * Add 3.0 metadata to sstablemetadata output (CASSANDRA-10838)
 +
 +
  3.3
  Merged from 3.0:
+  * (cqlsh) Also apply --connect-timeout to control connection
+timeout (CASSANDRA-10959)
   * Fix AssertionError when removing from list using UPDATE (CASSANDRA-10954)
   * Fix UnsupportedOperationException when reading old sstable with range
 tombstone (CASSANDRA-10743)



[2/3] cassandra git commit: Merge branch 'cassandra-2.2' into cassandra-3.0

2016-01-12 Thread tylerhobbs
Merge branch 'cassandra-2.2' into cassandra-3.0

Conflicts:
CHANGES.txt


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/2569fbd1
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/2569fbd1
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/2569fbd1

Branch: refs/heads/cassandra-3.3
Commit: 2569fbd1429681f80aaf47b8668e9bc15cf0445d
Parents: a942b2c de946ae
Author: Tyler Hobbs 
Authored: Tue Jan 12 11:52:24 2016 -0600
Committer: Tyler Hobbs 
Committed: Tue Jan 12 11:52:24 2016 -0600

--
 CHANGES.txt  | 2 ++
 bin/cqlsh.py | 5 +++--
 2 files changed, 5 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/2569fbd1/CHANGES.txt
--
diff --cc CHANGES.txt
index b916fa6,f895139..3ec5346
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -1,20 -1,6 +1,22 @@@
 -2.2.5
 +3.0.3
 + * Fix AssertionError when removing from list using UPDATE (CASSANDRA-10954)
 + * Fix UnsupportedOperationException when reading old sstable with range
 +   tombstone (CASSANDRA-10743)
 + * MV should use the maximum timestamp of the primary key (CASSANDRA-10910)
 + * Fix potential assertion error during compaction (CASSANDRA-10944)
 + * Fix counting of received sstables in streaming (CASSANDRA-10949)
 + * Implement hints compression (CASSANDRA-9428)
 + * Fix potential assertion error when reading static columns (CASSANDRA-10903)
 + * Avoid NoSuchElementException when executing empty batch (CASSANDRA-10711)
 + * Avoid building PartitionUpdate in toString (CASSANDRA-10897)
 + * Reduce heap spent when receiving many SSTables (CASSANDRA-10797)
 + * Add back support for 3rd party auth providers to bulk loader 
(CASSANDRA-10873)
 + * Eliminate the dependency on jgrapht for UDT resolution (CASSANDRA-10653)
 + * (Hadoop) Close Clusters and Sessions in Hadoop Input/Output classes 
(CASSANDRA-10837)
 + * Fix sstableloader not working with upper case keyspace name 
(CASSANDRA-10806)
 +Merged from 2.2:
+  * (cqlsh) Also apply --connect-timeout to control connection
+timeout (CASSANDRA-10959)
   * Histogram buckets exposed in jmx are sorted incorrectly (CASSANDRA-10975)
   * Enable GC logging by default (CASSANDRA-10140)
   * Optimize pending range computation (CASSANDRA-9258)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/2569fbd1/bin/cqlsh.py
--



[jira] [Updated] (CASSANDRA-10928) SSTableExportTest.testExportColumnsWithMetadata randomly fails

2016-01-12 Thread Joshua McKenzie (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10928?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joshua McKenzie updated CASSANDRA-10928:

Assignee: sankalp kohli  (was: Joshua McKenzie)
Reviewer: Joshua McKenzie

> SSTableExportTest.testExportColumnsWithMetadata randomly fails
> --
>
> Key: CASSANDRA-10928
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10928
> Project: Cassandra
>  Issue Type: Bug
>  Components: Testing
>Reporter: Michael Kjellman
>Assignee: sankalp kohli
> Fix For: 2.1.12
>
> Attachments: CASSANDRA_10928_2.1.diff
>
>
> The SSTableExportTest.testExportColumnsWithMetadata test will randomly fail 
> (bogusly). Currently, the string check used won’t work if the JSON generated 
> happens to order the keys in the object differently.
> {code}
> assertEquals(
> "unexpected serialization format for topLevelDeletion",
> "{\"markedForDeleteAt\":0,\"localDeletionTime\":0}",
> serializedDeletionInfo.toJSONString());
> {code}
> {noformat}
> [junit] Testcase: 
> testExportColumnsWithMetadata(org.apache.cassandra.tools.SSTableExportTest):  
>   FAILED
> [junit] unexpected serialization format for topLevelDeletion 
> expected:<{"[markedForDeleteAt":0,"localDeletionTime]":0}> but 
> was:<{"[localDeletionTime":0,"markedForDeleteAt]":0}>
> [junit] junit.framework.AssertionFailedError: unexpected serialization 
> format for topLevelDeletion 
> expected:<{"[markedForDeleteAt":0,"localDeletionTime]":0}> but 
> was:<{"[localDeletionTime":0,"markedForDeleteAt]":0}>
> [junit]   at 
> org.apache.cassandra.tools.SSTableExportTest.testExportColumnsWithMetadata(SSTableExportTest.java:299)
> [junit]
> {noformat}





[jira] [Updated] (CASSANDRA-10926) Improve error message when removenode called on nonmember node

2016-01-12 Thread Joshua McKenzie (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10926?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joshua McKenzie updated CASSANDRA-10926:

Assignee: Joel Knighton  (was: Joshua McKenzie)
Reviewer: Joshua McKenzie

> Improve error message when removenode called on nonmember node
> --
>
> Key: CASSANDRA-10926
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10926
> Project: Cassandra
>  Issue Type: Improvement
> Environment: CentOS 7 x64, Java 1.8.0.65
>Reporter: Kai Wang
>Assignee: Joel Knighton
>Priority: Trivial
>
> {noformat}
> [root@centos-2 ~]# nodetool -u xxx -pw  removenode 
> 97a9042d-ea14-49a9-9f22-2dab4b762673
> error: null
> -- StackTrace --
> java.lang.AssertionError
> at 
> org.apache.cassandra.locator.TokenMetadata.getTokens(TokenMetadata.java:474)
> at 
> org.apache.cassandra.service.StorageService.removeNode(StorageService.java:3793)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source)
> at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
> at java.lang.reflect.Method.invoke(Unknown Source)
> at sun.reflect.misc.Trampoline.invoke(Unknown Source)
> at sun.reflect.GeneratedMethodAccessor6.invoke(Unknown Source)
> at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
> at java.lang.reflect.Method.invoke(Unknown Source)
> at sun.reflect.misc.MethodUtil.invoke(Unknown Source)
> at com.sun.jmx.mbeanserver.StandardMBeanIntrospector.invokeM2(Unknown 
> Source)
> at com.sun.jmx.mbeanserver.StandardMBeanIntrospector.invokeM2(Unknown 
> Source)
> at com.sun.jmx.mbeanserver.MBeanIntrospector.invokeM(Unknown Source)
> at com.sun.jmx.mbeanserver.PerInterface.invoke(Unknown Source)
> at com.sun.jmx.mbeanserver.MBeanSupport.invoke(Unknown Source)
> at 
> com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.invoke(Unknown Source)
> at com.sun.jmx.mbeanserver.JmxMBeanServer.invoke(Unknown Source)
> at 
> com.sun.jmx.remote.security.MBeanServerAccessController.invoke(Unknown Source)
> at javax.management.remote.rmi.RMIConnectionImpl.doOperation(Unknown 
> Source)
> at javax.management.remote.rmi.RMIConnectionImpl.access$300(Unknown 
> Source)
> at 
> javax.management.remote.rmi.RMIConnectionImpl$PrivilegedOperation.run(Unknown 
> Source)
> at java.security.AccessController.doPrivileged(Native Method)
> at 
> javax.management.remote.rmi.RMIConnectionImpl.doPrivilegedOperation(Unknown 
> Source)
> at javax.management.remote.rmi.RMIConnectionImpl.invoke(Unknown 
> Source)
> at sun.reflect.GeneratedMethodAccessor7.invoke(Unknown Source)
> at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
> at java.lang.reflect.Method.invoke(Unknown Source)
> at sun.rmi.server.UnicastServerRef.dispatch(Unknown Source)
> at sun.rmi.transport.Transport$1.run(Unknown Source)
> at sun.rmi.transport.Transport$1.run(Unknown Source)
> at java.security.AccessController.doPrivileged(Native Method)
> at sun.rmi.transport.Transport.serviceCall(Unknown Source)
> at sun.rmi.transport.tcp.TCPTransport.handleMessages(Unknown Source)
> at sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run0(Unknown 
> Source)
> at 
> sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.lambda$run$81(Unknown 
> Source)
> at java.security.AccessController.doPrivileged(Native Method)
> at sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run(Unknown 
> Source)
> at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
> at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
> at java.lang.Thread.run(Unknown Source)
> {noformat}





[1/2] cassandra git commit: cqlsh: Apply --connect-timeout to control conn

2016-01-12 Thread tylerhobbs
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-3.0 a942b2ceb -> 2569fbd14


cqlsh: Apply --connect-timeout to control conn

Patch by Julien Blondeau; reviewed by Tyler Hobbs for CASSANDRA-10959


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/de946ae4
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/de946ae4
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/de946ae4

Branch: refs/heads/cassandra-3.0
Commit: de946ae45ad8af3718d4159e885e6700230d4818
Parents: a6e5cda
Author: Julien Blondeau 
Authored: Tue Jan 12 11:51:20 2016 -0600
Committer: Tyler Hobbs 
Committed: Tue Jan 12 11:51:20 2016 -0600

--
 CHANGES.txt  | 2 ++
 bin/cqlsh.py | 5 +++--
 2 files changed, 5 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/de946ae4/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 477a104..f895139 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,6 @@
 2.2.5
+ * (cqlsh) Also apply --connect-timeout to control connection
+   timeout (CASSANDRA-10959)
  * Histogram buckets exposed in jmx are sorted incorrectly (CASSANDRA-10975)
  * Enable GC logging by default (CASSANDRA-10140)
  * Optimize pending range computation (CASSANDRA-9258)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/de946ae4/bin/cqlsh.py
--
diff --git a/bin/cqlsh.py b/bin/cqlsh.py
index c38bc2e..be2ad46 100644
--- a/bin/cqlsh.py
+++ b/bin/cqlsh.py
@@ -141,7 +141,6 @@ from cassandra.cluster import Cluster
 from cassandra.metadata import (ColumnMetadata, KeyspaceMetadata,
 TableMetadata, protect_name, protect_names)
 from cassandra.policies import WhiteListRoundRobinPolicy
-from cassandra.protocol import ResultMessage
 from cassandra.query import SimpleStatement, ordered_dict_factory, 
TraceUnavailable
 
 # cqlsh should run correctly when run out of a Cassandra source tree,
@@ -683,6 +682,7 @@ class Shell(cmd.Cmd):
 auth_provider=self.auth_provider,
 ssl_options=sslhandling.ssl_settings(hostname, 
CONFIG_FILE) if ssl else None,
 
load_balancing_policy=WhiteListRoundRobinPolicy([self.hostname]),
+control_connection_timeout=connect_timeout,
 connect_timeout=connect_timeout)
 self.owns_connection = not use_conn
 self.set_expanded_cql_version(cqlver)
@@ -1201,7 +1201,7 @@ class Shell(cmd.Cmd):
 def perform_simple_statement(self, statement):
 if not statement:
 return False, None
-rows = None
+
 while True:
 try:
 future = self.session.execute_async(statement, 
trace=self.tracing_enabled)
@@ -2047,6 +2047,7 @@ class Shell(cmd.Cmd):
auth_provider=auth_provider,
ssl_options=self.conn.ssl_options,

load_balancing_policy=WhiteListRoundRobinPolicy([self.hostname]),
+   control_connection_timeout=self.conn.connect_timeout,
connect_timeout=self.conn.connect_timeout)
 
 if self.current_keyspace:



[3/3] cassandra git commit: Merge branch 'cassandra-3.0' into cassandra-3.3

2016-01-12 Thread tylerhobbs
Merge branch 'cassandra-3.0' into cassandra-3.3

Conflicts:
CHANGES.txt


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/f4ba752b
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/f4ba752b
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/f4ba752b

Branch: refs/heads/cassandra-3.3
Commit: f4ba752b32db2950f2a816b2e7896941de14b822
Parents: 91d7bed 2569fbd
Author: Tyler Hobbs 
Authored: Tue Jan 12 11:53:23 2016 -0600
Committer: Tyler Hobbs 
Committed: Tue Jan 12 11:53:23 2016 -0600

--
 CHANGES.txt  | 2 ++
 bin/cqlsh.py | 5 +++--
 2 files changed, 5 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/f4ba752b/CHANGES.txt
--
diff --cc CHANGES.txt
index a301b0f,3ec5346..a9202ce
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -1,5 -1,4 +1,7 @@@
 -3.0.3
 +3.3
 +Merged from 3.0:
++ * (cqlsh) Also apply --connect-timeout to control connection
++   timeout (CASSANDRA-10959)
   * Fix AssertionError when removing from list using UPDATE (CASSANDRA-10954)
   * Fix UnsupportedOperationException when reading old sstable with range
 tombstone (CASSANDRA-10743)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/f4ba752b/bin/cqlsh.py
--



[1/3] cassandra git commit: cqlsh: Apply --connect-timeout to control conn

2016-01-12 Thread tylerhobbs
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-3.3 91d7bed55 -> f4ba752b3


cqlsh: Apply --connect-timeout to control conn

Patch by Julien Blondeau; reviewed by Tyler Hobbs for CASSANDRA-10959


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/de946ae4
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/de946ae4
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/de946ae4

Branch: refs/heads/cassandra-3.3
Commit: de946ae45ad8af3718d4159e885e6700230d4818
Parents: a6e5cda
Author: Julien Blondeau 
Authored: Tue Jan 12 11:51:20 2016 -0600
Committer: Tyler Hobbs 
Committed: Tue Jan 12 11:51:20 2016 -0600

--
 CHANGES.txt  | 2 ++
 bin/cqlsh.py | 5 +++--
 2 files changed, 5 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/de946ae4/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 477a104..f895139 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,6 @@
 2.2.5
+ * (cqlsh) Also apply --connect-timeout to control connection
+   timeout (CASSANDRA-10959)
  * Histogram buckets exposed in jmx are sorted incorrectly (CASSANDRA-10975)
  * Enable GC logging by default (CASSANDRA-10140)
  * Optimize pending range computation (CASSANDRA-9258)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/de946ae4/bin/cqlsh.py
--
diff --git a/bin/cqlsh.py b/bin/cqlsh.py
index c38bc2e..be2ad46 100644
--- a/bin/cqlsh.py
+++ b/bin/cqlsh.py
@@ -141,7 +141,6 @@ from cassandra.cluster import Cluster
 from cassandra.metadata import (ColumnMetadata, KeyspaceMetadata,
 TableMetadata, protect_name, protect_names)
 from cassandra.policies import WhiteListRoundRobinPolicy
-from cassandra.protocol import ResultMessage
 from cassandra.query import SimpleStatement, ordered_dict_factory, 
TraceUnavailable
 
 # cqlsh should run correctly when run out of a Cassandra source tree,
@@ -683,6 +682,7 @@ class Shell(cmd.Cmd):
 auth_provider=self.auth_provider,
 ssl_options=sslhandling.ssl_settings(hostname, 
CONFIG_FILE) if ssl else None,
 
load_balancing_policy=WhiteListRoundRobinPolicy([self.hostname]),
+control_connection_timeout=connect_timeout,
 connect_timeout=connect_timeout)
 self.owns_connection = not use_conn
 self.set_expanded_cql_version(cqlver)
@@ -1201,7 +1201,7 @@ class Shell(cmd.Cmd):
 def perform_simple_statement(self, statement):
 if not statement:
 return False, None
-rows = None
+
 while True:
 try:
 future = self.session.execute_async(statement, 
trace=self.tracing_enabled)
@@ -2047,6 +2047,7 @@ class Shell(cmd.Cmd):
auth_provider=auth_provider,
ssl_options=self.conn.ssl_options,

load_balancing_policy=WhiteListRoundRobinPolicy([self.hostname]),
+   control_connection_timeout=self.conn.connect_timeout,
connect_timeout=self.conn.connect_timeout)
 
 if self.current_keyspace:



[jira] [Created] (CASSANDRA-11004) LWT results '[applied]' column name collision

2016-01-12 Thread Adam Holmberg (JIRA)
Adam Holmberg created CASSANDRA-11004:
-

 Summary: LWT results '[applied]' column name collision
 Key: CASSANDRA-11004
 URL: https://issues.apache.org/jira/browse/CASSANDRA-11004
 Project: Cassandra
  Issue Type: Bug
Reporter: Adam Holmberg
Priority: Minor
 Fix For: 4.x


LWT requests return a not-well-documented single row result with a boolean 
{{\[applied]}} column and optional column states.

If the table happens to have a column named {{\[applied]}}, this causes a name 
collision. There is no error, but the {{\[applied]}} flag is not available.
{code}
cassandra@cqlsh:test> CREATE TABLE test (k int PRIMARY KEY , "[applied]" int);

cassandra@cqlsh:test> INSERT INTO test (k, "[applied]") VALUES (2, 3) IF NOT 
EXISTS ;

 [applied]
---
  True

cassandra@cqlsh:test> INSERT INTO test (k, "[applied]") VALUES (2, 3) IF NOT 
EXISTS ;

 [applied] | k
---+---
 3 | 2
{code}

I doubt this comes up much (at all) in practice, but thought I'd mention it. 

One alternative approach might be to add an LWT result type 
([flag|https://github.com/apache/cassandra/blob/cassandra-3.0/doc/native_protocol_v4.spec#L518-L522])
 that segregates the "applied" flag information from the optional row results.





[jira] [Commented] (CASSANDRA-10661) Integrate SASI to Cassandra

2016-01-12 Thread Pavel Yaskevich (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10661?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15094522#comment-15094522
 ] 

Pavel Yaskevich commented on CASSANDRA-10661:
-

[~doanduyhai] It uses built-in facilities for this, namely 
PartitionRangeReadCommand in 3.x. Since that returns results in token order, it 
doesn't have to scatter-gather right away and can do what normal read commands 
do.

> Integrate SASI to Cassandra
> ---
>
> Key: CASSANDRA-10661
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10661
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Local Write-Read Paths
>Reporter: Pavel Yaskevich
>Assignee: Pavel Yaskevich
>  Labels: sasi
> Fix For: 3.x
>
>
> We have recently released new secondary index engine 
> (https://github.com/xedin/sasi) built using the SecondaryIndex API; there are 
> still a couple of things to work out regarding 3.x since it's currently 
> targeted at the 2.0 release. I want to make this an umbrella issue for all of the 
> things related to integration of SASI, which are also tracked in 
> [sasi_issues|https://github.com/xedin/sasi/issues], into mainline Cassandra 
> 3.x release.





[jira] [Updated] (CASSANDRA-10587) sstablemetadata NPE on cassandra 2.2

2016-01-12 Thread Joshua McKenzie (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10587?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joshua McKenzie updated CASSANDRA-10587:

Assignee: Paulo Motta

> sstablemetadata NPE on cassandra 2.2
> 
>
> Key: CASSANDRA-10587
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10587
> Project: Cassandra
>  Issue Type: Bug
>  Components: Tools
>Reporter: Tiago Batista
>Assignee: Paulo Motta
> Fix For: 2.2.x, 3.x
>
>
> I have recently upgraded my cassandra cluster to 2.2, currently running 
> 2.2.3. After running the first repair, cassandra renames the sstables to the 
> new naming schema that does not contain the keyspace name.
>  This causes sstablemetadata to fail with the following stack trace:
> {noformat}
> Exception in thread "main" java.lang.NullPointerException
> at 
> org.apache.cassandra.io.sstable.Descriptor.fromFilename(Descriptor.java:275)
> at 
> org.apache.cassandra.io.sstable.Descriptor.fromFilename(Descriptor.java:172)
> at 
> org.apache.cassandra.tools.SSTableMetadataViewer.main(SSTableMetadataViewer.java:52)
> {noformat}





[jira] [Updated] (CASSANDRA-10490) DTCS historic compaction, possibly with major compaction

2016-01-12 Thread Carl Yeksigian (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10490?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Carl Yeksigian updated CASSANDRA-10490:
---
Component/s: Compaction

> DTCS historic compaction, possibly with major compaction
> 
>
> Key: CASSANDRA-10490
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10490
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Compaction
>Reporter: Jonathan Shook
>  Labels: compaction, triage
> Fix For: 2.2.x, 3.x
>
>
> Presently, it's simply painful to run a major compaction with DTCS. It 
> doesn't really serve a useful purpose. Instead, a DTCS major compaction 
> should allow for a DTCS-style compaction to go back before 
> max_sstable_age_days. We can call this a historic compaction, for lack of a 
> better term.
> Such a compaction should not take precedence over normal compaction work, but 
> should be considered a background task. By default there should be a cap on 
> the number of these tasks running. It would be nice to have a separate 
> "max_historic_compaction_tasks" and possibly a 
> "max_historic_compaction_throughput" in the compaction settings to allow for 
> separate throttles on this. I would set these at 1 and 20% of the usual 
> compaction throughput if they aren't set explicitly.
> It may also be desirable to allow historic compaction to run apart from 
> running a major compaction, and to simply disable major compaction altogether 
> for DTCS.





[jira] [Commented] (CASSANDRA-10688) Stack overflow from SSTableReader$InstanceTidier.runOnClose in Leak Detector

2016-01-12 Thread Ariel Weisberg (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10688?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15094514#comment-15094514
 ] 

Ariel Weisberg commented on CASSANDRA-10688:


It also occurs to me that there are thread safety issues. It's not feasible to 
iterate collections via iterators because they can be invalidated. I'm going to 
have to remove that.
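The overflow in the report below comes from recursing down long chains of linked nodes (the `ConcurrentLinkedHashMap` eviction deque). A hedged, language-agnostic sketch of the usual fix, an explicit-stack traversal with a visited set instead of recursion (Python here; the leak detector itself is Java), is:

```python
def visit_refs(root, children):
    # Explicit stack instead of recursion: the depth of the object graph
    # no longer translates into call-stack depth, so long next-pointer
    # chains cannot overflow the stack. The visited set also makes the
    # walk safe on cyclic graphs.
    seen, stack = set(), [root]
    while stack:
        obj = stack.pop()
        if id(obj) in seen:
            continue
        seen.add(id(obj))
        stack.extend(c for c in children(obj) if c is not None)
    return len(seen)

# A 100,000-deep singly linked chain would blow a recursive visitor's stack
node = None
for _ in range(100_000):
    node = [node]
print(visit_refs(node, lambda o: [o[0]] if isinstance(o, list) else []))
```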

> Stack overflow from SSTableReader$InstanceTidier.runOnClose in Leak Detector
> 
>
> Key: CASSANDRA-10688
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10688
> Project: Cassandra
>  Issue Type: Bug
>  Components: Local Write-Read Paths, Testing
>Reporter: Jeremiah Jordan
>Assignee: Ariel Weisberg
> Fix For: 3.0.x
>
>
> Running some tests against cassandra-3.0 
> 9fc957cf3097e54ccd72e51b2d0650dc3e83eae0
> The tests are just running cassandra-stress write and read while adding and 
> removing nodes from the cluster.  After the test runs when I go back through 
> logs I find the following Stackoverflow fairly often:
> ERROR [Strong-Reference-Leak-Detector:1] 2015-11-11 00:04:10,638  
> Ref.java:413 - Stackoverflow [private java.lang.Runnable 
> org.apache.cassandra.io.sstable.format.SSTableReader$InstanceTidier.runOnClose,
>  final java.lang.Runnable 
> org.apache.cassandra.io.sstable.format.SSTableReader$DropPageCache.andThen, 
> final org.apache.cassandra.cache.InstrumentingCache 
> org.apache.cassandra.io.sstable.SSTableRewriter$InvalidateKeys.cache, private 
> final org.apache.cassandra.cache.ICache 
> org.apache.cassandra.cache.InstrumentingCache.map, private final 
> com.googlecode.concurrentlinkedhashmap.ConcurrentLinkedHashMap 
> org.apache.cassandra.cache.ConcurrentLinkedHashCache.map, final 
> com.googlecode.concurrentlinkedhashmap.LinkedDeque 
> com.googlecode.concurrentlinkedhashmap.ConcurrentLinkedHashMap.evictionDeque, 
> com.googlecode.concurrentlinkedhashmap.Linked 
> com.googlecode.concurrentlinkedhashmap.LinkedDeque.first, 
> com.googlecode.concurrentlinkedhashmap.ConcurrentLinkedHashMap$Node 
> com.googlecode.concurrentlinkedhashmap.ConcurrentLinkedHashMap$Node.next, 
> com.googlecode.concurrentlinkedhashmap.ConcurrentLinkedHashMap$Node 
> com.googlecode.concurrentlinkedhashmap.ConcurrentLinkedHashMap$Node.next, 
> com.googlecode.concurrentlinkedhashmap.ConcurrentLinkedHashMap$Node 
> com.googlecode.concurrentlinkedhashmap.ConcurrentLinkedHashMap$Node.next, 
> com.googlecode.concurrentlinkedhashmap.ConcurrentLinkedHashMap$Node 
> com.googlecode.concurrentlinkedhashmap.ConcurrentLinkedHashMap$Node.next, 
> com.googlecode.concurrentlinkedhashmap.ConcurrentLinkedHashMap$Node 
> com.googlecode.concurrentlinkedhashmap.ConcurrentLinkedHashMap$Node.next, 
> com.googlecode.concurrentlinkedhashmap.ConcurrentLinkedHashMap$Node 
> com.googlecode.concurrentlinkedhashmap.ConcurrentLinkedHashMap$Node.next, 
> com.googlecode.concurrentlinkedhashmap.ConcurrentLinkedHashMap$Node 
> com.googlecode.concurrentlinkedhashmap.ConcurrentLinkedHashMap$Node.next, 
> com.googlecode.concurrentlinkedhashmap.ConcurrentLinkedHashMap$Node 
> com.googlecode.concurrentlinkedhashmap.ConcurrentLinkedHashMap$Node.next, 
> com.googlecode.concurrentlinkedhashmap.ConcurrentLinkedHashMap$Node 
> com.googlecode.concurrentlinkedhashmap.ConcurrentLinkedHashMap$Node.next, 
> com.googlecode.concurrentlinkedhashmap.ConcurrentLinkedHashMap$Node 
> ... (repeated a whole bunch more)  
> com.googlecode.concurrentlinkedhashmap.ConcurrentLinkedHashMap$Node 
> com.googlecode.concurrentlinkedhashmap.ConcurrentLinkedHashMap$Node.next, 
> com.googlecode.concurrentlinkedhashmap.ConcurrentLinkedHashMap$Node 
> com.googlecode.concurrentlinkedhashmap.ConcurrentLinkedHashMap$Node.next, 
> com.googlecode.concurrentlinkedhashmap.ConcurrentLinkedHashMap$Node 
> com.googlecode.concurrentlinkedhashmap.ConcurrentLinkedHashMap$Node.next, 
> com.googlecode.concurrentlinkedhashmap.ConcurrentLinkedHashMap$Node 
> com.googlecode.concurrentlinkedhashmap.ConcurrentLinkedHashMap$Node.next, 
> com.googlecode.concurrentlinkedhashmap.ConcurrentLinkedHashMap$Node 
> com.googlecode.concurrentlinkedhashmap.ConcurrentLinkedHashMap$Node.next, 
> final java.lang.Object 
> com.googlecode.concurrentlinkedhashmap.ConcurrentLinkedHashMap$Node.key, 
> public final byte[] org.apache.cassandra.cache.KeyCacheKey.key





[jira] [Updated] (CASSANDRA-11003) cqlsh.py: Shell instance has no attribute 'parse_for_table_meta'

2016-01-12 Thread Joshua McKenzie (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11003?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joshua McKenzie updated CASSANDRA-11003:

Reviewer: Tyler Hobbs

> cqlsh.py: Shell instance has no attribute 'parse_for_table_meta'
> 
>
> Key: CASSANDRA-11003
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11003
> Project: Cassandra
>  Issue Type: Bug
>  Components: Tools
>Reporter: Eduard Tudenhoefner
>Assignee: Eduard Tudenhoefner
> Fix For: 3.0.x, 3.x
>
> Attachments: 11003-cassandra-3.0.txt
>
>
> {code}
> $ cqlsh -u cassandra -p cassandra
> Connected to abc at 127.0.0.1:9042.
> [cqlsh 5.0.1 | Cassandra 3.0.1.816 | DSE 5.0.0 | CQL spec 3.4.0 | Native 
> protocol v4]
> Use HELP for help.
> cassandra@cqlsh> SOME COMMAND;
> Shell instance has no attribute 'parse_for_table_meta'
> {code}
> I think this is happening because of a bad merge 
> (https://github.com/apache/cassandra/commit/2800bf1082e773daf0af29516b61c711acda626b#diff-1cce67f7d76864f07aaf4d986d6fc051).
>  We just need to rename *parse_for_update_meta* to *parse_for_table_meta*.





[jira] [Updated] (CASSANDRA-10980) nodetool scrub NPEs when keyspace isn't specified

2016-01-12 Thread Joshua McKenzie (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10980?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joshua McKenzie updated CASSANDRA-10980:

Reviewer: Marcus Eriksson

> nodetool scrub NPEs when keyspace isn't specified
> -
>
> Key: CASSANDRA-10980
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10980
> Project: Cassandra
>  Issue Type: Bug
>  Components: Tools
> Environment: Cassandra (and nodetool) version 3.1
>Reporter: Will Hayworth
>Assignee: Yuki Morishita
>Priority: Trivial
>  Labels: lhf
> Fix For: 2.2.x, 3.0.x, 3.x
>
> Attachments: nodetool_scrub_npe.txt
>
>
> I've attached logs of what I saw. Running nodetool scrub without anything 
> else specified resulted in the NPE; running it with the keyspace specified 
> completed successfully.





[jira] [Commented] (CASSANDRA-10688) Stack overflow from SSTableReader$InstanceTidier.runOnClose in Leak Detector

2016-01-12 Thread Ariel Weisberg (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10688?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15094485#comment-15094485
 ] 

Ariel Weisberg commented on CASSANDRA-10688:


There is an issue with sun.nio.fs.UnixPath getting stuck. It's iterable, and I 
think it is returning itself. That shouldn't matter, because it should be pruned 
once it is in the visited set, but evidently that isn't happening.
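The pruning expected here can be sketched as an iterative reference-graph walk: a self-referential object (like a UnixPath that yields itself) is visited once and skipped thereafter, and the explicit worklist also avoids the deep recursion visible in the Node.next chain quoted in this thread. Names below are illustrative, not the actual Ref.java code:

```python
def reachable(root, children):
    """Walk a reference graph iteratively with a visited set.

    `children(obj)` yields objects referenced by `obj`.  Self-references
    and cycles are pruned by identity, and the explicit worklist keeps
    the traversal from overflowing the stack on long next-chains.
    """
    visited, stack = set(), [root]
    while stack:
        obj = stack.pop()
        if id(obj) in visited:
            continue            # already seen: prune (handles self-refs)
        visited.add(id(obj))
        stack.extend(children(obj))
    return len(visited)

# A long singly linked chain, plus a node that "returns itself":
chain = None
for _ in range(100_000):
    chain = ("node", chain)

def kids(obj):
    if obj == "self":
        return [obj]            # self-referential, like the UnixPath case
    if isinstance(obj, tuple):
        return [c for c in obj[1:] if c is not None]
    return []

print(reachable(chain, kids))   # → 100000 (no stack overflow)
print(reachable("self", kids))  # → 1 (self-reference pruned)
```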

> Stack overflow from SSTableReader$InstanceTidier.runOnClose in Leak Detector
> 
>
> Key: CASSANDRA-10688
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10688
> Project: Cassandra
>  Issue Type: Bug
>  Components: Local Write-Read Paths, Testing
>Reporter: Jeremiah Jordan
>Assignee: Ariel Weisberg
> Fix For: 3.0.x
>
>
> Running some tests against cassandra-3.0 
> 9fc957cf3097e54ccd72e51b2d0650dc3e83eae0
> The tests are just running cassandra-stress write and read while adding and 
> removing nodes from the cluster.  After the test runs when I go back through 
> logs I find the following Stackoverflow fairly often:
> ERROR [Strong-Reference-Leak-Detector:1] 2015-11-11 00:04:10,638  
> Ref.java:413 - Stackoverflow [private java.lang.Runnable 
> org.apache.cassandra.io.sstable.format.SSTableReader$InstanceTidier.runOnClose,
>  final java.lang.Runnable 
> org.apache.cassandra.io.sstable.format.SSTableReader$DropPageCache.andThen, 
> final org.apache.cassandra.cache.InstrumentingCache 
> org.apache.cassandra.io.sstable.SSTableRewriter$InvalidateKeys.cache, private 
> final org.apache.cassandra.cache.ICache 
> org.apache.cassandra.cache.InstrumentingCache.map, private final 
> com.googlecode.concurrentlinkedhashmap.ConcurrentLinkedHashMap 
> org.apache.cassandra.cache.ConcurrentLinkedHashCache.map, final 
> com.googlecode.concurrentlinkedhashmap.LinkedDeque 
> com.googlecode.concurrentlinkedhashmap.ConcurrentLinkedHashMap.evictionDeque, 
> com.googlecode.concurrentlinkedhashmap.Linked 
> com.googlecode.concurrentlinkedhashmap.LinkedDeque.first, 
> com.googlecode.concurrentlinkedhashmap.ConcurrentLinkedHashMap$Node 
> com.googlecode.concurrentlinkedhashmap.ConcurrentLinkedHashMap$Node.next, 
> com.googlecode.concurrentlinkedhashmap.ConcurrentLinkedHashMap$Node 
> com.googlecode.concurrentlinkedhashmap.ConcurrentLinkedHashMap$Node.next, 
> com.googlecode.concurrentlinkedhashmap.ConcurrentLinkedHashMap$Node 
> com.googlecode.concurrentlinkedhashmap.ConcurrentLinkedHashMap$Node.next, 
> com.googlecode.concurrentlinkedhashmap.ConcurrentLinkedHashMap$Node 
> com.googlecode.concurrentlinkedhashmap.ConcurrentLinkedHashMap$Node.next, 
> com.googlecode.concurrentlinkedhashmap.ConcurrentLinkedHashMap$Node 
> com.googlecode.concurrentlinkedhashmap.ConcurrentLinkedHashMap$Node.next, 
> com.googlecode.concurrentlinkedhashmap.ConcurrentLinkedHashMap$Node 
> com.googlecode.concurrentlinkedhashmap.ConcurrentLinkedHashMap$Node.next, 
> com.googlecode.concurrentlinkedhashmap.ConcurrentLinkedHashMap$Node 
> com.googlecode.concurrentlinkedhashmap.ConcurrentLinkedHashMap$Node.next, 
> com.googlecode.concurrentlinkedhashmap.ConcurrentLinkedHashMap$Node 
> com.googlecode.concurrentlinkedhashmap.ConcurrentLinkedHashMap$Node.next, 
> com.googlecode.concurrentlinkedhashmap.ConcurrentLinkedHashMap$Node 
> com.googlecode.concurrentlinkedhashmap.ConcurrentLinkedHashMap$Node.next, 
> com.googlecode.concurrentlinkedhashmap.ConcurrentLinkedHashMap$Node 
> ... (repeated a whole bunch more)  
> com.googlecode.concurrentlinkedhashmap.ConcurrentLinkedHashMap$Node 
> com.googlecode.concurrentlinkedhashmap.ConcurrentLinkedHashMap$Node.next, 
> com.googlecode.concurrentlinkedhashmap.ConcurrentLinkedHashMap$Node 
> com.googlecode.concurrentlinkedhashmap.ConcurrentLinkedHashMap$Node.next, 
> com.googlecode.concurrentlinkedhashmap.ConcurrentLinkedHashMap$Node 
> com.googlecode.concurrentlinkedhashmap.ConcurrentLinkedHashMap$Node.next, 
> com.googlecode.concurrentlinkedhashmap.ConcurrentLinkedHashMap$Node 
> com.googlecode.concurrentlinkedhashmap.ConcurrentLinkedHashMap$Node.next, 
> com.googlecode.concurrentlinkedhashmap.ConcurrentLinkedHashMap$Node 
> com.googlecode.concurrentlinkedhashmap.ConcurrentLinkedHashMap$Node.next, 
> final java.lang.Object 
> com.googlecode.concurrentlinkedhashmap.ConcurrentLinkedHashMap$Node.key, 
> public final byte[] org.apache.cassandra.cache.KeyCacheKey.key





[1/4] cassandra git commit: cqlsh: Apply --connect-timeout to control conn

2016-01-12 Thread tylerhobbs
Repository: cassandra
Updated Branches:
  refs/heads/trunk 22a1bbb10 -> a883ff5f3


cqlsh: Apply --connect-timeout to control conn

Patch by Julien Blondeau; reviewed by Tyler Hobbs for CASSANDRA-10959


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/de946ae4
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/de946ae4
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/de946ae4

Branch: refs/heads/trunk
Commit: de946ae45ad8af3718d4159e885e6700230d4818
Parents: a6e5cda
Author: Julien Blondeau 
Authored: Tue Jan 12 11:51:20 2016 -0600
Committer: Tyler Hobbs 
Committed: Tue Jan 12 11:51:20 2016 -0600

--
 CHANGES.txt  | 2 ++
 bin/cqlsh.py | 5 +++--
 2 files changed, 5 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/de946ae4/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 477a104..f895139 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,6 @@
 2.2.5
+ * (cqlsh) Also apply --connect-timeout to control connection
+   timeout (CASSANDRA-10959)
  * Histogram buckets exposed in jmx are sorted incorrectly (CASSANDRA-10975)
  * Enable GC logging by default (CASSANDRA-10140)
  * Optimize pending range computation (CASSANDRA-9258)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/de946ae4/bin/cqlsh.py
--
diff --git a/bin/cqlsh.py b/bin/cqlsh.py
index c38bc2e..be2ad46 100644
--- a/bin/cqlsh.py
+++ b/bin/cqlsh.py
@@ -141,7 +141,6 @@ from cassandra.cluster import Cluster
 from cassandra.metadata import (ColumnMetadata, KeyspaceMetadata,
 TableMetadata, protect_name, protect_names)
 from cassandra.policies import WhiteListRoundRobinPolicy
-from cassandra.protocol import ResultMessage
 from cassandra.query import SimpleStatement, ordered_dict_factory, 
TraceUnavailable
 
 # cqlsh should run correctly when run out of a Cassandra source tree,
@@ -683,6 +682,7 @@ class Shell(cmd.Cmd):
 auth_provider=self.auth_provider,
 ssl_options=sslhandling.ssl_settings(hostname, 
CONFIG_FILE) if ssl else None,
 
load_balancing_policy=WhiteListRoundRobinPolicy([self.hostname]),
+control_connection_timeout=connect_timeout,
 connect_timeout=connect_timeout)
 self.owns_connection = not use_conn
 self.set_expanded_cql_version(cqlver)
@@ -1201,7 +1201,7 @@ class Shell(cmd.Cmd):
 def perform_simple_statement(self, statement):
 if not statement:
 return False, None
-rows = None
+
 while True:
 try:
 future = self.session.execute_async(statement, 
trace=self.tracing_enabled)
@@ -2047,6 +2047,7 @@ class Shell(cmd.Cmd):
auth_provider=auth_provider,
ssl_options=self.conn.ssl_options,

load_balancing_policy=WhiteListRoundRobinPolicy([self.hostname]),
+   control_connection_timeout=self.conn.connect_timeout,
connect_timeout=self.conn.connect_timeout)
 
 if self.current_keyspace:
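The patch above passes the single --connect-timeout value to both driver parameters. A hedged sketch of that fan-out, with the driver call stubbed out since the real cassandra.cluster.Cluster constructor needs a running node:

```python
def cluster_kwargs(connect_timeout, **extra):
    """Build the keyword arguments cqlsh would hand to Cluster(): the one
    user-facing --connect-timeout now covers both the socket connect and
    the control connection, per CASSANDRA-10959.  This helper is a
    sketch, not the actual cqlsh code."""
    kwargs = dict(extra)
    kwargs["connect_timeout"] = connect_timeout
    kwargs["control_connection_timeout"] = connect_timeout
    return kwargs

print(cluster_kwargs(5))
# → {'connect_timeout': 5, 'control_connection_timeout': 5}
```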



[3/4] cassandra git commit: Merge branch 'cassandra-3.0' into cassandra-3.3

2016-01-12 Thread tylerhobbs
Merge branch 'cassandra-3.0' into cassandra-3.3

Conflicts:
CHANGES.txt


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/f4ba752b
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/f4ba752b
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/f4ba752b

Branch: refs/heads/trunk
Commit: f4ba752b32db2950f2a816b2e7896941de14b822
Parents: 91d7bed 2569fbd
Author: Tyler Hobbs 
Authored: Tue Jan 12 11:53:23 2016 -0600
Committer: Tyler Hobbs 
Committed: Tue Jan 12 11:53:23 2016 -0600

--
 CHANGES.txt  | 2 ++
 bin/cqlsh.py | 5 +++--
 2 files changed, 5 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/f4ba752b/CHANGES.txt
--
diff --cc CHANGES.txt
index a301b0f,3ec5346..a9202ce
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -1,5 -1,4 +1,7 @@@
 -3.0.3
 +3.3
 +Merged from 3.0:
++ * (cqlsh) Also apply --connect-timeout to control connection
++   timeout (CASSANDRA-10959)
   * Fix AssertionError when removing from list using UPDATE (CASSANDRA-10954)
   * Fix UnsupportedOperationException when reading old sstable with range
 tombstone (CASSANDRA-10743)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/f4ba752b/bin/cqlsh.py
--



[2/4] cassandra git commit: Merge branch 'cassandra-2.2' into cassandra-3.0

2016-01-12 Thread tylerhobbs
Merge branch 'cassandra-2.2' into cassandra-3.0

Conflicts:
CHANGES.txt


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/2569fbd1
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/2569fbd1
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/2569fbd1

Branch: refs/heads/trunk
Commit: 2569fbd1429681f80aaf47b8668e9bc15cf0445d
Parents: a942b2c de946ae
Author: Tyler Hobbs 
Authored: Tue Jan 12 11:52:24 2016 -0600
Committer: Tyler Hobbs 
Committed: Tue Jan 12 11:52:24 2016 -0600

--
 CHANGES.txt  | 2 ++
 bin/cqlsh.py | 5 +++--
 2 files changed, 5 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/2569fbd1/CHANGES.txt
--
diff --cc CHANGES.txt
index b916fa6,f895139..3ec5346
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -1,20 -1,6 +1,22 @@@
 -2.2.5
 +3.0.3
 + * Fix AssertionError when removing from list using UPDATE (CASSANDRA-10954)
 + * Fix UnsupportedOperationException when reading old sstable with range
 +   tombstone (CASSANDRA-10743)
 + * MV should use the maximum timestamp of the primary key (CASSANDRA-10910)
 + * Fix potential assertion error during compaction (CASSANDRA-10944)
 + * Fix counting of received sstables in streaming (CASSANDRA-10949)
 + * Implement hints compression (CASSANDRA-9428)
 + * Fix potential assertion error when reading static columns (CASSANDRA-10903)
 + * Avoid NoSuchElementException when executing empty batch (CASSANDRA-10711)
 + * Avoid building PartitionUpdate in toString (CASSANDRA-10897)
 + * Reduce heap spent when receiving many SSTables (CASSANDRA-10797)
 + * Add back support for 3rd party auth providers to bulk loader 
(CASSANDRA-10873)
 + * Eliminate the dependency on jgrapht for UDT resolution (CASSANDRA-10653)
 + * (Hadoop) Close Clusters and Sessions in Hadoop Input/Output classes 
(CASSANDRA-10837)
 + * Fix sstableloader not working with upper case keyspace name 
(CASSANDRA-10806)
 +Merged from 2.2:
+  * (cqlsh) Also apply --connect-timeout to control connection
+timeout (CASSANDRA-10959)
   * Histogram buckets exposed in jmx are sorted incorrectly (CASSANDRA-10975)
   * Enable GC logging by default (CASSANDRA-10140)
   * Optimize pending range computation (CASSANDRA-9258)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/2569fbd1/bin/cqlsh.py
--



[jira] [Updated] (CASSANDRA-10909) NPE in ActiveRepairService

2016-01-12 Thread Joshua McKenzie (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10909?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joshua McKenzie updated CASSANDRA-10909:

Reproduced In: 3.0.1, 3.0.0  (was: 3.0.0, 3.0.1)
 Reviewer: Carl Yeksigian

> NPE in ActiveRepairService 
> ---
>
> Key: CASSANDRA-10909
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10909
> Project: Cassandra
>  Issue Type: Bug
> Environment: cassandra-3.0.1.777
>Reporter: Eduard Tudenhoefner
>Assignee: Marcus Eriksson
> Fix For: 2.1.13, 2.2.5, 3.0.3, 3.3
>
>
> NPE after one started multiple incremental repairs
> {code}
> INFO  [Thread-62] 2015-12-21 11:40:53,742  RepairRunnable.java:125 - Starting 
> repair command #1, repairing keyspace keyspace1 with repair options 
> (parallelism: parallel, primary range: false, incremental: true, job threads: 
> 1, ColumnFamilies: [], dataCenters: [], hosts: [], # of ranges: 2)
> INFO  [Thread-62] 2015-12-21 11:40:53,813  RepairSession.java:237 - [repair 
> #b13e3740-a7d7-11e5-b568-f565b837eb0d] new session: will sync /10.200.177.32, 
> /10.200.177.33 on range [(10,-9223372036854775808]] for keyspace1.[counter1, 
> standard1]
> INFO  [Repair#1:1] 2015-12-21 11:40:53,853  RepairJob.java:100 - [repair 
> #b13e3740-a7d7-11e5-b568-f565b837eb0d] requesting merkle trees for counter1 
> (to [/10.200.177.33, /10.200.177.32])
> INFO  [Repair#1:1] 2015-12-21 11:40:53,853  RepairJob.java:174 - [repair 
> #b13e3740-a7d7-11e5-b568-f565b837eb0d] Requesting merkle trees for counter1 
> (to [/10.200.177.33, /10.200.177.32])
> INFO  [Thread-62] 2015-12-21 11:40:53,854  RepairSession.java:237 - [repair 
> #b1449fe0-a7d7-11e5-b568-f565b837eb0d] new session: will sync /10.200.177.32, 
> /10.200.177.31 on range [(0,10]] for keyspace1.[counter1, standard1]
> INFO  [AntiEntropyStage:1] 2015-12-21 11:40:53,896  RepairSession.java:181 - 
> [repair #b13e3740-a7d7-11e5-b568-f565b837eb0d] Received merkle tree for 
> counter1 from /10.200.177.32
> INFO  [AntiEntropyStage:1] 2015-12-21 11:40:53,906  RepairSession.java:181 - 
> [repair #b13e3740-a7d7-11e5-b568-f565b837eb0d] Received merkle tree for 
> counter1 from /10.200.177.33
> INFO  [Repair#1:1] 2015-12-21 11:40:53,906  RepairJob.java:100 - [repair 
> #b13e3740-a7d7-11e5-b568-f565b837eb0d] requesting merkle trees for standard1 
> (to [/10.200.177.33, /10.200.177.32])
> INFO  [Repair#1:1] 2015-12-21 11:40:53,906  RepairJob.java:174 - [repair 
> #b13e3740-a7d7-11e5-b568-f565b837eb0d] Requesting merkle trees for standard1 
> (to [/10.200.177.33, /10.200.177.32])
> INFO  [RepairJobTask:2] 2015-12-21 11:40:53,910  SyncTask.java:66 - [repair 
> #b13e3740-a7d7-11e5-b568-f565b837eb0d] Endpoints /10.200.177.33 and 
> /10.200.177.32 are consistent for counter1
> INFO  [RepairJobTask:1] 2015-12-21 11:40:53,910  RepairJob.java:145 - [repair 
> #b13e3740-a7d7-11e5-b568-f565b837eb0d] counter1 is fully synced
> INFO  [AntiEntropyStage:1] 2015-12-21 11:40:54,823  Validator.java:272 - 
> [repair #b17a2ed0-a7d7-11e5-ada8-8304f5629908] Sending completed merkle tree 
> to /10.200.177.33 for keyspace1.counter1
> ERROR [ValidationExecutor:3] 2015-12-21 11:40:55,104  
> CompactionManager.java:1065 - Cannot start multiple repair sessions over the 
> same sstables
> ERROR [ValidationExecutor:3] 2015-12-21 11:40:55,105  Validator.java:259 - 
> Failed creating a merkle tree for [repair 
> #b17a2ed0-a7d7-11e5-ada8-8304f5629908 on keyspace1/standard1, 
> [(10,-9223372036854775808]]], /10.200.177.33 (see log for details)
> ERROR [ValidationExecutor:3] 2015-12-21 11:40:55,110  
> CassandraDaemon.java:195 - Exception in thread 
> Thread[ValidationExecutor:3,1,main]
> java.lang.RuntimeException: Cannot start multiple repair sessions over the 
> same sstables
>   at 
> org.apache.cassandra.db.compaction.CompactionManager.doValidationCompaction(CompactionManager.java:1066)
>  ~[cassandra-all-3.0.1.777.jar:3.0.1.777]
>   at 
> org.apache.cassandra.db.compaction.CompactionManager.access$700(CompactionManager.java:80)
>  ~[cassandra-all-3.0.1.777.jar:3.0.1.777]
>   at 
> org.apache.cassandra.db.compaction.CompactionManager$10.call(CompactionManager.java:679)
>  ~[cassandra-all-3.0.1.777.jar:3.0.1.777]
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266) 
> ~[na:1.8.0_40]
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>  ~[na:1.8.0_40]
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>  [na:1.8.0_40]
>   at java.lang.Thread.run(Thread.java:745) [na:1.8.0_40]
> ERROR [AntiEntropyStage:1] 2015-12-21 11:40:55,174  
> RepairMessageVerbHandler.java:161 - Got error, removing parent repair session
> INFO  [CompactionExecutor:3] 2015-12-21 11:40:55,175  
> CompactionManager.java:489 - Starting anticompaction for 
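The {{Cannot start multiple repair sessions over the same sstables}} guard seen in the log can be sketched as a registry keyed by sstable; the class and method names here are illustrative, not the actual CompactionManager code:

```python
class RepairSessionRegistry:
    """Track which sstables are bound to an active repair session and
    reject overlapping sessions, analogous to the guard raised by
    doValidationCompaction in the log above (illustrative sketch)."""
    def __init__(self):
        self._active = set()

    def start(self, session_id, sstables):
        if self._active & set(sstables):
            raise RuntimeError(
                "Cannot start multiple repair sessions over the same sstables")
        self._active |= set(sstables)

    def finish(self, sstables):
        self._active -= set(sstables)

reg = RepairSessionRegistry()
reg.start("repair-1", ["sst-1", "sst-2"])
try:
    reg.start("repair-2", ["sst-2", "sst-3"])  # overlaps on sst-2
except RuntimeError as e:
    print(e)  # → Cannot start multiple repair sessions over the same sstables
```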

[jira] [Updated] (CASSANDRA-10956) Enable authentication of native protocol users via client certificates

2016-01-12 Thread Joshua McKenzie (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10956?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joshua McKenzie updated CASSANDRA-10956:

Reviewer: Sam Tunnicliffe

> Enable authentication of native protocol users via client certificates
> --
>
> Key: CASSANDRA-10956
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10956
> Project: Cassandra
>  Issue Type: New Feature
>Reporter: Samuel Klock
>Assignee: Samuel Klock
> Attachments: 10956.patch
>
>
> Currently, the native protocol only supports user authentication via SASL.  
> While this is adequate for many use cases, it may be superfluous in scenarios 
> where clients are required to present an SSL certificate to connect to the 
> server.  If the certificate presented by a client is sufficient by itself to 
> specify a user, then an additional (series of) authentication step(s) via 
> SASL merely add overhead.  Worse, for uses wherein it's desirable to obtain 
> the identity from the client's certificate, it's necessary to implement a 
> custom SASL mechanism to do so, which increases the effort required to 
> maintain both client and server and which also duplicates functionality 
> already provided via SSL/TLS.
> Cassandra should provide a means of using certificates for user 
> authentication in the native protocol without any effort above configuring 
> SSL on the client and server.  Here's a possible strategy:
> * Add a new authenticator interface that returns {{AuthenticatedUser}} 
> objects based on the certificate chain presented by the client.
> * If this interface is in use, the user is authenticated immediately after 
> the server receives the {{STARTUP}} message.  It then responds with a 
> {{READY}} message.
> * Otherwise, the existing flow of control is used (i.e., if the authenticator 
> requires authentication, then an {{AUTHENTICATE}} message is sent to the 
> client).
> One advantage of this strategy is that it is backwards-compatible with 
> existing schemes; current users of SASL/{{IAuthenticator}} are not impacted.  
> Moreover, it can function as a drop-in replacement for SASL schemes without 
> requiring code changes (or even config changes) on the client side.
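The proposed control flow after {{STARTUP}} can be sketched as a dispatch on authenticator type; class and message names below are illustrative, not the actual Cassandra transport code:

```python
class CertAuthenticator:
    """Hypothetical authenticator deriving the user from the client's
    TLS certificate chain, per the strategy proposed above."""
    def user_from_certs(self, cert_chain):
        # Assume the first certificate's CN identifies the user.
        return cert_chain[0]["CN"] if cert_chain else None

def handle_startup(authenticator, cert_chain):
    """Respond to STARTUP: certificate authenticators answer READY
    immediately; any other authenticator falls back to the existing
    SASL flow and sends AUTHENTICATE."""
    if isinstance(authenticator, CertAuthenticator):
        user = authenticator.user_from_certs(cert_chain)
        if user is not None:
            return ("READY", user)
        return ("ERROR", "certificate did not identify a user")
    return ("AUTHENTICATE", None)  # existing SASL flow

print(handle_startup(CertAuthenticator(), [{"CN": "alice"}]))
# → ('READY', 'alice')
```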





[jira] [Comment Edited] (CASSANDRA-10992) Hanging streaming sessions

2016-01-12 Thread Paulo Motta (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10992?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15094989#comment-15094989
 ] 

Paulo Motta edited comment on CASSANDRA-10992 at 1/12/16 9:31 PM:
--

I don't know exactly what's happening, but the {{AsynchronousCloseException}} 
makes it smell like the interrupt workaround for CASSANDRA-10012 is closing the 
channel after a genuine timeout, preventing a retry. This was fixed on 
CASSANDRA-10961, so to test that hypothesis, could you try replacing the jar I 
attached (which contains the 2.1 revert for CASSANDRA-10012) in all nodes 
involved in a repair of a specific subrange? A rolling restart will be needed.  
If this does not solve the issue, please attach corresponding trace logs as 
instructed before (making sure to enable trace logs in the logback 
configuration before triggering the faulty repair operation after replacing the 
jars).


was (Author: pauloricardomg):
I don't know exactly what's happening, but the {{AsynchronousCloseException}} 
makes it smell like the interrupt workaround for CASSANDRA-10012 is closing the 
channel after a genuine timeout, preventing a retry. This was fixed on 
CASSANDRA-10961, so to test that hypothesis, could you try replacing the jar I 
attached (which contains the 2.1 revert for CASSANDRA-10012) in a subset of the 
nodes involved in the repair? A rolling restart will be needed.  If this does 
not solve the issue, please attach corresponding trace logs as instructed 
before (making sure to enable trace logs in the logback configuration before 
triggering the faulty repair operation after replacing the jars).

> Hanging streaming sessions
> --
>
> Key: CASSANDRA-10992
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10992
> Project: Cassandra
>  Issue Type: Bug
> Environment: C* 2.1.12, Debian Wheezy
>Reporter: mlowicki
>Assignee: Paulo Motta
> Fix For: 2.1.12
>
> Attachments: apache-cassandra-2.1.12-SNAPSHOT.jar
>
>
> I've started recently running repair using [Cassandra 
> Reaper|https://github.com/spotify/cassandra-reaper]  (built-in {{nodetool 
> repair}} doesn't work for me - CASSANDRA-9935). It behaves fine but I've 
> noticed hanging streaming sessions:
> {code}
> root@db1:~# date
> Sat Jan  9 16:43:00 UTC 2016
> root@db1:~# nt netstats -H | grep total
> Receiving 5 files, 46.59 MB total. Already received 1 files, 11.32 MB 
> total
> Sending 7 files, 46.28 MB total. Already sent 7 files, 46.28 MB total
> Receiving 6 files, 64.15 MB total. Already received 1 files, 12.14 MB 
> total
> Sending 5 files, 61.15 MB total. Already sent 5 files, 61.15 MB total
> Receiving 4 files, 7.75 MB total. Already received 3 files, 7.58 MB 
> total
> Sending 4 files, 4.29 MB total. Already sent 4 files, 4.29 MB total
> Receiving 12 files, 13.79 MB total. Already received 11 files, 7.66 
> MB total
> Sending 5 files, 15.32 MB total. Already sent 5 files, 15.32 MB total
> Receiving 8 files, 20.35 MB total. Already received 1 files, 13.63 MB 
> total
> Sending 38 files, 125.34 MB total. Already sent 38 files, 125.34 MB 
> total
> root@db1:~# date
> Sat Jan  9 17:45:42 UTC 2016
> root@db1:~# nt netstats -H | grep total
> Receiving 5 files, 46.59 MB total. Already received 1 files, 11.32 MB 
> total
> Sending 7 files, 46.28 MB total. Already sent 7 files, 46.28 MB total
> Receiving 6 files, 64.15 MB total. Already received 1 files, 12.14 MB 
> total
> Sending 5 files, 61.15 MB total. Already sent 5 files, 61.15 MB total
> Receiving 4 files, 7.75 MB total. Already received 3 files, 7.58 MB 
> total
> Sending 4 files, 4.29 MB total. Already sent 4 files, 4.29 MB total
> Receiving 12 files, 13.79 MB total. Already received 11 files, 7.66 
> MB total
> Sending 5 files, 15.32 MB total. Already sent 5 files, 15.32 MB total
> Receiving 8 files, 20.35 MB total. Already received 1 files, 13.63 MB 
> total
> Sending 38 files, 125.34 MB total. Already sent 38 files, 125.34 MB 
> total
> {code}
> Such sessions linger even when the repair job has long since finished 
> (confirmed by checking Reaper's and Cassandra's logs). 
> {{streaming_socket_timeout_in_ms}} in cassandra.yaml is set to the default 
> value (360).
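The two identical {{nodetool netstats}} snapshots, taken over an hour apart, are what make the stall visible. A hedged sketch of comparing such snapshots to flag stuck sessions (the line format is assumed from the paste above):

```python
import re

# Matches lines like:
#   Receiving 5 files, 46.59 MB total. Already received 1 files, 11.32 MB total
TOTAL = re.compile(r"(Receiving|Sending) (\d+) files, .* Already \w+ (\d+) files")

def progress(netstats_output):
    """Extract (direction, total_files, done_files) tuples from
    `nodetool netstats` output."""
    return [(m.group(1), int(m.group(2)), int(m.group(3)))
            for m in TOTAL.finditer(netstats_output)]

def stalled(snapshot_a, snapshot_b):
    """Two snapshots with identical, incomplete transfers suggest a
    hung streaming session."""
    a, b = progress(snapshot_a), progress(snapshot_b)
    return a == b and any(done < total for _, total, done in a)

snap = "Receiving 5 files, 46.59 MB total. Already received 1 files, 11.32 MB total"
print(stalled(snap, snap))  # → True
```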





[jira] [Commented] (CASSANDRA-10992) Hanging streaming sessions

2016-01-12 Thread Paulo Motta (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10992?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15094989#comment-15094989
 ] 

Paulo Motta commented on CASSANDRA-10992:
-

I don't know exactly what's happening, but the {{AsynchronousCloseException}} 
makes it smell like the interrupt workaround for CASSANDRA-10012 is closing the 
channel after a genuine timeout, preventing a retry. This was fixed on 
CASSANDRA-10961, so to test that hypothesis, could you try replacing the jar I 
attached (which contains the 2.1 revert for CASSANDRA-10012) in a subset of the 
nodes involved in the repair? A rolling restart will be needed.  If this does 
not solve the issue, please attach corresponding trace logs as instructed 
before (making sure to enable trace logs in the logback configuration before 
triggering the faulty repair operation after replacing the jars).

> Hanging streaming sessions
> --
>
> Key: CASSANDRA-10992
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10992
> Project: Cassandra
>  Issue Type: Bug
> Environment: C* 2.1.12, Debian Wheezy
>Reporter: mlowicki
>Assignee: Paulo Motta
> Fix For: 2.1.12
>
> Attachments: apache-cassandra-2.1.12-SNAPSHOT.jar
>
>
> I've started recently running repair using [Cassandra 
> Reaper|https://github.com/spotify/cassandra-reaper]  (built-in {{nodetool 
> repair}} doesn't work for me - CASSANDRA-9935). It behaves fine but I've 
> noticed hanging streaming sessions:
> {code}
> root@db1:~# date
> Sat Jan  9 16:43:00 UTC 2016
> root@db1:~# nt netstats -H | grep total
> Receiving 5 files, 46.59 MB total. Already received 1 files, 11.32 MB 
> total
> Sending 7 files, 46.28 MB total. Already sent 7 files, 46.28 MB total
> Receiving 6 files, 64.15 MB total. Already received 1 files, 12.14 MB 
> total
> Sending 5 files, 61.15 MB total. Already sent 5 files, 61.15 MB total
> Receiving 4 files, 7.75 MB total. Already received 3 files, 7.58 MB 
> total
> Sending 4 files, 4.29 MB total. Already sent 4 files, 4.29 MB total
> Receiving 12 files, 13.79 MB total. Already received 11 files, 7.66 
> MB total
> Sending 5 files, 15.32 MB total. Already sent 5 files, 15.32 MB total
> Receiving 8 files, 20.35 MB total. Already received 1 files, 13.63 MB 
> total
> Sending 38 files, 125.34 MB total. Already sent 38 files, 125.34 MB 
> total
> root@db1:~# date
> Sat Jan  9 17:45:42 UTC 2016
> root@db1:~# nt netstats -H | grep total
> Receiving 5 files, 46.59 MB total. Already received 1 files, 11.32 MB 
> total
> Sending 7 files, 46.28 MB total. Already sent 7 files, 46.28 MB total
> Receiving 6 files, 64.15 MB total. Already received 1 files, 12.14 MB 
> total
> Sending 5 files, 61.15 MB total. Already sent 5 files, 61.15 MB total
> Receiving 4 files, 7.75 MB total. Already received 3 files, 7.58 MB 
> total
> Sending 4 files, 4.29 MB total. Already sent 4 files, 4.29 MB total
> Receiving 12 files, 13.79 MB total. Already received 11 files, 7.66 
> MB total
> Sending 5 files, 15.32 MB total. Already sent 5 files, 15.32 MB total
> Receiving 8 files, 20.35 MB total. Already received 1 files, 13.63 MB 
> total
> Sending 38 files, 125.34 MB total. Already sent 38 files, 125.34 MB 
> total
> {code}
> Such sessions are left even when repair job is long time done (confirmed by 
> checking Reaper's and Cassandra's logs). {{streaming_socket_timeout_in_ms}} 
> in cassandra.yaml is set to default value (360).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[Cassandra Wiki] Trivial Update of "FrontPage" by SylvainLebresne

2016-01-12 Thread Apache Wiki
Dear Wiki user,

You have subscribed to a wiki page or wiki category on "Cassandra Wiki" for 
change notification.

The "FrontPage" page has been changed by SylvainLebresne:
https://wiki.apache.org/cassandra/FrontPage?action=diff&rev1=112&rev2=113

   * [[ArticlesAndPresentations|Articles and Presentations]] about Cassandra.
   * [[DataModel|A description of the Cassandra data model]]
   * [[CassandraLimitations|Cassandra Limitations]]: where Cassandra is not a 
good fit
+  * [[CompatibilityGuarantees|Compatibility Guarantees]]: what compatibility 
guarantees are provided across versions
  
  == Application developer and operator documentation ==
  


[jira] [Commented] (CASSANDRA-10979) LCS doesn't do L0 STC on new tables while an L0->L1 compaction is in progress

2016-01-12 Thread Marcus Eriksson (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10979?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15093818#comment-15093818
 ] 

Marcus Eriksson commented on CASSANDRA-10979:
-

This LGTM, could you push a branch so we get the test runs in?

And, should we really target 2.1 for this?

> LCS doesn't do L0 STC on new tables while an L0->L1 compaction is in progress
> -
>
> Key: CASSANDRA-10979
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10979
> Project: Cassandra
>  Issue Type: Bug
>  Components: Compaction
> Environment: 2.1.11 / 4.8.3 DSE.
>Reporter: Jeff Ferland
>Assignee: Carl Yeksigian
>  Labels: compaction, leveled
> Fix For: 2.1.x
>
> Attachments: 10979-2.1.txt
>
>
> Reading code from 
> https://github.com/apache/cassandra/blob/cassandra-2.1/src/java/org/apache/cassandra/db/compaction/LeveledManifest.java
>  and comparing with behavior shown in 
> https://gist.github.com/autocracy/c95aca6b00e42215daaf, the following happens:
> Score for L1, L2, and L3 is all < 1 (the paste shows 20/10 and 200/100, due to 
> incremental repair).
> Relevant code from here is
> if (Sets.intersection(l1overlapping, compacting).size() > 0)
> return Collections.emptyList();
> Since there will be overlap between what is compacting and L1 (in my case, 
> pushing over 1,000 tables into L1 from L0 STCS), I get a pile-up of 1,000 
> smaller tables in L0 while awaiting the transition from L0 to L1, which 
> destroys my performance.
> Requested outcome is to continue to perform STCS on non-compacting L0 tables.
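The requested outcome can be sketched as a simple candidate filter (hypothetical names and sizes, not the actual {{LeveledManifest}} code): only L0 sstables that are not already part of the running L0 -> L1 compaction are considered for size-tiered bucketing.

```python
# Sketch (not the real LeveledManifest logic): pick STCS candidates in L0
# while excluding sstables that are already being compacted into L1.

def l0_stcs_candidates(l0_sstables, compacting, min_bucket=4):
    """Return a bucket of similarly-sized, non-compacting L0 sstables."""
    idle = [s for s in l0_sstables if s not in compacting]
    # Group by rough size class (powers of two), as STCS does via bucketing.
    buckets = {}
    for name, size in idle:
        buckets.setdefault(size.bit_length(), []).append((name, size))
    # Only compact a bucket once it has enough members.
    for bucket in buckets.values():
        if len(bucket) >= min_bucket:
            return bucket
    return []

l0 = [("a", 100), ("b", 110), ("c", 120), ("d", 115), ("e", 4000)]
compacting = {("e", 4000)}   # "e" is part of the ongoing L0 -> L1 compaction
print(l0_stcs_candidates(l0, compacting))
# → [('a', 100), ('b', 110), ('c', 120), ('d', 115)]
```

With such a filter, the 1,000 small L0 tables described above would keep getting merged among themselves even while the long L0 -> L1 compaction holds its own sstables.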





[jira] [Commented] (CASSANDRA-10992) Hanging streaming sessions

2016-01-12 Thread mlowicki (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10992?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15094919#comment-15094919
 ] 

mlowicki commented on CASSANDRA-10992:
--

Some IO errors I've found in logs:
{code}
ERROR [Thread-518762] 2016-01-12 14:36:11,130 CassandraDaemon.java:227 - 
Exception in thread Thread[Thread-518762,5,main]
java.lang.RuntimeException: java.io.IOException: Connection timed out
at com.google.common.base.Throwables.propagate(Throwables.java:160) 
~[guava-16.0.jar:na]
at 
org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:32) 
~[apache-cassandra-2.1.12.jar:2.1.12]
at java.lang.Thread.run(Thread.java:745) ~[na:1.8.0_66]
Caused by: java.io.IOException: Connection timed out
at sun.nio.ch.FileDispatcherImpl.read0(Native Method) ~[na:1.8.0_66]
at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:39) 
~[na:1.8.0_66]
at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:223) 
~[na:1.8.0_66]
at sun.nio.ch.IOUtil.read(IOUtil.java:197) ~[na:1.8.0_66]
at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:380) 
~[na:1.8.0_66]
at sun.nio.ch.ChannelInputStream.read(ChannelInputStream.java:59) 
~[na:1.8.0_66]
at sun.nio.ch.ChannelInputStream.read(ChannelInputStream.java:109) 
~[na:1.8.0_66]
at sun.nio.ch.ChannelInputStream.read(ChannelInputStream.java:103) 
~[na:1.8.0_66]
at 
org.apache.cassandra.streaming.compress.CompressedInputStream$Reader.runMayThrow(CompressedInputStream.java:178)
 ~[apache-cassandra-2.1.12.jar:2.1.12]
at 
org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28) 
~[apache-cassandra-2.1.12.jar:2.1.12]
... 1 common frames omitted
{code}

{code}
ERROR [STREAM-IN-/10.210.58.133] 2016-01-12 15:01:39,450 StreamSession.java:505 
- [Stream #193dd5c0-b93b-11e5-a713-8fe7d1d062ea] Streaming error occurred
java.io.IOException: Connection timed out
at sun.nio.ch.FileDispatcherImpl.read0(Native Method) ~[na:1.8.0_66]
at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:39) 
~[na:1.8.0_66]
at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:223) 
~[na:1.8.0_66]
at sun.nio.ch.IOUtil.read(IOUtil.java:197) ~[na:1.8.0_66]
at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:380) 
~[na:1.8.0_66]
at 
org.apache.cassandra.streaming.messages.StreamMessage.deserialize(StreamMessage.java:51)
 ~[apache-cassandra-2.1.12.jar:2.1.12]
at 
org.apache.cassandra.streaming.ConnectionHandler$IncomingMessageHandler.run(ConnectionHandler.java:250)
 ~[apache-cassandra-2.1.12.jar:2.1.12]
at java.lang.Thread.run(Thread.java:745) [na:1.8.0_66]
INFO  [STREAM-IN-/10.210.58.133] 2016-01-12 15:01:39,451 
StreamResultFuture.java:180 - [Stream #193dd5c0-b93b-11e5-a713-8fe7d1d062ea] 
Session with /10.210.58.133 is complete
WARN  [STREAM-IN-/10.210.58.133] 2016-01-12 15:01:39,451 
StreamResultFuture.java:207 - [Stream #193dd5c0-b93b-11e5-a713-8fe7d1d062ea] 
Stream failed
{code}

{code}
ERROR [Thread-404196] 2016-01-12 14:44:05,532 CassandraDaemon.java:227 - 
Exception in thread Thread[Thread-404196,5,main]
java.lang.RuntimeException: java.nio.channels.AsynchronousCloseException
at com.google.common.base.Throwables.propagate(Throwables.java:160) 
~[guava-16.0.jar:na]
at 
org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:32) 
~[apache-cassandra-2.1.12.jar:2.1.12]
at java.lang.Thread.run(Thread.java:745) ~[na:1.8.0_66]
Caused by: java.nio.channels.AsynchronousCloseException: null
at 
java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:205)
 ~[na:1.8.0_66]
at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:407) 
~[na:1.8.0_66]
at sun.nio.ch.ChannelInputStream.read(ChannelInputStream.java:59) 
~[na:1.8.0_66]
at sun.nio.ch.ChannelInputStream.read(ChannelInputStream.java:109) 
~[na:1.8.0_66]
at sun.nio.ch.ChannelInputStream.read(ChannelInputStream.java:103) 
~[na:1.8.0_66]
at 
org.apache.cassandra.streaming.compress.CompressedInputStream$Reader.runMayThrow(CompressedInputStream.java:178)
 ~[apache-cassandra-2.1.12.jar:2.1.12]
at 
org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28) 
~[apache-cassandra-2.1.12.jar:2.1.12]
... 1 common frames omitted
{code}

{code}
ERROR [STREAM-OUT-/10.210.3.224] 2016-01-12 14:44:12,114 StreamSession.java:505 
- [Stream #e7af3850-b93a-11e5-bebc-2f019a24a954] Streaming error occurred
java.io.IOException: Broken pipe
at sun.nio.ch.FileChannelImpl.transferTo0(Native Method) ~[na:1.8.0_66]
at 
sun.nio.ch.FileChannelImpl.transferToDirectlyInternal(FileChannelImpl.java:427) 
~[na:1.8.0_66]
at 
sun.nio.ch.FileChannelImpl.transferToDirectly(FileChannelImpl.java:492) 
~[na:1.8.0_66]
at 

[jira] [Updated] (CASSANDRA-10992) Hanging streaming sessions

2016-01-12 Thread Paulo Motta (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10992?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Paulo Motta updated CASSANDRA-10992:

Attachment: apache-cassandra-2.1.12-SNAPSHOT.jar



[jira] [Commented] (CASSANDRA-7281) SELECT on tuple relations are broken for mixed ASC/DESC clustering order

2016-01-12 Thread Sylvain Lebresne (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7281?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15094027#comment-15094027
 ] 

Sylvain Lebresne commented on CASSANDRA-7281:
-

Had a look at [~blerer]'s modified version and it looks good, but we obviously 
need patches for 3.0 and upwards (as well as links to the CI results). I don't 
think we'll want to commit this to 2.1 at this point however, so I suggest not 
wasting time on that.

> SELECT on tuple relations are broken for mixed ASC/DESC clustering order
> 
>
> Key: CASSANDRA-7281
> URL: https://issues.apache.org/jira/browse/CASSANDRA-7281
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Sylvain Lebresne
>Assignee: Marcin Szymaniuk
> Fix For: 2.1.x
>
> Attachments: 
> 0001-CASSANDRA-7281-SELECT-on-tuple-relations-are-broken-.patch, 
> 0001-CASSANDRA-7281-SELECT-on-tuple-relations-are-broken-v2.patch, 
> 0001-CASSANDRA-7281-SELECT-on-tuple-relations-are-broken-v3.patch, 
> 0001-CASSANDRA-7281-SELECT-on-tuple-relations-are-broken-v4.patch, 
> 0001-CASSANDRA-7281-SELECT-on-tuple-relations-are-broken-v5.patch, 
> 7281_unit_tests.txt
>
>
> As noted on 
> [CASSANDRA-6875|https://issues.apache.org/jira/browse/CASSANDRA-6875?focusedCommentId=13992153&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-13992153],
>  the tuple notation is broken when the clustering order mixes ASC and DESC 
> directives because the range of data they describe doesn't correspond to a 
> single continuous slice internally. To copy the example from CASSANDRA-6875:
> {noformat}
> cqlsh:ks> create table foo (a int, b int, c int, PRIMARY KEY (a, b, c)) WITH 
> CLUSTERING ORDER BY (b DESC, c ASC);
> cqlsh:ks> INSERT INTO foo (a, b, c) VALUES (0, 2, 0);
> cqlsh:ks> INSERT INTO foo (a, b, c) VALUES (0, 1, 0);
> cqlsh:ks> INSERT INTO foo (a, b, c) VALUES (0, 1, 1);
> cqlsh:ks> INSERT INTO foo (a, b, c) VALUES (0, 0, 0);
> cqlsh:ks> SELECT * FROM foo WHERE a=0;
>  a | b | c
> ---+---+---
>  0 | 2 | 0
>  0 | 1 | 0
>  0 | 1 | 1
>  0 | 0 | 0
> (4 rows)
> cqlsh:ks> SELECT * FROM foo WHERE a=0 AND (b, c) > (1, 0);
>  a | b | c
> ---+---+---
>  0 | 2 | 0
> (1 rows)
> {noformat}
> The last query should really return {{(0, 2, 0)}} and {{(0, 1, 1)}}.
> For that specific example we should generate 2 internal slices, but I believe 
> that with more clustering columns we may have more slices.
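The intended semantics of the tuple relation are plain lexicographic comparison, which this sketch of the example data illustrates:

```python
# Sketch of the CQL tuple-relation semantics from the example above: rows
# whose (b, c) compares lexicographically greater than (1, 0) should match,
# regardless of the per-column clustering order used for storage.

rows = [(0, 2, 0), (0, 1, 0), (0, 1, 1), (0, 0, 0)]  # (a, b, c), a = 0

expected = [(a, b, c) for (a, b, c) in rows if (b, c) > (1, 0)]
print(expected)  # → [(0, 2, 0), (0, 1, 1)]
```

Because the on-disk order is (b DESC, c ASC), those two rows are not contiguous, which is why the query must be translated into more than one internal slice.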





[jira] [Commented] (CASSANDRA-6018) Add option to encrypt commitlog

2016-01-12 Thread Branimir Lambov (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6018?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15094050#comment-15094050
 ] 

Branimir Lambov commented on CASSANDRA-6018:


bq. While there's possibly some improvements that can be taken into account, 
...

Sounds good. Could {{encrypt}} be renamed to {{encryptAndWrite}} to make it 
obvious what it does?

I believe the || in [catching 
{{SegmentReadException}}|https://github.com/apache/cassandra/commit/cbc36f629a3fe5ad537f57a4c24e437052772178#diff-4c3a8240a441cef90e680246ee64R105]
 should be an &&: invalid CRC is not tolerated even in the last segment.
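Reduced to a truth table with hypothetical predicate names (the real condition lives in the linked commit), the {{&&}} version tolerates an error only when it occurs in the last segment and is not a CRC mismatch:

```python
# Hedged sketch of the review comment: with ||, a CRC mismatch anywhere
# would be tolerated; with &&, only non-CRC errors in the last segment are.

def tolerate_error(is_last_segment, is_crc_mismatch):
    # && per the review: invalid CRC is never tolerated, not even at the tail
    return is_last_segment and not is_crc_mismatch

print(tolerate_error(True, False))   # truncated tail: tolerated -> True
print(tolerate_error(True, True))    # invalid CRC: never tolerated -> False
print(tolerate_error(False, False))  # mid-log error: fail replay -> False
```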

The {{ReadCommandTest}} failures should disappear once you rebase to latest 
trunk.

> Add option to encrypt commitlog 
> 
>
> Key: CASSANDRA-6018
> URL: https://issues.apache.org/jira/browse/CASSANDRA-6018
> Project: Cassandra
>  Issue Type: New Feature
>Reporter: Jason Brown
>Assignee: Jason Brown
>  Labels: commit_log, encryption, security
> Fix For: 3.x
>
>
> We are going to start using cassandra for a billing system, and while I can 
> encrypt sstables at rest (via Datastax Enterprise), commit logs are more or 
> less plain text. Thus, an attacker would be able to easily read, for example, 
> credit card numbers in the clear text commit log (if the calling app does not 
> encrypt the data itself before sending it to cassandra).
> I want to allow the option of encrypting the commit logs, most likely 
> controlled by a property in the yaml.





[jira] [Commented] (CASSANDRA-10676) AssertionError in CompactionExecutor

2016-01-12 Thread Yuki Morishita (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10676?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15094085#comment-15094085
 ] 

Yuki Morishita commented on CASSANDRA-10676:


+1

> AssertionError in CompactionExecutor
> 
>
> Key: CASSANDRA-10676
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10676
> Project: Cassandra
>  Issue Type: Bug
>  Components: Compaction
> Environment: C* 2.1.9 on Debian Wheezy
>Reporter: mlowicki
>Assignee: Carl Yeksigian
> Fix For: 2.1.x
>
>
> {code}
> ERROR [CompactionExecutor:33329] 2015-11-09 08:16:22,759 
> CassandraDaemon.java:223 - Exception in thread 
> Thread[CompactionExecutor:33329,1,main]
> java.lang.AssertionError: 
> /var/lib/cassandra/data/system/compactions_in_progress-55080ab05d9c388690a4acb25fe1f77b/system-compactions_in_progress-ka-888705-Data.db
>   at 
> org.apache.cassandra.io.sstable.SSTableReader.getApproximateKeyCount(SSTableReader.java:279)
>  ~[apache-cassandra-2.1.9.jar:2.1.9]
>   at 
> org.apache.cassandra.db.compaction.CompactionTask.runMayThrow(CompactionTask.java:151)
>  ~[apache-cassandra-2.1.9.jar:2.1.9]
>   at 
> org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28) 
> ~[apache-cassandra-2.1.9.jar:2.1.9]
>   at 
> org.apache.cassandra.db.compaction.CompactionTask.executeInternal(CompactionTask.java:73)
>  ~[apache-cassandra-2.1.9.jar:2.1.9]
>   at 
> org.apache.cassandra.db.compaction.AbstractCompactionTask.execute(AbstractCompactionTask.java:59)
>  ~[apache-cassandra-2.1.9.jar:2.1.9]
>   at 
> org.apache.cassandra.db.compaction.CompactionManager$BackgroundCompactionCandidate.run(CompactionManager.java:236)
>  ~[apache-cassandra-2.1.9.jar:2.1.9]
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471) 
> ~[na:1.7.0_80]
>   at java.util.concurrent.FutureTask.run(FutureTask.java:262) 
> ~[na:1.7.0_80]
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>  ~[na:1.7.0_80]
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>  [na:1.7.0_80]
>   at java.lang.Thread.run(Thread.java:745) [na:1.7.0_80]
> ^C
> root@db1:~# tail -f /var/log/cassandra/system.log
>   at 
> org.apache.cassandra.db.compaction.CompactionTask.runMayThrow(CompactionTask.java:151)
>  ~[apache-cassandra-2.1.9.jar:2.1.9]
>   at 
> org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28) 
> ~[apache-cassandra-2.1.9.jar:2.1.9]
>   at 
> org.apache.cassandra.db.compaction.CompactionTask.executeInternal(CompactionTask.java:73)
>  ~[apache-cassandra-2.1.9.jar:2.1.9]
>   at 
> org.apache.cassandra.db.compaction.AbstractCompactionTask.execute(AbstractCompactionTask.java:59)
>  ~[apache-cassandra-2.1.9.jar:2.1.9]
>   at 
> org.apache.cassandra.db.compaction.CompactionManager$BackgroundCompactionCandidate.run(CompactionManager.java:236)
>  ~[apache-cassandra-2.1.9.jar:2.1.9]
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471) 
> ~[na:1.7.0_80]
>   at java.util.concurrent.FutureTask.run(FutureTask.java:262) 
> ~[na:1.7.0_80]
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>  ~[na:1.7.0_80]
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>  [na:1.7.0_80]
>   at java.lang.Thread.run(Thread.java:745) [na:1.7.0_80]
> {code}





[jira] [Commented] (CASSANDRA-10726) Read repair inserts should not be blocking

2016-01-12 Thread Jonathan Ellis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10726?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15093975#comment-15093975
 ] 

Jonathan Ellis commented on CASSANDRA-10726:


I can live with the -D approach.

> Read repair inserts should not be blocking
> --
>
> Key: CASSANDRA-10726
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10726
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Coordination
>Reporter: Richard Low
>
> Today, if there’s a digest mismatch in a foreground read repair, the insert 
> to update out of date replicas is blocking. This means, if it fails, the read 
> fails with a timeout. If a node is dropping writes (maybe it is overloaded or 
> the mutation stage is backed up for some other reason), all reads to a 
> replica set could fail. Further, replicas dropping writes get more out of 
> sync so will require more read repair.
> The comment on the code for why the writes are blocking is:
> {code}
> // wait for the repair writes to be acknowledged, to minimize impact on any 
> replica that's
> // behind on writes in case the out-of-sync row is read multiple times in 
> quick succession
> {code}
> but the bad side effect is that reads timeout. Either the writes should not 
> be blocking or we should return success for the read even if the write times 
> out.
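A minimal sketch of the two behaviors under discussion (the flag name is hypothetical, standing in for the proposed {{-D}} system property):

```python
# Hedged sketch: a blocking read repair fails the read when a repair-write
# ack does not arrive in time, while a fire-and-forget repair returns the
# read result regardless. The flag below is a stand-in for the "-D" system
# property mentioned above, not an actual Cassandra setting name.

BLOCKING_READ_REPAIR = True  # imagine -Dcassandra.blocking_read_repair=true

def read_with_repair(read_result, repair_ack_arrived):
    if BLOCKING_READ_REPAIR and not repair_ack_arrived:
        # current behavior: the whole read times out
        raise TimeoutError("read timed out waiting for repair-write acks")
    # non-blocking behavior: return the (already reconciled) read result
    return read_result
```

With the flag off, a replica dropping writes no longer fails reads; it simply stays out of sync until the next repair pass.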





[jira] [Commented] (CASSANDRA-10954) [Regression] Error when removing list element with UPDATE statement

2016-01-12 Thread Benjamin Lerer (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10954?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15094055#comment-15094055
 ] 

Benjamin Lerer commented on CASSANDRA-10954:


+1

> [Regression] Error when removing list element with UPDATE statement
> ---
>
> Key: CASSANDRA-10954
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10954
> Project: Cassandra
>  Issue Type: Bug
>  Components: Local Write-Read Paths
> Environment: Cassandra 3.0.0, Cassandra 3.1.1
>Reporter: DOAN DuyHai
>Assignee: Sylvain Lebresne
>  Labels: regression
> Fix For: 3.0.x, 3.x
>
>
> Steps to reproduce:
> {code:sql}
> CREATE TABLE simple(
>   id int PRIMARY KEY,
>   int_list list<int>
> );
> INSERT INTO simple(id, int_list) VALUES(10, [1,2,3]);
> SELECT * FROM simple;
>  id | int_list
> ----+-----------
>  10 | [1, 2, 3]
> UPDATE simple SET int_list[0]=null WHERE id=10;
> ServerError: <ErrorMessage code=0000 [Server error] 
> message="java.lang.AssertionError">
> {code}
>  Per CQL semantics, setting a column to NULL == deleting it.
>  When using debugger, below is the Java stack trace on server side:
> {noformat}
>  ERROR o.apache.cassandra.transport.Message - Unexpected exception during 
> request; channel = [id: 0x6dbc33bd, /192.168.51.1:57723 => /192.168.51.1:9473]
> java.lang.AssertionError: null
>   at org.apache.cassandra.db.rows.BufferCell.(BufferCell.java:49) 
> ~[cassandra-all-3.1.1.jar:3.1.1]
>   at 
> org.apache.cassandra.db.rows.BufferCell.tombstone(BufferCell.java:88) 
> ~[cassandra-all-3.1.1.jar:3.1.1]
>   at 
> org.apache.cassandra.cql3.UpdateParameters.addTombstone(UpdateParameters.java:141)
>  ~[cassandra-all-3.1.1.jar:3.1.1]
>   at 
> org.apache.cassandra.cql3.UpdateParameters.addTombstone(UpdateParameters.java:136)
>  ~[cassandra-all-3.1.1.jar:3.1.1]
>   at 
> org.apache.cassandra.cql3.Lists$SetterByIndex.execute(Lists.java:362) 
> ~[cassandra-all-3.1.1.jar:3.1.1]
>   at 
> org.apache.cassandra.cql3.statements.UpdateStatement.addUpdateForKey(UpdateStatement.java:94)
>  ~[cassandra-all-3.1.1.jar:3.1.1]
>   at 
> org.apache.cassandra.cql3.statements.ModificationStatement.addUpdates(ModificationStatement.java:666)
>  ~[cassandra-all-3.1.1.jar:3.1.1]
>   at 
> org.apache.cassandra.cql3.statements.ModificationStatement.getMutations(ModificationStatement.java:606)
>  ~[cassandra-all-3.1.1.jar:3.1.1]
>   at 
> org.apache.cassandra.cql3.statements.ModificationStatement.executeWithoutCondition(ModificationStatement.java:413)
>  ~[cassandra-all-3.1.1.jar:3.1.1]
>   at 
> org.apache.cassandra.cql3.statements.ModificationStatement.execute(ModificationStatement.java:401)
>  ~[cassandra-all-3.1.1.jar:3.1.1]
>   at 
> org.apache.cassandra.cql3.QueryProcessor.processStatement(QueryProcessor.java:206)
>  ~[cassandra-all-3.1.1.jar:3.1.1]
>   at 
> org.apache.cassandra.cql3.QueryProcessor.processPrepared(QueryProcessor.java:472)
>  ~[cassandra-all-3.1.1.jar:3.1.1]
>   at 
> org.apache.cassandra.cql3.QueryProcessor.processPrepared(QueryProcessor.java:449)
>  ~[cassandra-all-3.1.1.jar:3.1.1]
>   at 
> org.apache.cassandra.transport.messages.ExecuteMessage.execute(ExecuteMessage.java:130)
>  ~[cassandra-all-3.1.1.jar:3.1.1]
>   at 
> org.apache.cassandra.transport.Message$Dispatcher.channelRead0(Message.java:507)
>  [cassandra-all-3.1.1.jar:3.1.1]
>   at 
> org.apache.cassandra.transport.Message$Dispatcher.channelRead0(Message.java:401)
>  [cassandra-all-3.1.1.jar:3.1.1]
>   at 
> io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105)
>  [netty-all-4.0.23.Final.jar:4.0.23.Final]
>   at 
> io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:333)
>  [netty-all-4.0.23.Final.jar:4.0.23.Final]
>   at 
> io.netty.channel.AbstractChannelHandlerContext.access$700(AbstractChannelHandlerContext.java:32)
>  [netty-all-4.0.23.Final.jar:4.0.23.Final]
>   at 
> io.netty.channel.AbstractChannelHandlerContext$8.run(AbstractChannelHandlerContext.java:324)
>  [netty-all-4.0.23.Final.jar:4.0.23.Final]
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
> [na:1.8.0_60-ea]
>   at 
> org.apache.cassandra.concurrent.AbstractTracingAwareExecutorService$FutureTask.run(AbstractTracingAwareExecutorService.java:164)
>  [cassandra-all-3.1.1.jar:3.1.1]
>   at org.apache.cassandra.concurrent.SEPWorker.run(SEPWorker.java:105) 
> [cassandra-all-3.1.1.jar:3.1.1]
>   at java.lang.Thread.run(Thread.java:745) [na:1.8.0_60-ea]
> {noformat}
> The root cause seems to be located at *org.apache.cassandra.cql3.Lists:362* :
> {code:java}
> CellPath elementPath = 
> existingRow.getComplexColumnData(column).getCellByIndex(idx).path();
> 

[jira] [Commented] (CASSANDRA-10924) Pass base table's metadata to Index.validateOptions

2016-01-12 Thread Sam Tunnicliffe (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10924?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15094053#comment-15094053
 ] 

Sam Tunnicliffe commented on CASSANDRA-10924:
-

v1 looks good, thanks! I've pushed branches for CI and will commit them when 
cassci is happy.

||branch||testall||dtest||
|[10924-3.0|https://github.com/beobal/cassandra/tree/10924-3.0]|[testall|http://cassci.datastax.com/view/Dev/view/beobal/job/beobal-10924-3.0-testall]|[dtest|http://cassci.datastax.com/view/Dev/view/beobal/job/beobal-10924-3.0-dtest]|
|[10924-3.3|https://github.com/beobal/cassandra/tree/10924-3.3]|[testall|http://cassci.datastax.com/view/Dev/view/beobal/job/beobal-10924-3.3-testall]|[dtest|http://cassci.datastax.com/view/Dev/view/beobal/job/beobal-10924-3.3-dtest]|
|[10924-trunk|https://github.com/beobal/cassandra/tree/10924-trunk]|[testall|http://cassci.datastax.com/view/Dev/view/beobal/job/beobal-10924-trunk-testall]|[dtest|http://cassci.datastax.com/view/Dev/view/beobal/job/beobal-10924-trunk-dtest]|


> Pass base table's metadata to Index.validateOptions
> ---
>
> Key: CASSANDRA-10924
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10924
> Project: Cassandra
>  Issue Type: Improvement
>  Components: CQL, Local Write-Read Paths
>Reporter: Andrés de la Peña
>Assignee: Andrés de la Peña
>Priority: Minor
>  Labels: 2i, index, validation
> Fix For: 3.0.x, 3.x
>
> Attachments: CASSANDRA-10924-v0.diff, CASSANDRA-10924-v1.diff
>
>
> Some custom index implementations require the base table's metadata to 
> validate their creation options. For example, the options of these 
> implementations can contain information about which base table's columns are 
> going to be indexed and how, so the implementation needs to know the 
> existence and the type of the columns to be indexed to properly validate.
> The attached patch proposes to add base table's {{CFMetaData}} to Index' 
> optional static method to validate the custom index options:
> {{public static Map<String, String> validateOptions(CFMetaData cfm, 
> Map<String, String> options);}}
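A sketch of why the validator needs the schema, in Python rather than the actual Java {{Index}} API, with hypothetical option names; the base table's column map plays the role of {{CFMetaData}}:

```python
# Hedged sketch (not the real Index API): validating custom index options
# requires the base table's schema, e.g. to check that a "target" option
# names an existing column. Option names here are illustrative only.

TABLE_COLUMNS = {"id": "int", "body": "text"}  # stand-in for CFMetaData

def validate_options(table_columns, options):
    """Raise if 'target' is not a real column; return unrecognized options."""
    target = options.get("target")
    if target is not None and target not in table_columns:
        raise ValueError("unknown column: %s" % target)
    return {k: v for k, v in options.items() if k != "target"}

print(validate_options(TABLE_COLUMNS, {"target": "body", "mode": "prefix"}))
# → {'mode': 'prefix'}
```

Without the schema argument, the implementation could only check option syntax, never whether the referenced column actually exists or has an indexable type.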





[jira] [Commented] (CASSANDRA-10992) Hanging streaming sessions

2016-01-12 Thread Paulo Motta (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10992?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15094002#comment-15094002
 ] 

Paulo Motta commented on CASSANDRA-10992:
-

Unfortunately, hanging stream sessions are a classic problem that can have many 
possible causes (including network problems), so it's difficult to troubleshoot 
without more information or reproduction steps. Some questions:
* Do you see any errors or warnings in your system logs?
* Could you try decreasing your streaming_socket_timeout_in_ms to 1 ms and 
see if the problem persists?

If the problem persists and you're able to reproduce it fairly simply, I 
recommend you to replace the cassandra jar of all nodes involved in a specific 
repair with this [jar with more debug 
logging|https://issues.apache.org/jira/secure/attachment/12781836/apache-cassandra-2.1.12-SNAPSHOT.jar],
 and also set the log level of the {{org.apache.cassandra.streaming}} package 
to {{TRACE}} on {{conf/logback.xml}}, and post back the logs for better 
troubleshooting. If you prefer to build your own jar with {{ant jar}} you can 
do it from this 
[branch|https://github.com/pauloricardomg/cassandra/tree/2.1-10961].
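As an illustration of why a finite {{streaming_socket_timeout_in_ms}} matters (a generic socket sketch, not Cassandra's streaming code): without a timeout, a read on a silent peer blocks forever, while with one the stall surfaces as an error the stream session can handle.

```python
# Hedged illustration of the role of a streaming socket timeout: a read on
# a socket with no timeout blocks indefinitely if the peer goes silent; a
# finite timeout turns the stall into a catchable error.
import socket

a, b = socket.socketpair()
a.settimeout(0.1)  # think: streaming_socket_timeout_in_ms = 100
try:
    a.recv(4096)   # peer (b) never sends anything
except socket.timeout:
    print("stalled stream detected; session can be failed and retried")
finally:
    a.close()
    b.close()
```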



[jira] [Commented] (CASSANDRA-6018) Add option to encrypt commitlog

2016-01-12 Thread Jason Brown (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6018?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15094004#comment-15094004
 ] 

Jason Brown commented on CASSANDRA-6018:


Pushed changes to a new commit on the same branch - also see [this 
diff|https://github.com/apache/cassandra/commit/cbc36f629a3fe5ad537f57a4c24e437052772178]
 as a shortcut to the changes for this update.

Added some more comments to {{EncryptionUtils}}, although I'm sure more could 
always be added :)

bq. Could we not reserve the header bytes, ...

While there are possibly some improvements that could be made here, I think we 
might get into trouble wrt reusing the input buffer as the output buffer on the 
{{Cipher.doFinal()}} calls. Also, as I'm planning on using {{EncryptionUtils}} 
for encrypting sstables and hints (already implemented, pending internal 
review), the structure and use of overloading encrypt (and using the 
{{WritableByteChannel}}), and other such things will become much more obvious.
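To illustrate the buffer-reuse concern in isolation (a standalone sketch using 
plain JCE, not Cassandra's actual {{EncryptionUtils}}; the class and method 
names here are made up for the example): a padding cipher can produce more 
output bytes than input bytes, so encrypting in place into the input buffer 
risks clobbering plaintext that {{doFinal()}} has not yet consumed, whereas a 
separate output buffer sized via {{getOutputSize()}} is always safe.

```java
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;

public class CipherBufferDemo {
    // Encrypts into a freshly sized output buffer and returns the ciphertext length.
    static int encryptedSize(byte[] plaintext) throws Exception {
        SecretKey key = KeyGenerator.getInstance("AES").generateKey();
        Cipher cipher = Cipher.getInstance("AES/CBC/PKCS5Padding");
        cipher.init(Cipher.ENCRYPT_MODE, key);

        ByteBuffer input = ByteBuffer.wrap(plaintext);
        // PKCS5 padding rounds the ciphertext up to the next 16-byte AES block,
        // so the output can be larger than the input -- one reason in-place
        // reuse of the input buffer for doFinal() is unsafe in general.
        ByteBuffer output = ByteBuffer.allocate(cipher.getOutputSize(input.remaining()));
        cipher.doFinal(input, output);
        return output.position();
    }

    public static void main(String[] args) throws Exception {
        byte[] plain = "some commit log bytes".getBytes(StandardCharsets.UTF_8);
        // 21 plaintext bytes pad up to the next 16-byte block boundary: 32
        System.out.println(encryptedSize(plain));
    }
}
```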

bq. addSize and maybeSwap in {{EncryptedSegment.write}} can be taken out of the 
loop.

I don't think addSize can be taken out, as we'd miscount the number of bytes 
written out. I discovered this via your handy additions to the 
CommitLogStressTest :)  WRT {{maybeSwap()}}, I figured out we don't need it at 
all, as we can always safely reassign the encryptedBuffer back to the buffer 
and then compare capacity outside of the loop for the CAS.

bq. For uncompressed <=2.1 replay we need to tolerate errors for the whole of 
the last segment...

Done.

bq. I don't think the {{SegmentReadException}} can escape to 
{{CommitLogReplayer.recover}} which tries to catch and act on it.

Ahh, good call. Fixed by moving the catch of the SRE into 
{{SegmentReader.SegmentIterator.computeNext()}}.

> Add option to encrypt commitlog 
> 
>
> Key: CASSANDRA-6018
> URL: https://issues.apache.org/jira/browse/CASSANDRA-6018
> Project: Cassandra
>  Issue Type: New Feature
>Reporter: Jason Brown
>Assignee: Jason Brown
>  Labels: commit_log, encryption, security
> Fix For: 3.x
>
>
> We are going to start using cassandra for a billing system, and while I can 
> encrypt sstables at rest (via Datastax Enterprise), commit logs are more or 
> less plain text. Thus, an attacker would be able to easily read, for example, 
> credit card numbers in the clear text commit log (if the calling app does not 
> encrypt the data itself before sending it to cassandra).
> I want to allow the option of encrypting the commit logs, most likely 
> controlled by a property in the yaml.





[jira] [Commented] (CASSANDRA-10899) CQL-3.0.html spec is not published yet

2016-01-12 Thread Benjamin Lerer (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10899?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15094024#comment-15094024
 ] 

Benjamin Lerer commented on CASSANDRA-10899:


+1

> CQL-3.0.html spec is not published yet
> --
>
> Key: CASSANDRA-10899
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10899
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Wei Deng
>Assignee: Tyler Hobbs
>Priority: Minor
> Attachments: 10889-3.0.txt
>
>
> We have https://cassandra.apache.org/doc/cql3/CQL-2.2.html but CQL-3.0.html 
> doesn't exist yet and needs to be published, since Cassandra 3.0 is now 
> officially GA.





[jira] [Commented] (CASSANDRA-10490) DTCS historic compaction, possibly with major compaction

2016-01-12 Thread Wei Deng (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10490?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15094064#comment-15094064
 ] 

Wei Deng commented on CASSANDRA-10490:
--

A related proposal was suggested by [~Bj0rn] in CASSANDRA-8361: "major 
compaction for DTCS should put data perfectly in windows rather than everything 
in one SSTable."

Since that JIRA got closed as duplicate, I'm pasting it here to continue the 
discussion.

> DTCS historic compaction, possibly with major compaction
> 
>
> Key: CASSANDRA-10490
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10490
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Jonathan Shook
>  Labels: compaction, triage
> Fix For: 2.2.x, 3.x
>
>
> Presently, it's simply painful to run a major compaction with DTCS. It 
> doesn't really serve a useful purpose. Instead, a DTCS major compaction 
> should allow for a DTCS-style compaction to go back before 
> max_sstable_age_days. We can call this a historic compaction, for lack of a 
> better term.
> Such a compaction should not take precedence over normal compaction work, but 
> should be considered a background task. By default there should be a cap on 
> the number of these tasks running. It would be nice to have a separate 
> "max_historic_compaction_tasks" and possibly a 
> "max_historic_compaction_throughput" in the compaction settings to allow for 
> separate throttles on this. I would set these at 1 and 20% of the usual 
> compaction throughput if they aren't set explicitly.
> It may also be desirable to allow historic compaction to run apart from 
> running a major compaction, and to simply disable major compaction altogether 
> for DTCS.





[jira] [Commented] (CASSANDRA-6018) Add option to encrypt commitlog

2016-01-12 Thread Branimir Lambov (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6018?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15094080#comment-15094080
 ] 

Branimir Lambov commented on CASSANDRA-6018:


Sorry, {{CommitLogReplayer}} still does {{tolerateErrorsInSection &= end == 
reader.length() || end < 0;}} for uncompressed segments.

It's better to not do anything there as compression / decryption should have 
blown up by then (and the {{CommitLogUpgradeTest}} will verify that it does).

We should also add an encrypted log to {{CommitLogUpgradeTest}}.

> Add option to encrypt commitlog 
> 
>
> Key: CASSANDRA-6018
> URL: https://issues.apache.org/jira/browse/CASSANDRA-6018
> Project: Cassandra
>  Issue Type: New Feature
>Reporter: Jason Brown
>Assignee: Jason Brown
>  Labels: commit_log, encryption, security
> Fix For: 3.x
>
>
> We are going to start using cassandra for a billing system, and while I can 
> encrypt sstables at rest (via Datastax Enterprise), commit logs are more or 
> less plain text. Thus, an attacker would be able to easily read, for example, 
> credit card numbers in the clear text commit log (if the calling app does not 
> encrypt the data itself before sending it to cassandra).
> I want to allow the option of encrypting the commit logs, most likely 
> controlled by a property in the yaml.





[5/6] cassandra git commit: Merge branch 'cassandra-3.0' into cassandra-3.3

2016-01-12 Thread slebresne
Merge branch 'cassandra-3.0' into cassandra-3.3


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/2318f76c
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/2318f76c
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/2318f76c

Branch: refs/heads/cassandra-3.3
Commit: 2318f76c8ea739b484e77ff3d2d52d279b084e8b
Parents: 2d0863c 4c7b06b
Author: Sylvain Lebresne 
Authored: Tue Jan 12 16:51:50 2016 +0100
Committer: Sylvain Lebresne 
Committed: Tue Jan 12 16:51:50 2016 +0100

--
 CHANGES.txt |  1 +
 src/java/org/apache/cassandra/cql3/Lists.java   |  2 +-
 .../cql3/validation/entities/CollectionsTest.java   | 12 
 3 files changed, 14 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/2318f76c/CHANGES.txt
--
diff --cc CHANGES.txt
index 2a13ef6,6daf7f9..50dc106
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -1,5 -1,5 +1,6 @@@
 -3.0.3
 +3.3
 +Merged from 3.0:
+  * Fix AssertionError when removing from list using UPDATE (CASSANDRA-10954)
   * Fix UnsupportedOperationException when reading old sstable with range
 tombstone (CASSANDRA-10743)
   * MV should use the maximum timestamp of the primary key (CASSANDRA-10910)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/2318f76c/src/java/org/apache/cassandra/cql3/Lists.java
--
diff --cc src/java/org/apache/cassandra/cql3/Lists.java
index 43a97ae,18b382b..17c1575
--- a/src/java/org/apache/cassandra/cql3/Lists.java
+++ b/src/java/org/apache/cassandra/cql3/Lists.java
@@@ -356,9 -356,10 +356,9 @@@ public abstract class List
  if (idx < 0 || idx >= existingSize)
  throw new InvalidRequestException(String.format("List index 
%d out of bound, list has size %d", idx, existingSize));
  
 -CellPath elementPath = 
existingRow.getComplexColumnData(column).getCellByIndex(idx).path();
  if (value == null)
  {
- params.addTombstone(column);
+ params.addTombstone(column, elementPath);
  }
  else if (value != ByteBufferUtil.UNSET_BYTE_BUFFER)
  {



[6/6] cassandra git commit: Merge branch 'cassandra-3.3' into trunk

2016-01-12 Thread slebresne
Merge branch 'cassandra-3.3' into trunk


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/837d0d04
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/837d0d04
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/837d0d04

Branch: refs/heads/trunk
Commit: 837d0d0457eb5ceec25c7dc628eead79f9be5927
Parents: 4e209d9 2318f76
Author: Sylvain Lebresne 
Authored: Tue Jan 12 16:52:00 2016 +0100
Committer: Sylvain Lebresne 
Committed: Tue Jan 12 16:52:00 2016 +0100

--
 CHANGES.txt |  1 +
 src/java/org/apache/cassandra/cql3/Lists.java   |  2 +-
 .../cql3/validation/entities/CollectionsTest.java   | 12 
 3 files changed, 14 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/837d0d04/CHANGES.txt
--
diff --cc CHANGES.txt
index 68ea8b7,50dc106..41d06cc
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -1,11 -1,6 +1,12 @@@
 +3.4
 + * Stripe view locks by key and table ID to reduce contention 
(CASSANDRA-10981)
 + * Add nodetool gettimeout and settimeout commands (CASSANDRA-10953)
 + * Add 3.0 metadata to sstablemetadata output (CASSANDRA-10838)
 +
 +
  3.3
  Merged from 3.0:
+  * Fix AssertionError when removing from list using UPDATE (CASSANDRA-10954)
   * Fix UnsupportedOperationException when reading old sstable with range
 tombstone (CASSANDRA-10743)
   * MV should use the maximum timestamp of the primary key (CASSANDRA-10910)



[2/6] cassandra git commit: Properly pass CellPath when setting list element to null

2016-01-12 Thread slebresne
Properly pass CellPath when setting list element to null

patch by slebresne; reviewed by blerer for CASSANDRA-10954


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/4c7b06b0
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/4c7b06b0
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/4c7b06b0

Branch: refs/heads/cassandra-3.3
Commit: 4c7b06b0a87f88bfaff5d55e6b302a25e0391f57
Parents: f4037f9
Author: Sylvain Lebresne 
Authored: Mon Jan 4 15:11:16 2016 +0100
Committer: Sylvain Lebresne 
Committed: Tue Jan 12 16:50:04 2016 +0100

--
 CHANGES.txt |  1 +
 src/java/org/apache/cassandra/cql3/Lists.java   |  2 +-
 .../cql3/validation/entities/CollectionsTest.java   | 12 
 3 files changed, 14 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/4c7b06b0/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index da5ed26..6daf7f9 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 3.0.3
+ * Fix AssertionError when removing from list using UPDATE (CASSANDRA-10954)
  * Fix UnsupportedOperationException when reading old sstable with range
tombstone (CASSANDRA-10743)
  * MV should use the maximum timestamp of the primary key (CASSANDRA-10910)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/4c7b06b0/src/java/org/apache/cassandra/cql3/Lists.java
--
diff --git a/src/java/org/apache/cassandra/cql3/Lists.java 
b/src/java/org/apache/cassandra/cql3/Lists.java
index 4b41a9d..18b382b 100644
--- a/src/java/org/apache/cassandra/cql3/Lists.java
+++ b/src/java/org/apache/cassandra/cql3/Lists.java
@@ -359,7 +359,7 @@ public abstract class Lists
 CellPath elementPath = 
existingRow.getComplexColumnData(column).getCellByIndex(idx).path();
 if (value == null)
 {
-params.addTombstone(column);
+params.addTombstone(column, elementPath);
 }
 else if (value != ByteBufferUtil.UNSET_BYTE_BUFFER)
 {

http://git-wip-us.apache.org/repos/asf/cassandra/blob/4c7b06b0/test/unit/org/apache/cassandra/cql3/validation/entities/CollectionsTest.java
--
diff --git 
a/test/unit/org/apache/cassandra/cql3/validation/entities/CollectionsTest.java 
b/test/unit/org/apache/cassandra/cql3/validation/entities/CollectionsTest.java
index 48e5ad3..a0a6e73 100644
--- 
a/test/unit/org/apache/cassandra/cql3/validation/entities/CollectionsTest.java
+++ 
b/test/unit/org/apache/cassandra/cql3/validation/entities/CollectionsTest.java
@@ -852,4 +852,16 @@ public class CollectionsTest extends CQLTester
 
 assertRows(execute("SELECT s FROM %s WHERE k = 0"), row(set(largeText, 
"v2")));
 }
+
+@Test
+public void testRemovalThroughUpdate() throws Throwable
+{
+createTable("CREATE TABLE %s (k int PRIMARY KEY, l list<int>)");
+
+ execute("INSERT INTO %s(k, l) VALUES(?, ?)", 0, list(1, 2, 3));
+ assertRows(execute("SELECT * FROM %s"), row(0, list(1, 2, 3)));
+
+ execute("UPDATE %s SET l[0] = null WHERE k=0");
+ assertRows(execute("SELECT * FROM %s"), row(0, list(2, 3)));
+}
 }



[3/6] cassandra git commit: Properly pass CellPath when setting list element to null

2016-01-12 Thread slebresne
Properly pass CellPath when setting list element to null

patch by slebresne; reviewed by blerer for CASSANDRA-10954


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/4c7b06b0
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/4c7b06b0
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/4c7b06b0

Branch: refs/heads/trunk
Commit: 4c7b06b0a87f88bfaff5d55e6b302a25e0391f57
Parents: f4037f9
Author: Sylvain Lebresne 
Authored: Mon Jan 4 15:11:16 2016 +0100
Committer: Sylvain Lebresne 
Committed: Tue Jan 12 16:50:04 2016 +0100

--
 CHANGES.txt |  1 +
 src/java/org/apache/cassandra/cql3/Lists.java   |  2 +-
 .../cql3/validation/entities/CollectionsTest.java   | 12 
 3 files changed, 14 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/4c7b06b0/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index da5ed26..6daf7f9 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 3.0.3
+ * Fix AssertionError when removing from list using UPDATE (CASSANDRA-10954)
  * Fix UnsupportedOperationException when reading old sstable with range
tombstone (CASSANDRA-10743)
  * MV should use the maximum timestamp of the primary key (CASSANDRA-10910)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/4c7b06b0/src/java/org/apache/cassandra/cql3/Lists.java
--
diff --git a/src/java/org/apache/cassandra/cql3/Lists.java 
b/src/java/org/apache/cassandra/cql3/Lists.java
index 4b41a9d..18b382b 100644
--- a/src/java/org/apache/cassandra/cql3/Lists.java
+++ b/src/java/org/apache/cassandra/cql3/Lists.java
@@ -359,7 +359,7 @@ public abstract class Lists
 CellPath elementPath = 
existingRow.getComplexColumnData(column).getCellByIndex(idx).path();
 if (value == null)
 {
-params.addTombstone(column);
+params.addTombstone(column, elementPath);
 }
 else if (value != ByteBufferUtil.UNSET_BYTE_BUFFER)
 {

http://git-wip-us.apache.org/repos/asf/cassandra/blob/4c7b06b0/test/unit/org/apache/cassandra/cql3/validation/entities/CollectionsTest.java
--
diff --git 
a/test/unit/org/apache/cassandra/cql3/validation/entities/CollectionsTest.java 
b/test/unit/org/apache/cassandra/cql3/validation/entities/CollectionsTest.java
index 48e5ad3..a0a6e73 100644
--- 
a/test/unit/org/apache/cassandra/cql3/validation/entities/CollectionsTest.java
+++ 
b/test/unit/org/apache/cassandra/cql3/validation/entities/CollectionsTest.java
@@ -852,4 +852,16 @@ public class CollectionsTest extends CQLTester
 
 assertRows(execute("SELECT s FROM %s WHERE k = 0"), row(set(largeText, 
"v2")));
 }
+
+@Test
+public void testRemovalThroughUpdate() throws Throwable
+{
+createTable("CREATE TABLE %s (k int PRIMARY KEY, l list<int>)");
+
+ execute("INSERT INTO %s(k, l) VALUES(?, ?)", 0, list(1, 2, 3));
+ assertRows(execute("SELECT * FROM %s"), row(0, list(1, 2, 3)));
+
+ execute("UPDATE %s SET l[0] = null WHERE k=0");
+ assertRows(execute("SELECT * FROM %s"), row(0, list(2, 3)));
+}
 }



[1/6] cassandra git commit: Properly pass CellPath when setting list element to null

2016-01-12 Thread slebresne
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-3.0 f4037f9b3 -> 4c7b06b0a
  refs/heads/cassandra-3.3 2d0863c6d -> 2318f76c8
  refs/heads/trunk 4e209d9d3 -> 837d0d045


Properly pass CellPath when setting list element to null

patch by slebresne; reviewed by blerer for CASSANDRA-10954


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/4c7b06b0
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/4c7b06b0
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/4c7b06b0

Branch: refs/heads/cassandra-3.0
Commit: 4c7b06b0a87f88bfaff5d55e6b302a25e0391f57
Parents: f4037f9
Author: Sylvain Lebresne 
Authored: Mon Jan 4 15:11:16 2016 +0100
Committer: Sylvain Lebresne 
Committed: Tue Jan 12 16:50:04 2016 +0100

--
 CHANGES.txt |  1 +
 src/java/org/apache/cassandra/cql3/Lists.java   |  2 +-
 .../cql3/validation/entities/CollectionsTest.java   | 12 
 3 files changed, 14 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/4c7b06b0/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index da5ed26..6daf7f9 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 3.0.3
+ * Fix AssertionError when removing from list using UPDATE (CASSANDRA-10954)
  * Fix UnsupportedOperationException when reading old sstable with range
tombstone (CASSANDRA-10743)
  * MV should use the maximum timestamp of the primary key (CASSANDRA-10910)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/4c7b06b0/src/java/org/apache/cassandra/cql3/Lists.java
--
diff --git a/src/java/org/apache/cassandra/cql3/Lists.java 
b/src/java/org/apache/cassandra/cql3/Lists.java
index 4b41a9d..18b382b 100644
--- a/src/java/org/apache/cassandra/cql3/Lists.java
+++ b/src/java/org/apache/cassandra/cql3/Lists.java
@@ -359,7 +359,7 @@ public abstract class Lists
 CellPath elementPath = 
existingRow.getComplexColumnData(column).getCellByIndex(idx).path();
 if (value == null)
 {
-params.addTombstone(column);
+params.addTombstone(column, elementPath);
 }
 else if (value != ByteBufferUtil.UNSET_BYTE_BUFFER)
 {

http://git-wip-us.apache.org/repos/asf/cassandra/blob/4c7b06b0/test/unit/org/apache/cassandra/cql3/validation/entities/CollectionsTest.java
--
diff --git 
a/test/unit/org/apache/cassandra/cql3/validation/entities/CollectionsTest.java 
b/test/unit/org/apache/cassandra/cql3/validation/entities/CollectionsTest.java
index 48e5ad3..a0a6e73 100644
--- 
a/test/unit/org/apache/cassandra/cql3/validation/entities/CollectionsTest.java
+++ 
b/test/unit/org/apache/cassandra/cql3/validation/entities/CollectionsTest.java
@@ -852,4 +852,16 @@ public class CollectionsTest extends CQLTester
 
 assertRows(execute("SELECT s FROM %s WHERE k = 0"), row(set(largeText, 
"v2")));
 }
+
+@Test
+public void testRemovalThroughUpdate() throws Throwable
+{
+createTable("CREATE TABLE %s (k int PRIMARY KEY, l list<int>)");
+
+ execute("INSERT INTO %s(k, l) VALUES(?, ?)", 0, list(1, 2, 3));
+ assertRows(execute("SELECT * FROM %s"), row(0, list(1, 2, 3)));
+
+ execute("UPDATE %s SET l[0] = null WHERE k=0");
+ assertRows(execute("SELECT * FROM %s"), row(0, list(2, 3)));
+}
 }



[4/6] cassandra git commit: Merge branch 'cassandra-3.0' into cassandra-3.3

2016-01-12 Thread slebresne
Merge branch 'cassandra-3.0' into cassandra-3.3


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/2318f76c
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/2318f76c
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/2318f76c

Branch: refs/heads/trunk
Commit: 2318f76c8ea739b484e77ff3d2d52d279b084e8b
Parents: 2d0863c 4c7b06b
Author: Sylvain Lebresne 
Authored: Tue Jan 12 16:51:50 2016 +0100
Committer: Sylvain Lebresne 
Committed: Tue Jan 12 16:51:50 2016 +0100

--
 CHANGES.txt |  1 +
 src/java/org/apache/cassandra/cql3/Lists.java   |  2 +-
 .../cql3/validation/entities/CollectionsTest.java   | 12 
 3 files changed, 14 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/2318f76c/CHANGES.txt
--
diff --cc CHANGES.txt
index 2a13ef6,6daf7f9..50dc106
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -1,5 -1,5 +1,6 @@@
 -3.0.3
 +3.3
 +Merged from 3.0:
+  * Fix AssertionError when removing from list using UPDATE (CASSANDRA-10954)
   * Fix UnsupportedOperationException when reading old sstable with range
 tombstone (CASSANDRA-10743)
   * MV should use the maximum timestamp of the primary key (CASSANDRA-10910)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/2318f76c/src/java/org/apache/cassandra/cql3/Lists.java
--
diff --cc src/java/org/apache/cassandra/cql3/Lists.java
index 43a97ae,18b382b..17c1575
--- a/src/java/org/apache/cassandra/cql3/Lists.java
+++ b/src/java/org/apache/cassandra/cql3/Lists.java
@@@ -356,9 -356,10 +356,9 @@@ public abstract class List
  if (idx < 0 || idx >= existingSize)
  throw new InvalidRequestException(String.format("List index 
%d out of bound, list has size %d", idx, existingSize));
  
 -CellPath elementPath = 
existingRow.getComplexColumnData(column).getCellByIndex(idx).path();
  if (value == null)
  {
- params.addTombstone(column);
+ params.addTombstone(column, elementPath);
  }
  else if (value != ByteBufferUtil.UNSET_BYTE_BUFFER)
  {



[jira] [Commented] (CASSANDRA-10428) cqlsh: Include sub-second precision in timestamps by default

2016-01-12 Thread Stefania (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10428?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15094141#comment-15094141
 ] 

Stefania commented on CASSANDRA-10428:
--

bq. {{SELECT * from test where id='1' and time = '2015-09-29 20:54:24.20';}} 
does not work...

{{cqlsh}} sends text statements to the server so the problem is server side, 
have a look at {{dateStringPatterns}} in TimestampSerializer.java. Due to the 
multiplication of time format combinations with the time zone and so forth, 
we'd have to add several patterns to support fewer millisecond digits. A 
regular expression would perhaps be better. Either way it's not a trivial 
change and I would prefer to open a different ticket since it is not related to 
cqlsh.
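As a rough sketch of the "single flexible pattern" alternative to enumerating 
one pattern per millisecond-digit count (hypothetical illustration using 
{{java.time}}, not the actual {{TimestampSerializer}} patch; the class and 
method names are made up for the example):

```java
import java.time.LocalDateTime;
import java.time.format.DateTimeFormatter;
import java.time.format.DateTimeFormatterBuilder;
import java.time.temporal.ChronoField;

public class FlexibleTimestampParse {
    // One formatter that accepts 0..9 fractional-second digits, so
    // "…:24", "…:24.2" and "…:24.200" all parse with the same pattern.
    static final DateTimeFormatter FMT = new DateTimeFormatterBuilder()
            .appendPattern("yyyy-MM-dd HH:mm:ss")
            .optionalStart()
            .appendFraction(ChronoField.NANO_OF_SECOND, 0, 9, true)
            .optionalEnd()
            .toFormatter();

    static LocalDateTime parse(String s) {
        return LocalDateTime.parse(s, FMT);
    }

    public static void main(String[] args) {
        System.out.println(parse("2015-09-29 20:54:24.20"));
        System.out.println(parse("2015-09-29 20:54:24"));
    }
}
```

Time zones would still multiply the combinations, but each zone variant would 
need only one fraction-tolerant pattern rather than one per digit count.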

bq. Surprisingly this didn't break (m)any cqlsh dtests, is this expected?

If we are still referring to the point above, I would assume that yes it is 
expected since it is something we currently do not support server side.

bq. It seems the %f format does not work correctly on jython on Windows,

This would explain the {{%f}} noted above that I could not reproduce. I'm not 
sure if we need to support the lack of {{%f}} for jython on Windows, cc 
[~thobbs]. If we do, how would we recognize that we are running on jython? 

Thanks for testing on Windows and for working out the {{%f}} problem!

bq. I tested with copy to/from, and it seems to work correctly, but 
microseconds are silently discarded on copy from since we don't support this 
natively in the timestamp format. Should we maybe print a warning if the 
timestamp is in sub-ms precision different from zero?

We cannot print warnings in the worker processes at present. We can return 
errors to the parent process, but that would be excessive; we also have a 
limited {{printdebug}} method, but it only works with {{cqlsh --debug}} and 
does not take into account the fact that we may want to suppress warnings when 
printing to STDOUT, as we do in the parent process. To support warnings 
properly, we would need to extend the communication protocol between parent and 
worker processes. Also, in the case of an exact string format match, it is not 
easy to recognize sub-milliseconds, so I would prefer to leave it as is. I 
don't think the extra work would be justified. I have, however, added a new 
test, {{test_round_trip_with_sub_second_precision}}.

I've rebased and restarted CI.

> cqlsh: Include sub-second precision in timestamps by default
> 
>
> Key: CASSANDRA-10428
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10428
> Project: Cassandra
>  Issue Type: Bug
>  Components: Tools
> Environment: OSX 10.10.2
>Reporter: Chandran Anjur Narasimhan
>Assignee: Stefania
>  Labels: cqlsh
> Fix For: 3.x
>
>
> Query with >= timestamp works. But the exact timestamp value is not working.
> {noformat}
> NCHAN-M-D0LZ:bin nchan$ ./cqlsh
> Connected to CCC Multi-Region Cassandra Cluster at :.
> [cqlsh 5.0.1 | Cassandra 2.1.7 | CQL spec 3.2.0 | Native protocol v3]
> Use HELP for help.
> cqlsh>
> {noformat}
> {panel:title=Schema|borderStyle=dashed|borderColor=#ccc|titleBGColor=#F7D6C1|bgColor=#CE}
> cqlsh:ccc> desc COLUMNFAMILY ez_task_result ;
> CREATE TABLE ccc.ez_task_result (
> submissionid text,
> ezid text,
> name text,
> time timestamp,
> analyzed_index_root text,
> ...
> ...
> PRIMARY KEY (submissionid, ezid, name, time)
> {panel}
> {panel:title=Working|borderStyle=dashed|borderColor=#ccc|titleBGColor=#F7D6C1|bgColor=#CE}
> cqlsh:ccc> select submissionid, ezid, name, time, state, status, 
> translated_criteria_status from ez_task_result where 
> submissionid='760dd154670811e58c04005056bb6ff0' and 
> ezid='760dd6de670811e594fc005056bb6ff0' and name='run-sanities' and 
> time>='2015-09-29 20:54:23-0700';
>  submissionid | ezid | name   
>   | time | state | status  | 
> translated_criteria_status
> --+--+--+--+---+-+
>  760dd154670811e58c04005056bb6ff0 | 760dd6de670811e594fc005056bb6ff0 | 
> run-sanities | 2015-09-29 20:54:23-0700 | EXECUTING | IN_PROGRESS |   
> run-sanities started
> (1 rows)
> cqlsh:ccc>
> {panel}
> {panel:title=Not 
> working|borderStyle=dashed|borderColor=#ccc|titleBGColor=#F7D6C1|bgColor=#CE}
> cqlsh:ccc> select submissionid, ezid, name, time, state, status, 
> translated_criteria_status from ez_task_result where 
> submissionid='760dd154670811e58c04005056bb6ff0' and 
> ezid='760dd6de670811e594fc005056bb6ff0' and name='run-sanities' and 
> time='2015-09-29 20:54:23-0700';
>  submissionid | ezid | name | time | 

[jira] [Commented] (CASSANDRA-10661) Integrate SASI to Cassandra

2016-01-12 Thread DOAN DuyHai (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10661?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15094106#comment-15094106
 ] 

DOAN DuyHai commented on CASSANDRA-10661:
-

[~xedin] If you have some time, can you point me to the source code (class) 
where SASI manages fetching data from other nodes in the ring? Jason Brown 
told me that SASI does not use the scatter-gather technique but fetches data 
by token range: https://twitter.com/doanduyhai/status/662392685706289152

> Integrate SASI to Cassandra
> ---
>
> Key: CASSANDRA-10661
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10661
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Local Write-Read Paths
>Reporter: Pavel Yaskevich
>Assignee: Pavel Yaskevich
>  Labels: sasi
> Fix For: 3.x
>
>
> We have recently released a new secondary index engine 
> (https://github.com/xedin/sasi) built using the SecondaryIndex API. There are 
> still a couple of things to work out regarding 3.x since it's currently 
> targeted at the 2.0 release. I want to make this an umbrella issue for all of 
> the things related to the integration of SASI, which are also tracked in 
> [sasi_issues|https://github.com/xedin/sasi/issues], into the mainline 
> Cassandra 3.x release.





[jira] [Commented] (CASSANDRA-10963) Bootstrap stream fails with java.lang.InterruptedException

2016-01-12 Thread Paulo Motta (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10963?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15095145#comment-15095145
 ] 

Paulo Motta commented on CASSANDRA-10963:
-

You're most likely running into CASSANDRA-10797, which was fixed only on 3.0+. 
The workaround on 2.2 is to temporarily increase the heap of the joining node 
during bootstrap, potentially combined with G1 GC, which is more efficient than 
CMS for larger heaps.

> Bootstrap stream fails with java.lang.InterruptedException 
> ---
>
> Key: CASSANDRA-10963
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10963
> Project: Cassandra
>  Issue Type: Bug
>  Components: Streaming and Messaging
> Environment: [cqlsh 5.0.1 | Cassandra 2.2.4 | CQL spec 3.3.1 | Native 
> protocol v4]
> java version "1.8.0_65"
>Reporter: Jack Money
>Assignee: Paulo Motta
>
> Hello,
> I have 2 nodes in 2 DCs.
> Each node owns 100% of the data of keyspace hugespace.
> The keyspace has 21 tables with 2 TB of data.
> The biggest table has 1.6 TB of data.
> The biggest sstable is 1.3 TB.
> Schemas:
> {noformat} 
> KEYSPACE hugespace WITH replication = {'class': 'NetworkTopologyStrategy', 
> 'DC1': '3', 'DC2': '1'};
> CREATE TABLE hugespace.content (
> y int,
> m int,
> d int,
> ts bigint,
> ha text,
> co text,
> he text,
> ids bigint,
> ifr text,
> js text,
> PRIMARY KEY ((y, m, d), ts, ha)
> ) WITH CLUSTERING ORDER BY (ts ASC, ha ASC)
> AND bloom_filter_fp_chance = 0.01
> AND caching = '{"keys":"ALL", "rows_per_partition":"NONE"}'
> AND comment = ''
> AND compaction = {'class': 
> 'org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy'}
> AND compression = {'sstable_compression': 
> 'org.apache.cassandra.io.compress.LZ4Compressor'}
> AND dclocal_read_repair_chance = 0.1
> AND default_time_to_live = 0
> AND gc_grace_seconds = 864000
> AND max_index_interval = 2048
> AND memtable_flush_period_in_ms = 0
> AND min_index_interval = 128
> AND read_repair_chance = 0.0
> AND speculative_retry = '99.0PERCENTILE';
> CREATE INDEX content_ids_idx ON hugespace.content (ids);
> {noformat}
> I tried to add one node to DC1 (target: 6 nodes in DC1).
> Names:
> Existing node in DC1 = nodeDC1
> Existing node in DC2 = nodeDC2
> New node joining DC1 = joiningDC1
> joiningDC1
> {noformat} 
> INFO  [main] 2016-01-04 12:17:55,535 StorageService.java:1176 - JOINING: 
> Starting to bootstrap...
> INFO  [main] 2016-01-04 12:17:55,802 StreamResultFuture.java:86 - [Stream 
> #2f473320-b2dd-11e5-8353-b5506ad414a4] Executing streaming plan for Bootstrap
> INFO  [StreamConnectionEstablisher:1] 2016-01-04 12:17:55,803 
> StreamSession.java:232 - [Stream #2f473320-b2dd-11e5-8353-b5506ad414a4] 
> Starting streaming to /nodeDC1
> INFO  [StreamConnectionEstablisher:2] 2016-01-04 12:17:55,803 
> StreamSession.java:232 - [Stream #2f473320-b2dd-11e5-8353-b5506ad414a4] 
> Starting streaming to /nodeDC2
> DEBUG [StreamConnectionEstablisher:2] 2016-01-04 12:17:55,803 
> ConnectionHandler.java:82 - [Stream #2f473320-b2dd-11e5-8353-b5506ad414a4] 
> Sending stream init for incoming stream
> DEBUG [StreamConnectionEstablisher:1] 2016-01-04 12:17:55,803 
> ConnectionHandler.java:82 - [Stream #2f473320-b2dd-11e5-8353-b5506ad414a4] 
> Sending stream init for incoming stream
> DEBUG [StreamConnectionEstablisher:1] 2016-01-04 12:17:55,806 
> ConnectionHandler.java:87 - [Stream #2f473320-b2dd-11e5-8353-b5506ad414a4] 
> Sending stream init for outgoing stream
> DEBUG [StreamConnectionEstablisher:2] 2016-01-04 12:17:55,806 
> ConnectionHandler.java:87 - [Stream #2f473320-b2dd-11e5-8353-b5506ad414a4] 
> Sending stream init for outgoing stream
> DEBUG [STREAM-OUT-/nodeDC1] 2016-01-04 12:17:55,810 
> ConnectionHandler.java:334 - [Stream #2f473320-b2dd-11e5-8353-b5506ad414a4] 
> Sending Prepare (5 requests,  0 files}
> DEBUG [STREAM-OUT-/nodeDC2] 2016-01-04 12:17:55,810 
> ConnectionHandler.java:334 - [Stream #2f473320-b2dd-11e5-8353-b5506ad414a4] 
> Sending Prepare (2 requests,  0 files}
> INFO  [StreamConnectionEstablisher:2] 2016-01-04 12:17:55,810 
> StreamCoordinator.java:213 - [Stream #2f473320-b2dd-11e5-8353-b5506ad414a4, 
> ID#0] Beginning stream session with /nodeDC2
> INFO  [StreamConnectionEstablisher:1] 2016-01-04 12:17:55,810 
> StreamCoordinator.java:213 - [Stream #2f473320-b2dd-11e5-8353-b5506ad414a4, 
> ID#0] Beginning stream session with /nodeDC1
> DEBUG [STREAM-IN-/nodeDC2] 2016-01-04 12:17:55,821 ConnectionHandler.java:266 
> - [Stream #2f473320-b2dd-11e5-8353-b5506ad414a4] Received Prepare (0 
> requests,  1 files}
> INFO  [STREAM-IN-/nodeDC2] 2016-01-04 12:17:55,822 
> StreamResultFuture.java:168 - [Stream #2f473320-b2dd-11e5-8353-b5506ad414a4 
> ID#0] Prepare completed. Receiving 1 files(161 bytes), sending 0 files(0 

cassandra git commit: Support user-defined compactions through nodetool

2016-01-12 Thread yukim
Repository: cassandra
Updated Branches:
  refs/heads/trunk a883ff5f3 -> 836a30b17


Support user-defined compactions through nodetool

patch by jeffj; reviewed by yukim for CASSANDRA-10660


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/836a30b1
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/836a30b1
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/836a30b1

Branch: refs/heads/trunk
Commit: 836a30b17d5ab7d9b0c1f22be27c6469cbdf583b
Parents: a883ff5
Author: Jeff Jirsa 
Authored: Sat Nov 14 21:33:53 2015 -0800
Committer: Yuki Morishita 
Committed: Tue Jan 12 17:11:33 2016 -0600

--
 CHANGES.txt |  1 +
 .../org/apache/cassandra/tools/NodeProbe.java   |  4 
 .../cassandra/tools/nodetool/Compact.java   | 23 ++--
 3 files changed, 26 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/836a30b1/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 36f0a8a..6bfd7ad 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 3.4
+ * Support user-defined compaction through nodetool (CASSANDRA-10660)
  * Stripe view locks by key and table ID to reduce contention (CASSANDRA-10981)
  * Add nodetool gettimeout and settimeout commands (CASSANDRA-10953)
  * Add 3.0 metadata to sstablemetadata output (CASSANDRA-10838)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/836a30b1/src/java/org/apache/cassandra/tools/NodeProbe.java
--
diff --git a/src/java/org/apache/cassandra/tools/NodeProbe.java 
b/src/java/org/apache/cassandra/tools/NodeProbe.java
index 891ed83..a8d23ca 100644
--- a/src/java/org/apache/cassandra/tools/NodeProbe.java
+++ b/src/java/org/apache/cassandra/tools/NodeProbe.java
@@ -294,6 +294,10 @@ public class NodeProbe implements AutoCloseable
         }
     }
 
+    public void forceUserDefinedCompaction(String datafiles) throws IOException, ExecutionException, InterruptedException
+    {
+        compactionProxy.forceUserDefinedCompaction(datafiles);
+    }
 
     public void forceKeyspaceCompaction(boolean splitOutput, String keyspaceName, String... tableNames) throws IOException, ExecutionException, InterruptedException
     {

http://git-wip-us.apache.org/repos/asf/cassandra/blob/836a30b1/src/java/org/apache/cassandra/tools/nodetool/Compact.java
--
diff --git a/src/java/org/apache/cassandra/tools/nodetool/Compact.java 
b/src/java/org/apache/cassandra/tools/nodetool/Compact.java
index 002541d..f268f0a 100644
--- a/src/java/org/apache/cassandra/tools/nodetool/Compact.java
+++ b/src/java/org/apache/cassandra/tools/nodetool/Compact.java
@@ -27,18 +27,37 @@ import java.util.List;
 import org.apache.cassandra.tools.NodeProbe;
 import org.apache.cassandra.tools.NodeTool.NodeToolCmd;
 
-@Command(name = "compact", description = "Force a (major) compaction on one or more tables")
+@Command(name = "compact", description = "Force a (major) compaction on one or more tables or user-defined compaction on given SSTables")
 public class Compact extends NodeToolCmd
 {
-    @Arguments(usage = "[<keyspace> <tables>...]", description = "The keyspace followed by one or many tables")
+    @Arguments(usage = "[<keyspace> <tables>...] or <SSTable file>...", description = "The keyspace followed by one or many tables or list of SSTable data files when using --user-defined")
     private List<String> args = new ArrayList<>();
 
     @Option(title = "split_output", name = {"-s", "--split-output"}, description = "Use -s to not create a single big file")
     private boolean splitOutput = false;
 
+    @Option(title = "user-defined", name = {"--user-defined"}, description = "Use --user-defined to submit listed files for user-defined compaction")
+    private boolean userDefined = false;
+
     @Override
     public void execute(NodeProbe probe)
     {
+        if (splitOutput && userDefined)
+        {
+            throw new RuntimeException("Invalid option combination: User defined compaction can not be split");
+        }
+        else if (userDefined)
+        {
+            try
+            {
+                String userDefinedFiles = String.join(",", args);
+                probe.forceUserDefinedCompaction(userDefinedFiles);
+            } catch (Exception e) {
+                throw new RuntimeException("Error occurred during user defined compaction", e);
+            }
+            return;
+        }
+
         List<String> keyspaces = parseOptionalKeyspace(args, probe);
         String[] tableNames = parseOptionalTables(args);
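Under the hood, the new code path simply joins the file arguments with commas before handing them to the JMX compaction proxy. A minimal sketch of that joining step (the SSTable paths below are hypothetical, not taken from the commit):

```java
import java.util.Arrays;
import java.util.List;

public class UserDefinedCompactionArgs
{
    public static void main(String[] args)
    {
        // Hypothetical SSTable data files, as they would appear on the
        // nodetool command line after --user-defined
        List<String> files = Arrays.asList(
                "/var/lib/cassandra/data/ks1/t1/ma-1-big-Data.db",
                "/var/lib/cassandra/data/ks1/t1/ma-2-big-Data.db");

        // Compact.execute() joins them into the single comma-separated
        // string that forceUserDefinedCompaction() expects over JMX
        String userDefinedFiles = String.join(",", files);
        System.out.println(userDefinedFiles);
    }
}
```

So from the MBean's point of view, `nodetool compact --user-defined a-Data.db b-Data.db` and a direct JMX call with `"a-Data.db,b-Data.db"` are equivalent.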
 



[jira] [Comment Edited] (CASSANDRA-10963) Bootstrap stream fails with java.lang.InterruptedException

2016-01-12 Thread Jack Money (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10963?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15095130#comment-15095130
 ] 

Jack Money edited comment on CASSANDRA-10963 at 1/12/16 10:42 PM:
--

Scrubbing finished 12 hours ago. I used the patch from CASSANDRA-10961 and it fixes the stream error message.
But after transferring about 80 GB, the new node dies with these messages (I tried twice):
{noformat}
INFO  [Service Thread] 2016-01-12 17:02:42,963 GCInspector.java:284 - 
ConcurrentMarkSweep GC in 6861ms.  CMS Old Gen: 6256236856 -> 4427508232; Par 
Eden Space: 670205224 -> 0; Par Survivor Space: 83886080 -> 0
INFO  [Service Thread] 2016-01-12 17:02:52,061 GCInspector.java:284 - 
ConcurrentMarkSweep GC in 5609ms.  CMS Old Gen: 7364000784 -> 7532609408; Par 
Eden Space: 670505104 -> 299957256; Par Survivor Space: 83886080 -> 0
INFO  [Service Thread] 2016-01-12 17:02:57,708 GCInspector.java:284 - 
ConcurrentMarkSweep GC in 5038ms.  CMS Old Gen: 7532969944 -> 7532424544; Par 
Eden Space: 671088640 -> 328612416; Par Survivor Space: 83886072 -> 0
INFO  [Service Thread] 2016-01-12 17:03:04,841 GCInspector.java:284 - 
ConcurrentMarkSweep GC in 6501ms.  CMS Old Gen: 7532969960 -> 7532748368; Par 
Eden Space: 671088640 -> 351893640; Par Survivor Space: 83886080 -> 0
WARN  [GossipTasks:1] 2016-01-12 17:03:04,846 FailureDetector.java:287 - Not 
marking nodes down due to local pause of 7137910003 > 50
WARN  [GossipTasks:1] 2016-01-12 17:03:09,995 FailureDetector.java:287 - Not 
marking nodes down due to local pause of 5148993221 > 50
INFO  [Service Thread] 2016-01-12 17:03:10,007 GCInspector.java:284 - 
ConcurrentMarkSweep GC in 5027ms.  CMS Old Gen: 7532787912 -> 7532201528; Par 
Eden Space: 671088640 -> 634423608; Par Survivor Space: 83886080 -> 0
INFO  [Service Thread] 2016-01-12 17:03:16,234 GCInspector.java:284 - 
ConcurrentMarkSweep GC in 6114ms.  CMS Old Gen: 7532201528 -> 7532051976; Par 
Eden Space: 671088640 -> 670324296; Par Survivor Space: 83712776 -> 42735048
WARN  [GossipTasks:1] 2016-01-12 17:03:21,201 FailureDetector.java:287 - Not 
marking nodes down due to local pause of 11205569973 > 50
WARN  [GossipTasks:1] 2016-01-12 17:03:27,302 FailureDetector.java:287 - Not 
marking nodes down due to local pause of 6101023026 > 50
INFO  [ScheduledTasks:1] 2016-01-12 17:04:10,072 MessagingService.java:944 - 
MUTATION messages were dropped in last 5000 ms: 9 for internal timeout and 0 
for cross node timeout
INFO  [Service Thread] 2016-01-12 17:04:21,165 GCInspector.java:284 - 
ConcurrentMarkSweep GC in 30487ms.  CMS Old Gen: 7532413400 -> 7532090624; Par 
Eden Space: 671088640 -> 670332520; Par Survivor Space: 83886080 -> 77490848
INFO  [Service Thread] 2016-01-12 17:04:21,165 StatusLogger.java:52 - Pool Name 
   Active   Pending  Completed   Blocked  All Time Blocked
WARN  [GossipTasks:1] 2016-01-12 17:04:16,346 FailureDetector.java:287 - Not 
marking nodes down due to local pause of 4905344 > 50
WARN  [GossipTasks:1] 2016-01-12 17:04:31,992 FailureDetector.java:287 - Not 
marking nodes down due to local pause of 15645583776 > 50
INFO  [Service Thread] 2016-01-12 17:04:59,948 StatusLogger.java:56 - 
MutationStage                     6         0        9121928         0                 0
WARN  [GossipTasks:1] 2016-01-12 17:05:07,610 FailureDetector.java:287 - Not 
marking nodes down due to local pause of 35618257255 > 50
INFO  [Service Thread] 2016-01-12 17:06:21,389 StatusLogger.java:56 - ReadStage 
0 0  0 0 0
WARN  [GossipTasks:1] 2016-01-12 17:06:21,390 FailureDetector.java:287 - Not 
marking nodes down due to local pause of 73779488958 > 50
WARN  [GossipTasks:1] 2016-01-12 17:07:17,999 FailureDetector.java:287 - Not 
marking nodes down due to local pause of 56609717402 > 50
WARN  [GossipTasks:1] 2016-01-12 17:07:28,849 FailureDetector.java:287 - Not 
marking nodes down due to local pause of 10850199929 > 50
WARN  [GossipTasks:1] 2016-01-12 17:07:41,546 FailureDetector.java:287 - Not 
marking nodes down due to local pause of 12697078517 > 50
INFO  [ScheduledTasks:1] 2016-01-12 17:07:49,606 StatusLogger.java:56 - 
MutationStage                    11         0        9121947         0                 0
INFO  [Service Thread] 2016-01-12 17:07:54,377 StatusLogger.java:56 - 
RequestResponseStage  0 0  4 0  
   0
WARN  [GossipTasks:1] 2016-01-12 17:08:12,152 FailureDetector.java:287 - Not 
marking nodes down due to local pause of 30605423065 > 50
WARN  [GossipTasks:1] 2016-01-12 17:08:19,408 FailureDetector.java:287 - Not 
marking nodes down due to local pause of 7256123604 > 50
ERROR [SharedPool-Worker-11] 2016-01-12 17:08:40,727 
JVMStabilityInspector.java:117 - JVM state determined to be 

[jira] [Commented] (CASSANDRA-10963) Bootstrap stream fails with java.lang.InterruptedException

2016-01-12 Thread Jack Money (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10963?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15095130#comment-15095130
 ] 

Jack Money commented on CASSANDRA-10963:


Scrubbing finished 5 hours ago. I used the patch from CASSANDRA-10961 and it fixes the stream error message.
But after transferring about 80 GB, the new node dies with these messages (I tried twice):
{noformat}
INFO  [Service Thread] 2016-01-12 17:02:42,963 GCInspector.java:284 - 
ConcurrentMarkSweep GC in 6861ms.  CMS Old Gen: 6256236856 -> 4427508232; Par 
Eden Space: 670205224 -> 0; Par Survivor Space: 83886080 -> 0
INFO  [Service Thread] 2016-01-12 17:02:52,061 GCInspector.java:284 - 
ConcurrentMarkSweep GC in 5609ms.  CMS Old Gen: 7364000784 -> 7532609408; Par 
Eden Space: 670505104 -> 299957256; Par Survivor Space: 83886080 -> 0
INFO  [Service Thread] 2016-01-12 17:02:57,708 GCInspector.java:284 - 
ConcurrentMarkSweep GC in 5038ms.  CMS Old Gen: 7532969944 -> 7532424544; Par 
Eden Space: 671088640 -> 328612416; Par Survivor Space: 83886072 -> 0
INFO  [Service Thread] 2016-01-12 17:03:04,841 GCInspector.java:284 - 
ConcurrentMarkSweep GC in 6501ms.  CMS Old Gen: 7532969960 -> 7532748368; Par 
Eden Space: 671088640 -> 351893640; Par Survivor Space: 83886080 -> 0
WARN  [GossipTasks:1] 2016-01-12 17:03:04,846 FailureDetector.java:287 - Not 
marking nodes down due to local pause of 7137910003 > 50
WARN  [GossipTasks:1] 2016-01-12 17:03:09,995 FailureDetector.java:287 - Not 
marking nodes down due to local pause of 5148993221 > 50
INFO  [Service Thread] 2016-01-12 17:03:10,007 GCInspector.java:284 - 
ConcurrentMarkSweep GC in 5027ms.  CMS Old Gen: 7532787912 -> 7532201528; Par 
Eden Space: 671088640 -> 634423608; Par Survivor Space: 83886080 -> 0
INFO  [Service Thread] 2016-01-12 17:03:16,234 GCInspector.java:284 - 
ConcurrentMarkSweep GC in 6114ms.  CMS Old Gen: 7532201528 -> 7532051976; Par 
Eden Space: 671088640 -> 670324296; Par Survivor Space: 83712776 -> 42735048
WARN  [GossipTasks:1] 2016-01-12 17:03:21,201 FailureDetector.java:287 - Not 
marking nodes down due to local pause of 11205569973 > 50
WARN  [GossipTasks:1] 2016-01-12 17:03:27,302 FailureDetector.java:287 - Not 
marking nodes down due to local pause of 6101023026 > 50
INFO  [ScheduledTasks:1] 2016-01-12 17:04:10,072 MessagingService.java:944 - 
MUTATION messages were dropped in last 5000 ms: 9 for internal timeout and 0 
for cross node timeout
INFO  [Service Thread] 2016-01-12 17:04:21,165 GCInspector.java:284 - 
ConcurrentMarkSweep GC in 30487ms.  CMS Old Gen: 7532413400 -> 7532090624; Par 
Eden Space: 671088640 -> 670332520; Par Survivor Space: 83886080 -> 77490848
INFO  [Service Thread] 2016-01-12 17:04:21,165 StatusLogger.java:52 - Pool Name 
   Active   Pending  Completed   Blocked  All Time Blocked
WARN  [GossipTasks:1] 2016-01-12 17:04:16,346 FailureDetector.java:287 - Not 
marking nodes down due to local pause of 4905344 > 50
WARN  [GossipTasks:1] 2016-01-12 17:04:31,992 FailureDetector.java:287 - Not 
marking nodes down due to local pause of 15645583776 > 50
INFO  [Service Thread] 2016-01-12 17:04:59,948 StatusLogger.java:56 - 
MutationStage                     6         0        9121928         0                 0
WARN  [GossipTasks:1] 2016-01-12 17:05:07,610 FailureDetector.java:287 - Not 
marking nodes down due to local pause of 35618257255 > 50
INFO  [Service Thread] 2016-01-12 17:06:21,389 StatusLogger.java:56 - ReadStage 
0 0  0 0 0
WARN  [GossipTasks:1] 2016-01-12 17:06:21,390 FailureDetector.java:287 - Not 
marking nodes down due to local pause of 73779488958 > 50
WARN  [GossipTasks:1] 2016-01-12 17:07:17,999 FailureDetector.java:287 - Not 
marking nodes down due to local pause of 56609717402 > 50
WARN  [GossipTasks:1] 2016-01-12 17:07:28,849 FailureDetector.java:287 - Not 
marking nodes down due to local pause of 10850199929 > 50
WARN  [GossipTasks:1] 2016-01-12 17:07:41,546 FailureDetector.java:287 - Not 
marking nodes down due to local pause of 12697078517 > 50
INFO  [ScheduledTasks:1] 2016-01-12 17:07:49,606 StatusLogger.java:56 - 
MutationStage                    11         0        9121947         0                 0
INFO  [Service Thread] 2016-01-12 17:07:54,377 StatusLogger.java:56 - 
RequestResponseStage  0 0  4 0  
   0
WARN  [GossipTasks:1] 2016-01-12 17:08:12,152 FailureDetector.java:287 - Not 
marking nodes down due to local pause of 30605423065 > 50
WARN  [GossipTasks:1] 2016-01-12 17:08:19,408 FailureDetector.java:287 - Not 
marking nodes down due to local pause of 7256123604 > 50
ERROR [SharedPool-Worker-11] 2016-01-12 17:08:40,727 
JVMStabilityInspector.java:117 - JVM state determined to be unstable.  Exiting 
forcefully due to:

[jira] [Commented] (CASSANDRA-10979) LCS doesn't do L0 STC on new tables while an L0->L1 compaction is in progress

2016-01-12 Thread Jeremiah Jordan (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10979?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15095112#comment-15095112
 ] 

Jeremiah Jordan commented on CASSANDRA-10979:
-

Agree with Jeff here. This seems like a bug in the intended behavior that we should fix, at least in 2.2+; since it's not a critical bug fix, it shouldn't go into 2.1.

> LCS doesn't do L0 STC on new tables while an L0->L1 compaction is in progress
> -
>
> Key: CASSANDRA-10979
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10979
> Project: Cassandra
>  Issue Type: Bug
>  Components: Compaction
> Environment: 2.1.11 / 4.8.3 DSE.
>Reporter: Jeff Ferland
>Assignee: Carl Yeksigian
>  Labels: compaction, leveled
> Fix For: 3.x
>
> Attachments: 10979-2.1.txt
>
>
> Reading code from 
> https://github.com/apache/cassandra/blob/cassandra-2.1/src/java/org/apache/cassandra/db/compaction/LeveledManifest.java
>  and comparing with behavior shown in 
> https://gist.github.com/autocracy/c95aca6b00e42215daaf, the following happens:
> Scores for L1, L2, and L3 are all < 1 (paste shows 20/10 and 200/100, due to 
> incremental repair).
> Relevant code from here is
> if (Sets.intersection(l1overlapping, compacting).size() > 0)
> return Collections.emptyList();
> Since there will be overlap between what is compacting and L1 (in my case, 
> pushing over 1,000 tables into L1 from L0 STCS), I get a pile-up of 1,000 
> smaller tables in L0 while awaiting the transition from L0 to L1, which destroys 
> my performance.
> Requested outcome is to continue to perform STCS on non-compacting L0 tables.
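The guard quoted in the description can be reproduced in isolation. The sketch below uses made-up sstable names and plain-JDK set operations in place of Guava's Sets.intersection; it shows how any overlap between the compacting set and L1 yields an empty candidate list, which is what starves L0:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collections;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

public class OverlapGuardSketch
{
    // Mirrors the LeveledManifest guard: if any L1 sstable overlapping the
    // proposed compaction is already compacting, return no candidates at all.
    static List<String> getCandidates(Set<String> l1Overlapping, Set<String> compacting)
    {
        Set<String> intersection = new HashSet<>(l1Overlapping);
        intersection.retainAll(compacting); // plain-JDK Sets.intersection
        if (!intersection.isEmpty())
            return Collections.emptyList();
        return new ArrayList<>(l1Overlapping);
    }

    public static void main(String[] args)
    {
        Set<String> l1 = new HashSet<>(Arrays.asList("sstable-1", "sstable-2"));
        Set<String> compacting = new HashSet<>(Collections.singletonList("sstable-2"));

        // "sstable-2" is both in L1 and compacting, so no candidates are
        // returned -- new L0 sstables pile up until the L0->L1 compaction ends
        System.out.println(getCandidates(l1, compacting).size()); // prints 0
    }
}
```

The requested change amounts to still size-tiering the L0 sstables that are not part of the running compaction, instead of returning nothing.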



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

