[jira] [Updated] (CASSANDRA-7839) Support standard EC2 naming conventions in Ec2Snitch

2016-04-15 Thread Gregory Ramsperger (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-7839?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gregory Ramsperger updated CASSANDRA-7839:
--
Status: Patch Available  (was: Open)

Attached patch is of commit: 
https://github.com/ramsperger/cassandra/commit/e930fb7ade614b8a46091d81e458053599c3e519

Full Github branch:
https://github.com/ramsperger/cassandra/tree/CASSANDRA-7839-aws-naming-conventions

> Support standard EC2 naming conventions in Ec2Snitch
> 
>
> Key: CASSANDRA-7839
> URL: https://issues.apache.org/jira/browse/CASSANDRA-7839
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Gregory Ramsperger
>Assignee: Gregory Ramsperger
> Attachments: CASSANDRA-7839-aws-naming-conventions.patch
>
>
> The EC2 snitches use datacenter and rack naming conventions that are 
> inconsistent with the region and availability zone names presented by the 
> Amazon EC2 APIs. A discussion of this is found in CASSANDRA-4026. This has 
> not been changed for valid backwards compatibility reasons. Using 
> SnitchProperties, it is possible to switch between the legacy naming and the 
> full, AWS-style naming. 
> Proposal:
> * introduce a property (ec2_naming_scheme) to switch naming schemes
> * default to the current/legacy naming scheme
> * add support for a new scheme ("standard") that is consistent with AWS 
> conventions
> ** datacenters will be the region name, including the number
> ** racks will be the availability zone name, including the region name
> Examples:
> * *legacy*: the datacenter is the part of the availability zone name 
> preceding the last "\-"; the region number after that "\-" is kept in the 
> datacenter name unless it is 1. The rack is the portion of the availability 
> zone name following the last "\-".
> ** us-west-1a => dc: us-west, rack: 1a
> ** us-west-2b => dc: us-west-2, rack: 2b
> * *standard*: the datacenter is the part of the availability zone name 
> preceding the zone letter; the rack is the entire availability zone name.
> ** us-west-1a => dc: us-west-1, rack: us-west-1a
> ** us-west-2b => dc: us-west-2, rack: us-west-2b
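The two schemes can be sketched as follows (a Python illustration with a hypothetical helper name; the actual change would live in the Ec2Snitch Java code):

```python
def parse_az(az, scheme="legacy"):
    """Split an EC2 availability zone name into (datacenter, rack).

    'standard' follows AWS naming: dc is the full region name, rack is the
    entire AZ name. 'legacy' keeps Cassandra's historical behavior: split on
    the last '-', dropping the region number from the dc when it is 1.
    """
    if scheme == "standard":
        return az[:-1], az                # dc: region incl. number, rack: full AZ
    region, zone = az.rsplit("-", 1)      # e.g. "us-west", "2b"
    if zone[0] != "1":
        region = region + "-" + zone[0]   # keep a non-1 region number in the dc
    return region, zone

print(parse_az("us-west-1a"))              # ('us-west', '1a')
print(parse_az("us-west-2b"))              # ('us-west-2', '2b')
print(parse_az("us-west-2b", "standard"))  # ('us-west-2', 'us-west-2b')
```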



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-7839) Support standard EC2 naming conventions in Ec2Snitch

2016-04-15 Thread Gregory Ramsperger (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-7839?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gregory Ramsperger updated CASSANDRA-7839:
--
Attachment: CASSANDRA-7839-aws-naming-conventions.patch

Patch of: 
https://github.com/ramsperger/cassandra/commit/e930fb7ade614b8a46091d81e458053599c3e519

Github branch:
https://github.com/ramsperger/cassandra/tree/CASSANDRA-7839-aws-naming-conventions

> Support standard EC2 naming conventions in Ec2Snitch
> 
>
> Key: CASSANDRA-7839
> URL: https://issues.apache.org/jira/browse/CASSANDRA-7839
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Gregory Ramsperger
>Assignee: Gregory Ramsperger
> Attachments: CASSANDRA-7839-aws-naming-conventions.patch
>
>





[jira] [Updated] (CASSANDRA-7839) Support standard EC2 naming conventions in Ec2Snitch

2016-04-15 Thread Gregory Ramsperger (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-7839?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gregory Ramsperger updated CASSANDRA-7839:
--
Attachment: (was: 
conditionally-use-full-Amazon-style-naming-for-dc.patch)

> Support standard EC2 naming conventions in Ec2Snitch
> 
>
> Key: CASSANDRA-7839
> URL: https://issues.apache.org/jira/browse/CASSANDRA-7839
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Gregory Ramsperger
>Assignee: Gregory Ramsperger
>





[jira] [Updated] (CASSANDRA-7839) Support standard EC2 naming conventions in Ec2Snitch

2016-04-15 Thread Gregory Ramsperger (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-7839?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gregory Ramsperger updated CASSANDRA-7839:
--
Description: 
The EC2 snitches use datacenter and rack naming conventions that are 
inconsistent with the region and availability zone names presented by the 
Amazon EC2 APIs. A discussion of this is found in CASSANDRA-4026. This has not 
been changed for valid backwards compatibility reasons. Using 
SnitchProperties, it is possible to switch between the legacy naming and the 
full, AWS-style naming. 

Proposal:
* introduce a property (ec2_naming_scheme) to switch naming schemes
* default to the current/legacy naming scheme
* add support for a new scheme ("standard") that is consistent with AWS conventions
** datacenters will be the region name, including the number
** racks will be the availability zone name, including the region name


Examples:
* *legacy*: the datacenter is the part of the availability zone name preceding 
the last "\-"; the region number after that "\-" is kept in the datacenter 
name unless it is 1. The rack is the portion of the availability zone name 
following the last "\-".
** us-west-1a => dc: us-west, rack: 1a
** us-west-2b => dc: us-west-2, rack: 2b
* *standard*: the datacenter is the part of the availability zone name 
preceding the zone letter; the rack is the entire availability zone name.
** us-west-1a => dc: us-west-1, rack: us-west-1a
** us-west-2b => dc: us-west-2, rack: us-west-2b



  was:
The EC2 snitches use datacenter and rack naming conventions inconsistent with 
those presented in Amazon EC2 APIs as region and availability zone. A 
discussion of this is found in CASSANDRA-4026. This has not been changed for 
valid backwards compatibility reasons. Using SnitchProperties, it is possible 
to switch between the legacy naming and the full, AWS-style naming. 

Proposal:
* introduce a property (ec2_naming_scheme) to switch naming schemes.
* default to current/legacy naming scheme
* add support for a new scheme ("full") which is consistent AWS conventions
** data centers will be the region name, including the number
** racks will be the availability zone name, including the region name


Examples:
* * legacy* : datacenter is the part of the availability zone name preceding 
the last "\-" when the zone ends in \-1 and includes the number if not \-1. 
Rack is the portion of the availability zone name following  the last "\-".
** us-west-1a => dc: us-west, rack: 1a
** us-west-2b => dc: us-west-2, rack: 2b; 
* *full* : datacenter is the part of the availability zone name preceding zone 
letter. rack is the entire availability zone name.
** us-west-1a => dc: us-west-1, rack: us-west-1a
** us-west-2b => dc: us-west-2, rack: us-west-2b; 




> Support standard EC2 naming conventions in Ec2Snitch
> 
>
> Key: CASSANDRA-7839
> URL: https://issues.apache.org/jira/browse/CASSANDRA-7839
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Gregory Ramsperger
>Assignee: Gregory Ramsperger
> Attachments: conditionally-use-full-Amazon-style-naming-for-dc.patch
>
>





[jira] [Commented] (CASSANDRA-11452) Cache implementation using LIRS eviction for in-process page cache

2016-04-15 Thread Ben Manes (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11452?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15243989#comment-15243989
 ] 

Ben Manes commented on CASSANDRA-11452:
---

The hash table trick isn't applicable since I didn't fork it for Caffeine or 
CLHM. I was opposed to Guava's decision to do that, other than for computation, 
as I feel the trade-off is sharply negative.

The random walk has a mixed effect on the small traces (512 entries). For most 
it's equivalent; for multi3 it's better, and for the others it's negative. I 
think multi3 is better only because it is a mixed workload that TinyLFU 
struggles on (in comparison to LIRS). For the larger workloads (database, 
search, oltp) it's equivalent, as we'd expect. (multi1: -4%, multi3: +3%, gli: 
-2.5%, cs: -2%)

A 1% random admittance can cause a similar 1-2% reduction, which goes away at 
a lower rate such as 0.4% (1/255). That also passes the collision test, since 
it causes some jitter. It may not be enough in a more adversarial test.

Branimir had noticed earlier that using _greater or equal to_ was a solution, 
but as noted it had a negative impact. However, we care mostly about hot 
candidates being rejected by an artificially hot victim. Most candidates are 
very cold, so the filter avoids polluting the cache. If we add a constraint so 
that the randomization is only applied to warm candidates, then we pass the 
test and don't see a degradation. I used a constraint of greater than 5, where 
the maximum frequency is 15 (4-bit counters).
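The constrained randomization described above can be sketched roughly like this (a Python illustration; the threshold and admit probability are the values under discussion, not Caffeine's actual code):

```python
import random

MAX_FREQ = 15       # 4-bit counters in the frequency sketch
WARM_THRESHOLD = 5  # only warm candidates get the randomized escape hatch

def admit(candidate_freq, victim_freq, rng=random.random):
    """TinyLFU-style admission filter with randomization restricted to
    warm candidates, so cold candidates still cannot pollute the cache."""
    if candidate_freq > victim_freq:
        return True                 # clearly hotter than the victim: admit
    if candidate_freq > WARM_THRESHOLD:
        return rng() < 1 / 256      # warm candidate: rare random admit
    return False                    # cold candidate: always reject
```

A cold candidate (frequency 5 or below) is rejected deterministically; only a warm candidate that loses to an artificially hot victim gets the occasional random admission that breaks the attack.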

I lean towards large traces being more realistic and meaningful, so I am not 
overly worried either way. But I would like to keep the small traces in good 
standing as they are easy comparisons for understandable patterns.

What are your thoughts on applying the randomness only when the candidate has 
at least a moderate frequency?

> Cache implementation using LIRS eviction for in-process page cache
> --
>
> Key: CASSANDRA-11452
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11452
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Local Write-Read Paths
>Reporter: Branimir Lambov
>Assignee: Branimir Lambov
>
> Following up from CASSANDRA-5863, to make best use of caching and to avoid 
> having to explicitly mark compaction accesses as non-cacheable, we need a 
> cache implementation that uses an eviction algorithm that can better handle 
> non-recurring accesses.





[jira] [Updated] (CASSANDRA-10890) I am using centos 6. I upgraded from cassandra 2.0.8 to 3.0.1. when i run cqlsh, it shows 'cql shell requires python 2.7' . i installed python 2.7 but still my cql s

2016-04-15 Thread Naveen Achyuta (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10890?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Naveen Achyuta updated CASSANDRA-10890:
---
Summary: I am using centos 6. I upgraded from cassandra 2.0.8 to 3.0.1. 
when i run cqlsh, it shows 'cql shell requires python 2.7' . i installed python 
2.7 but still my cql shell is not working.   (was: I am using centos 6. I 
upgraded from cassandra 2.0.8 to 3.0.1 when is run cqlsh, it shows 'cql shell 
requires python 2.7' . i installed python 2.7 but still my cql shell is not 
working. )

> I am using centos 6. I upgraded from cassandra 2.0.8 to 3.0.1. when i run 
> cqlsh, it shows 'cql shell requires python 2.7' . i installed python 2.7 but 
> still my cql shell is not working. 
> --
>
> Key: CASSANDRA-10890
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10890
> Project: Cassandra
>  Issue Type: Test
>  Components: CQL
> Environment: centOS 6 
>Reporter: Naveen Achyuta
> Fix For: 3.0.1
>
>






[jira] [Created] (CASSANDRA-11587) Cfstats estimate number of keys should return 0 for empty table

2016-04-15 Thread Jane Deng (JIRA)
Jane Deng created CASSANDRA-11587:
-

 Summary: Cfstats estimate number of keys should return 0 for empty 
table
 Key: CASSANDRA-11587
 URL: https://issues.apache.org/jira/browse/CASSANDRA-11587
 Project: Cassandra
  Issue Type: Bug
  Components: Tools
 Environment: Cassandra 2.1.13
Nodetool
Reporter: Jane Deng
Priority: Trivial


If sstable count is 0, the "estimate number of keys" in cfstats shows -1. 

{noformat}
SSTable count: 0
Number of keys (estimate): -1
{noformat}

The initial value of keyCount in SSTableReader is -1. When there are no 
sstables and no entries in the memtable, keyCount is never increased. Cfstats 
should have a boundary check and return 0 in this case. 
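The proposed boundary check amounts to clamping the sentinel value (a Python sketch with hypothetical names; the actual fix would be in the Java cfstats path):

```python
UNINITIALIZED = -1  # initial value of keyCount in SSTableReader

def reported_key_estimate(key_count):
    """Clamp the -1 sentinel to 0 so cfstats never reports a negative
    key estimate for an empty table."""
    return max(0, key_count)

print(reported_key_estimate(UNINITIALIZED))  # 0 instead of -1
print(reported_key_estimate(12345))          # real estimates pass through
```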







[jira] [Updated] (CASSANDRA-11586) Avoid Silent Insert or Update Failure In Clusters With Time Skew

2016-04-15 Thread Mukil Kesavan (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11586?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mukil Kesavan updated CASSANDRA-11586:
--
Summary: Avoid Silent Insert or Update Failure In Clusters With Time Skew  
(was: Insert or Update Behavior In Clusters With Time Skew)

> Avoid Silent Insert or Update Failure In Clusters With Time Skew
> 
>
> Key: CASSANDRA-11586
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11586
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Core, CQL
>Reporter: Mukil Kesavan
>
> It isn't uncommon to have a cluster of Cassandra servers with clock skew 
> ranging from a few milliseconds to seconds or even minutes even with NTP 
> configured on them. We use the coordinator's timestamp for all insert/update 
> requests. Currently, an update to an already existing row with an older 
> timestamp (because the request coordinator's clock is lagging behind) results 
> in a successful response to the client even though the update was dropped. 
> Here's a sample sequence of requests:
> * Consider 3 Cassandra servers with times, T+10, T+5 and T respectively
> * INSERT INTO TABLE1 (id, data) VALUES (1, "one"); is coordinated by server 1 
> with timestamp (T+10)
> * UPDATE TABLE1 SET data='One' where id=1; is coordinated by server 3 with 
> timestamp T
> The client receives no error when the last statement is executed even though 
> the request was dropped.
> It would be really helpful if we could return an error or a response to the 
> client indicating that the request was dropped. This would give the client 
> an option to handle this situation gracefully. If this is useful, I can work 
> on a patch.





[jira] [Updated] (CASSANDRA-11586) Avoid Silent Insert or Update Failure In Clusters With Time Skew

2016-04-15 Thread Mukil Kesavan (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11586?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mukil Kesavan updated CASSANDRA-11586:
--
Issue Type: Bug  (was: Improvement)

> Avoid Silent Insert or Update Failure In Clusters With Time Skew
> 
>
> Key: CASSANDRA-11586
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11586
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core, CQL
>Reporter: Mukil Kesavan
>





[jira] [Updated] (CASSANDRA-11586) Insert or Update Behavior In Clusters With Time Skew

2016-04-15 Thread Mukil Kesavan (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11586?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mukil Kesavan updated CASSANDRA-11586:
--
Issue Type: Improvement  (was: Bug)

> Insert or Update Behavior In Clusters With Time Skew
> 
>
> Key: CASSANDRA-11586
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11586
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Core, CQL
>Reporter: Mukil Kesavan
>





[jira] [Created] (CASSANDRA-11586) Insert or Update Behavior In Clusters With Time Skew

2016-04-15 Thread Mukil Kesavan (JIRA)
Mukil Kesavan created CASSANDRA-11586:
-

 Summary: Insert or Update Behavior In Clusters With Time Skew
 Key: CASSANDRA-11586
 URL: https://issues.apache.org/jira/browse/CASSANDRA-11586
 Project: Cassandra
  Issue Type: Bug
  Components: Core, CQL
Reporter: Mukil Kesavan


It isn't uncommon to have a cluster of Cassandra servers with clock skew 
ranging from a few milliseconds to seconds or even minutes even with NTP 
configured on them. We use the coordinator's timestamp for all insert/update 
requests. Currently, an update to an already existing row with an older 
timestamp (because the request coordinator's clock is lagging behind) results 
in a successful response to the client even though the update was dropped. 
Here's a sample sequence of requests:

* Consider 3 Cassandra servers with times, T+10, T+5 and T respectively
* INSERT INTO TABLE1 (id, data) VALUES (1, "one"); is coordinated by server 1 
with timestamp (T+10)
* UPDATE TABLE1 SET data='One' where id=1; is coordinated by server 3 with 
timestamp T

The client receives no error when the last statement is executed even though 
the request was dropped.

It would be really helpful if we could return an error or a response to the 
client indicating that the request was dropped. This would give the client an 
option to handle this situation gracefully. If this is useful, I can work on a 
patch.
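The silent drop follows from last-write-wins conflict resolution on cell timestamps; a toy Python sketch (illustrative only, not Cassandra's storage code) of why the older UPDATE loses:

```python
def apply_write(cell, value, timestamp):
    """Last-write-wins: a cell keeps the (value, timestamp) pair with the
    higher timestamp; an older write is dropped without any client error."""
    if timestamp > cell[1]:
        return (value, timestamp)
    return cell

T = 1000
cell = (None, -1)
cell = apply_write(cell, "one", T + 10)  # INSERT, coordinated by server 1
cell = apply_write(cell, "One", T)       # UPDATE, coordinated by server 3
print(cell)  # ('one', 1010): the update was dropped, yet the client saw success
```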





[jira] [Commented] (CASSANDRA-11452) Cache implementation using LIRS eviction for in-process page cache

2016-04-15 Thread Benedict (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11452?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15243842#comment-15243842
 ] 

Benedict commented on CASSANDRA-11452:
--

Yeah, another possibility is to pick a random hash table bucket's first element 
for the guard, or do this for a proportion of admissions.

Some overly complex options are to maintain a mean/median of the last X LRU 
items, or to decide on a %ile given the sketch and map sizes, calculate it 
from the sketch periodically, and admit only entries above it.

However I'm not sure Cassandra cares too much about a concerted attacker for 
this since it would be very hard to predict what the keys would be (you'd need 
the random seed, the existing sstable layouts, other keys and the ability to 
directly induce load on the server, and even then it would be challenging).  So 
long as there is no realistic accidental way for it to happen I'd say we're 
good.

One slight variant to consider is using an xor of the victim and candidate 
hashes for our RNG. This is likely quicker, should be perfectly safe against 
attack for any good hash function (especially post-spread), and would have the 
nice property of determinism for unit tests.
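A sketch of that xor variant (Python; the spread constants are borrowed from a MurmurHash3-style finalizer and are assumptions, not Caffeine's actual code):

```python
def spread(h):
    # cheap avalanche step so the low bits depend on all input bits
    h ^= h >> 16
    h = (h * 0x85EBCA6B) & 0xFFFFFFFF
    h ^= h >> 13
    return h

def biased_coin(victim_hash, candidate_hash, one_in=256):
    """Deterministic 'random' decision derived purely from the two entry
    hashes: no RNG state, so unit tests see reproducible admissions."""
    return spread((victim_hash ^ candidate_hash) & 0xFFFFFFFF) % one_in == 0

# The same pair of hashes always yields the same decision:
assert biased_coin(0xDEADBEEF, 0xCAFEBABE) == biased_coin(0xDEADBEEF, 0xCAFEBABE)
```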

> Cache implementation using LIRS eviction for in-process page cache
> --
>
> Key: CASSANDRA-11452
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11452
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Local Write-Read Paths
>Reporter: Branimir Lambov
>Assignee: Branimir Lambov
>





[jira] [Commented] (CASSANDRA-8523) Writes should be sent to a replacement node while it is streaming in data

2016-04-15 Thread Jason Brown (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8523?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15243834#comment-15243834
 ] 

Jason Brown commented on CASSANDRA-8523:


I agree with [~jkni] - a new gossip state and updating FD/TMD is the best way 
to go. I'll follow up on #9244 to get a better grasp of what's going on there 
as well.

wrt strongly consistent membership, that effort focuses primarily on correct 
and linearizable changes to the cluster state machine, and not any behaviors 
that should be triggered due to those changes when they arrive at cluster 
nodes. So, I don't *think* we'd run into problems between these two efforts, 
but good to keep the same players involved :)

> Writes should be sent to a replacement node while it is streaming in data
> -
>
> Key: CASSANDRA-8523
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8523
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Richard Wagner
>Assignee: Brandon Williams
> Fix For: 2.1.x
>
>
> In our operations, we make heavy use of replace_address (or 
> replace_address_first_boot) in order to replace broken nodes. We now realize 
> that writes are not sent to the replacement nodes while they are in hibernate 
> state and streaming in data. This runs counter to what our expectations were, 
> especially since we know that writes ARE sent to nodes when they are 
> bootstrapped into the ring.
> It seems like Cassandra should arrange to send writes to a node that is in 
> the process of replacing another node, just like it does for nodes that are 
> bootstrapping. I hesitate to phrase this as "we should send writes to a node 
> in hibernate" because the concept of hibernate may be useful in other 
> contexts, as per CASSANDRA-8336. Maybe a new state is needed here?
> Among other things, the fact that we don't get writes during this period 
> makes subsequent repairs more expensive, proportional to the number of writes 
> that we miss (and depending on the amount of data that needs to be streamed 
> during replacement and the time it may take to rebuild secondary indexes, we 
> could miss many many hours worth of writes). It also leaves us more exposed 
> to consistency violations.





[jira] [Commented] (CASSANDRA-11452) Cache implementation using LIRS eviction for in-process page cache

2016-04-15 Thread Ben Manes (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11452?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15243798#comment-15243798
 ] 

Ben Manes commented on CASSANDRA-11452:
---

I was able to sneak in a little 
[coding|https://github.com/ben-manes/caffeine/tree/collisions] during my 
morning commute and a much less hectic Friday. The random walk nicely passes 
Branimir's test, but I have a few eviction tests that still need fixing due to 
the non-deterministic behavior. I'll try to work on that this weekend.

Gil suggested ignoring TinyLFU at a small probability, like 1%, and admitting 
the candidate. This might have the benefit that an attacker can't use the 
maximum walking distance as the threshold for whether they can break the 
protection. It also keeps the admission and eviction decoupled, e.g. making it 
easier to add the filter on top of {{LinkedHashMap}}. I could also see there 
being a benefit to using multiple strategies in tandem.

Roy plans on analyzing the problem, proposed solutions, and detailing his 
recommendations. I think this will be a good topic for him during his long 
international flight tomorrow. I'll share his thoughts.

> Cache implementation using LIRS eviction for in-process page cache
> --
>
> Key: CASSANDRA-11452
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11452
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Local Write-Read Paths
>Reporter: Branimir Lambov
>Assignee: Branimir Lambov
>





[jira] [Commented] (CASSANDRA-11583) Exception when streaming sstables using `sstableloader`

2016-04-15 Thread Jens Rantil (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11583?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15243694#comment-15243694
 ] 

Jens Rantil commented on CASSANDRA-11583:
-

I'm speculating here, but could the issue be that we open the sstable once, but 
decrement the reference once per host we are streaming to?
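If that speculation holds, the failure mode would be a plain reference-count underflow; a toy Python illustration (not the actual sstableloader code):

```python
class Ref:
    """Toy reference counter: acquired once when the sstable is opened."""
    def __init__(self):
        self.count = 1

    def release(self):
        self.count -= 1
        if self.count < 0:
            raise RuntimeError("released more times than acquired")

ref = Ref()
hosts = ["hostA", "hostB", "hostC"]
try:
    for _ in hosts:   # one release per target host, but only one acquire
        ref.release()
except RuntimeError as e:
    print(e)          # the second release already underflows the count
```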

> Exception when streaming sstables using `sstableloader`
> ---
>
> Key: CASSANDRA-11583
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11583
> Project: Cassandra
>  Issue Type: Bug
>  Components: Tools
> Environment: $ uname -a
> Linux bigdb-100 3.2.0-99-virtual #139-Ubuntu SMP Mon Feb 1 23:52:21 UTC 2016 
> x86_64 x86_64 x86_64 GNU/Linux
> I am using Datastax Enterprise 4.7.8-1 which is based on 2.1.13.
>Reporter: Jens Rantil
>
> This bug came out of CASSANDRA-11562.
> I have a keyspace snapshotted from a 2.1.11 (DSE 4.7.5-1) node. When I'm 
> running the `sstableloader` I get the following output/exception:
> {noformat}
> # sstableloader --nodes X.X.X.20 --username YYY --password ZZZ --ignore XXX 
> /var/lib/cassandra/data/XXX/ZZZ-f7ebdf0daa3a3062828fddebc109a3b2
> Established connection to initial hosts
> Opening sstables and calculating sections to stream
> Streaming relevant part of 
> /var/lib/cassandra/data/XXX/ZZZ-f7ebdf0daa3a3062828fddebc109a3b2/XXX-ZZZ-ka-6463-Data.db
>  
> /var/lib/cassandra/data/XXX/ZZZ-f7ebdf0daa3a3062828fddebc109a3b2/tink-ZZZ-ka-6464-Data.db
>  to [/X.X.X.33, /X.X.X.113, /X.X.X.32, /X.X.X.20, /X.X.X.122, /X.X.X.176, 
> /X.X.X.143, /X.X.X.172, /X.X.X.50, /X.X.X.51, /X.X.X.52, /X.X.X.71, 
> /X.X.X.53, /X.X.X.54, /X.X.X.47, /X.X.X.31, /X.X.X.8]
> progress: [/X.X.X.113]0:0/2 0  % [/X.X.X.143]0:0/2 0  % [/X.X.X.172]0:0/2 0  
> % [/X.X.X.20]0:0/2 0  % [/X.X.X.71]0:0/2 0  % [/X.X.X.122]0:0/2 0  % 
> [/X.X.X.47]0:0/2 progress: [/X.X.X.113]0:0/2 0  % [/X.X.X.143]0:0/2 0  % 
> [/X.X.X.172]0:0/2 0  % [/X.X.X.20]0:1/2 1  % [/X.X.X.71]0:0/2 0  % 
> [/X.X.X.122]0:0/2 0  % [/X.X.X.47]0:0/2 progress: [/X.X.X.113]0:0/2 0  % 
> [/X.X.X.143]0:0/2 1  % [/X.X.X.172]0:0/2 0  % [/X.X.X.20]0:1/2 1  % 
> [/X.X.X.71]0:0/2 0  % [/X.X.X.122]0:0/2 0  % [/X.X.X.47]0:0/2 progress: 
> [/X.X.X.113]0:0/2 0  % [/X.X.X.143]0:1/2 1  % [/X.X.X.172]0:0/2 0  % 
> [/X.X.X.20]0:1/2 1  % [/X.X.X.71]0:0/2 0  % [/X.X.X.122]0:0/2 0  % 
> [/X.X.X.47]0:0/2 progress: [/X.X.X.113]0:0/2 0  % [/X.X.X.143]0:1/2 1  % 
> [/X.X.X.172]0:0/2 0  % [/X.X.X.20]0:1/2 1  % [/X.X.X.71]0:1/2 1  % 
> [/X.X.X.122]0:0/2 0  % [/X.X.X.47]0:0/2 progress: [/X.X.X.113]0:0/2 0  % 
> [/X.X.X.143]0:1/2 1  % [/X.X.X.172]0:0/2 0  % [/X.X.X.20]0:1/2 1  % 
> [/X.X.X.71]0:1/2 1  % [/X.X.X.122]0:1/2 1  % [/X.X.X.47]0:0/2 progress: 
> [/X.X.X.113]0:0/2 0  % [/X.X.X.143]0:1/2 1  % [/X.X.X.172]0:0/2 0  % 
> [/X.X.X.20]0:1/2 1  % [/X.X.X.71]0:1/2 1  % [/X.X.X.122]0:1/2 1  % 
> [/X.X.X.47]0:0/2 progress: [/X.X.X.113]0:0/2 7  % [/X.X.X.143]0:1/2 1  % 
> [/X.X.X.172]0:0/2 0  % [/X.X.X.20]0:1/2 1  % [/X.X.X.71]0:1/2 1  % 
> [/X.X.X.122]0:1/2 1  % [/X.X.X.47]0:0/2 progress: [/X.X.X.113]0:0/2 7  % 
> [/X.X.X.143]0:1/2 6  % [/X.X.X.172]0:0/2 0  % [/X.X.X.20]0:1/2 1  % 
> [/X.X.X.71]0:1/2 1  % [/X.X.X.122]0:1/2 1  % [/X.X.X.47]0:0/2 progress: 
> [/X.X.X.113]0:0/2 12 % [/X.X.X.143]0:1/2 6  % [/X.X.X.172]0:0/2 0  % 
> [/X.X.X.20]0:1/2 1  % [/X.X.X.71]0:1/2 1  % [/X.X.X.122]0:1/2 1  % 
> [/X.X.X.47]0:0/2 progress: [/X.X.X.113]0:0/2 12 % [/X.X.X.143]0:1/2 11 % 
> [/X.X.X.172]0:0/2 0  % [/X.X.X.20]0:1/2 1  % [/X.X.X.71]0:1/2 1  % 
> [/X.X.X.122]0:1/2 1  % [/X.X.X.47]0:0/2 progress: [/X.X.X.113]0:0/2 19 % 
> [/X.X.X.143]0:1/2 11 % [/X.X.X.172]0:0/2 0  % [/X.X.X.20]0:1/2 1  % 
> [/X.X.X.71]0:1/2 1  % [/X.X.X.122]0:1/2 1  % [/X.X.X.47]0:0/2 progress: 
> [/X.X.X.113]0:0/2 19 % [/X.X.X.143]0:1/2 15 % [/X.X.X.172]0:0/2 0  % 
> [/X.X.X.20]0:1/2 1  % [/X.X.X.71]0:1/2 1  % [/X.X.X.122]0:1/2 1  % 
> [/X.X.X.47]0:0/2 progress: [/X.X.X.113]0:0/2 26 % [/X.X.X.143]0:1/2 15 % 
> [/X.X.X.172]0:0/2 0  % [/X.X.X.20]0:1/2 1  % [/X.X.X.71]0:1/2 1  % 
> [/X.X.X.122]0:1/2 1  % [/X.X.X.47]0:0/2 progress: [/X.X.X.113]0:0/2 26 % 
> [/X.X.X.143]0:1/2 20 % [/X.X.X.172]0:0/2 0  % [/X.X.X.20]0:1/2 1  % 
> [/X.X.X.71]0:1/2 1  % [/X.X.X.122]0:1/2 1  % [/X.X.X.47]0:0/2 progress: 
> [/X.X.X.113]0:0/2 26 % [/X.X.X.143]0:1/2 21 % [/X.X.X.172]0:0/2 0  % 
> [/X.X.X.20]0:1/2 1  % [/X.X.X.71]0:1/2 1  % [/X.X.X.122]0:1/2 1  % 
> [/X.X.X.47]0:0/2 progress: [/X.X.X.113]0:0/2 26 % [/X.X.X.143]0:1/2 21 % 
> [/X.X.X.172]0:0/2 0  % [/X.X.X.20]0:1/2 3  % [/X.X.X.71]0:1/2 1  % 
> [/X.X.X.122]0:1/2 1  % [/X.X.X.47]0:0/2 progress: [/X.X.X.113]0:0/2 42 % 
> [/X.X.X.143]0:1/2 27 % [/X.X.X.172]0:0/2 0  % [/X.X.X.20]0:1/2 3  % 
> [/X.X.X.71]0:1/2 6  % [/X.X.X.122]0:1/2 1  % [/X.X.X.47]0:0/2 
> [...]
> progress: [/X.X.X.113]0:2/2 100% [/X.X.X.143]0:2/2 100% [/X.X.X.172]0:0/2 78 
> % [/

[jira] [Comment Edited] (CASSANDRA-11583) Exception when streaming sstables using `sstableloader`

2016-04-15 Thread Jens Rantil (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11583?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15243634#comment-15243634
 ] 

Jens Rantil edited comment on CASSANDRA-11583 at 4/15/16 9:26 PM:
--

Hm, rereading 
https://github.com/apache/cassandra/blob/cassandra-2.1.13/src/java/org/apache/cassandra/io/sstable/SSTableLoader.java#L208,
 it looks like the assert is only made when the streaming is done, right? Can I 
be comfortable that `sstableloader` finished successfully if it doesn't print 
any error before the exception (and the assert fails in "onSuccess" method)?


was (Author: ztyx):
Hm, rereading 
https://github.com/apache/cassandra/blob/cassandra-2.1.13/src/java/org/apache/cassandra/io/sstable/SSTableLoader.java#L208,
 it looks like the assert is only made when the streaming is done, right? Can I 
be comfortable that `sstableloader` finished successfully if it doesn't print 
any error before the exception?
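
The pattern under discussion can be illustrated with a small, hypothetical sketch (this is not Cassandra's actual SSTableLoader code): a result future whose validation runs only in the success callback, i.e. only after streaming has completed.

```java
// Hypothetical sketch of the pattern at SSTableLoader.java#L208: the assert
// lives in an onSuccess-style path, so it can only fire after the streaming
// future has completed. Class and method names are illustrative.
import java.util.concurrent.CompletableFuture;

public class StreamResultSketch {
    /** Pretend stream operation: completes with the number of files transferred. */
    static CompletableFuture<Integer> stream(int files) {
        return CompletableFuture.supplyAsync(() -> files);
    }

    public static boolean loadedSuccessfully(int expectedFiles) {
        CompletableFuture<Integer> result = stream(expectedFiles);
        // join() returns only once streaming work is done; the check below
        // therefore runs strictly after completion, mirroring the onSuccess assert.
        Integer transferred = result.join();
        assert transferred == expectedFiles : "stream finished with missing files";
        return transferred == expectedFiles;
    }
}
```

If the assert here fires, streaming completed but with an unexpected result; any earlier failure would have surfaced before this point.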

> Exception when streaming sstables using `sstableloader`
> ---
>
> Key: CASSANDRA-11583
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11583
> Project: Cassandra
>  Issue Type: Bug
>  Components: Tools
> Environment: $ uname -a
> Linux bigdb-100 3.2.0-99-virtual #139-Ubuntu SMP Mon Feb 1 23:52:21 UTC 2016 
> x86_64 x86_64 x86_64 GNU/Linux
> I am using Datastax Enterprise 4.7.8-1 which is based on 2.1.13.
>Reporter: Jens Rantil
>
> This bug came out of CASSANDRA-11562.
> I have a keyspace snapshotted from a 2.1.11 (DSE 4.7.5-1) node. When I'm 
> running the `sstableloader` I get the following output/exception:
> {noformat}
> # sstableloader --nodes X.X.X.20 --username YYY --password ZZZ --ignore XXX 
> /var/lib/cassandra/data/XXX/ZZZ-f7ebdf0daa3a3062828fddebc109a3b2
> Established connection to initial hosts
> Opening sstables and calculating sections to stream
> Streaming relevant part of 
> /var/lib/cassandra/data/XXX/ZZZ-f7ebdf0daa3a3062828fddebc109a3b2/XXX-ZZZ-ka-6463-Data.db
>  
> /var/lib/cassandra/data/XXX/ZZZ-f7ebdf0daa3a3062828fddebc109a3b2/tink-ZZZ-ka-6464-Data.db
>  to [/X.X.X.33, /X.X.X.113, /X.X.X.32, /X.X.X.20, /X.X.X.122, /X.X.X.176, 
> /X.X.X.143, /X.X.X.172, /X.X.X.50, /X.X.X.51, /X.X.X.52, /X.X.X.71, 
> /X.X.X.53, /X.X.X.54, /X.X.X.47, /X.X.X.31, /X.X.X.8]
> [repeated streaming progress output elided; identical to the log quoted 
> earlier in this digest]
> {noformat}

[jira] [Commented] (CASSANDRA-11206) Support large partitions on the 3.0 sstable format

2016-04-15 Thread T Jake Luciani (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11206?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15243640#comment-15243640
 ] 

T Jake Luciani commented on CASSANDRA-11206:



Looks like you still have ColumnIndex but it's been refactored into 
RowIndexWriter.
I think RowIndexWriter should be moved to replace ColumnIndex, since there is
no need to keep both.

In BTW.addIndexBlock(), indexOffsets[0] is always 0, since it is always skipped 
in the null case while columnIndexCount is incremented.
It looks intentional, but it's not easy to understand. I think it works out 
because indexSamplesSerializedSize is 0 anyway.

Please explain in RowIndexEntry.create why you are returning each of the types. 
It's not clear why indexSamples == null && columnIndexRow > 1 is significant.

It seems like you don't need indexOffsets once you reach 
column_index_cache_size_in_kb; it's only used for the non-indexed case.  Does 
that mean the offsets aren't being written to the index properly? 
In the RIE example they are all appended to the end.
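
The offset-map approach this ticket builds on (from CASSANDRA-10314) can be sketched as a binary search over serialized IndexInfo samples that deserializes only the entries it actually compares, giving O(log N) deserializations. A toy illustration, not Cassandra's actual code; entry layout and names are assumed:

```java
// Toy sketch of the "offset map" idea: fixed-width offsets point at
// serialized entries; the search deserializes an entry only when it is
// compared, so a lookup costs ~log2(N) deserializations, not N.
import java.nio.ByteBuffer;

public class OffsetMapSearch {
    static int deserializations = 0;   // counted for the demo

    /** Deserialize the i-th entry's key via its offset (here: one long). */
    static long keyAt(ByteBuffer data, int[] offsets, int i) {
        deserializations++;
        return data.getLong(offsets[i]);
    }

    /** Index of the last entry with key <= target (entries are sorted). */
    static int search(ByteBuffer data, int[] offsets, long target) {
        int lo = 0, hi = offsets.length - 1, ans = 0;
        while (lo <= hi) {
            int mid = (lo + hi) >>> 1;
            if (keyAt(data, offsets, mid) <= target) { ans = mid; lo = mid + 1; }
            else hi = mid - 1;
        }
        return ans;
    }
}
```

With 8 entries, a lookup touches at most 4 entries instead of deserializing all 8, which is the GC and i/o win described in the issue.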

> Support large partitions on the 3.0 sstable format
> --
>
> Key: CASSANDRA-11206
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11206
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Jonathan Ellis
>Assignee: Robert Stupp
> Fix For: 3.x
>
> Attachments: 11206-gc.png, trunk-gc.png
>
>
> Cassandra saves a sample of IndexInfo objects that store the offset within 
> each partition of every 64KB (by default) range of rows.  To find a row, we 
> binary search this sample, then scan the partition of the appropriate range.
> The problem is that this scales poorly as partitions grow: on a cache miss, 
> we deserialize the entire set of IndexInfo, which both creates a lot of GC 
> overhead (as noted in CASSANDRA-9754) but is also non-negligible i/o activity 
> (relative to reading a single 64KB row range) as partitions get truly large.
> We introduced an "offset map" in CASSANDRA-10314 that allows us to perform 
> the IndexInfo bsearch while only deserializing IndexInfo that we need to 
> compare against, i.e. log(N) deserializations.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-11583) Exception when streaming sstables using `sstableloader`

2016-04-15 Thread Jens Rantil (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11583?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15243634#comment-15243634
 ] 

Jens Rantil commented on CASSANDRA-11583:
-

Hm, rereading 
https://github.com/apache/cassandra/blob/cassandra-2.1.13/src/java/org/apache/cassandra/io/sstable/SSTableLoader.java#L208,
 it looks like the assert is only made when the streaming is done, right? Can I 
be comfortable that `sstableloader` finished successfully if it doesn't print 
any error before the exception?

> Exception when streaming sstables using `sstableloader`
> ---
>
> Key: CASSANDRA-11583
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11583
> Project: Cassandra
>  Issue Type: Bug
>  Components: Tools
> Environment: $ uname -a
> Linux bigdb-100 3.2.0-99-virtual #139-Ubuntu SMP Mon Feb 1 23:52:21 UTC 2016 
> x86_64 x86_64 x86_64 GNU/Linux
> I am using Datastax Enterprise 4.7.8-1 which is based on 2.1.13.
>Reporter: Jens Rantil
>
> This bug came out of CASSANDRA-11562.
> I have a keyspace snapshotted from a 2.1.11 (DSE 4.7.5-1) node. When I'm 
> running the `sstableloader` I get the following output/exception:
> {noformat}
> # sstableloader --nodes X.X.X.20 --username YYY --password ZZZ --ignore XXX 
> /var/lib/cassandra/data/XXX/ZZZ-f7ebdf0daa3a3062828fddebc109a3b2
> Established connection to initial hosts
> Opening sstables and calculating sections to stream
> Streaming relevant part of 
> /var/lib/cassandra/data/XXX/ZZZ-f7ebdf0daa3a3062828fddebc109a3b2/XXX-ZZZ-ka-6463-Data.db
>  
> /var/lib/cassandra/data/XXX/ZZZ-f7ebdf0daa3a3062828fddebc109a3b2/tink-ZZZ-ka-6464-Data.db
>  to [/X.X.X.33, /X.X.X.113, /X.X.X.32, /X.X.X.20, /X.X.X.122, /X.X.X.176, 
> /X.X.X.143, /X.X.X.172, /X.X.X.50, /X.X.X.51, /X.X.X.52, /X.X.X.71, 
> /X.X.X.53, /X.X.X.54, /X.X.X.47, /X.X.X.31, /X.X.X.8]
> [repeated streaming progress output elided; identical to the log quoted 
> earlier in this digest]
> {noformat}

[jira] [Commented] (CASSANDRA-8844) Change Data Capture (CDC)

2016-04-15 Thread Joshua McKenzie (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8844?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15243555#comment-15243555
 ] 

Joshua McKenzie commented on CASSANDRA-8844:


bq. I was a fan of the ReplayPosition name. It stands for a more general 
concept which happens be the commit log position for us. Further to this, it 
should be a CommitLogPosition rather than ..SegmentPosition as it does not just 
specify a position within a given segment but an overall position in the log 
(for a specific keyspace). I am also wondering if it should not include a 
keyspace id / reference now that it is keyspace-specific to be able to fail 
fast on mismatch.
I appreciate the feedback here on naming but I disagree on both counts. In 
"ReplayPosition" vs. "CommitLogSegmentPosition", the former couples the name 
with an intended usage / implementation whereas the latter is strictly a 
statement of what the object is without usage context. Regarding 
CommitLogPosition vs. CommitLogSegmentPosition, the class itself contains 2 
instance variables: a segmentId and a position. Again, calling it a 
CommitLogPosition would couple the name of the class with an intended usage 
rather than leaving it modularly decoupled in my opinion.

As for adding a keyspace id / reference and failing fast, what immediate 
use-case / optimization do you have in mind where that would help us? Replay 
should be limited to files in directories and a user of the CommitLogReader 
that's working with reading CDC logs should really have an all-or-nothing 
perspective on the keyspaces in the logs they're parsing, I believe.
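
For reference, the object being debated is essentially a (segmentId, position) pair ordered first by segment, then by offset within it. A minimal sketch with assumed field names, not the actual class:

```java
// Minimal sketch of a CommitLogSegmentPosition-like value: which segment,
// and the byte offset within it. Ordering compares segments first, then
// positions, so later log positions always compare greater.
public class CommitLogSegmentPositionSketch
        implements Comparable<CommitLogSegmentPositionSketch> {
    final long segmentId;   // identifies the segment file
    final int position;     // byte offset within that segment

    CommitLogSegmentPositionSketch(long segmentId, int position) {
        this.segmentId = segmentId;
        this.position = position;
    }

    @Override
    public int compareTo(CommitLogSegmentPositionSketch o) {
        int c = Long.compare(segmentId, o.segmentId);
        return c != 0 ? c : Integer.compare(position, o.position);
    }
}
```

The naming question is whether this pair denotes a position "in the commit log" (usage-oriented) or "in a segment" (structure-oriented); the data itself is the same either way.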

bq. I'd prefer to throw the WriteTimeoutException directly from allocate 
(instead of catching null in CommitLog and doing the same). Doing the check 
inside the while loop will avoid the over-allocation and do less work in the 
common case.
Changed.

bq. Do we really need to have separate buffer pools per manager? Static (or 
not) shared will offer slightly better cache locality, and it's better to block 
both commit logs if we're running beyond allowed memory (we may want to double 
the default limit).
I originally changed this code because 
CommitLogSegmentManagerTest.testCompressedCommitLogBackpressure was failing: 
upon raising the limit to 6, the standard CLSM was "stealing" one of the 
buffers allotted to the extra 3. What I didn't take into account is that, since 
AbstractCommitLogService now uses a CommitLog.sync() that essentially does a 
sequential sync across all CLSMs, a delay in any one CLSM delays all of them, 
so having them operate with independent buffers makes no difference.

Made the pool static and upped max to 6. I prefer having this pool discrete 
rather than embedded in FileDirectSegment.

bq. segmentManagers array: An EnumMap (which boils down to the same thing) 
would be cleaner and should not have any performance impact.
Changed. Much preferred - thanks for the heads up.

bq. shutdownBlocking: Better shutdown in parallel, i.e. initiate and await 
termination separately.
Agreed. Changed.

{quote}reCalculating cas in maybeUpdateCDCSizeCounterAsync is fishy: makes you 
think it would clear on exception in running update, which isn't the case. The 
updateCDCDirectorySize body should be wrapped in try ... finally as well to do 
that.
You could use a scheduled executor to avoid the explicit delays. Or a 
RateLimiter (we'd prefer to update ASAP when triggered, but not too often) 
instead of the delay.
updateCDCOverflowSize: use while (!reCalculating.compareAndSet(false, true)) 
{};. You should reset the value afterwards.
CDCSizeCalculator.calculateSize should return the size, and maybe made 
synchronized for a bit of additional safety.
{quote}
Changed to RateLimiter, dropped the premature optimization of the atomic-bool 
protection around runnables that are going to get discarded (they should all be 
eden-resident and small), and moved the scheduling code into CDCSizeCalculator, 
refactoring it a bit. The class and its overall flow are much cleaner now IMO; 
the points above should either be addressed or no longer apply after the 
change. Let me know what you think.
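
The throttling idea adopted here ("update as soon as possible when triggered, but not too often") can be illustrated without the Guava dependency the patch uses. A dependency-free sketch; names are hypothetical, not Cassandra's:

```java
// Sketch of rate-limited recalculation: a CAS on the "next allowed time"
// collapses concurrent triggers into a single recalculation per window,
// with no explicit delay or scheduled executor needed.
import java.util.concurrent.atomic.AtomicLong;

public class ThrottledRecalculator {
    private final long minIntervalNanos;
    private final AtomicLong nextAllowed = new AtomicLong(0);
    long recalculations = 0;   // exposed for the demo

    ThrottledRecalculator(long minIntervalNanos) {
        this.minIntervalNanos = minIntervalNanos;
    }

    /** Run the (placeholder) size recalculation unless one ran too recently. */
    public boolean maybeRecalculate(long nowNanos) {
        long next = nextAllowed.get();
        // Only the thread that wins the CAS recalculates; losers skip.
        if (nowNanos >= next
                && nextAllowed.compareAndSet(next, nowNanos + minIntervalNanos)) {
            recalculations++;   // stand-in for walking the CDC directory
            return true;
        }
        return false;
    }
}
```

Guava's RateLimiter provides the same "not more often than X" semantics via `tryAcquire()`, without the hand-rolled CAS.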

bq. I don't get the DirectorySizeCalculator. Why the alive and visited sets, 
the listFiles step? Either list the files and just loop through them, or do the 
walkFileTree operation – you are now doing the same work twice. Use a plain 
long instead of the atomic as the class is still thread-unsafe.
This class is actually a straight up refactor / extraction of 
{{Directories.TrueFilesSizeVisitor}} on trunk. I don't doubt this class could 
use some work (code's from CASSANDRA-6231 back in 2013) but I'd prefer to 
handle that as a follow-up ticket.
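
As a point of comparison for that follow-up: a single-pass directory size calculation needs neither a listFiles pre-step nor atomics when the visitor runs on one thread. A minimal sketch, not the existing TrueFilesSizeVisitor code:

```java
// One-pass directory sizing: a single walkFileTree with a plain long
// accumulator. No separate listFiles scan, no AtomicLong, since the
// visitor is invoked single-threaded.
import java.io.IOException;
import java.nio.file.*;
import java.nio.file.attribute.BasicFileAttributes;

public class DirectorySize {
    public static long of(Path dir) throws IOException {
        final long[] total = {0};   // plain accumulator in a final holder
        Files.walkFileTree(dir, new SimpleFileVisitor<Path>() {
            @Override
            public FileVisitResult visitFile(Path file, BasicFileAttributes attrs) {
                total[0] += attrs.size();
                return FileVisitResult.CONTINUE;
            }
        });
        return total[0];
    }
}
```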

bq. Scrubber change should be reverted.
Thanks. IntelliJ IDEA got over-zealous on a refactor/rename, and I thought I'd 
tracked all of those down.

bq. "Permissible" change

[jira] [Updated] (CASSANDRA-11310) Allow filtering on clustering columns for queries without secondary indexes

2016-04-15 Thread Benjamin Lerer (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11310?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Benjamin Lerer updated CASSANDRA-11310:
---
Resolution: Fixed
Fix Version/s: 3.6 (was: 3.x)
Status: Resolved  (was: Patch Available)

Committed into trunk at a600920cb5ee2866b09ee6c1ebae9518096e5bc4

> Allow filtering on clustering columns for queries without secondary indexes
> ---
>
> Key: CASSANDRA-11310
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11310
> Project: Cassandra
>  Issue Type: Improvement
>  Components: CQL
>Reporter: Benjamin Lerer
>Assignee: Alex Petrov
>  Labels: doc-impacting
> Fix For: 3.6
>
>
> Since CASSANDRA-6377 queries without index filtering non-primary key columns 
> are fully supported.
> It makes sense to also support filtering on clustering-columns.
> {code}
> CREATE TABLE emp_table2 (
> empID int,
> firstname text,
> lastname text,
> b_mon text,
> b_day text,
> b_yr text,
> PRIMARY KEY (empID, b_yr, b_mon, b_day));
> SELECT b_mon,b_day,b_yr,firstname,lastname FROM emp_table2
> WHERE b_mon='oct' ALLOW FILTERING;
> {code}





[jira] [Commented] (CASSANDRA-11310) Allow filtering on clustering columns for queries without secondary indexes

2016-04-15 Thread Benjamin Lerer (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11310?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15243541#comment-15243541
 ] 

Benjamin Lerer commented on CASSANDRA-11310:


Thanks for the patch.

> Allow filtering on clustering columns for queries without secondary indexes
> ---
>
> Key: CASSANDRA-11310
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11310
> Project: Cassandra
>  Issue Type: Improvement
>  Components: CQL
>Reporter: Benjamin Lerer
>Assignee: Alex Petrov
>  Labels: doc-impacting
> Fix For: 3.6
>
>
> Since CASSANDRA-6377 queries without index filtering non-primary key columns 
> are fully supported.
> It makes sense to also support filtering on clustering-columns.
> {code}
> CREATE TABLE emp_table2 (
> empID int,
> firstname text,
> lastname text,
> b_mon text,
> b_day text,
> b_yr text,
> PRIMARY KEY (empID, b_yr, b_mon, b_day));
> SELECT b_mon,b_day,b_yr,firstname,lastname FROM emp_table2
> WHERE b_mon='oct' ALLOW FILTERING;
> {code}





[2/2] cassandra git commit: Allow filtering on clustering columns for queries without secondary indexes

2016-04-15 Thread blerer
Allow filtering on clustering columns for queries without secondary indexes

patch by Alex Petrov; reviewed by Benjamin Lerer for CASSANDRA-11310


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/a600920c
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/a600920c
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/a600920c

Branch: refs/heads/trunk
Commit: a600920cb5ee2866b09ee6c1ebae9518096e5bc4
Parents: 831bebd
Author: Alex Petrov 
Authored: Fri Apr 15 22:26:02 2016 +0200
Committer: Benjamin Lerer 
Committed: Fri Apr 15 22:26:02 2016 +0200

--
 CHANGES.txt |1 +
 .../ClusteringColumnRestrictions.java   |   80 +-
 .../restrictions/StatementRestrictions.java |   58 +-
 .../entities/FrozenCollectionsTest.java |9 +-
 .../cql3/validation/operations/SelectTest.java  | 1067 --
 5 files changed, 800 insertions(+), 415 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/a600920c/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index acbbfa5..7bb97e5 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 3.6
+ * Allow filtering on clustering columns for queries without secondary indexes 
(CASSANDRA-11310)
  * Refactor Restriction hierarchy (CASSANDRA-11354)
  * Eliminate allocations in R/W path (CASSANDRA-11421)
  * Update Netty to 4.0.36 (CASSANDRA-11567)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/a600920c/src/java/org/apache/cassandra/cql3/restrictions/ClusteringColumnRestrictions.java
--
diff --git 
a/src/java/org/apache/cassandra/cql3/restrictions/ClusteringColumnRestrictions.java
 
b/src/java/org/apache/cassandra/cql3/restrictions/ClusteringColumnRestrictions.java
index 47cf76b..ab16ebc 100644
--- 
a/src/java/org/apache/cassandra/cql3/restrictions/ClusteringColumnRestrictions.java
+++ 
b/src/java/org/apache/cassandra/cql3/restrictions/ClusteringColumnRestrictions.java
@@ -42,16 +42,28 @@ final class ClusteringColumnRestrictions extends 
RestrictionSetWrapper
  */
 protected final ClusteringComparator comparator;
 
+/**
+ * true if filtering is allowed for this restriction, 
false otherwise
+ */
+private final boolean allowFiltering;
+
 public ClusteringColumnRestrictions(CFMetaData cfm)
 {
-super(new RestrictionSet());
-this.comparator = cfm.comparator;
+this(cfm, false);
 }
 
-private ClusteringColumnRestrictions(ClusteringComparator comparator, 
RestrictionSet restrictionSet)
+public ClusteringColumnRestrictions(CFMetaData cfm, boolean allowFiltering)
+{
+this(cfm.comparator, new RestrictionSet(), allowFiltering);
+}
+
+private ClusteringColumnRestrictions(ClusteringComparator comparator,
+ RestrictionSet restrictionSet,
+ boolean allowFiltering)
 {
 super(restrictionSet);
 this.comparator = comparator;
+this.allowFiltering = allowFiltering;
 }
 
 public ClusteringColumnRestrictions mergeWith(Restriction restriction) 
throws InvalidRequestException
@@ -59,7 +71,7 @@ final class ClusteringColumnRestrictions extends 
RestrictionSetWrapper
 SingleRestriction newRestriction = (SingleRestriction) restriction;
 RestrictionSet newRestrictionSet = 
restrictions.addRestriction(newRestriction);
 
-if (!isEmpty())
+if (!isEmpty() && !allowFiltering)
 {
 SingleRestriction lastRestriction = restrictions.lastRestriction();
 ColumnDefinition lastRestrictionStart = 
lastRestriction.getFirstColumn();
@@ -76,7 +88,7 @@ final class ClusteringColumnRestrictions extends 
RestrictionSetWrapper
  newRestrictionStart.name);
 }
 
-return new ClusteringColumnRestrictions(this.comparator, 
newRestrictionSet);
+return new ClusteringColumnRestrictions(this.comparator, 
newRestrictionSet, allowFiltering);
 }
 
 private boolean hasMultiColumnSlice()
@@ -105,11 +117,10 @@ final class ClusteringColumnRestrictions extends 
RestrictionSetWrapper
 {
 MultiCBuilder builder = MultiCBuilder.create(comparator, hasIN() || 
hasMultiColumnSlice());
 int keyPosition = 0;
+
 for (SingleRestriction r : restrictions)
 {
-ColumnDefinition def = r.getFirstColumn();
-
-if (keyPosition != def.position() || r.isContains() || r.isLIKE())
+if (handleInFilter(r, keyPosition))
 break;
 
 if (r.isSlice())
@@ -155,6 +166,32 @@ final class ClusteringColumnR

[1/2] cassandra git commit: Allow filtering on clustering columns for queries without secondary indexes

2016-04-15 Thread blerer
Repository: cassandra
Updated Branches:
  refs/heads/trunk 831bebdba -> a600920cb


http://git-wip-us.apache.org/repos/asf/cassandra/blob/a600920c/test/unit/org/apache/cassandra/cql3/validation/operations/SelectTest.java
--
diff --git 
a/test/unit/org/apache/cassandra/cql3/validation/operations/SelectTest.java 
b/test/unit/org/apache/cassandra/cql3/validation/operations/SelectTest.java
index 75334f0..6789baf 100644
--- a/test/unit/org/apache/cassandra/cql3/validation/operations/SelectTest.java
+++ b/test/unit/org/apache/cassandra/cql3/validation/operations/SelectTest.java
@@ -1447,65 +1447,66 @@ public class SelectTest extends CQLTester
 execute("DELETE FROM %s WHERE a = 1 AND b = 1");
 execute("DELETE FROM %s WHERE a = 2 AND b = 2");
 
-flush();
+beforeAndAfterFlush(() -> {
 
-// Checks filtering
-
assertInvalidMessage(StatementRestrictions.REQUIRES_ALLOW_FILTERING_MESSAGE,
- "SELECT * FROM %s WHERE c = 4 AND d = 8");
+// Checks filtering
+
assertInvalidMessage(StatementRestrictions.REQUIRES_ALLOW_FILTERING_MESSAGE,
+ "SELECT * FROM %s WHERE c = 4 AND d = 8");
 
-assertRows(execute("SELECT * FROM %s WHERE c = 4 AND d = 8 ALLOW 
FILTERING"),
-   row(1, 2, 1, 4, 8),
-   row(1, 4, 1, 4, 8));
+assertRows(execute("SELECT * FROM %s WHERE c = 4 AND d = 8 ALLOW 
FILTERING"),
+   row(1, 2, 1, 4, 8),
+   row(1, 4, 1, 4, 8));
 
-
assertInvalidMessage(StatementRestrictions.REQUIRES_ALLOW_FILTERING_MESSAGE,
- "SELECT * FROM %s WHERE a = 1 AND b = 4 AND d = 
8");
+
assertInvalidMessage(StatementRestrictions.REQUIRES_ALLOW_FILTERING_MESSAGE,
+ "SELECT * FROM %s WHERE a = 1 AND b = 4 AND d 
= 8");
 
-assertRows(execute("SELECT * FROM %s WHERE a = 1 AND b = 4 AND d = 8 
ALLOW FILTERING"),
-   row(1, 4, 1, 4, 8));
+assertRows(execute("SELECT * FROM %s WHERE a = 1 AND b = 4 AND d = 
8 ALLOW FILTERING"),
+   row(1, 4, 1, 4, 8));
 
-
assertInvalidMessage(StatementRestrictions.REQUIRES_ALLOW_FILTERING_MESSAGE,
- "SELECT * FROM %s WHERE s = 1 AND d = 12");
+
assertInvalidMessage(StatementRestrictions.REQUIRES_ALLOW_FILTERING_MESSAGE,
+ "SELECT * FROM %s WHERE s = 1 AND d = 12");
 
-assertRows(execute("SELECT * FROM %s WHERE s = 1 AND d = 12 ALLOW 
FILTERING"),
-   row(1, 3, 1, 6, 12));
+assertRows(execute("SELECT * FROM %s WHERE s = 1 AND d = 12 ALLOW 
FILTERING"),
+   row(1, 3, 1, 6, 12));
 
-assertInvalidMessage("IN predicates on non-primary-key columns (c) is 
not yet supported",
- "SELECT * FROM %s WHERE a IN (1, 2) AND c IN (6, 
7)");
+assertInvalidMessage("IN predicates on non-primary-key columns (c) 
is not yet supported",
+ "SELECT * FROM %s WHERE a IN (1, 2) AND c IN 
(6, 7)");
 
-assertInvalidMessage("IN predicates on non-primary-key columns (c) is 
not yet supported",
- "SELECT * FROM %s WHERE a IN (1, 2) AND c IN (6, 
7) ALLOW FILTERING");
+assertInvalidMessage("IN predicates on non-primary-key columns (c) 
is not yet supported",
+ "SELECT * FROM %s WHERE a IN (1, 2) AND c IN 
(6, 7) ALLOW FILTERING");
 
-
assertInvalidMessage(StatementRestrictions.REQUIRES_ALLOW_FILTERING_MESSAGE,
- "SELECT * FROM %s WHERE c > 4");
+
assertInvalidMessage(StatementRestrictions.REQUIRES_ALLOW_FILTERING_MESSAGE,
+ "SELECT * FROM %s WHERE c > 4");
 
-assertRows(execute("SELECT * FROM %s WHERE c > 4 ALLOW FILTERING"),
-   row(1, 3, 1, 6, 12),
-   row(2, 3, 2, 7, 12));
+assertRows(execute("SELECT * FROM %s WHERE c > 4 ALLOW FILTERING"),
+   row(1, 3, 1, 6, 12),
+   row(2, 3, 2, 7, 12));
 
-
assertInvalidMessage(StatementRestrictions.REQUIRES_ALLOW_FILTERING_MESSAGE,
-"SELECT * FROM %s WHERE s > 1");
+
assertInvalidMessage(StatementRestrictions.REQUIRES_ALLOW_FILTERING_MESSAGE,
+ "SELECT * FROM %s WHERE s > 1");
 
-assertRows(execute("SELECT * FROM %s WHERE s > 1 ALLOW FILTERING"),
-   row(2, 3, 2, 7, 12),
-   row(3, null, 3, null, null));
+assertRows(execute("SELECT * FROM %s WHERE s > 1 ALLOW FILTERING"),
+   row(2, 3, 2, 7, 12),
+   row(3, null, 3, null, null));
 
-
assertInvalidMessage(State

[jira] [Updated] (CASSANDRA-11563) dtest failure in repair_tests.incremental_repair_test.TestIncRepair.sstable_marking_test_not_intersecting_all_ranges

2016-04-15 Thread Philip Thompson (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11563?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Philip Thompson updated CASSANDRA-11563:

Resolution: Fixed
Status: Resolved  (was: Patch Available)

> dtest failure in 
> repair_tests.incremental_repair_test.TestIncRepair.sstable_marking_test_not_intersecting_all_ranges
> 
>
> Key: CASSANDRA-11563
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11563
> Project: Cassandra
>  Issue Type: Test
>Reporter: Michael Shuler
>Assignee: Philip Thompson
>  Labels: dtest
>
> example failure:
> http://cassci.datastax.com/job/trunk_novnode_dtest/344/testReport/repair_tests.incremental_repair_test/TestIncRepair/sstable_marking_test_not_intersecting_all_ranges
> Failed on CassCI build trunk_novnode_dtest #344
> Test does not appear to deal with single-token cluster testing correctly:
> {noformat}
> Error Message
> Error starting node1.
>  >> begin captured logging << 
> dtest: DEBUG: cluster ccm directory: /mnt/tmp/dtest-I164Fa
> dtest: DEBUG: Custom init_config not found. Setting defaults.
> dtest: DEBUG: Done setting configuration options:
> {   'num_tokens': None, 'phi_convict_threshold': 5, 'start_rpc': 'true'}
> - >> end captured logging << -
> Stacktrace
>   File "/usr/lib/python2.7/unittest/case.py", line 329, in run
> testMethod()
>   File 
> "/home/automaton/cassandra-dtest/repair_tests/incremental_repair_test.py", 
> line 369, in sstable_marking_test_not_intersecting_all_ranges
> cluster.populate(4, use_vnodes=True).start()
>   File "/home/automaton/ccm/ccmlib/cluster.py", line 360, in start
> raise NodeError("Error starting {0}.".format(node.name), p)
> "Error starting node1.\n >> begin captured logging << 
> \ndtest: DEBUG: cluster ccm directory: 
> /mnt/tmp/dtest-I164Fa\ndtest: DEBUG: Custom init_config not found. Setting 
> defaults.\ndtest: DEBUG: Done setting configuration options:\n{   
> 'num_tokens': None, 'phi_convict_threshold': 5, 'start_rpc': 
> 'true'}\n- >> end captured logging << 
> -"
> Standard Output
> [node1 ERROR] Invalid yaml. Those properties [num_tokens] are not valid
> [node3 ERROR] Invalid yaml. Those properties [num_tokens] are not valid
> [node2 ERROR] Invalid yaml. Those properties [num_tokens] are not valid
> [node4 ERROR] Invalid yaml. Those properties [num_tokens] are not valid
> {noformat}
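The nodes fail to start because a None-valued {{num_tokens}} option ends up in cassandra.yaml. As a rough illustration only (this is not ccm's actual config writer; {{ConfigWriter}} and {{toYaml}} are made-up names), unset options have to be dropped before serialization rather than written out:

```java
// Hypothetical sketch: null-valued options must be omitted from
// cassandra.yaml instead of serialized, otherwise the node fails with
// "Invalid yaml. Those properties [num_tokens] are not valid".
import java.util.LinkedHashMap;
import java.util.Map;

public class ConfigWriter {
    static String toYaml(Map<String, Object> options) {
        StringBuilder sb = new StringBuilder();
        for (Map.Entry<String, Object> e : options.entrySet())
            if (e.getValue() != null)   // drop unset (null) options entirely
                sb.append(e.getKey()).append(": ").append(e.getValue()).append('\n');
        return sb.toString();
    }

    public static void main(String[] args) {
        Map<String, Object> opts = new LinkedHashMap<>();
        opts.put("num_tokens", null);          // unset in the no-vnode job
        opts.put("phi_convict_threshold", 5);
        opts.put("start_rpc", "true");
        // Prints only the two set options; num_tokens is skipped.
        System.out.print(toYaml(opts));
    }
}
```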



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-11354) PrimaryKeyRestrictionSet should be refactored

2016-04-15 Thread Benjamin Lerer (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11354?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Benjamin Lerer updated CASSANDRA-11354:
---
   Resolution: Fixed
Fix Version/s: 3.6
   Status: Resolved  (was: Patch Available)

Committed into trunk at 831bebdba86ac1956852bd216a4cc62d898c87d7

> PrimaryKeyRestrictionSet should be refactored
> -
>
> Key: CASSANDRA-11354
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11354
> Project: Cassandra
>  Issue Type: Improvement
>  Components: CQL
>Reporter: Benjamin Lerer
>Assignee: Benjamin Lerer
> Fix For: 3.6
>
>
> While reviewing CASSANDRA-11310 I realized that the code of 
> {{PrimaryKeyRestrictionSet}} was really confusing.
> The two main issues are:
> * it is used for both partition key and clustering column restrictions, 
> although those types of columns require different processing
> * the {{isEQ}}, {{isSlice}}, {{isIN}} and {{isContains}} methods should not 
> be there, as the set of restrictions might not match any of those categories 
> when secondary indexes are used.
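A simplified, hypothetical model of the second issue ({{Kind}}, {{has}}, and {{is}} are invented names, not Cassandra's API): once a set mixes restriction categories, whole-set predicates like {{isEQ}} are meaningless, while per-element "has" queries remain well-defined:

```java
// Invented sketch: an "is" predicate only makes sense for a homogeneous
// restriction set; a "has" predicate asks whether any element matches.
import java.util.Arrays;
import java.util.List;

public class RestrictionKinds {
    enum Kind { EQ, SLICE, IN, CONTAINS }

    // True if any restriction in the set has the given kind.
    static boolean has(List<Kind> set, Kind k) {
        return set.contains(k);
    }

    // True only if every restriction in the set has the given kind.
    static boolean is(List<Kind> set, Kind k) {
        return set.stream().allMatch(x -> x == k);
    }

    public static void main(String[] args) {
        // e.g. "a = 1 AND b > 2" with a secondary index: EQ plus SLICE.
        List<Kind> mixed = Arrays.asList(Kind.EQ, Kind.SLICE);
        System.out.println(is(mixed, Kind.EQ));    // false: no single category fits
        System.out.println(has(mixed, Kind.SLICE)); // true
    }
}
```

This mirrors the direction of the patch, which replaces {{isIN}}-style predicates with {{hasIN}}-style ones on the new restriction-set classes.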





[jira] [Commented] (CASSANDRA-11354) PrimaryKeyRestrictionSet should be refactored

2016-04-15 Thread Benjamin Lerer (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11354?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15243442#comment-15243442
 ] 

Benjamin Lerer commented on CASSANDRA-11354:


Thanks for the review.

> PrimaryKeyRestrictionSet should be refactored
> -
>
> Key: CASSANDRA-11354
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11354
> Project: Cassandra
>  Issue Type: Improvement
>  Components: CQL
>Reporter: Benjamin Lerer
>Assignee: Benjamin Lerer
> Fix For: 3.6
>
>
> While reviewing CASSANDRA-11310 I realized that the code of 
> {{PrimaryKeyRestrictionSet}} was really confusing.
> The two main issues are:
> * it is used for both partition key and clustering column restrictions, 
> although those types of columns require different processing
> * the {{isEQ}}, {{isSlice}}, {{isIN}} and {{isContains}} methods should not 
> be there, as the set of restrictions might not match any of those categories 
> when secondary indexes are used.





[2/4] cassandra git commit: Refactor Restriction hierarchy

2016-04-15 Thread blerer
http://git-wip-us.apache.org/repos/asf/cassandra/blob/831bebdb/test/unit/org/apache/cassandra/cql3/restrictions/ClusteringColumnRestrictionsTest.java
--
diff --git 
a/test/unit/org/apache/cassandra/cql3/restrictions/ClusteringColumnRestrictionsTest.java
 
b/test/unit/org/apache/cassandra/cql3/restrictions/ClusteringColumnRestrictionsTest.java
new file mode 100644
index 000..5c816da
--- /dev/null
+++ 
b/test/unit/org/apache/cassandra/cql3/restrictions/ClusteringColumnRestrictionsTest.java
@@ -0,0 +1,1919 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.cassandra.cql3.restrictions;
+
+import java.nio.ByteBuffer;
+import java.util.*;
+
+import com.google.common.collect.Iterables;
+import org.junit.Test;
+
+import org.apache.cassandra.config.CFMetaData;
+import org.apache.cassandra.config.ColumnDefinition;
+import org.apache.cassandra.cql3.*;
+import org.apache.cassandra.cql3.Term.MultiItemTerminal;
+import org.apache.cassandra.cql3.statements.Bound;
+
+import org.apache.cassandra.db.*;
+import org.apache.cassandra.db.marshal.AbstractType;
+import org.apache.cassandra.db.marshal.Int32Type;
+import org.apache.cassandra.db.marshal.ReversedType;
+import org.apache.cassandra.utils.ByteBufferUtil;
+
+import static java.util.Arrays.asList;
+import static org.junit.Assert.assertEquals;
+import static org.junit.Assert.assertTrue;
+
+public class ClusteringColumnRestrictionsTest
+{
+@Test
+public void testBoundsAsClusteringWithNoRestrictions()
+{
+CFMetaData cfMetaData = newCFMetaData(Sort.ASC);
+
+ClusteringColumnRestrictions restrictions = new 
ClusteringColumnRestrictions(cfMetaData);
+
+SortedSet bounds = 
restrictions.boundsAsClustering(Bound.START, QueryOptions.DEFAULT);
+assertEquals(1, bounds.size());
+assertEmptyStart(get(bounds, 0));
+
+bounds = restrictions.boundsAsClustering(Bound.END, 
QueryOptions.DEFAULT);
+assertEquals(1, bounds.size());
+assertEmptyEnd(get(bounds, 0));
+}
+
+/**
+ * Test 'clustering_0 = 1' with only one clustering column
+ */
+@Test
+public void 
testBoundsAsClusteringWithOneEqRestrictionsAndOneClusteringColumn()
+{
+CFMetaData cfMetaData = newCFMetaData(Sort.ASC);
+
+ByteBuffer clustering_0 = ByteBufferUtil.bytes(1);
+Restriction eq = newSingleEq(cfMetaData, 0, clustering_0);
+
+ClusteringColumnRestrictions restrictions = new 
ClusteringColumnRestrictions(cfMetaData);
+restrictions = restrictions.mergeWith(eq);
+
+SortedSet bounds = 
restrictions.boundsAsClustering(Bound.START, QueryOptions.DEFAULT);
+assertEquals(1, bounds.size());
+assertStartBound(get(bounds, 0), true, clustering_0);
+
+bounds = restrictions.boundsAsClustering(Bound.END, 
QueryOptions.DEFAULT);
+assertEquals(1, bounds.size());
+assertEndBound(get(bounds, 0), true, clustering_0);
+}
+
+/**
+ * Test 'clustering_1 = 1' with 2 clustering columns
+ */
+@Test
+public void 
testBoundsAsClusteringWithOneEqRestrictionsAndTwoClusteringColumns()
+{
+CFMetaData cfMetaData = newCFMetaData(Sort.ASC, Sort.ASC);
+
+ByteBuffer clustering_0 = ByteBufferUtil.bytes(1);
+Restriction eq = newSingleEq(cfMetaData, 0, clustering_0);
+
+ClusteringColumnRestrictions restrictions = new 
ClusteringColumnRestrictions(cfMetaData);
+restrictions = restrictions.mergeWith(eq);
+
+SortedSet bounds = 
restrictions.boundsAsClustering(Bound.START, QueryOptions.DEFAULT);
+assertEquals(1, bounds.size());
+assertStartBound(get(bounds, 0), true, clustering_0);
+
+bounds = restrictions.boundsAsClustering(Bound.END, 
QueryOptions.DEFAULT);
+assertEquals(1, bounds.size());
+assertEndBound(get(bounds, 0), true, clustering_0);
+}
+
+/**
+ * Test 'clustering_0 IN (1, 2, 3)' with only one clustering column
+ */
+@Test
+public void 
testBoundsAsClusteringWithOneInRestrictionsAndOneClusteringColumn()
+{
+ByteBuffer value1 = ByteBufferUtil.bytes(1);
+ByteBuffer value2 

[4/4] cassandra git commit: Refactor Restriction hierarchy

2016-04-15 Thread blerer
Refactor Restriction hierarchy

patch by Benjamin Lerer; reviewed by Tyler Hobbs for CASSANDRA-11354


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/831bebdb
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/831bebdb
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/831bebdb

Branch: refs/heads/trunk
Commit: 831bebdba86ac1956852bd216a4cc62d898c87d7
Parents: dc569c9
Author: Benjamin Lerer 
Authored: Fri Apr 15 21:04:25 2016 +0200
Committer: Benjamin Lerer 
Committed: Fri Apr 15 21:13:52 2016 +0200

--
 CHANGES.txt |1 +
 .../AbstractPrimaryKeyRestrictions.java |   61 -
 .../cql3/restrictions/AbstractRestriction.java  |  108 -
 .../ClusteringColumnRestrictions.java   |  179 ++
 .../ForwardingPrimaryKeyRestrictions.java   |  197 --
 .../restrictions/MultiColumnRestriction.java|   34 +-
 .../restrictions/PartitionKeyRestrictions.java  |   51 +
 .../PartitionKeySingleRestrictionSet.java   |  132 ++
 .../restrictions/PrimaryKeyRestrictionSet.java  |  339 
 .../restrictions/PrimaryKeyRestrictions.java|   46 -
 .../cql3/restrictions/Restriction.java  |   70 +-
 .../cql3/restrictions/RestrictionSet.java   |   85 +-
 .../restrictions/RestrictionSetWrapper.java |   95 +
 .../cql3/restrictions/Restrictions.java |   54 +-
 .../restrictions/SingleColumnRestriction.java   |   43 +-
 .../cql3/restrictions/SingleRestriction.java|  117 ++
 .../restrictions/StatementRestrictions.java |   62 +-
 .../cql3/restrictions/TokenFilter.java  |   85 +-
 .../cql3/restrictions/TokenRestriction.java |   85 +-
 .../apache/cassandra/cql3/statements/Bound.java |   13 +
 .../ClusteringColumnRestrictionsTest.java   | 1919 ++
 .../PrimaryKeyRestrictionSetTest.java   | 1919 --
 22 files changed, 2748 insertions(+), 2947 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/831bebdb/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index c91a4cd..acbbfa5 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 3.6
+ * Refactor Restriction hierarchy (CASSANDRA-11354)
  * Eliminate allocations in R/W path (CASSANDRA-11421)
  * Update Netty to 4.0.36 (CASSANDRA-11567)
  * Fix PER PARTITION LIMIT for queries requiring post-query ordering 
(CASSANDRA-11556)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/831bebdb/src/java/org/apache/cassandra/cql3/restrictions/AbstractPrimaryKeyRestrictions.java
--
diff --git 
a/src/java/org/apache/cassandra/cql3/restrictions/AbstractPrimaryKeyRestrictions.java
 
b/src/java/org/apache/cassandra/cql3/restrictions/AbstractPrimaryKeyRestrictions.java
deleted file mode 100644
index f1b5a50..000
--- 
a/src/java/org/apache/cassandra/cql3/restrictions/AbstractPrimaryKeyRestrictions.java
+++ /dev/null
@@ -1,61 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- * http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.cassandra.cql3.restrictions;
-
-import java.nio.ByteBuffer;
-import java.util.*;
-
-import org.apache.cassandra.cql3.QueryOptions;
-import org.apache.cassandra.cql3.statements.Bound;
-import org.apache.cassandra.db.ClusteringPrefix;
-import org.apache.cassandra.db.ClusteringComparator;
-import org.apache.cassandra.exceptions.InvalidRequestException;
-
-/**
- * Base class for PrimaryKeyRestrictions.
- */
-abstract class AbstractPrimaryKeyRestrictions extends AbstractRestriction 
implements PrimaryKeyRestrictions
-{
-/**
- * The composite type.
- */
-protected final ClusteringComparator comparator;
-
-public AbstractPrimaryKeyRestrictions(ClusteringComparator comparator)
-{
-this.comparator = comparator;
-}
-
-@Override
-public List bounds(Bound b, QueryOptions options) throws 
InvalidRequestException
-{
-return values(options);
-}
-
-@Override
-public final

[3/4] cassandra git commit: Refactor Restriction hierarchy

2016-04-15 Thread blerer
http://git-wip-us.apache.org/repos/asf/cassandra/blob/831bebdb/src/java/org/apache/cassandra/cql3/restrictions/StatementRestrictions.java
--
diff --git 
a/src/java/org/apache/cassandra/cql3/restrictions/StatementRestrictions.java 
b/src/java/org/apache/cassandra/cql3/restrictions/StatementRestrictions.java
index 4b82189..032a622 100644
--- a/src/java/org/apache/cassandra/cql3/restrictions/StatementRestrictions.java
+++ b/src/java/org/apache/cassandra/cql3/restrictions/StatementRestrictions.java
@@ -68,12 +68,12 @@ public final class StatementRestrictions
 /**
  * Restrictions on partitioning columns
  */
-private PrimaryKeyRestrictions partitionKeyRestrictions;
+private PartitionKeyRestrictions partitionKeyRestrictions;
 
 /**
  * Restrictions on clustering columns
  */
-private PrimaryKeyRestrictions clusteringColumnsRestrictions;
+private ClusteringColumnRestrictions clusteringColumnsRestrictions;
 
 /**
  * Restriction on non-primary key columns (i.e. secondary index 
restrictions)
@@ -113,8 +113,8 @@ public final class StatementRestrictions
 {
 this.type = type;
 this.cfm = cfm;
-this.partitionKeyRestrictions = new 
PrimaryKeyRestrictionSet(cfm.getKeyValidatorAsClusteringComparator(), true);
-this.clusteringColumnsRestrictions = new 
PrimaryKeyRestrictionSet(cfm.comparator, false);
+this.partitionKeyRestrictions = new 
PartitionKeySingleRestrictionSet(cfm.getKeyValidatorAsClusteringComparator());
+this.clusteringColumnsRestrictions = new 
ClusteringColumnRestrictions(cfm);
 this.nonPrimaryKeyRestrictions = new RestrictionSet();
 this.notNullColumns = new HashSet<>();
 }
@@ -224,7 +224,7 @@ public final class StatementRestrictions
 if (isKeyRange && hasQueriableClusteringColumnIndex)
 usesSecondaryIndexing = true;
 
-usesSecondaryIndexing = usesSecondaryIndexing || 
clusteringColumnsRestrictions.isContains();
+usesSecondaryIndexing = usesSecondaryIndexing || 
clusteringColumnsRestrictions.hasContains();
 
 if (usesSecondaryIndexing)
 indexRestrictions.add(clusteringColumnsRestrictions);
@@ -255,12 +255,13 @@ public final class StatementRestrictions
 
 private void addRestriction(Restriction restriction)
 {
-if (restriction.isMultiColumn())
-clusteringColumnsRestrictions = 
clusteringColumnsRestrictions.mergeWith(restriction);
-else if (restriction.isOnToken())
+ColumnDefinition def = restriction.getFirstColumn();
+if (def.isPartitionKey())
 partitionKeyRestrictions = 
partitionKeyRestrictions.mergeWith(restriction);
+else if (def.isClusteringColumn())
+clusteringColumnsRestrictions = 
clusteringColumnsRestrictions.mergeWith(restriction);
 else
-addSingleColumnRestriction((SingleColumnRestriction) restriction);
+nonPrimaryKeyRestrictions = 
nonPrimaryKeyRestrictions.addRestriction((SingleRestriction) restriction);
 }
 
 public Iterable getFunctions()
@@ -276,17 +277,6 @@ public final class StatementRestrictions
 return indexRestrictions;
 }
 
-private void addSingleColumnRestriction(SingleColumnRestriction 
restriction)
-{
-ColumnDefinition def = restriction.columnDef;
-if (def.isPartitionKey())
-partitionKeyRestrictions = 
partitionKeyRestrictions.mergeWith(restriction);
-else if (def.isClusteringColumn())
-clusteringColumnsRestrictions = 
clusteringColumnsRestrictions.mergeWith(restriction);
-else
-nonPrimaryKeyRestrictions = 
nonPrimaryKeyRestrictions.addRestriction(restriction);
-}
-
 /**
  * Returns the non-PK column that are restricted.  If 
includeNotNullRestrictions is true, columns that are restricted
  * by an IS NOT NULL restriction will be included, otherwise they will not 
be included (unless another restriction
@@ -338,14 +328,14 @@ public final class StatementRestrictions
 }
 
 /**
- * Checks if the restrictions on the partition key is an IN restriction.
+ * Checks if the restrictions on the partition key has IN restrictions.
  *
- * @return true the restrictions on the partition key is an 
IN restriction, false
+ * @return true the restrictions on the partition key has an 
IN restriction, false
  * otherwise.
  */
 public boolean keyIsInRelation()
 {
-return partitionKeyRestrictions.isIN();
+return partitionKeyRestrictions.hasIN();
 }
 
 /**
@@ -463,7 +453,7 @@ public final class StatementRestrictions
   boolean 
selectsComplexColumn,
   boolean forView) throws 
InvalidRequestException
 {
-checkFalse(!type.allowClusteringColumnSlices() && 
clust

[1/4] cassandra git commit: Refactor Restriction hierarchy

2016-04-15 Thread blerer
Repository: cassandra
Updated Branches:
  refs/heads/trunk dc569c9e0 -> 831bebdba


http://git-wip-us.apache.org/repos/asf/cassandra/blob/831bebdb/test/unit/org/apache/cassandra/cql3/restrictions/PrimaryKeyRestrictionSetTest.java
--
diff --git 
a/test/unit/org/apache/cassandra/cql3/restrictions/PrimaryKeyRestrictionSetTest.java
 
b/test/unit/org/apache/cassandra/cql3/restrictions/PrimaryKeyRestrictionSetTest.java
deleted file mode 100644
index abbd36b..000
--- 
a/test/unit/org/apache/cassandra/cql3/restrictions/PrimaryKeyRestrictionSetTest.java
+++ /dev/null
@@ -1,1919 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- * http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.cassandra.cql3.restrictions;
-
-import java.nio.ByteBuffer;
-import java.util.*;
-
-import com.google.common.collect.Iterables;
-import org.junit.Test;
-
-import org.apache.cassandra.config.CFMetaData;
-import org.apache.cassandra.config.ColumnDefinition;
-import org.apache.cassandra.cql3.*;
-import org.apache.cassandra.cql3.Term.MultiItemTerminal;
-import org.apache.cassandra.cql3.statements.Bound;
-
-import org.apache.cassandra.db.*;
-import org.apache.cassandra.db.marshal.AbstractType;
-import org.apache.cassandra.db.marshal.Int32Type;
-import org.apache.cassandra.db.marshal.ReversedType;
-import org.apache.cassandra.utils.ByteBufferUtil;
-
-import static java.util.Arrays.asList;
-import static org.junit.Assert.assertEquals;
-import static org.junit.Assert.assertTrue;
-
-public class PrimaryKeyRestrictionSetTest
-{
-@Test
-public void testBoundsAsClusteringWithNoRestrictions()
-{
-CFMetaData cfMetaData = newCFMetaData(Sort.ASC);
-
-PrimaryKeyRestrictions restrictions = new 
PrimaryKeyRestrictionSet(cfMetaData.comparator, false);
-
-SortedSet bounds = 
restrictions.boundsAsClustering(Bound.START, QueryOptions.DEFAULT);
-assertEquals(1, bounds.size());
-assertEmptyStart(get(bounds, 0));
-
-bounds = restrictions.boundsAsClustering(Bound.END, 
QueryOptions.DEFAULT);
-assertEquals(1, bounds.size());
-assertEmptyEnd(get(bounds, 0));
-}
-
-/**
- * Test 'clustering_0 = 1' with only one clustering column
- */
-@Test
-public void 
testBoundsAsClusteringWithOneEqRestrictionsAndOneClusteringColumn()
-{
-CFMetaData cfMetaData = newCFMetaData(Sort.ASC);
-
-ByteBuffer clustering_0 = ByteBufferUtil.bytes(1);
-Restriction eq = newSingleEq(cfMetaData, 0, clustering_0);
-
-PrimaryKeyRestrictions restrictions = new 
PrimaryKeyRestrictionSet(cfMetaData.comparator, false);
-restrictions = restrictions.mergeWith(eq);
-
-SortedSet bounds = 
restrictions.boundsAsClustering(Bound.START, QueryOptions.DEFAULT);
-assertEquals(1, bounds.size());
-assertStartBound(get(bounds, 0), true, clustering_0);
-
-bounds = restrictions.boundsAsClustering(Bound.END, 
QueryOptions.DEFAULT);
-assertEquals(1, bounds.size());
-assertEndBound(get(bounds, 0), true, clustering_0);
-}
-
-/**
- * Test 'clustering_1 = 1' with 2 clustering columns
- */
-@Test
-public void 
testBoundsAsClusteringWithOneEqRestrictionsAndTwoClusteringColumns()
-{
-CFMetaData cfMetaData = newCFMetaData(Sort.ASC, Sort.ASC);
-
-ByteBuffer clustering_0 = ByteBufferUtil.bytes(1);
-Restriction eq = newSingleEq(cfMetaData, 0, clustering_0);
-
-PrimaryKeyRestrictions restrictions = new 
PrimaryKeyRestrictionSet(cfMetaData.comparator, false);
-restrictions = restrictions.mergeWith(eq);
-
-SortedSet bounds = 
restrictions.boundsAsClustering(Bound.START, QueryOptions.DEFAULT);
-assertEquals(1, bounds.size());
-assertStartBound(get(bounds, 0), true, clustering_0);
-
-bounds = restrictions.boundsAsClustering(Bound.END, 
QueryOptions.DEFAULT);
-assertEquals(1, bounds.size());
-assertEndBound(get(bounds, 0), true, clustering_0);
-}
-
-/**
- * Test 'clustering_0 IN (1, 2, 3)' with only one clustering column
- */
-@Test
-public void 
testBoundsAsClusteringWithOneInRestrictionsAndOneClusteringColum

[jira] [Commented] (CASSANDRA-11583) Exception when streaming sstables using `sstableloader`

2016-04-15 Thread Jens Rantil (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11583?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15243406#comment-15243406
 ] 

Jens Rantil commented on CASSANDRA-11583:
-

I've now upgraded the full cluster to 2.1.13 and am still receiving the same 
exception, so this does not seem to be a version-incompatibility issue.

Interestingly, I also set up a temporary one-node (2.1.13) cluster, and 
importing the same sstables into that cluster worked without any exceptions. 
I have also ruled out the firewall as a cause (I temporarily disabled it).

> Exception when streaming sstables using `sstableloader`
> ---
>
> Key: CASSANDRA-11583
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11583
> Project: Cassandra
>  Issue Type: Bug
>  Components: Tools
> Environment: $ uname -a
> Linux bigdb-100 3.2.0-99-virtual #139-Ubuntu SMP Mon Feb 1 23:52:21 UTC 2016 
> x86_64 x86_64 x86_64 GNU/Linux
> I am using Datastax Enterprise 4.7.8-1 which is based on 2.1.13.
>Reporter: Jens Rantil
>
> This bug came out of CASSANDRA-11562.
> I have have a keyspace snapshotted from a 2.1.11 (DSE 4.7.5-1) node. When I'm 
> running the `sstableloader` I get the following output/exception:
> {noformat}
> # sstableloader --nodes X.X.X.20 --username YYY --password ZZZ --ignore XXX 
> /var/lib/cassandra/data/XXX/ZZZ-f7ebdf0daa3a3062828fddebc109a3b2
> Established connection to initial hosts
> Opening sstables and calculating sections to stream
> Streaming relevant part of 
> /var/lib/cassandra/data/XXX/ZZZ-f7ebdf0daa3a3062828fddebc109a3b2/XXX-ZZZ-ka-6463-Data.db
>  
> /var/lib/cassandra/data/XXX/ZZZ-f7ebdf0daa3a3062828fddebc109a3b2/tink-ZZZ-ka-6464-Data.db
>  to [/X.X.X.33, /X.X.X.113, /X.X.X.32, /X.X.X.20, /X.X.X.122, /X.X.X.176, 
> /X.X.X.143, /X.X.X.172, /X.X.X.50, /X.X.X.51, /X.X.X.52, /X.X.X.71, 
> /X.X.X.53, /X.X.X.54, /X.X.X.47, /X.X.X.31, /X.X.X.8]
> progress: [/X.X.X.113]0:0/2 0  % [/X.X.X.143]0:0/2 0  % [/X.X.X.172]0:0/2 0  
> % [/X.X.X.20]0:0/2 0  % [/X.X.X.71]0:0/2 0  % [/X.X.X.122]0:0/2 0  % 
> [/X.X.X.47]0:0/2 progress: [/X.X.X.113]0:0/2 0  % [/X.X.X.143]0:0/2 0  % 
> [/X.X.X.172]0:0/2 0  % [/X.X.X.20]0:1/2 1  % [/X.X.X.71]0:0/2 0  % 
> [/X.X.X.122]0:0/2 0  % [/X.X.X.47]0:0/2 progress: [/X.X.X.113]0:0/2 0  % 
> [/X.X.X.143]0:0/2 1  % [/X.X.X.172]0:0/2 0  % [/X.X.X.20]0:1/2 1  % 
> [/X.X.X.71]0:0/2 0  % [/X.X.X.122]0:0/2 0  % [/X.X.X.47]0:0/2 progress: 
> [/X.X.X.113]0:0/2 0  % [/X.X.X.143]0:1/2 1  % [/X.X.X.172]0:0/2 0  % 
> [/X.X.X.20]0:1/2 1  % [/X.X.X.71]0:0/2 0  % [/X.X.X.122]0:0/2 0  % 
> [/X.X.X.47]0:0/2 progress: [/X.X.X.113]0:0/2 0  % [/X.X.X.143]0:1/2 1  % 
> [/X.X.X.172]0:0/2 0  % [/X.X.X.20]0:1/2 1  % [/X.X.X.71]0:1/2 1  % 
> [/X.X.X.122]0:0/2 0  % [/X.X.X.47]0:0/2 progress: [/X.X.X.113]0:0/2 0  % 
> [/X.X.X.143]0:1/2 1  % [/X.X.X.172]0:0/2 0  % [/X.X.X.20]0:1/2 1  % 
> [/X.X.X.71]0:1/2 1  % [/X.X.X.122]0:1/2 1  % [/X.X.X.47]0:0/2 progress: 
> [/X.X.X.113]0:0/2 0  % [/X.X.X.143]0:1/2 1  % [/X.X.X.172]0:0/2 0  % 
> [/X.X.X.20]0:1/2 1  % [/X.X.X.71]0:1/2 1  % [/X.X.X.122]0:1/2 1  % 
> [/X.X.X.47]0:0/2 progress: [/X.X.X.113]0:0/2 7  % [/X.X.X.143]0:1/2 1  % 
> [/X.X.X.172]0:0/2 0  % [/X.X.X.20]0:1/2 1  % [/X.X.X.71]0:1/2 1  % 
> [/X.X.X.122]0:1/2 1  % [/X.X.X.47]0:0/2 progress: [/X.X.X.113]0:0/2 7  % 
> [/X.X.X.143]0:1/2 6  % [/X.X.X.172]0:0/2 0  % [/X.X.X.20]0:1/2 1  % 
> [/X.X.X.71]0:1/2 1  % [/X.X.X.122]0:1/2 1  % [/X.X.X.47]0:0/2 progress: 
> [/X.X.X.113]0:0/2 12 % [/X.X.X.143]0:1/2 6  % [/X.X.X.172]0:0/2 0  % 
> [/X.X.X.20]0:1/2 1  % [/X.X.X.71]0:1/2 1  % [/X.X.X.122]0:1/2 1  % 
> [/X.X.X.47]0:0/2 progress: [/X.X.X.113]0:0/2 12 % [/X.X.X.143]0:1/2 11 % 
> [/X.X.X.172]0:0/2 0  % [/X.X.X.20]0:1/2 1  % [/X.X.X.71]0:1/2 1  % 
> [/X.X.X.122]0:1/2 1  % [/X.X.X.47]0:0/2 progress: [/X.X.X.113]0:0/2 19 % 
> [/X.X.X.143]0:1/2 11 % [/X.X.X.172]0:0/2 0  % [/X.X.X.20]0:1/2 1  % 
> [/X.X.X.71]0:1/2 1  % [/X.X.X.122]0:1/2 1  % [/X.X.X.47]0:0/2 progress: 
> [/X.X.X.113]0:0/2 19 % [/X.X.X.143]0:1/2 15 % [/X.X.X.172]0:0/2 0  % 
> [/X.X.X.20]0:1/2 1  % [/X.X.X.71]0:1/2 1  % [/X.X.X.122]0:1/2 1  % 
> [/X.X.X.47]0:0/2 progress: [/X.X.X.113]0:0/2 26 % [/X.X.X.143]0:1/2 15 % 
> [/X.X.X.172]0:0/2 0  % [/X.X.X.20]0:1/2 1  % [/X.X.X.71]0:1/2 1  % 
> [/X.X.X.122]0:1/2 1  % [/X.X.X.47]0:0/2 progress: [/X.X.X.113]0:0/2 26 % 
> [/X.X.X.143]0:1/2 20 % [/X.X.X.172]0:0/2 0  % [/X.X.X.20]0:1/2 1  % 
> [/X.X.X.71]0:1/2 1  % [/X.X.X.122]0:1/2 1  % [/X.X.X.47]0:0/2 progress: 
> [/X.X.X.113]0:0/2 26 % [/X.X.X.143]0:1/2 21 % [/X.X.X.172]0:0/2 0  % 
> [/X.X.X.20]0:1/2 1  % [/X.X.X.71]0:1/2 1  % [/X.X.X.122]0:1/2 1  % 
> [/X.X.X.47]0:0/2 progress: [/X.X.X.113]0:0/2 26 % [/X.X.X.143]0:1/2 21 % 
> [/X.X.X.172]0:0/2 0  % [/X.X.X.20]0:1/2 3  % [/X.X.X.71]0:1/2 1  % 
> [/X.X.X.122]0:1/2 1  % [/X.X.X.47]0:0/2 progress: [/X.X.X.113

[jira] [Updated] (CASSANDRA-11485) ArithmeticException in avgFunctionForDecimal

2016-04-15 Thread Robert Stupp (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11485?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Stupp updated CASSANDRA-11485:
-
   Resolution: Fixed
Fix Version/s: (was: 3.0.x)
   3.0.6
   3.6
   Status: Resolved  (was: Patch Available)

Thanks!
Committed as ce445991fab05d2ba404f6289796664dd581662a to 3.0 and merged to trunk

> ArithmeticException in avgFunctionForDecimal
> 
>
> Key: CASSANDRA-11485
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11485
> Project: Cassandra
>  Issue Type: Bug
>  Components: CQL
>Reporter: Nico Haller
>Assignee: Robert Stupp
>Priority: Minor
> Fix For: 3.6, 3.0.6
>
>
> I am running into issues when using avg in queries on decimal values.
> It throws an ArithmeticException in 
> org/apache/cassandra/cql3/functions/AggregateFcts.java (Line 184).
> So whenever an exact representation of the quotient is not possible, it 
> throws that error and never returns a result to the querying client.
> I am not sure whether this is intended behavior or a bug, but in my opinion, 
> if an exact representation of the value is not possible, the value should be 
> rounded automatically.
> Specifying a rounding mode when calling the divide function should solve the 
> issue.
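The failure mode is easy to reproduce with plain {{BigDecimal}}; the sketch below (the class name {{DecimalAvg}} is made up) mirrors the reported behavior and the rounding-mode fix:

```java
// Dividing two BigDecimals whose quotient has a non-terminating decimal
// expansion throws ArithmeticException unless a scale/rounding mode is given.
import java.math.BigDecimal;
import java.math.RoundingMode;

public class DecimalAvg {
    public static void main(String[] args) {
        BigDecimal sum = new BigDecimal("1");
        BigDecimal count = new BigDecimal("3");

        try {
            sum.divide(count);  // 1/3 has no exact decimal representation
        } catch (ArithmeticException e) {
            System.out.println("ArithmeticException: " + e.getMessage());
        }

        // Supplying a scale and rounding mode makes the division total.
        System.out.println(sum.divide(count, 16, RoundingMode.HALF_EVEN));
    }
}
```

The committed fix takes the same approach, passing a half-even rounding mode to {{divide}} in the avg aggregate for decimals.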





[2/3] cassandra git commit: ArithmeticException in avgFunctionForDecimal

2016-04-15 Thread snazy
ArithmeticException in avgFunctionForDecimal

patch by Robert Stupp; reviewed by Tyler Hobbs for CASSANDRA-11485


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/ce445991
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/ce445991
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/ce445991

Branch: refs/heads/trunk
Commit: ce445991fab05d2ba404f6289796664dd581662a
Parents: 9f557ff
Author: Robert Stupp 
Authored: Fri Apr 15 20:31:49 2016 +0200
Committer: Robert Stupp 
Committed: Fri Apr 15 20:31:49 2016 +0200

--
 CHANGES.txt  |  1 +
 .../cassandra/cql3/functions/AggregateFcts.java  |  8 +---
 .../validation/operations/AggregationTest.java   | 19 +++
 3 files changed, 25 insertions(+), 3 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/ce445991/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 3b4d473..85660d9 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 3.0.6
+ * ArithmeticException in avgFunctionForDecimal (CASSANDRA-11485)
  * Allow only DISTINCT queries with partition keys or static columns 
restrictions (CASSANDRA-11339)
  * LogAwareFileLister should only use OLD sstable files in current folder to 
determine disk consistency (CASSANDRA-11470)
  * Notify indexers of expired rows during compaction (CASSANDRA-11329)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/ce445991/src/java/org/apache/cassandra/cql3/functions/AggregateFcts.java
--
diff --git a/src/java/org/apache/cassandra/cql3/functions/AggregateFcts.java 
b/src/java/org/apache/cassandra/cql3/functions/AggregateFcts.java
index a1b67e1..79a08cd 100644
--- a/src/java/org/apache/cassandra/cql3/functions/AggregateFcts.java
+++ b/src/java/org/apache/cassandra/cql3/functions/AggregateFcts.java
@@ -19,6 +19,7 @@ package org.apache.cassandra.cql3.functions;
 
 import java.math.BigDecimal;
 import java.math.BigInteger;
+import java.math.RoundingMode;
 import java.nio.ByteBuffer;
 import java.util.ArrayList;
 import java.util.Collection;
@@ -184,9 +185,9 @@ public abstract class AggregateFcts
 public ByteBuffer compute(int protocolVersion)
 {
 if (count == 0)
-return ((DecimalType) 
returnType()).decompose(BigDecimal.ZERO);
+return 
DecimalType.instance.decompose(BigDecimal.ZERO);
 
-return ((DecimalType) 
returnType()).decompose(sum.divide(BigDecimal.valueOf(count)));
+return 
DecimalType.instance.decompose(sum.divide(BigDecimal.valueOf(count), 
BigDecimal.ROUND_HALF_EVEN));
 }
 
 public void addInput(int protocolVersion, 
List values)
@@ -197,13 +198,14 @@ public abstract class AggregateFcts
 return;
 
 count++;
-BigDecimal number = ((BigDecimal) 
argTypes().get(0).compose(value));
+BigDecimal number = 
DecimalType.instance.compose(value);
 sum = sum.add(number);
 }
 };
 }
 };
 
+
 /**
  * The SUM function for varint values.
  */

http://git-wip-us.apache.org/repos/asf/cassandra/blob/ce445991/test/unit/org/apache/cassandra/cql3/validation/operations/AggregationTest.java
--
diff --git 
a/test/unit/org/apache/cassandra/cql3/validation/operations/AggregationTest.java
 
b/test/unit/org/apache/cassandra/cql3/validation/operations/AggregationTest.java
index e5420c9..411d5ee 100644
--- 
a/test/unit/org/apache/cassandra/cql3/validation/operations/AggregationTest.java
+++ 
b/test/unit/org/apache/cassandra/cql3/validation/operations/AggregationTest.java
@@ -18,6 +18,8 @@
 package org.apache.cassandra.cql3.validation.operations;
 
 import java.math.BigDecimal;
+import java.math.MathContext;
+import java.math.RoundingMode;
 import java.nio.ByteBuffer;
 import java.text.SimpleDateFormat;
 import java.util.Arrays;
@@ -1950,4 +1952,21 @@ public class AggregationTest extends CQLTester
             }
         }
     }
+
+    @Test
+    public void testArithmeticCorrectness() throws Throwable
+    {
+        createTable("create table %s (bucket int primary key, val decimal)");
+        execute("insert into %s (bucket, val) values (1, 0.25)");
+        execute("insert into %s (bucket, val) values (2, 0.25)");
+        execute("insert into %s (bucket, val) values (3, 0.5)");

[1/3] cassandra git commit: ArithmeticException in avgFunctionForDecimal

2016-04-15 Thread snazy
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-3.0 9f557ff7d -> ce445991f
  refs/heads/trunk 4e09d76e7 -> dc569c9e0


ArithmeticException in avgFunctionForDecimal

patch by Robert Stupp; reviewed by Tyler Hobbs for CASSANDRA-11485


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/ce445991
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/ce445991
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/ce445991

Branch: refs/heads/cassandra-3.0
Commit: ce445991fab05d2ba404f6289796664dd581662a
Parents: 9f557ff
Author: Robert Stupp 
Authored: Fri Apr 15 20:31:49 2016 +0200
Committer: Robert Stupp 
Committed: Fri Apr 15 20:31:49 2016 +0200

--
 CHANGES.txt  |  1 +
 .../cassandra/cql3/functions/AggregateFcts.java  |  8 +---
 .../validation/operations/AggregationTest.java   | 19 +++
 3 files changed, 25 insertions(+), 3 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/ce445991/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 3b4d473..85660d9 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 3.0.6
+ * ArithmeticException in avgFunctionForDecimal (CASSANDRA-11485)
  * Allow only DISTINCT queries with partition keys or static columns restrictions (CASSANDRA-11339)
  * LogAwareFileLister should only use OLD sstable files in current folder to determine disk consistency (CASSANDRA-11470)
  * Notify indexers of expired rows during compaction (CASSANDRA-11329)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/ce445991/src/java/org/apache/cassandra/cql3/functions/AggregateFcts.java
--
diff --git a/src/java/org/apache/cassandra/cql3/functions/AggregateFcts.java b/src/java/org/apache/cassandra/cql3/functions/AggregateFcts.java
index a1b67e1..79a08cd 100644
--- a/src/java/org/apache/cassandra/cql3/functions/AggregateFcts.java
+++ b/src/java/org/apache/cassandra/cql3/functions/AggregateFcts.java
@@ -19,6 +19,7 @@ package org.apache.cassandra.cql3.functions;
 
 import java.math.BigDecimal;
 import java.math.BigInteger;
+import java.math.RoundingMode;
 import java.nio.ByteBuffer;
 import java.util.ArrayList;
 import java.util.Collection;
@@ -184,9 +185,9 @@ public abstract class AggregateFcts
             public ByteBuffer compute(int protocolVersion)
             {
                 if (count == 0)
-                    return ((DecimalType) returnType()).decompose(BigDecimal.ZERO);
+                    return DecimalType.instance.decompose(BigDecimal.ZERO);
 
-                return ((DecimalType) returnType()).decompose(sum.divide(BigDecimal.valueOf(count)));
+                return DecimalType.instance.decompose(sum.divide(BigDecimal.valueOf(count), BigDecimal.ROUND_HALF_EVEN));
             }
 
             public void addInput(int protocolVersion, List<ByteBuffer> values)
@@ -197,13 +198,14 @@ public abstract class AggregateFcts
                     return;
 
                 count++;
-                BigDecimal number = ((BigDecimal) argTypes().get(0).compose(value));
+                BigDecimal number = DecimalType.instance.compose(value);
                 sum = sum.add(number);
             }
         };
     }
 };
 
+
     /**
      * The SUM function for varint values.
      */
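The AggregateFcts change above hinges on one property of `java.math.BigDecimal`: the single-argument `divide` demands an exact quotient and throws `ArithmeticException` when the decimal expansion does not terminate (e.g. 1/3), while the overload that takes a rounding mode always succeeds. A standalone before/after sketch (the class and helper below are illustrative, not Cassandra code):

```java
import java.math.BigDecimal;
import java.math.RoundingMode;

public class AvgDecimalDemo
{
    /** Post-patch shape of the average: divide with an explicit rounding mode. */
    static BigDecimal average(BigDecimal sum, long count)
    {
        // HALF_EVEN matches the rounding mode chosen by the patch
        // (BigDecimal.ROUND_HALF_EVEN is the equivalent int constant).
        return sum.divide(BigDecimal.valueOf(count), RoundingMode.HALF_EVEN);
    }

    public static void main(String[] args)
    {
        BigDecimal sum = new BigDecimal("1.00");

        // Pre-patch behavior: 1.00/3 has a non-terminating decimal expansion,
        // so the exact divide() overload throws -- the bug in CASSANDRA-11485.
        try
        {
            sum.divide(BigDecimal.valueOf(3));
            throw new AssertionError("expected ArithmeticException");
        }
        catch (ArithmeticException expected)
        {
            // "Non-terminating decimal expansion; no exact representable decimal result."
        }

        // Post-patch behavior: the division is total once a rounding mode is
        // given; the result keeps the dividend's scale.
        System.out.println(average(sum, 3)); // prints 0.33
    }
}
```

`BigDecimal.ROUND_HALF_EVEN` (the int constant used in the patch) and the `RoundingMode.HALF_EVEN` enum select the same banker's rounding; later JDKs deprecated the int constants in favor of the enum.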

http://git-wip-us.apache.org/repos/asf/cassandra/blob/ce445991/test/unit/org/apache/cassandra/cql3/validation/operations/AggregationTest.java
--
diff --git a/test/unit/org/apache/cassandra/cql3/validation/operations/AggregationTest.java b/test/unit/org/apache/cassandra/cql3/validation/operations/AggregationTest.java
index e5420c9..411d5ee 100644
--- a/test/unit/org/apache/cassandra/cql3/validation/operations/AggregationTest.java
+++ b/test/unit/org/apache/cassandra/cql3/validation/operations/AggregationTest.java
@@ -18,6 +18,8 @@
 package org.apache.cassandra.cql3.validation.operations;
 
 import java.math.BigDecimal;
+import java.math.MathContext;
+import java.math.RoundingMode;
 import java.nio.ByteBuffer;
 import java.text.SimpleDateFormat;
 import java.util.Arrays;
@@ -1950,4 +1952,21 @@ public class AggregationTest extends CQLTester
             }
         }
     }
+
+    @Test
+    public void testArithmeticCorrectness() throws Throwable
+    {
+        createTable("create table %s (bucket int primary key, val decimal)");
+        execute("insert into %s (bucket, val) values (1,

[3/3] cassandra git commit: Merge branch 'cassandra-3.0' into trunk

2016-04-15 Thread snazy
Merge branch 'cassandra-3.0' into trunk


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/dc569c9e
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/dc569c9e
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/dc569c9e

Branch: refs/heads/trunk
Commit: dc569c9e05eb5145801b438c08c9f681044d1c50
Parents: 4e09d76 ce44599
Author: Robert Stupp 
Authored: Fri Apr 15 20:33:55 2016 +0200
Committer: Robert Stupp 
Committed: Fri Apr 15 20:33:55 2016 +0200

--
 CHANGES.txt  |  1 +
 .../cassandra/cql3/functions/AggregateFcts.java  |  8 +---
 .../validation/operations/AggregationTest.java   | 19 +++
 3 files changed, 25 insertions(+), 3 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/dc569c9e/CHANGES.txt
--
diff --cc CHANGES.txt
index 7db486d,85660d9..c91a4cd
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -1,50 -1,5 +1,51 @@@
 -3.0.6
 +3.6
 + * Eliminate allocations in R/W path (CASSANDRA-11421)
 + * Update Netty to 4.0.36 (CASSANDRA-11567)
 + * Fix PER PARTITION LIMIT for queries requiring post-query ordering (CASSANDRA-11556)
 + * Allow instantiation of UDTs and tuples in UDFs (CASSANDRA-10818)
 + * Support UDT in CQLSSTableWriter (CASSANDRA-10624)
 + * Support for non-frozen user-defined types, updating
 +   individual fields of user-defined types (CASSANDRA-7423)
 + * Make LZ4 compression level configurable (CASSANDRA-11051)
 + * Allow per-partition LIMIT clause in CQL (CASSANDRA-7017)
 + * Make custom filtering more extensible with UserExpression (CASSANDRA-11295)
 + * Improve field-checking and error reporting in cassandra.yaml (CASSANDRA-10649)
 + * Print CAS stats in nodetool proxyhistograms (CASSANDRA-11507)
 + * More user friendly error when providing an invalid token to nodetool (CASSANDRA-9348)
 + * Add static column support to SASI index (CASSANDRA-11183)
 + * Support EQ/PREFIX queries in SASI CONTAINS mode without tokenization (CASSANDRA-11434)
 + * Support LIKE operator in prepared statements (CASSANDRA-11456)
 + * Add a command to see if a Materialized View has finished building (CASSANDRA-9967)
 + * Log endpoint and port associated with streaming operation (CASSANDRA-8777)
 + * Print sensible units for all log messages (CASSANDRA-9692)
 + * Upgrade Netty to version 4.0.34 (CASSANDRA-11096)
 + * Break the CQL grammar into separate Parser and Lexer (CASSANDRA-11372)
 + * Compress only inter-dc traffic by default (CASSANDRA-)
 + * Add metrics to track write amplification (CASSANDRA-11420)
 + * cassandra-stress: cannot handle "value-less" tables (CASSANDRA-7739)
 + * Add/drop multiple columns in one ALTER TABLE statement (CASSANDRA-10411)
 + * Add require_endpoint_verification opt for internode encryption (CASSANDRA-9220)
 + * Add auto import java.util for UDF code block (CASSANDRA-11392)
 + * Add --hex-format option to nodetool getsstables (CASSANDRA-11337)
 + * sstablemetadata should print sstable min/max token (CASSANDRA-7159)
 + * Do not wrap CassandraException in TriggerExecutor (CASSANDRA-9421)
 + * COPY TO should have higher double precision (CASSANDRA-11255)
 + * Stress should exit with non-zero status after failure (CASSANDRA-10340)
 + * Add client to cqlsh SHOW_SESSION (CASSANDRA-8958)
 + * Fix nodetool tablestats keyspace level metrics (CASSANDRA-11226)
 + * Store repair options in parent_repair_history (CASSANDRA-11244)
 + * Print current leveling in sstableofflinerelevel (CASSANDRA-9588)
 + * Change repair message for keyspaces with RF 1 (CASSANDRA-11203)
 + * Remove hard-coded SSL cipher suites and protocols (CASSANDRA-10508)
 + * Improve concurrency in CompactionStrategyManager (CASSANDRA-10099)
 + * (cqlsh) interpret CQL type for formatting blobs (CASSANDRA-11274)
 + * Refuse to start and print txn log information in case of disk
 +   corruption (CASSANDRA-10112)
 + * Resolve some eclipse-warnings (CASSANDRA-11086)
 + * (cqlsh) Show static columns in a different color (CASSANDRA-11059)
 + * Allow to remove TTLs on table with default_time_to_live (CASSANDRA-11207)
 +Merged from 3.0:
+  * ArithmeticException in avgFunctionForDecimal (CASSANDRA-11485)
   * Allow only DISTINCT queries with partition keys or static columns restrictions (CASSANDRA-11339)
   * LogAwareFileLister should only use OLD sstable files in current folder to determine disk consistency (CASSANDRA-11470)
   * Notify indexers of expired rows during compaction (CASSANDRA-11329)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/dc569c9e/src/java/org/apache/cassandra/cql3/functions/AggregateFcts.java
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/dc569c9e/test/unit/org

[jira] [Commented] (CASSANDRA-11497) dtest failure in sstableutil_test.SSTableUtilTest.abortedcompaction_test

2016-04-15 Thread Philip Thompson (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11497?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15243379#comment-15243379
 ] 

Philip Thompson commented on CASSANDRA-11497:
-

[~stefania], I've bumped up num_records on this test twice, and it continues to 
fail. Could this be a bug? Or do we just need to make it *much* higher?

> dtest failure in sstableutil_test.SSTableUtilTest.abortedcompaction_test
> 
>
> Key: CASSANDRA-11497
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11497
> Project: Cassandra
>  Issue Type: Test
>Reporter: Michael Shuler
>Assignee: DS Test Eng
>  Labels: dtest
>
> example failure:
> http://cassci.datastax.com/job/cassandra-3.0_dtest/637/testReport/sstableutil_test/SSTableUtilTest/abortedcompaction_test
> Failed on CassCI build cassandra-3.0_dtest #637
> Next run passed, so this could be a flaky test.
> {noformat}
> Error Message
> 0 not greater than 0
>  >> begin captured logging << 
> dtest: DEBUG: cluster ccm directory: /mnt/tmp/dtest-gbo1Uc
> dtest: DEBUG: Custom init_config not found. Setting defaults.
> dtest: DEBUG: Done setting configuration options:
> {   'initial_token': None,
> 'num_tokens': '32',
> 'phi_convict_threshold': 5,
> 'start_rpc': 'true'}
> dtest: DEBUG: About to invoke sstableutil...
> dtest: DEBUG: Listing files...
> /mnt/tmp/dtest-gbo1Uc/test/node1/data0/keyspace1/standard1-688fdfd0f83611e5ad7e97459da1b606/ma-4-big-CRC.db
> /mnt/tmp/dtest-gbo1Uc/test/node1/data0/keyspace1/standard1-688fdfd0f83611e5ad7e97459da1b606/ma-4-big-Data.db
> /mnt/tmp/dtest-gbo1Uc/test/node1/data0/keyspace1/standard1-688fdfd0f83611e5ad7e97459da1b606/ma-4-big-Digest.crc32
> /mnt/tmp/dtest-gbo1Uc/test/node1/data0/keyspace1/standard1-688fdfd0f83611e5ad7e97459da1b606/ma-4-big-Filter.db
> /mnt/tmp/dtest-gbo1Uc/test/node1/data0/keyspace1/standard1-688fdfd0f83611e5ad7e97459da1b606/ma-4-big-Index.db
> /mnt/tmp/dtest-gbo1Uc/test/node1/data0/keyspace1/standard1-688fdfd0f83611e5ad7e97459da1b606/ma-4-big-Statistics.db
> /mnt/tmp/dtest-gbo1Uc/test/node1/data0/keyspace1/standard1-688fdfd0f83611e5ad7e97459da1b606/ma-4-big-Summary.db
> /mnt/tmp/dtest-gbo1Uc/test/node1/data0/keyspace1/standard1-688fdfd0f83611e5ad7e97459da1b606/ma-4-big-TOC.txt
> /mnt/tmp/dtest-gbo1Uc/test/node1/data1/keyspace1/standard1-688fdfd0f83611e5ad7e97459da1b606/ma-1-big-CRC.db
> /mnt/tmp/dtest-gbo1Uc/test/node1/data1/keyspace1/standard1-688fdfd0f83611e5ad7e97459da1b606/ma-1-big-Data.db
> /mnt/tmp/dtest-gbo1Uc/test/node1/data1/keyspace1/standard1-688fdfd0f83611e5ad7e97459da1b606/ma-1-big-Digest.crc32
> /mnt/tmp/dtest-gbo1Uc/test/node1/data1/keyspace1/standard1-688fdfd0f83611e5ad7e97459da1b606/ma-1-big-Filter.db
> /mnt/tmp/dtest-gbo1Uc/test/node1/data1/keyspace1/standard1-688fdfd0f83611e5ad7e97459da1b606/ma-1-big-Index.db
> /mnt/tmp/dtest-gbo1Uc/test/node1/data1/keyspace1/standard1-688fdfd0f83611e5ad7e97459da1b606/ma-1-big-Statistics.db
> /mnt/tmp/dtest-gbo1Uc/test/node1/data1/keyspace1/standard1-688fdfd0f83611e5ad7e97459da1b606/ma-1-big-Summary.db
> /mnt/tmp/dtest-gbo1Uc/test/node1/data1/keyspace1/standard1-688fdfd0f83611e5ad7e97459da1b606/ma-1-big-TOC.txt
> /mnt/tmp/dtest-gbo1Uc/test/node1/data2/keyspace1/standard1-688fdfd0f83611e5ad7e97459da1b606/ma-2-big-CRC.db
> /mnt/tmp/dtest-gbo1Uc/test/node1/data2/keyspace1/standard1-688fdfd0f83611e5ad7e97459da1b606/ma-2-big-Data.db
> /mnt/tmp/dtest-gbo1Uc/test/node1/data2/keyspace1/standard1-688fdfd0f83611e5ad7e97459da1b606/ma-2-big-Digest.crc32
> /mnt/tmp/dtest-gbo1Uc/test/node1/data2/keyspace1/standard1-688fdfd0f83611e5ad7e97459da1b606/ma-2-big-Filter.db
> /mnt/tmp/dtest-gbo1Uc/test/node1/data2/keyspace1/standard1-688fdfd0f83611e5ad7e97459da1b606/ma-2-big-Index.db
> /mnt/tmp/dtest-gbo1Uc/test/node1/data2/keyspace1/standard1-688fdfd0f83611e5ad7e97459da1b606/ma-2-big-Statistics.db
> /mnt/tmp/dtest-gbo1Uc/test/node1/data2/keyspace1/standard1-688fdfd0f83611e5ad7e97459da1b606/ma-2-big-Summary.db
> /mnt/tmp/dtest-gbo1Uc/test/node1/data2/keyspace1/standard1-688fdfd0f83611e5ad7e97459da1b606/ma-2-big-TOC.txt
> /mnt/tmp/dtest-gbo1Uc/test/node1/data2/keyspace1/standard1-688fdfd0f83611e5ad7e97459da1b606/ma-3-big-CRC.db
> /mnt/tmp/dtest-gbo1Uc/test/node1/data2/keyspace1/standard1-688fdfd0f83611e5ad7e97459da1b606/ma-3-big-Data.db
> /mnt/tmp/dtest-gbo1Uc/test/node1/data2/keyspace1/standard1-688fdfd0f83611e5ad7e97459da1b606/ma-3-big-Digest.crc32
> /mnt/tmp/dtest-gbo1Uc/test/node1/data2/keyspace1/standard1-688fdfd0f83611e5ad7e97459da1b606/ma-3-big-Filter.db
> /mnt/tmp/dtest-gbo1Uc/test/node1/data2/keyspace1/standard1-688fdfd0f83611e5ad7e97459da1b606/ma-3-big-Index.db
> /mnt/tmp/dtest-gbo1Uc/test/node1/data2/keyspace1/standard1-688fdfd0f83611e5ad7e97459da1b606/ma-3-big-Statistics.db
> /mnt/

[jira] [Commented] (CASSANDRA-10406) Nodetool supports to rebuild from specific ranges.

2016-04-15 Thread Dikang Gu (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10406?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15243375#comment-15243375
 ] 

Dikang Gu commented on CASSANDRA-10406:
---

[~yukim], yes, it looks good to me, thanks!

> Nodetool supports to rebuild from specific ranges.
> --
>
> Key: CASSANDRA-10406
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10406
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Dikang Gu
>Assignee: Dikang Gu
> Fix For: 2.1.x
>
> Attachments: 0001-nodetool-rebuild-support-range-tokens.patch
>
>
> Add the 'nodetool rebuildrange' command, so that if `nodetool rebuild` 
> fails, we do not need to rebuild all the ranges and can just rebuild the 
> failed ones.
> Should be easily ported to all versions.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-11563) dtest failure in repair_tests.incremental_repair_test.TestIncRepair.sstable_marking_test_not_intersecting_all_ranges

2016-04-15 Thread Philip Thompson (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11563?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Philip Thompson updated CASSANDRA-11563:

Status: Patch Available  (was: Open)

https://github.com/riptano/cassandra-dtest/pull/929

> dtest failure in 
> repair_tests.incremental_repair_test.TestIncRepair.sstable_marking_test_not_intersecting_all_ranges
> 
>
> Key: CASSANDRA-11563
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11563
> Project: Cassandra
>  Issue Type: Test
>Reporter: Michael Shuler
>Assignee: Philip Thompson
>  Labels: dtest
>
> example failure:
> http://cassci.datastax.com/job/trunk_novnode_dtest/344/testReport/repair_tests.incremental_repair_test/TestIncRepair/sstable_marking_test_not_intersecting_all_ranges
> Failed on CassCI build trunk_novnode_dtest #344
> Test does not appear to deal with single-token cluster testing correctly:
> {noformat}
> Error Message
> Error starting node1.
>  >> begin captured logging << 
> dtest: DEBUG: cluster ccm directory: /mnt/tmp/dtest-I164Fa
> dtest: DEBUG: Custom init_config not found. Setting defaults.
> dtest: DEBUG: Done setting configuration options:
> {   'num_tokens': None, 'phi_convict_threshold': 5, 'start_rpc': 'true'}
> - >> end captured logging << -
> Stacktrace
>   File "/usr/lib/python2.7/unittest/case.py", line 329, in run
> testMethod()
>   File 
> "/home/automaton/cassandra-dtest/repair_tests/incremental_repair_test.py", 
> line 369, in sstable_marking_test_not_intersecting_all_ranges
> cluster.populate(4, use_vnodes=True).start()
>   File "/home/automaton/ccm/ccmlib/cluster.py", line 360, in start
> raise NodeError("Error starting {0}.".format(node.name), p)
> "Error starting node1.\n >> begin captured logging << 
> \ndtest: DEBUG: cluster ccm directory: 
> /mnt/tmp/dtest-I164Fa\ndtest: DEBUG: Custom init_config not found. Setting 
> defaults.\ndtest: DEBUG: Done setting configuration options:\n{   
> 'num_tokens': None, 'phi_convict_threshold': 5, 'start_rpc': 
> 'true'}\n- >> end captured logging << 
> -"
> Standard Output
> [node1 ERROR] Invalid yaml. Those properties [num_tokens] are not valid
> [node3 ERROR] Invalid yaml. Those properties [num_tokens] are not valid
> [node2 ERROR] Invalid yaml. Those properties [num_tokens] are not valid
> [node4 ERROR] Invalid yaml. Those properties [num_tokens] are not valid
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-11206) Support large partitions on the 3.0 sstable format

2016-04-15 Thread Robert Stupp (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11206?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15243356#comment-15243356
 ] 

Robert Stupp commented on CASSANDRA-11206:
--

Pushed another commit for the metrics. Intention of the metrics is to find the 
_sweet spot_ of {{column_index_cache_size_in_kb}}. In order to find that _sweet 
spot_ you need to know the size of the entries. The metrics below 
{{org.apache.cassandra.metrics:type=Index,name=RowIndexEntry}} are updated on 
each call to {{openWithIndex}}. But again, configuring 
{{column_index_cache_size_in_kb}} too high would result in GC pressure and 
probably in a bad key cache hit ratio.
* {{IndexedEntrySize}}: histogram of the size of each IndexedEntry (every type)
* {{IndexInfoCount}}: histogram of the number of IndexInfo objects per 
IndexedEntry (every type)
* {{IndexInfoGets}}: histogram of the number of gets of IndexInfo objects 
per IndexedEntry (every type), for example the number of gets for a binary 
search


> Support large partitions on the 3.0 sstable format
> --
>
> Key: CASSANDRA-11206
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11206
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Jonathan Ellis
>Assignee: Robert Stupp
> Fix For: 3.x
>
> Attachments: 11206-gc.png, trunk-gc.png
>
>
> Cassandra saves a sample of IndexInfo objects that store the offset within 
> each partition of every 64KB (by default) range of rows.  To find a row, we 
> binary search this sample, then scan the partition of the appropriate range.
> The problem is that this scales poorly as partitions grow: on a cache miss, 
> we deserialize the entire set of IndexInfo, which both creates a lot of GC 
> overhead (as noted in CASSANDRA-9754) but is also non-negligible i/o activity 
> (relative to reading a single 64KB row range) as partitions get truly large.
> We introduced an "offset map" in CASSANDRA-10314 that allows us to perform 
> the IndexInfo bsearch while only deserializing IndexInfo that we need to 
> compare against, i.e. log(N) deserializations.
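The offset-map approach the description refers to can be sketched in a few lines of standalone Java (toy layout and types, not the real RowIndexEntry/IndexInfo serialization): keep an array of byte offsets into the serialized blob, and let the binary search deserialize only the entries it actually compares, i.e. O(log N) of them instead of all N.

```java
import java.nio.ByteBuffer;

public class OffsetMapSketch
{
    // Toy "serialized IndexInfo": each entry is a single long key at a known offset.
    static long deserializeKeyAt(ByteBuffer blob, int offset)
    {
        return blob.getLong(offset); // only this one entry is materialized
    }

    /** Binary search over offsets: O(log N) deserializations instead of N. */
    static int search(ByteBuffer blob, int[] offsets, long target)
    {
        int lo = 0, hi = offsets.length - 1;
        while (lo <= hi)
        {
            int mid = (lo + hi) >>> 1;
            long key = deserializeKeyAt(blob, offsets[mid]);
            if (key < target)      lo = mid + 1;
            else if (key > target) hi = mid - 1;
            else                   return mid;
        }
        return -(lo + 1); // not found: encoded insertion point, like Arrays.binarySearch
    }

    public static void main(String[] args)
    {
        ByteBuffer blob = ByteBuffer.allocate(4 * Long.BYTES);
        int[] offsets = new int[4];
        long[] keys = { 10, 20, 30, 40 };
        for (int i = 0; i < keys.length; i++)
        {
            offsets[i] = i * Long.BYTES;
            blob.putLong(offsets[i], keys[i]);
        }
        System.out.println(search(blob, offsets, 30)); // prints 2
        System.out.println(search(blob, offsets, 25)); // prints -3
    }
}
```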



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (CASSANDRA-11585) Architecture advice on Cassandra on Kubernetes

2016-04-15 Thread Chris Love (JIRA)
Chris Love created CASSANDRA-11585:
--

 Summary: Architecture advice on Cassandra on Kubernetes
 Key: CASSANDRA-11585
 URL: https://issues.apache.org/jira/browse/CASSANDRA-11585
 Project: Cassandra
  Issue Type: Wish
  Components: Lifecycle
 Environment: Kubernetes 1.2
Reporter: Chris Love
Priority: Minor


Would appreciate some advice on the conversation that we are having in regards 
to Snitches and Seed Providers on Kubernetes.  GitHub issue is open here: 
https://github.com/kubernetes/kubernetes/issues/24286#issuecomment-210566088

Thanks

Chris



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (CASSANDRA-11563) dtest failure in repair_tests.incremental_repair_test.TestIncRepair.sstable_marking_test_not_intersecting_all_ranges

2016-04-15 Thread Philip Thompson (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11563?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Philip Thompson reassigned CASSANDRA-11563:
---

Assignee: Philip Thompson  (was: DS Test Eng)

> dtest failure in 
> repair_tests.incremental_repair_test.TestIncRepair.sstable_marking_test_not_intersecting_all_ranges
> 
>
> Key: CASSANDRA-11563
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11563
> Project: Cassandra
>  Issue Type: Test
>Reporter: Michael Shuler
>Assignee: Philip Thompson
>  Labels: dtest
>
> example failure:
> http://cassci.datastax.com/job/trunk_novnode_dtest/344/testReport/repair_tests.incremental_repair_test/TestIncRepair/sstable_marking_test_not_intersecting_all_ranges
> Failed on CassCI build trunk_novnode_dtest #344
> Test does not appear to deal with single-token cluster testing correctly:
> {noformat}
> Error Message
> Error starting node1.
>  >> begin captured logging << 
> dtest: DEBUG: cluster ccm directory: /mnt/tmp/dtest-I164Fa
> dtest: DEBUG: Custom init_config not found. Setting defaults.
> dtest: DEBUG: Done setting configuration options:
> {   'num_tokens': None, 'phi_convict_threshold': 5, 'start_rpc': 'true'}
> - >> end captured logging << -
> Stacktrace
>   File "/usr/lib/python2.7/unittest/case.py", line 329, in run
> testMethod()
>   File 
> "/home/automaton/cassandra-dtest/repair_tests/incremental_repair_test.py", 
> line 369, in sstable_marking_test_not_intersecting_all_ranges
> cluster.populate(4, use_vnodes=True).start()
>   File "/home/automaton/ccm/ccmlib/cluster.py", line 360, in start
> raise NodeError("Error starting {0}.".format(node.name), p)
> "Error starting node1.\n >> begin captured logging << 
> \ndtest: DEBUG: cluster ccm directory: 
> /mnt/tmp/dtest-I164Fa\ndtest: DEBUG: Custom init_config not found. Setting 
> defaults.\ndtest: DEBUG: Done setting configuration options:\n{   
> 'num_tokens': None, 'phi_convict_threshold': 5, 'start_rpc': 
> 'true'}\n- >> end captured logging << 
> -"
> Standard Output
> [node1 ERROR] Invalid yaml. Those properties [num_tokens] are not valid
> [node3 ERROR] Invalid yaml. Those properties [num_tokens] are not valid
> [node2 ERROR] Invalid yaml. Those properties [num_tokens] are not valid
> [node4 ERROR] Invalid yaml. Those properties [num_tokens] are not valid
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-11402) Alignment wrong in tpstats output for PerDiskMemtableFlushWriter

2016-04-15 Thread Joel Knighton (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11402?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Knighton updated CASSANDRA-11402:
--
Status: Open  (was: Patch Available)

> Alignment wrong in tpstats output for PerDiskMemtableFlushWriter
> 
>
> Key: CASSANDRA-11402
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11402
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Joel Knighton
>Assignee: Ryan Magnusson
>Priority: Trivial
>  Labels: lhf
> Fix For: 3.x
>
> Attachments: 11402-trunk.txt
>
>
> With the accompanying designation of which memtableflushwriter it is, this 
> threadpool name is too long for the hardcoded padding in tpstats output.
> We should dynamically calculate padding so that we don't need to check this 
> every time we add a threadpool.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-11206) Support large partitions on the 3.0 sstable format

2016-04-15 Thread Anonymous (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11206?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anonymous updated CASSANDRA-11206:
--
Status: Open  (was: Ready to Commit)

> Support large partitions on the 3.0 sstable format
> --
>
> Key: CASSANDRA-11206
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11206
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Jonathan Ellis
>Assignee: Robert Stupp
> Fix For: 3.x
>
> Attachments: 11206-gc.png, trunk-gc.png
>
>
> Cassandra saves a sample of IndexInfo objects that store the offset within 
> each partition of every 64KB (by default) range of rows.  To find a row, we 
> binary search this sample, then scan the partition of the appropriate range.
> The problem is that this scales poorly as partitions grow: on a cache miss, 
> we deserialize the entire set of IndexInfo, which both creates a lot of GC 
> overhead (as noted in CASSANDRA-9754) but is also non-negligible i/o activity 
> (relative to reading a single 64KB row range) as partitions get truly large.
> We introduced an "offset map" in CASSANDRA-10314 that allows us to perform 
> the IndexInfo bsearch while only deserializing IndexInfo that we need to 
> compare against, i.e. log(N) deserializations.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-11206) Support large partitions on the 3.0 sstable format

2016-04-15 Thread Anonymous (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11206?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anonymous updated CASSANDRA-11206:
--
Status: Ready to Commit  (was: Patch Available)

> Support large partitions on the 3.0 sstable format
> --
>
> Key: CASSANDRA-11206
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11206
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Jonathan Ellis
>Assignee: Robert Stupp
> Fix For: 3.x
>
> Attachments: 11206-gc.png, trunk-gc.png
>
>
> Cassandra saves a sample of IndexInfo objects that store the offset within 
> each partition of every 64KB (by default) range of rows.  To find a row, we 
> binary search this sample, then scan the partition of the appropriate range.
> The problem is that this scales poorly as partitions grow: on a cache miss, 
> we deserialize the entire set of IndexInfo, which both creates a lot of GC 
> overhead (as noted in CASSANDRA-9754) but is also non-negligible i/o activity 
> (relative to reading a single 64KB row range) as partitions get truly large.
> We introduced an "offset map" in CASSANDRA-10314 that allows us to perform 
> the IndexInfo bsearch while only deserializing IndexInfo that we need to 
> compare against, i.e. log(N) deserializations.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-11485) ArithmeticException in avgFunctionForDecimal

2016-04-15 Thread Tyler Hobbs (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11485?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tyler Hobbs updated CASSANDRA-11485:

Reviewer: Tyler Hobbs

+1, patch and tests look good

> ArithmeticException in avgFunctionForDecimal
> 
>
> Key: CASSANDRA-11485
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11485
> Project: Cassandra
>  Issue Type: Bug
>  Components: CQL
>Reporter: Nico Haller
>Assignee: Robert Stupp
>Priority: Minor
> Fix For: 3.0.x
>
>
> I am running into issues when using avg in queries on decimal values.
> It throws an ArithmeticException in 
> org/apache/cassandra/cql3/functions/AggregateFcts.java (Line 184).
> So whenever an exact representation of the quotient is not possible it will 
> throw that error and it never returns to the querying client.
> I am not so sure if this is intended behavior or a bug, but in my opinion if 
> an exact representation of the value is not possible, it should automatically 
> round the value.
> Specifying a rounding mode when calling the divide function should solve the 
> issue



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (CASSANDRA-11581) Cassandra connection failed

2016-04-15 Thread Tyler Hobbs (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11581?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tyler Hobbs resolved CASSANDRA-11581.
-
Resolution: Invalid

> Cassandra connection failed
> ---
>
> Key: CASSANDRA-11581
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11581
> Project: Cassandra
>  Issue Type: Bug
>Reporter: priya swaiin
>
> I am not able to connect to cassandra which installed in VM.
> Can anyone help me out  to resolve the below issue ?
> Cluster cluster = Cluster.Builder().AddContactPoint("127.0.0.1").Build();
>  Metadata metadata = cluster.Metadata;
>  ISession session = cluster.Connect();// ("demo");
>  session.Execute("insert into emp (emp_id,emp_name,emp_phone) 
> values (6, 'suman','12345678' )");
>  Console.WriteLine("Save secess");
> Console.Read();



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (CASSANDRA-11584) Add stats about index-entries to per sstable-stats

2016-04-15 Thread Robert Stupp (JIRA)
Robert Stupp created CASSANDRA-11584:


 Summary: Add stats about index-entries to per sstable-stats
 Key: CASSANDRA-11584
 URL: https://issues.apache.org/jira/browse/CASSANDRA-11584
 Project: Cassandra
  Issue Type: Improvement
Reporter: Robert Stupp
Priority: Minor
 Fix For: 4.x


Knowing how big index entries (indexed or not) are could help to tune the 
data model or the .yaml config - especially after CASSANDRA-11206.

It would be nice to have:
* histogram of the serialized sizes of RowIndexEntry
* histogram of the number of IndexInfo per indexed entry




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-11560) dtest failure in user_types_test.TestUserTypes.udt_subfield_test

2016-04-15 Thread Tyler Hobbs (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11560?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tyler Hobbs updated CASSANDRA-11560:

Resolution: Fixed
Status: Resolved  (was: Patch Available)

> dtest failure in user_types_test.TestUserTypes.udt_subfield_test
> 
>
> Key: CASSANDRA-11560
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11560
> Project: Cassandra
>  Issue Type: Test
>Reporter: Michael Shuler
>Assignee: Tyler Hobbs
>  Labels: dtest
>
> example failure:
> http://cassci.datastax.com/job/trunk_dtest/1125/testReport/user_types_test/TestUserTypes/udt_subfield_test
> Failed on CassCI build trunk_dtest #1125
> Appears to be a test problem:
> {noformat}
> Error Message
> 'NoneType' object is not iterable
>  >> begin captured logging << 
> dtest: DEBUG: cluster ccm directory: /mnt/tmp/dtest-Kzg9Sk
> dtest: DEBUG: Custom init_config not found. Setting defaults.
> dtest: DEBUG: Done setting configuration options:
> {   'initial_token': None,
> 'num_tokens': '32',
> 'phi_convict_threshold': 5,
> 'range_request_timeout_in_ms': 1,
> 'read_request_timeout_in_ms': 1,
> 'request_timeout_in_ms': 1,
> 'truncate_request_timeout_in_ms': 1,
> 'write_request_timeout_in_ms': 1}
> - >> end captured logging << -
> Stacktrace
>   File "/usr/lib/python2.7/unittest/case.py", line 329, in run
> testMethod()
>   File "/home/automaton/cassandra-dtest/tools.py", line 253, in wrapped
> f(obj)
>   File "/home/automaton/cassandra-dtest/user_types_test.py", line 767, in 
> udt_subfield_test
> self.assertEqual(listify(rows[0]), [[None]])
>   File "/home/automaton/cassandra-dtest/user_types_test.py", line 25, in 
> listify
> for i in item:
> "'NoneType' object is not iterable\n >> begin captured 
> logging << \ndtest: DEBUG: cluster ccm directory: 
> /mnt/tmp/dtest-Kzg9Sk\ndtest: DEBUG: Custom init_config not found. Setting 
> defaults.\ndtest: DEBUG: Done setting configuration options:\n{   
> 'initial_token': None,\n'num_tokens': '32',\n'phi_convict_threshold': 
> 5,\n'range_request_timeout_in_ms': 1,\n
> 'read_request_timeout_in_ms': 1,\n'request_timeout_in_ms': 1,\n   
>  'truncate_request_timeout_in_ms': 1,\n'write_request_timeout_in_ms': 
> 1}\n- >> end captured logging << 
> -"
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-11570) Concurrent execution of prepared statement returns invalid JSON as result

2016-04-15 Thread Tyler Hobbs (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11570?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tyler Hobbs updated CASSANDRA-11570:

Status: Awaiting Feedback  (was: Open)

> Concurrent execution of prepared statement returns invalid JSON as result
> -
>
> Key: CASSANDRA-11570
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11570
> Project: Cassandra
>  Issue Type: Bug
> Environment: Cassandra 3.2, C++ or C# driver
>Reporter: Alexander Ryabets
>Assignee: Tyler Hobbs
> Attachments: CassandraPreparedStatementsTest.zip, broken_output.txt, 
> test_neptunao.cql, valid_output.txt
>
>
> When I use a prepared statement for asynchronous execution of multiple 
> statements, I get JSON with broken data: the keys are totally corrupted while 
> the values appear to be normal.
> I first encountered this issue while stress-testing our project with a custom 
> script. We are using the DataStax C++ driver and execute statements from 
> different fibers.
> I then tried to isolate the problem and wrote a simple C# program which starts 
> multiple Tasks in a loop. Each task uses a single prepared statement to read 
> data from the database. As you can see, the results are a total mess.
> I've attached an archive with a console C# project (one .cs file) which just 
> prints the resulting JSON.
> Here is the main part of the C# code.
> {noformat}
> static void Main(string[] args)
> {
>   const int task_count = 300;
>   using(var cluster = Cluster.Builder().AddContactPoints(/*contact points 
> here*/).Build())
>   {
> using(var session = cluster.Connect())
> {
>   var prepared = session.Prepare("select json * from test_neptunao.ubuntu 
> where id=?");
>   var tasks = new Task[task_count];
>   for(int i = 0; i < task_count; i++)
>   {
> tasks[i] = Query(prepared, session);
>   }
>   Task.WaitAll(tasks);
> }
>   }
>   Console.ReadKey();
> }
> private static Task Query(PreparedStatement prepared, ISession session)
> {
>   string id = GetIdOfRandomRow();
>   var stmt = prepared.Bind(id);
>   stmt.SetConsistencyLevel(ConsistencyLevel.One);
>   return session.ExecuteAsync(stmt).ContinueWith(tr =>
>   {
> foreach(var row in tr.Result)
> {
>   var value = row.GetValue(0);
>   //some kind of output
> }
>   });
> }
> {noformat}
> I also attached the CQL script with the test DB schema.
> {noformat}
> CREATE KEYSPACE IF NOT EXISTS test_neptunao
> WITH replication = {
>   'class' : 'SimpleStrategy',
>   'replication_factor' : 3
> };
> use test_neptunao;
> create table if not exists ubuntu (
>   id timeuuid PRIMARY KEY,
>   precise_pangolin text,
>   trusty_tahr text,
>   wily_werewolf text, 
>   vivid_vervet text,
>   saucy_salamander text,
>   lucid_lynx text
> );
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-11570) Concurrent execution of prepared statement returns invalid JSON as result

2016-04-15 Thread Tyler Hobbs (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11570?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15243173#comment-15243173
 ] 

Tyler Hobbs commented on CASSANDRA-11570:
-

This is probably a duplicate of CASSANDRA-11048.  Can you see if this 
reproduces on 3.4+?

> Concurrent execution of prepared statement returns invalid JSON as result
> -
>
> Key: CASSANDRA-11570
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11570
> Project: Cassandra
>  Issue Type: Bug
> Environment: Cassandra 3.2, C++ or C# driver
>Reporter: Alexander Ryabets
>Assignee: Tyler Hobbs
> Attachments: CassandraPreparedStatementsTest.zip, broken_output.txt, 
> test_neptunao.cql, valid_output.txt
>
>
> When I use a prepared statement for asynchronous execution of multiple 
> statements, I get JSON with broken data: the keys are totally corrupted while 
> the values appear to be normal.
> I first encountered this issue while stress-testing our project with a custom 
> script. We are using the DataStax C++ driver and execute statements from 
> different fibers.
> I then tried to isolate the problem and wrote a simple C# program which starts 
> multiple Tasks in a loop. Each task uses a single prepared statement to read 
> data from the database. As you can see, the results are a total mess.
> I've attached an archive with a console C# project (one .cs file) which just 
> prints the resulting JSON.
> Here is the main part of the C# code.
> {noformat}
> static void Main(string[] args)
> {
>   const int task_count = 300;
>   using(var cluster = Cluster.Builder().AddContactPoints(/*contact points 
> here*/).Build())
>   {
> using(var session = cluster.Connect())
> {
>   var prepared = session.Prepare("select json * from test_neptunao.ubuntu 
> where id=?");
>   var tasks = new Task[task_count];
>   for(int i = 0; i < task_count; i++)
>   {
> tasks[i] = Query(prepared, session);
>   }
>   Task.WaitAll(tasks);
> }
>   }
>   Console.ReadKey();
> }
> private static Task Query(PreparedStatement prepared, ISession session)
> {
>   string id = GetIdOfRandomRow();
>   var stmt = prepared.Bind(id);
>   stmt.SetConsistencyLevel(ConsistencyLevel.One);
>   return session.ExecuteAsync(stmt).ContinueWith(tr =>
>   {
> foreach(var row in tr.Result)
> {
>   var value = row.GetValue(0);
>   //some kind of output
> }
>   });
> }
> {noformat}
> I also attached the CQL script with the test DB schema.
> {noformat}
> CREATE KEYSPACE IF NOT EXISTS test_neptunao
> WITH replication = {
>   'class' : 'SimpleStrategy',
>   'replication_factor' : 3
> };
> use test_neptunao;
> create table if not exists ubuntu (
>   id timeuuid PRIMARY KEY,
>   precise_pangolin text,
>   trusty_tahr text,
>   wily_werewolf text, 
>   vivid_vervet text,
>   saucy_salamander text,
>   lucid_lynx text
> );
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Issue Comment Deleted] (CASSANDRA-11574) COPY FROM command in cqlsh throws error

2016-04-15 Thread Brett Snyder (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11574?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brett Snyder updated CASSANDRA-11574:
-
Comment: was deleted

(was: I have a very similar setup and see this as well.  I will investigate 
today and see if I can put a patch together.)

> COPY FROM command in cqlsh throws error
> ---
>
> Key: CASSANDRA-11574
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11574
> Project: Cassandra
>  Issue Type: Bug
>  Components: CQL
> Environment: Operating System: Ubuntu Server 14.04
> JDK: Oracle JDK 8 update 77
> Python: 2.7.6
>Reporter: Mahafuzur Rahman
> Fix For: 3.0.6
>
>
> Any COPY FROM command in cqlsh is throwing the following error:
> "get_num_processes() takes no keyword arguments"
> Example command: 
> COPY inboxdata 
> (to_user_id,to_user_network,created_time,attachments,from_user_id,from_user_name,from_user_network,id,message,to_user_name,updated_time)
>  FROM 'inbox.csv';
> Similar commands worked perfectly in previous versions such as 3.0.4.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-11519) Add support for IBM POWER

2016-04-15 Thread Rei Odaira (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11519?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rei Odaira updated CASSANDRA-11519:
---
Flags: Patch

> Add support for IBM POWER
> -
>
> Key: CASSANDRA-11519
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11519
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Core
> Environment: POWER architecture
>Reporter: Rei Odaira
>Priority: Minor
> Fix For: 2.1.x, 2.2.x, 3.0.x, 3.x
>
> Attachments: 11519-2.1.txt, 11519-3.0.txt
>
>
> Add support for the IBM POWER architecture (ppc, ppc64, and ppc64le) in 
> org.apache.cassandra.utils.FastByteOperations, 
> org.apache.cassandra.utils.memory.MemoryUtil, and 
> org.apache.cassandra.io.util.Memory.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-11576) Add support for JNA mlockall(2) on POWER

2016-04-15 Thread Rei Odaira (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11576?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rei Odaira updated CASSANDRA-11576:
---
Status: Patch Available  (was: Open)

> Add support for JNA mlockall(2) on POWER
> 
>
> Key: CASSANDRA-11576
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11576
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Core
> Environment: POWER architecture
>Reporter: Rei Odaira
>Priority: Minor
> Fix For: 2.1.x, 2.2.x, 3.0.x, 3.x
>
> Attachments: 11576-2.1.txt
>
>
> org.apache.cassandra.utils.CLibrary contains hard-coded C-macro values to be 
> passed to system calls through JNA. These values are system-dependent, and as 
> far as I investigated, Linux and AIX on the IBM POWER architecture define 
> {{MCL_CURRENT}} and {{MCL_FUTURE}} (for mlockall(2)) as different values than 
> the current hard-coded values.  As a result, mlockall(2) fails on these 
> platforms.
> {code}
> WARN  18:51:51 Unknown mlockall error 22
> {code}
> I am going to provide a patch to support JNA mlockall(2) on POWER.
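The description above boils down to selecting the MCL_* values by platform rather than hard-coding them. A minimal sketch, assuming the powerpc Linux values from glibc's bits/mman.h (0x2000/0x4000) - these specific constants are my assumption, not taken from the attached patch:

```java
// Platform-dependent mlockall(2) flag selection. The generic x86 Linux values
// are MCL_CURRENT=1 and MCL_FUTURE=2; powerpc Linux defines 0x2000/0x4000
// (assumed here from glibc's bits/mman.h - verify against the target libc).
final class MlockallFlags {
    static int mclCurrent(String osArch) {
        return osArch.startsWith("ppc") ? 0x2000 : 1;
    }

    static int mclFuture(String osArch) {
        return osArch.startsWith("ppc") ? 0x4000 : 2;
    }
}
```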



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8844) Change Data Capture (CDC)

2016-04-15 Thread Branimir Lambov (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8844?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15243142#comment-15243142
 ] 

Branimir Lambov commented on CASSANDRA-8844:


Some confusion in the read / replay part, probably my fault for not documenting 
the details well.

Replay positions (or CLSPs) are given as a segment id and an uncompressed (or 
"logical", as Jason calls it) position within the segment file. For Cassandra 
2.2+, the logical position does not have to match the file position. When 
replaying, anything with a greater segment id, or with an equal segment id and 
a greater-or-equal logical position, must be replayed.
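The replay rule above can be written as a small predicate (the names here are illustrative, not the actual Cassandra API):

```java
// A commit log record must be replayed iff its segment id is greater than the
// replay position's, or the ids are equal and the record's logical position is
// greater than or equal to the replay position's logical position.
final class ReplayFilter {
    static boolean shouldReplay(long recSegmentId, int recLogicalPos,
                                long rpSegmentId, int rpLogicalPos) {
        return recSegmentId > rpSegmentId
            || (recSegmentId == rpSegmentId && recLogicalPos >= rpLogicalPos);
    }
}
```

Note that both offsets in this comparison are logical (uncompressed) positions; comparing a file position against a logical position is exactly the kind of mistake flagged in the review points that follow.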

The following are potential problems with that:

- [Descriptor parsed id mismatch 
error|https://github.com/apache/cassandra/compare/trunk...josh-mckenzie:8844_review#diff-9fe0bd988c4fc47a022f589f5ad72b09R148]
 doesn't look right. The replay position specifies from which id (and position 
within that id) we should replay. In addition to (parts of) the file with the 
same id, this includes all files with higher ids. Mismatch is normal.
- [Mutation before offset 
check|https://github.com/apache/cassandra/compare/trunk...josh-mckenzie:8844_review#diff-9fe0bd988c4fc47a022f589f5ad72b09R246]
 compares file position with logical segment position and is only valid for 
uncompressed files.
- [{{shouldSkipSegment}} 
JavaDoc|https://github.com/apache/cassandra/compare/trunk...josh-mckenzie:8844_review#diff-0184f4e288c68732ceb30cdc49a76c4aR73]
 should make it clear which kind of position it needs (it is used correctly).
- It's not a good thing that these weren't caught by tests.

Other remarks:

- 
[{{prepReader}}|https://github.com/apache/cassandra/compare/trunk...josh-mckenzie:8844_review#diff-9fe0bd988c4fc47a022f589f5ad72b09R108]
 is only called for pre-2.1 segments, but the JavaDoc does not say so. I don't 
think we want this in the handler interface; inline it at its one use site.
- 
[{{statusTracker.flagError}}|https://github.com/apache/cassandra/compare/trunk...josh-mckenzie:8844_review#diff-9fe0bd988c4fc47a022f589f5ad72b09R270]
 isn't a very fitting name for what is actually a termination request.
- [{{flagError}} and 
return|https://github.com/apache/cassandra/compare/trunk...josh-mckenzie:8844_review#diff-9fe0bd988c4fc47a022f589f5ad72b09R305]
 is inconsistent with the rest in {{readSection}}. It should also return 
regardless of the shouldStop result as there's nothing meaningful to be done 
with the rest of the section.
The old code does this differently: it always breaks sync _and_ segment replay 
on errors, which AFAIR is done to make certain we don't try to replay old data 
in a partially overwritten pre-2.2 segment. Such data should have an invalid 
sync marker, though, so this change is fine, and should be an improvement as it 
may be able to scavenge more on bit rot. In the common file-not-fully-written 
case, though, this change means you will get a second error when it tries to 
read the next section.

Pre-existing issues:

- 
[segmentId|https://github.com/apache/cassandra/compare/trunk...josh-mckenzie:8844_review#diff-9fe0bd988c4fc47a022f589f5ad72b09R120]
 misleadingly suggests that it is the id used later. We should rename it to 
{{segmentIdFromFilename}}.
- [tolerateErrorsInSection 
&=|https://github.com/apache/cassandra/compare/trunk...josh-mckenzie:8844_review#diff-9fe0bd988c4fc47a022f589f5ad72b09R186]:
 I don't think it was intended for the value to depend on previous iterations.


> Change Data Capture (CDC)
> -
>
> Key: CASSANDRA-8844
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8844
> Project: Cassandra
>  Issue Type: New Feature
>  Components: Coordination, Local Write-Read Paths
>Reporter: Tupshin Harper
>Assignee: Joshua McKenzie
>Priority: Critical
> Fix For: 3.x
>
>
> "In databases, change data capture (CDC) is a set of software design patterns 
> used to determine (and track) the data that has changed so that action can be 
> taken using the changed data. Also, Change data capture (CDC) is an approach 
> to data integration that is based on the identification, capture and delivery 
> of the changes made to enterprise data sources."
> -Wikipedia
> As Cassandra is increasingly being used as the Source of Record (SoR) for 
> mission critical data in large enterprises, it is increasingly being called 
> upon to act as the central hub of traffic and data flow to other systems. In 
> order to try to address the general need, we (cc [~brianmhess]), propose 
> implementing a simple data logging mechanism to enable per-table CDC patterns.
> h2. The goals:
> # Use CQL as the primary ingestion mechanism, in order to leverage its 
> Consistency Level semantics, and in order to treat it as the single 
> reliable/durable SoR for the data.
> # To provide a mechanism 

[jira] [Updated] (CASSANDRA-11576) Add support for JNA mlockall(2) on POWER

2016-04-15 Thread Rei Odaira (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11576?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rei Odaira updated CASSANDRA-11576:
---
 Flags: Patch
Attachment: 11576-2.1.txt

> Add support for JNA mlockall(2) on POWER
> 
>
> Key: CASSANDRA-11576
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11576
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Core
> Environment: POWER architecture
>Reporter: Rei Odaira
>Priority: Minor
> Fix For: 2.1.x, 2.2.x, 3.0.x, 3.x
>
> Attachments: 11576-2.1.txt
>
>
> org.apache.cassandra.utils.CLibrary contains hard-coded C-macro values to be 
> passed to system calls through JNA. These values are system-dependent, and as 
> far as I investigated, Linux and AIX on the IBM POWER architecture define 
> {{MCL_CURRENT}} and {{MCL_FUTURE}} (for mlockall(2)) as different values than 
> the current hard-coded values.  As a result, mlockall(2) fails on these 
> platforms.
> {code}
> WARN  18:51:51 Unknown mlockall error 22
> {code}
> I am going to provide a patch to support JNA mlockall(2) on POWER.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (CASSANDRA-11577) Traces persist for longer than 24 hours

2016-04-15 Thread Aleksey Yeschenko (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11577?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksey Yeschenko reassigned CASSANDRA-11577:
-

Assignee: Aleksey Yeschenko

> Traces persist for longer than 24 hours
> ---
>
> Key: CASSANDRA-11577
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11577
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Josh Wickman
>Assignee: Aleksey Yeschenko
>Priority: Minor
>
> My deployment currently has clusters on both Cassandra 1.2 (1.2.19) and 2.1 
> (2.1.11) with tracing on.  On 2.1, the trace records persist for longer than 
> the [documented 24 
> hours|https://docs.datastax.com/en/cql/3.3/cql/cql_reference/tracing_r.html]:
> {noformat}
> cqlsh> select started_at from system_traces.sessions limit 10;
>  started_at
> --
>  2016-03-11 23:28:40+
>  2016-03-14 21:09:07+
>  2016-03-14 16:42:25+
>  2016-03-14 16:13:13+
>  2016-03-14 19:12:11+
>  2016-03-14 21:25:57+
>  2016-03-29 22:45:28+
>  2016-03-14 19:56:27+
>  2016-03-09 23:31:41+
>  2016-03-10 23:08:44+
> (10 rows)
> {noformat}
> My systems on 1.2 do not exhibit this problem:
> {noformat}
> cqlsh> select started_at from system_traces.sessions limit 10;
>  started_at
> --
>  2016-04-13 22:49:31+
>  2016-04-14 18:06:45+
>  2016-04-14 07:57:00+
>  2016-04-14 04:35:05+
>  2016-04-14 03:54:20+
>  2016-04-14 10:54:38+
>  2016-04-14 18:34:04+
>  2016-04-14 12:56:57+
>  2016-04-14 01:57:20+
>  2016-04-13 21:36:01+
> {noformat}
> The event records also persist alongside the session records, for example:
> {noformat}
> cqlsh> select session_id, dateOf(event_id) from system_traces.events where 
> session_id = fc8c1e80-e7e0-11e5-a2fb-1968ff3c067b;
>  session_id   | dateOf(event_id)
> --+--
>  fc8c1e80-e7e0-11e5-a2fb-1968ff3c067b | 2016-03-11 23:28:40+
> {noformat}
> Between these versions, the table parameter {{default_time_to_live}} was 
> introduced.  The {{system_traces}} tables report the default value of 0:
> {noformat}
> cqlsh> desc table system_traces.sessions
> CREATE TABLE system_traces.sessions (
> session_id uuid PRIMARY KEY,
> coordinator inet,
> duration int,
> parameters map&lt;text, text&gt;,
> request text,
> started_at timestamp
> ) WITH bloom_filter_fp_chance = 0.01
> AND caching = '{"keys":"ALL", "rows_per_partition":"NONE"}'
> AND comment = 'traced sessions'
> AND compaction = {'class': 
> 'org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy'}
> AND compression = {'sstable_compression': 
> 'org.apache.cassandra.io.compress.SnappyCompressor'}
> AND dclocal_read_repair_chance = 0.0
> AND default_time_to_live = 0
> AND gc_grace_seconds = 0
> AND max_index_interval = 2048
> AND memtable_flush_period_in_ms = 0
> AND min_index_interval = 128
> AND read_repair_chance = 0.0
> AND speculative_retry = '99.0PERCENTILE';
> {noformat}
> I suspect that {{default_time_to_live}} is superseding the mechanism used in 
> 1.2 to expire the trace records.  Evidently I cannot change this parameter 
> for this table:
> {noformat}
> cqlsh> alter table system_traces.sessions with default_time_to_live = 86400;
> Unauthorized: code=2100 [Unauthorized] message="Cannot ALTER &lt;table system_traces.sessions&gt;"
> {noformat}
> I realize Cassandra 1.2 is no longer supported, but the problem is being 
> manifested in Cassandra 2.1 for me (I included 1.2 only for comparison).  
> Since I couldn't find an existing ticket addressing this issue, I'm concerned 
> that it may be present in more recent versions of Cassandra as well, but I 
> have not tested these.
> The persistent trace records are contributing to disk filling, and more 
> importantly, making it more difficult to analyze the trace data.  Is there a 
> workaround for this?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-11570) Concurrent execution of prepared statement returns invalid JSON as result

2016-04-15 Thread Joshua McKenzie (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11570?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joshua McKenzie updated CASSANDRA-11570:

Assignee: Tyler Hobbs

> Concurrent execution of prepared statement returns invalid JSON as result
> -
>
> Key: CASSANDRA-11570
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11570
> Project: Cassandra
>  Issue Type: Bug
> Environment: Cassandra 3.2, C++ or C# driver
>Reporter: Alexander Ryabets
>Assignee: Tyler Hobbs
> Attachments: CassandraPreparedStatementsTest.zip, broken_output.txt, 
> test_neptunao.cql, valid_output.txt
>
>
> When I use a prepared statement for asynchronous execution of multiple 
> statements, I get JSON with broken data: the keys are totally corrupted while 
> the values appear to be normal.
> I first encountered this issue while stress-testing our project with a custom 
> script. We are using the DataStax C++ driver and execute statements from 
> different fibers.
> I then tried to isolate the problem and wrote a simple C# program which starts 
> multiple Tasks in a loop. Each task uses a single prepared statement to read 
> data from the database. As you can see, the results are a total mess.
> I've attached an archive with a console C# project (one .cs file) which just 
> prints the resulting JSON.
> Here is the main part of the C# code.
> {noformat}
> static void Main(string[] args)
> {
>   const int task_count = 300;
>   using(var cluster = Cluster.Builder().AddContactPoints(/*contact points 
> here*/).Build())
>   {
> using(var session = cluster.Connect())
> {
>   var prepared = session.Prepare("select json * from test_neptunao.ubuntu 
> where id=?");
>   var tasks = new Task[task_count];
>   for(int i = 0; i < task_count; i++)
>   {
> tasks[i] = Query(prepared, session);
>   }
>   Task.WaitAll(tasks);
> }
>   }
>   Console.ReadKey();
> }
> private static Task Query(PreparedStatement prepared, ISession session)
> {
>   string id = GetIdOfRandomRow();
>   var stmt = prepared.Bind(id);
>   stmt.SetConsistencyLevel(ConsistencyLevel.One);
>   return session.ExecuteAsync(stmt).ContinueWith(tr =>
>   {
> foreach(var row in tr.Result)
> {
>   var value = row.GetValue(0);
>   //some kind of output
> }
>   });
> }
> {noformat}
> I also attached the CQL script with the test DB schema.
> {noformat}
> CREATE KEYSPACE IF NOT EXISTS test_neptunao
> WITH replication = {
>   'class' : 'SimpleStrategy',
>   'replication_factor' : 3
> };
> use test_neptunao;
> create table if not exists ubuntu (
>   id timeuuid PRIMARY KEY,
>   precise_pangolin text,
>   trusty_tahr text,
>   wily_werewolf text, 
>   vivid_vervet text,
>   saucy_salamander text,
>   lucid_lynx text
> );
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-11548) Anticompaction not removing old sstables

2016-04-15 Thread Aleksey Yeschenko (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11548?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksey Yeschenko updated CASSANDRA-11548:
--
Status: Patch Available  (was: Open)

> Anticompaction not removing old sstables
> 
>
> Key: CASSANDRA-11548
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11548
> Project: Cassandra
>  Issue Type: Bug
> Environment: 2.1.13
>Reporter: Ruoran Wang
> Attachments: 0001-cassandra-2.1.13-potential-fix.patch
>
>
> 1. 12/29/15 https://issues.apache.org/jira/browse/CASSANDRA-10831
> Moved markCompactedSSTablesReplaced out of the loop ```for (SSTableReader 
> sstable : repairedSSTables)```
> 2. 1/18/16 https://issues.apache.org/jira/browse/CASSANDRA-10829
> Added unmarkCompacting into the loop. ```for (SSTableReader sstable : 
> repairedSSTables)```
> I think the combined effect of the changes above is that 
> markCompactedSSTablesReplaced can fail on this assertion in 
> DataTracker.java:
> {noformat}
> assert newSSTables.size() + newShadowed.size() == newSSTablesSize :
>     String.format("Expecting new size of %d, got %d while replacing %s by %s in %s",
>                   newSSTablesSize, newSSTables.size() + newShadowed.size(),
>                   oldSSTables, replacements, this);
> {noformat}
> Since CASSANDRA-10831 moved it out of the loop, this AssertionError won't be 
> caught, leaving the old sstables not removed. (This can then cause a 
> row-out-of-order error during incremental repair if there are L1 un-repaired 
> sstables.)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-11544) NullPointerException if metrics reporter config file doesn't exist

2016-04-15 Thread Aleksey Yeschenko (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11544?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksey Yeschenko updated CASSANDRA-11544:
--
Status: Patch Available  (was: Open)

> NullPointerException if metrics reporter config file doesn't exist
> --
>
> Key: CASSANDRA-11544
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11544
> Project: Cassandra
>  Issue Type: Bug
>  Components: Configuration
>Reporter: Christopher Batey
>Assignee: Christopher Batey
>Priority: Minor
> Attachments: 
> 0001-Avoid-NPE-exception-when-metrics-reporter-config-doe.patch
>
>
> Patch attached or at 
> https://github.com/chbatey/cassandra-1/tree/npe-when-metrics-file-not-exist
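Since the ticket gives no detail beyond the patch link, here is a hypothetical sketch of the kind of guard such a fix might add: validate the reporter config file before using it, instead of dereferencing a null resource. The class and method names are invented for illustration and do not come from the attached patch.

```java
import java.io.File;

// Hypothetical guard: treat a metrics reporter config as usable only when the
// path is non-null and points at an existing regular file.
final class MetricsConfigGuard {
    static boolean reporterConfigUsable(String path) {
        if (path == null) {
            return false;
        }
        File f = new File(path);
        return f.exists() && f.isFile();
    }
}
```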



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-11537) Give clear error when certain nodetool commands are issued before server is ready

2016-04-15 Thread Aleksey Yeschenko (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11537?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksey Yeschenko updated CASSANDRA-11537:
--
Status: Patch Available  (was: Open)

> Give clear error when certain nodetool commands are issued before server is 
> ready
> -
>
> Key: CASSANDRA-11537
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11537
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Edward Capriolo
>Assignee: Edward Capriolo
>Priority: Minor
>
> As an ops person upgrading and servicing Cassandra servers, I need a clearer 
> message when I issue a nodetool command that the server is not ready for, so 
> that I am not confused.
> Technical description:
> If you deploy a new binary, restart, and issue nodetool 
> scrub/compact/upgradesstables etc., you get an unfriendly assertion error. An 
> exception would be easier to understand. It is also unclear what might happen 
> if a user has turned assertions off.
> {noformat}
> EC1: Throw exception to make it clear server is still in start up process. 
> :~# nodetool upgradesstables
> error: null
> -- StackTrace --
> java.lang.AssertionError
> at org.apache.cassandra.db.Keyspace.open(Keyspace.java:97)
> at 
> org.apache.cassandra.service.StorageService.getValidKeyspace(StorageService.java:2573)
> at 
> org.apache.cassandra.service.StorageService.getValidColumnFamilies(StorageService.java:2661)
> at 
> org.apache.cassandra.service.StorageService.upgradeSSTables(StorageService.java:2421)
> {noformat}
> EC1: 
> Patch against 2.1 (branch)
> https://github.com/apache/cassandra/compare/trunk...edwardcapriolo:exception-on-startup?expand=1



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-10134) Always require replace_address to replace existing address

2016-04-15 Thread Sam Tunnicliffe (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10134?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15243094#comment-15243094
 ] 

Sam Tunnicliffe commented on CASSANDRA-10134:
-

I should also mention that this ticket also uncovered a limitation in ccmlib as 
it's used by dtests. Currently, there's no way (that I could find at least) to 
specify a seed address directly, the cluster's seed list being a list of 
{{Node}} instances. When generating node config, {{Cluster}} then always uses 
the storage (i.e. listen) address in the seed list. This is a problem when 
{{broadcast_address != listen_address}}, as gossip is broadcast-centric. I 
found that {{snitch_test}} would fail because it specifies a single seed, and 
that node would get stuck in the shadow round, waiting for a response from its 
own {{listen_address}}. The correct behaviour would be to recognise that the 
only entry in the seed list was itself and skip the shadow round completely. I've 
pushed a change to ccm 
[here|https://github.com/pcmanus/ccm/compare/master...beobal:10134] for this 
and dtest runs should use that branch.


> Always require replace_address to replace existing address
> --
>
> Key: CASSANDRA-10134
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10134
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Distributed Metadata
>Reporter: Tyler Hobbs
>Assignee: Sam Tunnicliffe
> Fix For: 3.x
>
>
> Normally, when a node is started from a clean state with the same address as 
> an existing down node, it will fail to start with an error like this:
> {noformat}
> ERROR [main] 2015-08-19 15:07:51,577 CassandraDaemon.java:554 - Exception 
> encountered during startup
> java.lang.RuntimeException: A node with address /127.0.0.3 already exists, 
> cancelling join. Use cassandra.replace_address if you want to replace this 
> node.
>   at 
> org.apache.cassandra.service.StorageService.checkForEndpointCollision(StorageService.java:543)
>  ~[main/:na]
>   at 
> org.apache.cassandra.service.StorageService.prepareToJoin(StorageService.java:783)
>  ~[main/:na]
>   at 
> org.apache.cassandra.service.StorageService.initServer(StorageService.java:720)
>  ~[main/:na]
>   at 
> org.apache.cassandra.service.StorageService.initServer(StorageService.java:611)
>  ~[main/:na]
>   at 
> org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:378) 
> [main/:na]
>   at 
> org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:537)
>  [main/:na]
>   at 
> org.apache.cassandra.service.CassandraDaemon.main(CassandraDaemon.java:626) 
> [main/:na]
> {noformat}
> However, if {{auto_bootstrap}} is set to false or the node is in its own seed 
> list, it will not throw this error and will start normally.  The new node 
> then takes over the host ID of the old node (even if the tokens are 
> different), and the only message you will see is a warning in the other 
> nodes' logs:
> {noformat}
> logger.warn("Changing {}'s host ID from {} to {}", endpoint, storedId, 
> hostId);
> {noformat}
> This could cause an operator to accidentally wipe out the token information 
> for a down node without replacing it.  To fix this, we should check for an 
> endpoint collision even if {{auto_bootstrap}} is false or the node is a seed.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-10916) TestGlobalRowKeyCache.functional_test fails on Windows

2016-04-15 Thread Joshua McKenzie (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10916?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joshua McKenzie updated CASSANDRA-10916:

Assignee: (was: Joshua McKenzie)

> TestGlobalRowKeyCache.functional_test fails on Windows
> --
>
> Key: CASSANDRA-10916
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10916
> Project: Cassandra
>  Issue Type: Sub-task
>Reporter: Jim Witschey
>  Labels: dtest, windows
> Fix For: 3.0.x
>
>
> {{global_row_key_cache_test.py:TestGlobalRowKeyCache.functional_test}} fails 
> hard on Windows when a node fails to start:
> http://cassci.datastax.com/job/cassandra-2.2_dtest_win32/156/testReport/global_row_key_cache_test/TestGlobalRowKeyCache/functional_test/
> http://cassci.datastax.com/view/win32/job/cassandra-3.0_dtest_win32/140/testReport/global_row_key_cache_test/TestGlobalRowKeyCache/functional_test_2/
> I have not dug much into the failure history, so I don't know how closely the 
> failures are related.





[jira] [Updated] (CASSANDRA-10915) netstats_test dtest fails on Windows

2016-04-15 Thread Joshua McKenzie (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10915?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joshua McKenzie updated CASSANDRA-10915:

Labels: dtest  (was: dtest windows)

> netstats_test dtest fails on Windows
> 
>
> Key: CASSANDRA-10915
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10915
> Project: Cassandra
>  Issue Type: Sub-task
>Reporter: Jim Witschey
>  Labels: dtest
> Fix For: 3.0.x
>
>
> jmx_test.py:TestJMX.netstats_test started failing hard on Windows about a 
> month ago:
> http://cassci.datastax.com/job/cassandra-3.0_dtest_win32/140/testReport/junit/jmx_test/TestJMX/netstats_test/history/?start=25
> http://cassci.datastax.com/job/cassandra-2.2_dtest_win32/156/testReport/jmx_test/TestJMX/netstats_test/history/
> It fails when it is unable to connect to a node via JMX. I don't know if this 
> problem has any relationship to CASSANDRA-10913.





[cassandra] Git Push Summary

2016-04-15 Thread jake
Repository: cassandra
Updated Tags:  refs/tags/2.2.6-tentative [created] 37f63ecc5


[jira] [Commented] (CASSANDRA-10134) Always require replace_address to replace existing address

2016-04-15 Thread Sam Tunnicliffe (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10134?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15243068#comment-15243068
 ] 

Sam Tunnicliffe commented on CASSANDRA-10134:
-

One of the MV dtests uncovered a small problem for which I've pushed an 
additional commit, and otherwise CI looks good now. 

Building an MV involves writes to the {{system_distributed}} keyspace, which in 
turn requires replica info and so can't be done until we've gone through 
initialization of {{StorageService}}. In fact, in {{CassandraDaemon}} where 
build tasks for all views are submitted at startup (to force completion of any 
interrupted builds), the comment mentions that SS must be initialized first. 
However, the {{Keyspace}} constructor also triggers submission of build tasks 
for all of its views via {{ViewManager::reload}}, and this happens prior to SS 
initialization during startup. So there's a race at startup between SS 
initialization and any view build task reaching a point where it needs to 
update {{system_distributed}}; the window for this race is widened here by the 
mandatory shadow round and so 
{{MaterializedViewTest.interrupt_build_process_test}} was failing pretty 
regularly. The downside of the fix in the patch is that MV builds won't get 
submitted while gossip is stopped (via JMX or nodetool) as this marks SS as 
uninitialized. This doesn't seem like a particularly big problem to me, but if 
there are concerns over that I'm willing to revisit.
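The deferral described above can be sketched in miniature. This is a hypothetical illustration, not Cassandra's actual code: the class and method names are invented, and the real fix lives in {{CassandraDaemon}}/{{ViewManager}}. The point is only the ordering rule: a view build submitted before StorageService initialization is skipped, and the daemon resubmits all builds once initialization completes.

```java
// Hypothetical sketch of the startup ordering rule (illustrative names only):
// view build tasks need replica info for system_distributed, so they must not
// run before StorageService initialization; early submissions are deferred and
// resubmitted by the daemon after init.
public class ViewBuildGuardSketch {
    static volatile boolean storageServiceInitialized = false;
    static int submitted = 0;
    static int deferred = 0;

    static void submitViewBuild(String view) {
        if (!storageServiceInitialized) {
            deferred++;   // too early: skip now, daemon resubmits after init
            return;
        }
        submitted++;      // safe: replica info is available
    }

    public static void main(String[] args) {
        submitViewBuild("mv1");            // during Keyspace construction: deferred
        storageServiceInitialized = true;  // StorageService.initServer() completes
        submitViewBuild("mv1");            // resubmitted at daemon startup: runs
        System.out.println(deferred + " deferred, " + submitted + " submitted");
    }
}
```

The same guard also explains the noted downside: stopping gossip marks SS uninitialized again, so builds submitted in that window are deferred rather than run.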


> Always require replace_address to replace existing address
> --
>
> Key: CASSANDRA-10134
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10134
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Distributed Metadata
>Reporter: Tyler Hobbs
>Assignee: Sam Tunnicliffe
> Fix For: 3.x
>
>
> Normally, when a node is started from a clean state with the same address as 
> an existing down node, it will fail to start with an error like this:
> {noformat}
> ERROR [main] 2015-08-19 15:07:51,577 CassandraDaemon.java:554 - Exception 
> encountered during startup
> java.lang.RuntimeException: A node with address /127.0.0.3 already exists, 
> cancelling join. Use cassandra.replace_address if you want to replace this 
> node.
>   at 
> org.apache.cassandra.service.StorageService.checkForEndpointCollision(StorageService.java:543)
>  ~[main/:na]
>   at 
> org.apache.cassandra.service.StorageService.prepareToJoin(StorageService.java:783)
>  ~[main/:na]
>   at 
> org.apache.cassandra.service.StorageService.initServer(StorageService.java:720)
>  ~[main/:na]
>   at 
> org.apache.cassandra.service.StorageService.initServer(StorageService.java:611)
>  ~[main/:na]
>   at 
> org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:378) 
> [main/:na]
>   at 
> org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:537)
>  [main/:na]
>   at 
> org.apache.cassandra.service.CassandraDaemon.main(CassandraDaemon.java:626) 
> [main/:na]
> {noformat}
> However, if {{auto_bootstrap}} is set to false or the node is in its own seed 
> list, it will not throw this error and will start normally.  The new node 
> then takes over the host ID of the old node (even if the tokens are 
> different), and the only message you will see is a warning in the other 
> nodes' logs:
> {noformat}
> logger.warn("Changing {}'s host ID from {} to {}", endpoint, storedId, 
> hostId);
> {noformat}
> This could cause an operator to accidentally wipe out the token information 
> for a down node without replacing it.  To fix this, we should check for an 
> endpoint collision even if {{auto_bootstrap}} is false or the node is a seed.
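The proposed fix amounts to running the collision check unconditionally. Below is a minimal, hypothetical sketch of that rule; the class, map, and method are illustrative stand-ins, not Cassandra's actual {{StorageService.checkForEndpointCollision}} implementation.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch (illustrative names, not Cassandra's real code): the
// collision check runs for every joining node, even when auto_bootstrap is
// false or the node is in its own seed list.
public class EndpointCollisionSketch {
    // Addresses already present in gossip state.
    static final Map<String, Boolean> knownEndpoints = new HashMap<>();

    static void checkForEndpointCollision(String address, boolean replaceAddress) {
        if (knownEndpoints.containsKey(address) && !replaceAddress)
            throw new RuntimeException("A node with address " + address
                    + " already exists, cancelling join. Use"
                    + " cassandra.replace_address if you want to replace this node.");
    }

    public static void main(String[] args) {
        knownEndpoints.put("/127.0.0.3", false); // a known, currently-down node
        try {
            // With the fix, this runs even for seeds / auto_bootstrap=false.
            checkForEndpointCollision("/127.0.0.3", false);
            System.out.println("joined");
        } catch (RuntimeException e) {
            System.out.println("join cancelled");
        }
        checkForEndpointCollision("/127.0.0.4", false); // fresh address: no error
        System.out.println("fresh address ok");
    }
}
```

With this rule, silently taking over a down node's host ID is no longer possible; the operator must state the intent explicitly via replace_address.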





[cassandra] Git Push Summary

2016-04-15 Thread jake
Repository: cassandra
Updated Tags:  refs/tags/2.1.14-tentative [created] 5c5c5b44c


[jira] [Commented] (CASSANDRA-11574) COPY FROM command in cqlsh throws error

2016-04-15 Thread Brett Snyder (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11574?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15243055#comment-15243055
 ] 

Brett Snyder commented on CASSANDRA-11574:
--

I have a very similar setup and see this as well.  I will investigate today and 
see if I can put a patch together.

> COPY FROM command in cqlsh throws error
> ---
>
> Key: CASSANDRA-11574
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11574
> Project: Cassandra
>  Issue Type: Bug
>  Components: CQL
> Environment: Operating System: Ubuntu Server 14.04
> JDK: Oracle JDK 8 update 77
> Python: 2.7.6
>Reporter: Mahafuzur Rahman
> Fix For: 3.0.6
>
>
> Any COPY FROM command in cqlsh is throwing the following error:
> "get_num_processes() takes no keyword arguments"
> Example command: 
> COPY inboxdata 
> (to_user_id,to_user_network,created_time,attachments,from_user_id,from_user_name,from_user_network,id,message,to_user_name,updated_time)
>  FROM 'inbox.csv';
> Similar commands worked perfectly in previous versions such as 3.0.4.





[13/13] cassandra git commit: Merge branch 'cassandra-3.0' into trunk

2016-04-15 Thread jake
Merge branch 'cassandra-3.0' into trunk


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/4e09d76e
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/4e09d76e
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/4e09d76e

Branch: refs/heads/trunk
Commit: 4e09d76e7a4b65d3c582d64f41d50db89a193222
Parents: 5477083 9f557ff
Author: T Jake Luciani 
Authored: Fri Apr 15 10:46:29 2016 -0400
Committer: T Jake Luciani 
Committed: Fri Apr 15 10:46:29 2016 -0400

--

--




[03/13] cassandra git commit: 2.2.6 version bump

2016-04-15 Thread jake
2.2.6 version bump


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/220e4f62
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/220e4f62
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/220e4f62

Branch: refs/heads/cassandra-3.0
Commit: 220e4f62db7fe14c4d6c0e499c52059f7ebc5a53
Parents: 69edeaa
Author: T Jake Luciani 
Authored: Fri Apr 15 10:00:04 2016 -0400
Committer: T Jake Luciani 
Committed: Fri Apr 15 10:00:04 2016 -0400

--
 NEWS.txt | 7 +++
 build.xml| 2 +-
 debian/changelog | 6 ++
 3 files changed, 14 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/220e4f62/NEWS.txt
--
diff --git a/NEWS.txt b/NEWS.txt
index 33d97ac..e8f4e66 100644
--- a/NEWS.txt
+++ b/NEWS.txt
@@ -13,6 +13,13 @@ restore snapshots created with the previous major version 
using the
 'sstableloader' tool. You can upgrade the file format of your snapshots
 using the provided 'sstableupgrade' tool.
 
+2.2.6
+=
+
+Upgrading
+-
+- Nothing specific to this release, but please see 2.2 if you are upgrading
+  from a previous version.
 
 2.2.5
 =

http://git-wip-us.apache.org/repos/asf/cassandra/blob/220e4f62/build.xml
--
diff --git a/build.xml b/build.xml
index 361a057..865108e 100644
--- a/build.xml
+++ b/build.xml
@@ -25,7 +25,7 @@
 
 
 
-
+
 
 
 http://git-wip-us.apache.org/repos/asf?p=cassandra.git;a=tree"/>

http://git-wip-us.apache.org/repos/asf/cassandra/blob/220e4f62/debian/changelog
--
diff --git a/debian/changelog b/debian/changelog
index 3abe349..604b810 100644
--- a/debian/changelog
+++ b/debian/changelog
@@ -1,3 +1,9 @@
+cassandra (2.2.6) unstable; urgency=medium
+
+  * 
+
+ -- Jake Luciani   Fri, 15 Apr 2016 09:47:38 -0400
+
 cassandra (2.2.5) unstable; urgency=medium
 
   * New release 



[08/13] cassandra git commit: Merge branch 'cassandra-2.1' into cassandra-2.2

2016-04-15 Thread jake
Merge branch 'cassandra-2.1' into cassandra-2.2


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/37f63ecc
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/37f63ecc
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/37f63ecc

Branch: refs/heads/trunk
Commit: 37f63ecc5d3b36fc115fd7ae98e4fc1f4bc2d1d6
Parents: 220e4f6 5c5c5b4
Author: T Jake Luciani 
Authored: Fri Apr 15 10:34:48 2016 -0400
Committer: T Jake Luciani 
Committed: Fri Apr 15 10:34:48 2016 -0400

--

--




[06/13] cassandra git commit: 2.1.14 version bump

2016-04-15 Thread jake
2.1.14 version bump


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/5c5c5b44
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/5c5c5b44
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/5c5c5b44

Branch: refs/heads/cassandra-2.2
Commit: 5c5c5b44c6d952d4d6f8170fa4ef239060275b76
Parents: c1b1d3b
Author: T Jake Luciani 
Authored: Fri Apr 15 10:30:21 2016 -0400
Committer: T Jake Luciani 
Committed: Fri Apr 15 10:30:21 2016 -0400

--
 NEWS.txt | 8 
 build.xml| 2 +-
 debian/changelog | 6 ++
 3 files changed, 15 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/5c5c5b44/NEWS.txt
--
diff --git a/NEWS.txt b/NEWS.txt
index 845801d..a6df5c0 100644
--- a/NEWS.txt
+++ b/NEWS.txt
@@ -13,6 +13,14 @@ restore snapshots created with the previous major version 
using the
 'sstableloader' tool. You can upgrade the file format of your snapshots
 using the provided 'sstableupgrade' tool.
 
+2.1.14
+=
+
+Upgrading
+-
+- Nothing specific to this release, but please see 2.1 if you are upgrading
+  from a previous version.
+
 2.1.13
 ==
 

http://git-wip-us.apache.org/repos/asf/cassandra/blob/5c5c5b44/build.xml
--
diff --git a/build.xml b/build.xml
index 9f75a9f..d9957a7 100644
--- a/build.xml
+++ b/build.xml
@@ -25,7 +25,7 @@
 
 
 
-
+
 
 
 http://git-wip-us.apache.org/repos/asf?p=cassandra.git;a=tree"/>

http://git-wip-us.apache.org/repos/asf/cassandra/blob/5c5c5b44/debian/changelog
--
diff --git a/debian/changelog b/debian/changelog
index eca6c18..0ff0392 100644
--- a/debian/changelog
+++ b/debian/changelog
@@ -1,3 +1,9 @@
+cassandra (2.1.14) unstable; urgency=medium
+
+  * New release
+
+ -- Jake Luciani   Fri, 15 Apr 2016 10:29:30 -0400
+
 cassandra (2.1.13) unstable; urgency=medium
 
   * New release 



[05/13] cassandra git commit: 2.1.14 version bump

2016-04-15 Thread jake
2.1.14 version bump


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/5c5c5b44
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/5c5c5b44
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/5c5c5b44

Branch: refs/heads/trunk
Commit: 5c5c5b44c6d952d4d6f8170fa4ef239060275b76
Parents: c1b1d3b
Author: T Jake Luciani 
Authored: Fri Apr 15 10:30:21 2016 -0400
Committer: T Jake Luciani 
Committed: Fri Apr 15 10:30:21 2016 -0400

--
 NEWS.txt | 8 
 build.xml| 2 +-
 debian/changelog | 6 ++
 3 files changed, 15 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/5c5c5b44/NEWS.txt
--
diff --git a/NEWS.txt b/NEWS.txt
index 845801d..a6df5c0 100644
--- a/NEWS.txt
+++ b/NEWS.txt
@@ -13,6 +13,14 @@ restore snapshots created with the previous major version 
using the
 'sstableloader' tool. You can upgrade the file format of your snapshots
 using the provided 'sstableupgrade' tool.
 
+2.1.14
+=
+
+Upgrading
+-
+- Nothing specific to this release, but please see 2.1 if you are upgrading
+  from a previous version.
+
 2.1.13
 ==
 

http://git-wip-us.apache.org/repos/asf/cassandra/blob/5c5c5b44/build.xml
--
diff --git a/build.xml b/build.xml
index 9f75a9f..d9957a7 100644
--- a/build.xml
+++ b/build.xml
@@ -25,7 +25,7 @@
 
 
 
-
+
 
 
 http://git-wip-us.apache.org/repos/asf?p=cassandra.git;a=tree"/>

http://git-wip-us.apache.org/repos/asf/cassandra/blob/5c5c5b44/debian/changelog
--
diff --git a/debian/changelog b/debian/changelog
index eca6c18..0ff0392 100644
--- a/debian/changelog
+++ b/debian/changelog
@@ -1,3 +1,9 @@
+cassandra (2.1.14) unstable; urgency=medium
+
+  * New release
+
+ -- Jake Luciani   Fri, 15 Apr 2016 10:29:30 -0400
+
 cassandra (2.1.13) unstable; urgency=medium
 
   * New release 



[02/13] cassandra git commit: 2.2.6 version bump

2016-04-15 Thread jake
2.2.6 version bump


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/220e4f62
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/220e4f62
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/220e4f62

Branch: refs/heads/trunk
Commit: 220e4f62db7fe14c4d6c0e499c52059f7ebc5a53
Parents: 69edeaa
Author: T Jake Luciani 
Authored: Fri Apr 15 10:00:04 2016 -0400
Committer: T Jake Luciani 
Committed: Fri Apr 15 10:00:04 2016 -0400

--
 NEWS.txt | 7 +++
 build.xml| 2 +-
 debian/changelog | 6 ++
 3 files changed, 14 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/220e4f62/NEWS.txt
--
diff --git a/NEWS.txt b/NEWS.txt
index 33d97ac..e8f4e66 100644
--- a/NEWS.txt
+++ b/NEWS.txt
@@ -13,6 +13,13 @@ restore snapshots created with the previous major version 
using the
 'sstableloader' tool. You can upgrade the file format of your snapshots
 using the provided 'sstableupgrade' tool.
 
+2.2.6
+=
+
+Upgrading
+-
+- Nothing specific to this release, but please see 2.2 if you are upgrading
+  from a previous version.
 
 2.2.5
 =

http://git-wip-us.apache.org/repos/asf/cassandra/blob/220e4f62/build.xml
--
diff --git a/build.xml b/build.xml
index 361a057..865108e 100644
--- a/build.xml
+++ b/build.xml
@@ -25,7 +25,7 @@
 
 
 
-
+
 
 
 http://git-wip-us.apache.org/repos/asf?p=cassandra.git;a=tree"/>

http://git-wip-us.apache.org/repos/asf/cassandra/blob/220e4f62/debian/changelog
--
diff --git a/debian/changelog b/debian/changelog
index 3abe349..604b810 100644
--- a/debian/changelog
+++ b/debian/changelog
@@ -1,3 +1,9 @@
+cassandra (2.2.6) unstable; urgency=medium
+
+  * 
+
+ -- Jake Luciani   Fri, 15 Apr 2016 09:47:38 -0400
+
 cassandra (2.2.5) unstable; urgency=medium
 
   * New release 



[04/13] cassandra git commit: 2.1.14 version bump

2016-04-15 Thread jake
2.1.14 version bump


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/5c5c5b44
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/5c5c5b44
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/5c5c5b44

Branch: refs/heads/cassandra-3.0
Commit: 5c5c5b44c6d952d4d6f8170fa4ef239060275b76
Parents: c1b1d3b
Author: T Jake Luciani 
Authored: Fri Apr 15 10:30:21 2016 -0400
Committer: T Jake Luciani 
Committed: Fri Apr 15 10:30:21 2016 -0400

--
 NEWS.txt | 8 
 build.xml| 2 +-
 debian/changelog | 6 ++
 3 files changed, 15 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/5c5c5b44/NEWS.txt
--
diff --git a/NEWS.txt b/NEWS.txt
index 845801d..a6df5c0 100644
--- a/NEWS.txt
+++ b/NEWS.txt
@@ -13,6 +13,14 @@ restore snapshots created with the previous major version 
using the
 'sstableloader' tool. You can upgrade the file format of your snapshots
 using the provided 'sstableupgrade' tool.
 
+2.1.14
+=
+
+Upgrading
+-
+- Nothing specific to this release, but please see 2.1 if you are upgrading
+  from a previous version.
+
 2.1.13
 ==
 

http://git-wip-us.apache.org/repos/asf/cassandra/blob/5c5c5b44/build.xml
--
diff --git a/build.xml b/build.xml
index 9f75a9f..d9957a7 100644
--- a/build.xml
+++ b/build.xml
@@ -25,7 +25,7 @@
 
 
 
-
+
 
 
 http://git-wip-us.apache.org/repos/asf?p=cassandra.git;a=tree"/>

http://git-wip-us.apache.org/repos/asf/cassandra/blob/5c5c5b44/debian/changelog
--
diff --git a/debian/changelog b/debian/changelog
index eca6c18..0ff0392 100644
--- a/debian/changelog
+++ b/debian/changelog
@@ -1,3 +1,9 @@
+cassandra (2.1.14) unstable; urgency=medium
+
+  * New release
+
+ -- Jake Luciani   Fri, 15 Apr 2016 10:29:30 -0400
+
 cassandra (2.1.13) unstable; urgency=medium
 
   * New release 



[10/13] cassandra git commit: Merge branch 'cassandra-2.1' into cassandra-2.2

2016-04-15 Thread jake
Merge branch 'cassandra-2.1' into cassandra-2.2


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/37f63ecc
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/37f63ecc
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/37f63ecc

Branch: refs/heads/cassandra-2.2
Commit: 37f63ecc5d3b36fc115fd7ae98e4fc1f4bc2d1d6
Parents: 220e4f6 5c5c5b4
Author: T Jake Luciani 
Authored: Fri Apr 15 10:34:48 2016 -0400
Committer: T Jake Luciani 
Committed: Fri Apr 15 10:34:48 2016 -0400

--

--




[12/13] cassandra git commit: Merge branch 'cassandra-2.2' into cassandra-3.0

2016-04-15 Thread jake
Merge branch 'cassandra-2.2' into cassandra-3.0


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/9f557ff7
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/9f557ff7
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/9f557ff7

Branch: refs/heads/trunk
Commit: 9f557ff7db631ecd81c5227eace7d0dcca28e326
Parents: 6ad8745 37f63ec
Author: T Jake Luciani 
Authored: Fri Apr 15 10:35:11 2016 -0400
Committer: T Jake Luciani 
Committed: Fri Apr 15 10:35:11 2016 -0400

--

--




[07/13] cassandra git commit: 2.1.14 version bump

2016-04-15 Thread jake
2.1.14 version bump


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/5c5c5b44
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/5c5c5b44
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/5c5c5b44

Branch: refs/heads/cassandra-2.1
Commit: 5c5c5b44c6d952d4d6f8170fa4ef239060275b76
Parents: c1b1d3b
Author: T Jake Luciani 
Authored: Fri Apr 15 10:30:21 2016 -0400
Committer: T Jake Luciani 
Committed: Fri Apr 15 10:30:21 2016 -0400

--
 NEWS.txt | 8 
 build.xml| 2 +-
 debian/changelog | 6 ++
 3 files changed, 15 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/5c5c5b44/NEWS.txt
--
diff --git a/NEWS.txt b/NEWS.txt
index 845801d..a6df5c0 100644
--- a/NEWS.txt
+++ b/NEWS.txt
@@ -13,6 +13,14 @@ restore snapshots created with the previous major version 
using the
 'sstableloader' tool. You can upgrade the file format of your snapshots
 using the provided 'sstableupgrade' tool.
 
+2.1.14
+=
+
+Upgrading
+-
+- Nothing specific to this release, but please see 2.1 if you are upgrading
+  from a previous version.
+
 2.1.13
 ==
 

http://git-wip-us.apache.org/repos/asf/cassandra/blob/5c5c5b44/build.xml
--
diff --git a/build.xml b/build.xml
index 9f75a9f..d9957a7 100644
--- a/build.xml
+++ b/build.xml
@@ -25,7 +25,7 @@
 
 
 
-
+
 
 
 http://git-wip-us.apache.org/repos/asf?p=cassandra.git;a=tree"/>

http://git-wip-us.apache.org/repos/asf/cassandra/blob/5c5c5b44/debian/changelog
--
diff --git a/debian/changelog b/debian/changelog
index eca6c18..0ff0392 100644
--- a/debian/changelog
+++ b/debian/changelog
@@ -1,3 +1,9 @@
+cassandra (2.1.14) unstable; urgency=medium
+
+  * New release
+
+ -- Jake Luciani   Fri, 15 Apr 2016 10:29:30 -0400
+
 cassandra (2.1.13) unstable; urgency=medium
 
   * New release 



[01/13] cassandra git commit: 2.2.6 version bump

2016-04-15 Thread jake
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-2.1 c1b1d3bcc -> 5c5c5b44c
  refs/heads/cassandra-2.2 69edeaa46 -> 37f63ecc5
  refs/heads/cassandra-3.0 6ad874509 -> 9f557ff7d
  refs/heads/trunk 5477083a2 -> 4e09d76e7


2.2.6 version bump


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/220e4f62
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/220e4f62
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/220e4f62

Branch: refs/heads/cassandra-2.2
Commit: 220e4f62db7fe14c4d6c0e499c52059f7ebc5a53
Parents: 69edeaa
Author: T Jake Luciani 
Authored: Fri Apr 15 10:00:04 2016 -0400
Committer: T Jake Luciani 
Committed: Fri Apr 15 10:00:04 2016 -0400

--
 NEWS.txt | 7 +++
 build.xml| 2 +-
 debian/changelog | 6 ++
 3 files changed, 14 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/220e4f62/NEWS.txt
--
diff --git a/NEWS.txt b/NEWS.txt
index 33d97ac..e8f4e66 100644
--- a/NEWS.txt
+++ b/NEWS.txt
@@ -13,6 +13,13 @@ restore snapshots created with the previous major version 
using the
 'sstableloader' tool. You can upgrade the file format of your snapshots
 using the provided 'sstableupgrade' tool.
 
+2.2.6
+=
+
+Upgrading
+-
+- Nothing specific to this release, but please see 2.2 if you are upgrading
+  from a previous version.
 
 2.2.5
 =

http://git-wip-us.apache.org/repos/asf/cassandra/blob/220e4f62/build.xml
--
diff --git a/build.xml b/build.xml
index 361a057..865108e 100644
--- a/build.xml
+++ b/build.xml
@@ -25,7 +25,7 @@
 
 
 
-
+
 
 
 http://git-wip-us.apache.org/repos/asf?p=cassandra.git;a=tree"/>

http://git-wip-us.apache.org/repos/asf/cassandra/blob/220e4f62/debian/changelog
--
diff --git a/debian/changelog b/debian/changelog
index 3abe349..604b810 100644
--- a/debian/changelog
+++ b/debian/changelog
@@ -1,3 +1,9 @@
+cassandra (2.2.6) unstable; urgency=medium
+
+  * 
+
+ -- Jake Luciani   Fri, 15 Apr 2016 09:47:38 -0400
+
 cassandra (2.2.5) unstable; urgency=medium
 
   * New release 



[11/13] cassandra git commit: Merge branch 'cassandra-2.2' into cassandra-3.0

2016-04-15 Thread jake
Merge branch 'cassandra-2.2' into cassandra-3.0


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/9f557ff7
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/9f557ff7
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/9f557ff7

Branch: refs/heads/cassandra-3.0
Commit: 9f557ff7db631ecd81c5227eace7d0dcca28e326
Parents: 6ad8745 37f63ec
Author: T Jake Luciani 
Authored: Fri Apr 15 10:35:11 2016 -0400
Committer: T Jake Luciani 
Committed: Fri Apr 15 10:35:11 2016 -0400

--

--




[09/13] cassandra git commit: Merge branch 'cassandra-2.1' into cassandra-2.2

2016-04-15 Thread jake
Merge branch 'cassandra-2.1' into cassandra-2.2


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/37f63ecc
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/37f63ecc
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/37f63ecc

Branch: refs/heads/cassandra-3.0
Commit: 37f63ecc5d3b36fc115fd7ae98e4fc1f4bc2d1d6
Parents: 220e4f6 5c5c5b4
Author: T Jake Luciani 
Authored: Fri Apr 15 10:34:48 2016 -0400
Committer: T Jake Luciani 
Committed: Fri Apr 15 10:34:48 2016 -0400

--

--




[jira] [Updated] (CASSANDRA-11548) Anticompaction not removing old sstables

2016-04-15 Thread Marcus Eriksson (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11548?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marcus Eriksson updated CASSANDRA-11548:

Reviewer: Paulo Motta  (was: Marcus Eriksson)

> Anticompaction not removing old sstables
> 
>
> Key: CASSANDRA-11548
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11548
> Project: Cassandra
>  Issue Type: Bug
> Environment: 2.1.13
>Reporter: Ruoran Wang
> Attachments: 0001-cassandra-2.1.13-potential-fix.patch
>
>
> 1. 12/29/15 https://issues.apache.org/jira/browse/CASSANDRA-10831
> Moved markCompactedSSTablesReplaced out of the loop ```for (SSTableReader 
> sstable : repairedSSTables)```
> 2. 1/18/16 https://issues.apache.org/jira/browse/CASSANDRA-10829
> Added unmarkCompacting into the loop. ```for (SSTableReader sstable : 
> repairedSSTables)```
> I think the effect of the changes above might be that 
> markCompactedSSTablesReplaced fails on this assertion in 
> DataTracker.java:
> {noformat}
>assert newSSTables.size() + newShadowed.size() == newSSTablesSize :
> String.format("Expecting new size of %d, got %d while 
> replacing %s by %s in %s",
>   newSSTablesSize, newSSTables.size() + 
> newShadowed.size(), oldSSTables, replacements, this);
> {noformat}
> Since CASSANDRA-10831 moved it out of the loop, this AssertionError won't be 
> caught, leaving the old sstables not removed. (This might then cause a 
> row-out-of-order error during incremental repair if there are un-repaired L1 
> sstables.)
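The size invariant quoted in the report can be demonstrated in isolation. This is a hypothetical sketch, not DataTracker's actual code: a live sstable set, a replace operation, and the same "Expecting new size" check. Replacing an sstable that is no longer in the live set (as could happen when the steps above unmark/remove it before the replace) trips the invariant.

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;

// Hypothetical sketch of the DataTracker size invariant (illustrative names):
// after replacing oldSSTables with replacements, the live set must change by
// exactly |replacements| - |oldSSTables|.
public class ReplaceInvariantSketch {
    static Set<String> live = new HashSet<>(Arrays.asList("a", "b", "c"));

    static void replace(Set<String> oldSSTables, Set<String> replacements) {
        int expected = live.size() - oldSSTables.size() + replacements.size();
        live.removeAll(oldSSTables);
        live.addAll(replacements);
        if (live.size() != expected)  // mirrors the quoted assert
            throw new AssertionError(String.format(
                    "Expecting new size of %d, got %d", expected, live.size()));
    }

    public static void main(String[] args) {
        // Normal anticompaction replace: one old sstable swapped for one new.
        replace(new HashSet<>(Arrays.asList("a")),
                new HashSet<>(Arrays.asList("a-anticompacted")));
        System.out.println(live.size()); // still 3

        // "a" is no longer live, so removeAll removes nothing and the set
        // ends one element too large: the invariant fires.
        try {
            replace(new HashSet<>(Arrays.asList("a")),
                    new HashSet<>(Arrays.asList("a2")));
        } catch (AssertionError e) {
            System.out.println("invariant violated: " + e.getMessage());
        }
    }
}
```

If that error is thrown outside a path that handles it, as the report suggests, the old sstables simply stay on disk unremoved.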





[jira] [Updated] (CASSANDRA-9935) Repair fails with RuntimeException

2016-04-15 Thread Paulo Motta (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-9935?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Paulo Motta updated CASSANDRA-9935:
---
Assignee: Paulo Motta

> Repair fails with RuntimeException
> --
>
> Key: CASSANDRA-9935
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9935
> Project: Cassandra
>  Issue Type: Bug
> Environment: C* 2.1.8, Debian Wheezy
>Reporter: mlowicki
>Assignee: Paulo Motta
> Fix For: 2.1.x
>
> Attachments: 9935.patch, db1.sync.lati.osa.cassandra.log, 
> db5.sync.lati.osa.cassandra.log, system.log.10.210.3.117, 
> system.log.10.210.3.221, system.log.10.210.3.230
>
>
> We had problems with slow repair in 2.1.7 (CASSANDRA-9702) but after upgrade 
> to 2.1.8 it started to work faster but now it fails with:
> {code}
> ...
> [2015-07-29 20:44:03,956] Repair session 23a811b0-3632-11e5-a93e-4963524a8bde 
> for range (-5474076923322749342,-5468600594078911162] finished
> [2015-07-29 20:44:03,957] Repair session 336f8740-3632-11e5-a93e-4963524a8bde 
> for range (-8631877858109464676,-8624040066373718932] finished
> [2015-07-29 20:44:03,957] Repair session 4ccd8430-3632-11e5-a93e-4963524a8bde 
> for range (-5372806541854279315,-5369354119480076785] finished
> [2015-07-29 20:44:03,957] Repair session 59f129f0-3632-11e5-a93e-4963524a8bde 
> for range (8166489034383821955,8168408930184216281] finished
> [2015-07-29 20:44:03,957] Repair session 6ae7a9a0-3632-11e5-a93e-4963524a8bde 
> for range (6084602890817326921,6088328703025510057] finished
> [2015-07-29 20:44:03,957] Repair session 8938e4a0-3632-11e5-a93e-4963524a8bde 
> for range (-781874602493000830,-781745173070807746] finished
> [2015-07-29 20:44:03,957] Repair command #4 finished
> error: nodetool failed, check server logs
> -- StackTrace --
> java.lang.RuntimeException: nodetool failed, check server logs
> at 
> org.apache.cassandra.tools.NodeTool$NodeToolCmd.run(NodeTool.java:290)
> at org.apache.cassandra.tools.NodeTool.main(NodeTool.java:202)
> {code}
> After running:
> {code}
> nodetool repair --partitioner-range --parallel --in-local-dc sync
> {code}
> Last records in logs regarding repair are:
> {code}
> INFO  [Thread-173887] 2015-07-29 20:44:03,956 StorageService.java:2952 - 
> Repair session 09ff9e40-3632-11e5-a93e-4963524a8bde for range 
> (-7695808664784761779,-7693529816291585568] finished
> INFO  [Thread-173887] 2015-07-29 20:44:03,956 StorageService.java:2952 - 
> Repair session 17d8d860-3632-11e5-a93e-4963524a8bde for range 
> (806371695398849,8065203836608925992] finished
> INFO  [Thread-173887] 2015-07-29 20:44:03,956 StorageService.java:2952 - 
> Repair session 23a811b0-3632-11e5-a93e-4963524a8bde for range 
> (-5474076923322749342,-5468600594078911162] finished
> INFO  [Thread-173887] 2015-07-29 20:44:03,956 StorageService.java:2952 - 
> Repair session 336f8740-3632-11e5-a93e-4963524a8bde for range 
> (-8631877858109464676,-8624040066373718932] finished
> INFO  [Thread-173887] 2015-07-29 20:44:03,957 StorageService.java:2952 - 
> Repair session 4ccd8430-3632-11e5-a93e-4963524a8bde for range 
> (-5372806541854279315,-5369354119480076785] finished
> INFO  [Thread-173887] 2015-07-29 20:44:03,957 StorageService.java:2952 - 
> Repair session 59f129f0-3632-11e5-a93e-4963524a8bde for range 
> (8166489034383821955,8168408930184216281] finished
> INFO  [Thread-173887] 2015-07-29 20:44:03,957 StorageService.java:2952 - 
> Repair session 6ae7a9a0-3632-11e5-a93e-4963524a8bde for range 
> (6084602890817326921,6088328703025510057] finished
> INFO  [Thread-173887] 2015-07-29 20:44:03,957 StorageService.java:2952 - 
> Repair session 8938e4a0-3632-11e5-a93e-4963524a8bde for range 
> (-781874602493000830,-781745173070807746] finished
> {code}
> but a bit above I see (at least two times in attached log):
> {code}
> ERROR [Thread-173887] 2015-07-29 20:44:03,853 StorageService.java:2959 - 
> Repair session 1b07ea50-3608-11e5-a93e-4963524a8bde for range 
> (5765414319217852786,5781018794516851576] failed with error 
> org.apache.cassandra.exceptions.RepairException: [repair 
> #1b07ea50-3608-11e5-a93e-4963524a8bde on sync/entity_by_id2, 
> (5765414319217852786,5781018794516851576]] Validation failed in /10.195.15.162
> java.util.concurrent.ExecutionException: java.lang.RuntimeException: 
> org.apache.cassandra.exceptions.RepairException: [repair 
> #1b07ea50-3608-11e5-a93e-4963524a8bde on sync/entity_by_id2, 
> (5765414319217852786,5781018794516851576]] Validation failed in /10.195.15.162
> at java.util.concurrent.FutureTask.report(FutureTask.java:122) 
> [na:1.7.0_80]
> at java.util.concurrent.FutureTask.get(FutureTask.java:188) 
> [na:1.7.0_80]
> at 
> org.apache.cassandra.service.StorageService$4.runMayThrow(StorageService.java:2950)
>  ~[apache-cassandra-2.1.8.jar:2.1.8]
> at

[jira] [Updated] (CASSANDRA-9935) Repair fails with RuntimeException

2016-04-15 Thread Yuki Morishita (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-9935?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yuki Morishita updated CASSANDRA-9935:
--
Assignee: (was: Yuki Morishita)

> Repair fails with RuntimeException
> --
>
> Key: CASSANDRA-9935
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9935
> Project: Cassandra
>  Issue Type: Bug
> Environment: C* 2.1.8, Debian Wheezy
>Reporter: mlowicki
> Fix For: 2.1.x
>
> Attachments: 9935.patch, db1.sync.lati.osa.cassandra.log, 
> db5.sync.lati.osa.cassandra.log, system.log.10.210.3.117, 
> system.log.10.210.3.221, system.log.10.210.3.230
>
>
> We had problems with slow repair in 2.1.7 (CASSANDRA-9702) but after upgrade 
> to 2.1.8 it started to work faster but now it fails with:
> {code}
> ...
> [2015-07-29 20:44:03,956] Repair session 23a811b0-3632-11e5-a93e-4963524a8bde 
> for range (-5474076923322749342,-5468600594078911162] finished
> [2015-07-29 20:44:03,957] Repair session 336f8740-3632-11e5-a93e-4963524a8bde 
> for range (-8631877858109464676,-8624040066373718932] finished
> [2015-07-29 20:44:03,957] Repair session 4ccd8430-3632-11e5-a93e-4963524a8bde 
> for range (-5372806541854279315,-5369354119480076785] finished
> [2015-07-29 20:44:03,957] Repair session 59f129f0-3632-11e5-a93e-4963524a8bde 
> for range (8166489034383821955,8168408930184216281] finished
> [2015-07-29 20:44:03,957] Repair session 6ae7a9a0-3632-11e5-a93e-4963524a8bde 
> for range (6084602890817326921,6088328703025510057] finished
> [2015-07-29 20:44:03,957] Repair session 8938e4a0-3632-11e5-a93e-4963524a8bde 
> for range (-781874602493000830,-781745173070807746] finished
> [2015-07-29 20:44:03,957] Repair command #4 finished
> error: nodetool failed, check server logs
> -- StackTrace --
> java.lang.RuntimeException: nodetool failed, check server logs
> at 
> org.apache.cassandra.tools.NodeTool$NodeToolCmd.run(NodeTool.java:290)
> at org.apache.cassandra.tools.NodeTool.main(NodeTool.java:202)
> {code}
> After running:
> {code}
> nodetool repair --partitioner-range --parallel --in-local-dc sync
> {code}
> Last records in logs regarding repair are:
> {code}
> INFO  [Thread-173887] 2015-07-29 20:44:03,956 StorageService.java:2952 - 
> Repair session 09ff9e40-3632-11e5-a93e-4963524a8bde for range 
> (-7695808664784761779,-7693529816291585568] finished
> INFO  [Thread-173887] 2015-07-29 20:44:03,956 StorageService.java:2952 - 
> Repair session 17d8d860-3632-11e5-a93e-4963524a8bde for range 
> (806371695398849,8065203836608925992] finished
> INFO  [Thread-173887] 2015-07-29 20:44:03,956 StorageService.java:2952 - 
> Repair session 23a811b0-3632-11e5-a93e-4963524a8bde for range 
> (-5474076923322749342,-5468600594078911162] finished
> INFO  [Thread-173887] 2015-07-29 20:44:03,956 StorageService.java:2952 - 
> Repair session 336f8740-3632-11e5-a93e-4963524a8bde for range 
> (-8631877858109464676,-8624040066373718932] finished
> INFO  [Thread-173887] 2015-07-29 20:44:03,957 StorageService.java:2952 - 
> Repair session 4ccd8430-3632-11e5-a93e-4963524a8bde for range 
> (-5372806541854279315,-5369354119480076785] finished
> INFO  [Thread-173887] 2015-07-29 20:44:03,957 StorageService.java:2952 - 
> Repair session 59f129f0-3632-11e5-a93e-4963524a8bde for range 
> (8166489034383821955,8168408930184216281] finished
> INFO  [Thread-173887] 2015-07-29 20:44:03,957 StorageService.java:2952 - 
> Repair session 6ae7a9a0-3632-11e5-a93e-4963524a8bde for range 
> (6084602890817326921,6088328703025510057] finished
> INFO  [Thread-173887] 2015-07-29 20:44:03,957 StorageService.java:2952 - 
> Repair session 8938e4a0-3632-11e5-a93e-4963524a8bde for range 
> (-781874602493000830,-781745173070807746] finished
> {code}
> but a bit above I see (at least two times in attached log):
> {code}
> ERROR [Thread-173887] 2015-07-29 20:44:03,853 StorageService.java:2959 - 
> Repair session 1b07ea50-3608-11e5-a93e-4963524a8bde for range 
> (5765414319217852786,5781018794516851576] failed with error 
> org.apache.cassandra.exceptions.RepairException: [repair 
> #1b07ea50-3608-11e5-a93e-4963524a8bde on sync/entity_by_id2, 
> (5765414319217852786,5781018794516851576]] Validation failed in /10.195.15.162
> java.util.concurrent.ExecutionException: java.lang.RuntimeException: 
> org.apache.cassandra.exceptions.RepairException: [repair 
> #1b07ea50-3608-11e5-a93e-4963524a8bde on sync/entity_by_id2, 
> (5765414319217852786,5781018794516851576]] Validation failed in /10.195.15.162
> at java.util.concurrent.FutureTask.report(FutureTask.java:122) 
> [na:1.7.0_80]
> at java.util.concurrent.FutureTask.get(FutureTask.java:188) 
> [na:1.7.0_80]
> at 
> org.apache.cassandra.service.StorageService$4.runMayThrow(StorageService.java:2950)
>  ~[apache-cassandra-2.1.8.jar:2.1.8]
> at 
> org.apache.

[jira] [Commented] (CASSANDRA-11522) batch_size_fail_threshold_in_kb shouldn't only apply to batch

2016-04-15 Thread Giampaolo (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11522?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15242986#comment-15242986
 ] 

Giampaolo commented on CASSANDRA-11522:
---

I forgot about {{max_mutation_size_kb}}, maybe because it's not a commented 
property in {{cassandra.yaml}} but is somehow "hidden" in the 
{{commitlog_segment_size_in_mb}} description.

Anyway, I agree with you: a soft limit won't help. Thanks for the 
clarification.

> batch_size_fail_threshold_in_kb shouldn't only apply to batch
> -
>
> Key: CASSANDRA-11522
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11522
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Sylvain Lebresne
>Assignee: Giampaolo
>Priority: Minor
>  Labels: lhf
>
> I can buy that C* is not good at dealing with large (in bytes) inserts and 
> that it makes sense to provide user-configurable protection against inserts 
> larger than a certain size, but it doesn't make sense to limit this to 
> batches. It's absolutely possible to insert a single very large row, and 
> internally a batch with a single statement is exactly the same as a similar 
> single insert, so rejecting the former and not the latter is confusing and, 
> well, wrong.
> Note that I get that batches are more likely to get big and that's where the 
> protection is most often useful, but limiting the option to batches is still 
> less useful (it's a hole in the protection) and it's going to confuse users 
> into thinking that batches to a single partition are different from single 
> inserts.
> Of course, that also means that we should rename the option to 
> {{write_size_fail_threshold_in_kb}}. Which means we probably want to add this 
> new option and just deprecate {{batch_size_fail_threshold_in_kb}} for now 
> (with removal in 4.0).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-11206) Support large partitions on the 3.0 sstable format

2016-04-15 Thread T Jake Luciani (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11206?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15242969#comment-15242969
 ] 

T Jake Luciani commented on CASSANDRA-11206:


I think it would make sense to expose a metric for which kind of index cache 
hit we get: Shallow or Regular.

> Support large partitions on the 3.0 sstable format
> --
>
> Key: CASSANDRA-11206
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11206
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Jonathan Ellis
>Assignee: Robert Stupp
> Fix For: 3.x
>
> Attachments: 11206-gc.png, trunk-gc.png
>
>
> Cassandra saves a sample of IndexInfo objects that store the offset within 
> each partition of every 64KB (by default) range of rows.  To find a row, we 
> binary search this sample, then scan the partition of the appropriate range.
> The problem is that this scales poorly as partitions grow: on a cache miss, 
> we deserialize the entire set of IndexInfo objects, which both creates a lot 
> of GC overhead (as noted in CASSANDRA-9754) and incurs non-negligible I/O 
> (relative to reading a single 64KB row range) as partitions get truly large.
> We introduced an "offset map" in CASSANDRA-10314 that allows us to perform 
> the IndexInfo bsearch while only deserializing IndexInfo that we need to 
> compare against, i.e. log(N) deserializations.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (CASSANDRA-11421) Eliminate allocations of byte array for UTF8 String serializations

2016-04-15 Thread T Jake Luciani (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11421?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

T Jake Luciani resolved CASSANDRA-11421.

Resolution: Fixed

Resolved in CASSANDRA-11428

> Eliminate allocations of byte array for UTF8 String serializations
> --
>
> Key: CASSANDRA-11421
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11421
> Project: Cassandra
>  Issue Type: Sub-task
>  Components: Core
>Reporter: Nitsan Wakart
>Assignee: Nitsan Wakart
>
> When profiling a read workload (YCSB workload c) on Cassandra 3.2.1, I 
> noticed that a large part of the allocation profile was generated by 
> String.getBytes() calls in CBUtil::writeString.
> I have fixed up the code to use a thread-local cached ByteBuffer and 
> CharsetEncoder to eliminate the allocations. This results in an improved 
> allocation profile and a mild improvement in performance.
> The fix is available here:
> https://github.com/nitsanw/cassandra/tree/fix-write-string-allocation
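The technique described above can be sketched like this. This is not the actual CBUtil patch (class and method names here are invented); it only illustrates reusing a thread-local CharsetEncoder and ByteBuffer so that encoding a String no longer allocates a fresh byte[] per call:

```java
// Illustrative sketch: per-thread cached CharsetEncoder + ByteBuffer for UTF-8
// encoding, avoiding the byte[] allocation that String.getBytes() performs.
import java.nio.ByteBuffer;
import java.nio.CharBuffer;
import java.nio.charset.CharsetEncoder;
import java.nio.charset.StandardCharsets;

public class CachedUtf8Encoder
{
    private static final ThreadLocal<CharsetEncoder> ENCODER =
        ThreadLocal.withInitial(StandardCharsets.UTF_8::newEncoder);
    private static final ThreadLocal<ByteBuffer> BUFFER =
        ThreadLocal.withInitial(() -> ByteBuffer.allocate(1024));

    public static ByteBuffer encode(String s)
    {
        CharsetEncoder encoder = ENCODER.get();
        encoder.reset(); // encoders are stateful, reset before each use
        ByteBuffer out = BUFFER.get();
        // Grow the cached buffer only when a string exceeds its capacity.
        int maxBytes = (int) Math.ceil(encoder.maxBytesPerChar() * s.length());
        if (out.capacity() < maxBytes)
        {
            out = ByteBuffer.allocate(maxBytes);
            BUFFER.set(out);
        }
        out.clear();
        encoder.encode(CharBuffer.wrap(s), out, true);
        encoder.flush(out);
        out.flip();
        return out; // only valid until the next encode() call on this thread
    }

    public static void main(String[] args)
    {
        System.out.println(CachedUtf8Encoder.encode("hello").remaining()); // 5
    }
}
```

The trade-off is the usual one for thread-local caches: the returned buffer is shared per thread, so callers must copy or consume it before calling encode() again.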



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-11422) Eliminate temporary object[] allocations in ColumnDefinition::hashCode

2016-04-15 Thread T Jake Luciani (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11422?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

T Jake Luciani updated CASSANDRA-11422:
---
Resolution: Fixed
Status: Resolved  (was: Patch Available)

Resolved in CASSANDRA-11428

> Eliminate temporary object[] allocations in ColumnDefinition::hashCode
> --
>
> Key: CASSANDRA-11422
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11422
> Project: Cassandra
>  Issue Type: Sub-task
>  Components: Core
>Reporter: Nitsan Wakart
>Assignee: Nitsan Wakart
>
> ColumnDefinition::hashCode currently calls Objects.hashCode(Object...).
> This triggers the allocation of a short-lived Object[] which is not 
> eliminated by escape analysis. I have implemented a fix by inlining the 
> hashCode logic and also added a caching hashCode field. This improved 
> performance on the read workload.
> Fix is available here:
> https://github.com/nitsanw/cassandra/tree/objects-hashcode-fix



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (CASSANDRA-11428) Eliminate Allocations

2016-04-15 Thread T Jake Luciani (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11428?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

T Jake Luciani resolved CASSANDRA-11428.

Resolution: Fixed

committed {{04ba38bdea18d77d2ef006c6571ba8abe7c4dee8}} 

> Eliminate Allocations
> -
>
> Key: CASSANDRA-11428
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11428
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: T Jake Luciani
>Assignee: Nitsan Wakart
>Priority: Minor
> Fix For: 3.0.x
>
> Attachments: benchmarks.tar.gz, pom.xml
>
>
> Linking relevant issues under this master ticket.  For small changes I'd like 
> to test and commit these in bulk 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-11423) Eliminate Pair allocations for default DataType conversions

2016-04-15 Thread T Jake Luciani (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11423?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

T Jake Luciani updated CASSANDRA-11423:
---
Resolution: Fixed
Status: Resolved  (was: Patch Available)

Resolved in CASSANDRA-11428

> Eliminate Pair allocations for default DataType conversions
> ---
>
> Key: CASSANDRA-11423
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11423
> Project: Cassandra
>  Issue Type: Sub-task
>  Components: Core
>Reporter: Nitsan Wakart
>Assignee: Nitsan Wakart
>
> The method DataType::fromType returns a Pair. The common path through the 
> method is:
> {
>DataType dt = dataTypeMap.get(type);
>return new Pair(dt, null);
> }
> This results in many redundant allocations and is easy to fix by adding a 
> field to cache this result per DataType and replacing the last line 
> with:
>   return dt.pair;
> see fix:
> https://github.com/nitsanw/cassandra/tree/data-type-dafault-pair
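The caching idea can be sketched as below. This is a simplified stand-in, not Cassandra's actual DataType or Pair classes: each enum constant precomputes its default (type, null) pair once, so the hot path returns a cached object instead of allocating a new pair per call.

```java
// Simplified sketch of caching the default pair per enum constant
// (java.util's SimpleImmutableEntry stands in for Cassandra's Pair here).
import java.util.AbstractMap.SimpleImmutableEntry;
import java.util.Map;

public enum DataType
{
    INT, TEXT, BLOB;

    // Built once at class-initialization time, never re-allocated.
    private final Map.Entry<DataType, Object> pair = new SimpleImmutableEntry<>(this, null);

    public Map.Entry<DataType, Object> defaultPair()
    {
        return pair; // hot path: no allocation
    }

    public static void main(String[] args)
    {
        // Same instance every time, unlike `new Pair(dt, null)` per call.
        System.out.println(INT.defaultPair() == INT.defaultPair());
    }
}
```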



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


cassandra git commit: Eliminate repeated allocation of Pair for default case Replace Objects.hashcode with handrolled to avoid allocation Eliminate allocations of byte array for UTF8 String serializat

2016-04-15 Thread jake
Repository: cassandra
Updated Branches:
  refs/heads/trunk a0d070764 -> 5477083a2


Eliminate repeated allocation of Pair for default case
Replace Objects.hashcode with handrolled to avoid allocation
Eliminate allocations of byte array for UTF8 String serializations

Patch by Nitsan Wakart; reviewed by tjake for CASSANDRA-11428


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/5477083a
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/5477083a
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/5477083a

Branch: refs/heads/trunk
Commit: 5477083a2f087758034703674c49e8012f295c42
Parents: a0d0707
Author: nitsanw 
Authored: Thu Mar 24 10:54:14 2016 +0200
Committer: T Jake Luciani 
Committed: Fri Apr 15 09:35:04 2016 -0400

--
 CHANGES.txt |  1 +
 .../cassandra/config/ColumnDefinition.java  | 18 -
 .../org/apache/cassandra/transport/CBUtil.java  | 78 
 .../apache/cassandra/transport/DataType.java|  4 +-
 4 files changed, 66 insertions(+), 35 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/5477083a/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 43d1c3c..7db486d 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 3.6
+ * Eliminate allocations in R/W path (CASSANDRA-11421)
  * Update Netty to 4.0.36 (CASSANDRA-11567)
  * Fix PER PARTITION LIMIT for queries requiring post-query ordering 
(CASSANDRA-11556)
  * Allow instantiation of UDTs and tuples in UDFs (CASSANDRA-10818)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/5477083a/src/java/org/apache/cassandra/config/ColumnDefinition.java
--
diff --git a/src/java/org/apache/cassandra/config/ColumnDefinition.java 
b/src/java/org/apache/cassandra/config/ColumnDefinition.java
index 2c2cbb7..a18ed3f 100644
--- a/src/java/org/apache/cassandra/config/ColumnDefinition.java
+++ b/src/java/org/apache/cassandra/config/ColumnDefinition.java
@@ -81,6 +81,8 @@ public class ColumnDefinition extends ColumnSpecification implements Comparable<
 private final Comparator asymmetricCellPathComparator;
 private final Comparator cellComparator;
 
+private int hash;
+
 /**
  * These objects are compared frequently, so we encode several of their 
comparison components
  * into a single long value so that this can be done efficiently
@@ -262,9 +264,21 @@ public class ColumnDefinition extends ColumnSpecification implements Comparable<
 @Override
 public int hashCode()
 {
-return Objects.hashCode(ksName, cfName, name, type, kind, position);
+// This achieves the same as Objects.hashCode, but avoids the object array
+// allocation, which features significantly in the allocation profile, and
+// caches the result.
+int result = hash;
+if (result == 0)
+{
+result = 31 + (ksName == null ? 0 : ksName.hashCode());
+result = 31 * result + (cfName == null ? 0 : cfName.hashCode());
+result = 31 * result + (name == null ? 0 : name.hashCode());
+result = 31 * result + (type == null ? 0 : type.hashCode());
+result = 31 * result + (kind == null ? 0 : kind.hashCode());
+result = 31 * result + position;
+hash = result;
+}
+return result;
 }
-
 @Override
 public String toString()
 {

http://git-wip-us.apache.org/repos/asf/cassandra/blob/5477083a/src/java/org/apache/cassandra/transport/CBUtil.java
--
diff --git a/src/java/org/apache/cassandra/transport/CBUtil.java 
b/src/java/org/apache/cassandra/transport/CBUtil.java
index 800a9a8..43f4bbd 100644
--- a/src/java/org/apache/cassandra/transport/CBUtil.java
+++ b/src/java/org/apache/cassandra/transport/CBUtil.java
@@ -33,15 +33,19 @@ import java.util.List;
 import java.util.Map;
 import java.util.UUID;
 
-import io.netty.buffer.*;
+import io.netty.buffer.ByteBuf;
+import io.netty.buffer.ByteBufAllocator;
+import io.netty.buffer.ByteBufUtil;
+import io.netty.buffer.PooledByteBufAllocator;
+import io.netty.buffer.UnpooledByteBufAllocator;
 import io.netty.util.CharsetUtil;
-
+import io.netty.util.concurrent.FastThreadLocal;
 import org.apache.cassandra.config.Config;
 import org.apache.cassandra.db.ConsistencyLevel;
 import org.apache.cassandra.db.TypeSizes;
+import org.apache.cassandra.utils.ByteBufferUtil;
 import org.apache.cassandra.utils.Pair;
 import org.apache.cassandra.utils.UUIDGen;
-import org.apache.cassandra.utils.ByteBufferUtil;
 
 /**
  * ByteBuf utility methods.
@@ -55,9 +59,7 @@ public abstract class CBUtil
  

[jira] [Commented] (CASSANDRA-11522) batch_size_fail_threshold_in_kb shouldn't only apply to batch

2016-04-15 Thread Paulo Motta (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11522?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15242946#comment-15242946
 ] 

Paulo Motta commented on CASSANDRA-11522:
-

From my understanding, the main concern here was to make the behavior of 
{{batch_size_fail_threshold_in_kb}} consistent between single batch inserts 
and ordinary inserts, but that was already fixed in CASSANDRA-10876; Sylvain 
will probably be able to clarify best.

We already have {{max_mutation_size_kb}} as a hard limit on mutation size, so 
IMO we shouldn't add another artificial limit or warning: a soft limit would be 
hard to define (since it varies with hardware and load), so it could 
potentially confuse more than help. For batches, on the other hand, a few 
kilobytes of multi-partition batches can already be catastrophic, which is why 
the warn and fail thresholds are important in that case.

> batch_size_fail_threshold_in_kb shouldn't only apply to batch
> -
>
> Key: CASSANDRA-11522
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11522
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Sylvain Lebresne
>Assignee: Giampaolo
>Priority: Minor
>  Labels: lhf
>
> I can buy that C* is not good at dealing with large (in bytes) inserts and 
> that it makes sense to provide user-configurable protection against inserts 
> larger than a certain size, but it doesn't make sense to limit this to 
> batches. It's absolutely possible to insert a single very large row, and 
> internally a batch with a single statement is exactly the same as a similar 
> single insert, so rejecting the former and not the latter is confusing and, 
> well, wrong.
> Note that I get that batches are more likely to get big and that's where the 
> protection is most often useful, but limiting the option to batches is still 
> less useful (it's a hole in the protection) and it's going to confuse users 
> into thinking that batches to a single partition are different from single 
> inserts.
> Of course, that also means that we should rename the option to 
> {{write_size_fail_threshold_in_kb}}. Which means we probably want to add this 
> new option and just deprecate {{batch_size_fail_threshold_in_kb}} for now 
> (with removal in 4.0).





[jira] [Commented] (CASSANDRA-10756) Timeout failures in NativeTransportService.testConcurrentDestroys unit test

2016-04-15 Thread Alex Petrov (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10756?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15242937#comment-15242937
 ] 

Alex Petrov commented on CASSANDRA-10756:
-

I also hadn't realised that [~spo...@gmail.com] was the original author of the 
{{NativeTransportService}}.

Just in case, here are the patch and the re-run dtests:

|[trunk|https://github.com/ifesdjeen/cassandra/tree/10756-trunk]|[dtest|https://cassci.datastax.com/view/Dev/view/ifesdjeen/job/ifesdjeen-10756-trunk-testall/]|

I'll try running it a dozen times to make sure there's no race condition 
anymore. 

> Timeout failures in NativeTransportService.testConcurrentDestroys unit test
> ---
>
> Key: CASSANDRA-10756
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10756
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Joel Knighton
>Assignee: Alex Petrov
>
> History of test on trunk 
> [here|http://cassci.datastax.com/job/trunk_testall/lastCompletedBuild/testReport/org.apache.cassandra.service/NativeTransportServiceTest/testConcurrentDestroys/history/].
> I've seen these failures across 3.0/trunk for a while. I ran the test looping 
> locally for a while and the timeout is fairly easy to reproduce. The timeout 
> appears to be an indefinite hang and not a timing issue.
> When the timeout occurs, the following stack trace is at the end of the logs 
> for the unit test.
> {code}
> ERROR [ForkJoinPool.commonPool-worker-1] 2015-11-22 21:30:53,635 Failed to 
> submit a listener notification task. Event loop shut down?
> java.util.concurrent.RejectedExecutionException: event executor terminated
>   at 
> io.netty.util.concurrent.SingleThreadEventExecutor.reject(SingleThreadEventExecutor.java:745)
>  ~[netty-all-4.0.23.Final.jar:4.0.23.Final]
>   at 
> io.netty.util.concurrent.SingleThreadEventExecutor.addTask(SingleThreadEventExecutor.java:322)
>  ~[netty-all-4.0.23.Final.jar:4.0.23.Final]
>   at 
> io.netty.util.concurrent.SingleThreadEventExecutor.execute(SingleThreadEventExecutor.java:728)
>  ~[netty-all-4.0.23.Final.jar:4.0.23.Final]
>   at 
> io.netty.util.concurrent.DefaultPromise.execute(DefaultPromise.java:671) 
> [netty-all-4.0.23.Final.jar:4.0.23.Final]
>   at 
> io.netty.util.concurrent.DefaultPromise.notifyLateListener(DefaultPromise.java:641)
>  [netty-all-4.0.23.Final.jar:4.0.23.Final]
>   at 
> io.netty.util.concurrent.DefaultPromise.addListener(DefaultPromise.java:138) 
> [netty-all-4.0.23.Final.jar:4.0.23.Final]
>   at 
> io.netty.channel.DefaultChannelPromise.addListener(DefaultChannelPromise.java:93)
>  [netty-all-4.0.23.Final.jar:4.0.23.Final]
>   at 
> io.netty.channel.DefaultChannelPromise.addListener(DefaultChannelPromise.java:28)
>  [netty-all-4.0.23.Final.jar:4.0.23.Final]
>   at 
> io.netty.channel.group.DefaultChannelGroupFuture.(DefaultChannelGroupFuture.java:116)
>  [netty-all-4.0.23.Final.jar:4.0.23.Final]
>   at 
> io.netty.channel.group.DefaultChannelGroup.close(DefaultChannelGroup.java:275)
>  [netty-all-4.0.23.Final.jar:4.0.23.Final]
>   at 
> io.netty.channel.group.DefaultChannelGroup.close(DefaultChannelGroup.java:167)
>  [netty-all-4.0.23.Final.jar:4.0.23.Final]
>   at 
> org.apache.cassandra.transport.Server$ConnectionTracker.closeAll(Server.java:277)
>  [main/:na]
>   at org.apache.cassandra.transport.Server.close(Server.java:180) 
> [main/:na]
>   at org.apache.cassandra.transport.Server.stop(Server.java:116) 
> [main/:na]
>   at java.util.Collections$SingletonSet.forEach(Collections.java:4767) 
> ~[na:1.8.0_60]
>   at 
> org.apache.cassandra.service.NativeTransportService.stop(NativeTransportService.java:136)
>  ~[main/:na]
>   at 
> org.apache.cassandra.service.NativeTransportService.destroy(NativeTransportService.java:144)
>  ~[main/:na]
>   at 
> org.apache.cassandra.service.NativeTransportServiceTest.lambda$withService$102(NativeTransportServiceTest.java:201)
>  ~[classes/:na]
>   at java.util.stream.IntPipeline$3$1.accept(IntPipeline.java:233) 
> ~[na:1.8.0_60]
>   at 
> java.util.stream.Streams$RangeIntSpliterator.forEachRemaining(Streams.java:110)
>  ~[na:1.8.0_60]
>   at java.util.Spliterator$OfInt.forEachRemaining(Spliterator.java:693) 
> ~[na:1.8.0_60]
>   at 
> java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:481) 
> ~[na:1.8.0_60]
>   at 
> java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:471) 
> ~[na:1.8.0_60]
>   at java.util.stream.ReduceOps$ReduceTask.doLeaf(ReduceOps.java:747) 
> ~[na:1.8.0_60]
>   at java.util.stream.ReduceOps$ReduceTask.doLeaf(ReduceOps.java:721) 
> ~[na:1.8.0_60]
>   at java.util.stream.AbstractTask.compute(AbstractTask.java:316) 
> ~[na:1.8.0_60]
>   at 
> java.util.concurrent.CountedCompl

[jira] [Commented] (CASSANDRA-11258) Repair scheduling - Resource locking API

2016-04-15 Thread Marcus Olsson (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11258?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15242932#comment-15242932
 ] 

Marcus Olsson commented on CASSANDRA-11258:
---

I've pushed a rebased patch where I addressed your two comments from GitHub 
(the comments got lost in the rebase, so should I avoid rebasing until review 
is done in the future?). I also added removal of the locked resource when the 
lock object is closed.

Regarding my previous comment, I'm starting to think it would be cleaner (at 
least for the code using the locks) to have the implementation work the way 
Java locks are used instead.

> Repair scheduling - Resource locking API
> 
>
> Key: CASSANDRA-11258
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11258
> Project: Cassandra
>  Issue Type: Sub-task
>Reporter: Marcus Olsson
>Assignee: Marcus Olsson
>Priority: Minor
>
> Create a resource locking API & implementation that is able to lock a 
> resource in a specified data center. It should handle priorities to avoid 
> node starvation.
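A minimal sketch of what such an API could look like (all names here are invented for illustration, and a real implementation would persist lock state in the target data center rather than in an in-memory map): lock attempts carry a priority, and a contender refuses to take the resource when a higher-priority node has announced interest, which prevents starvation.

```java
// Hypothetical sketch of a priority-aware resource locking API.
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public final class PriorityLockFactory
{
    public interface DistributedLock extends AutoCloseable
    {
        @Override
        void close(); // releases the resource when the lock object is closed
    }

    private final Map<String, String> held = new ConcurrentHashMap<>();     // resource -> holder node
    private final Map<String, Integer> waiting = new ConcurrentHashMap<>(); // resource -> highest announced priority

    /** Announce intent to lock; competing nodes yield to the highest announced priority. */
    public void announce(String resource, int priority)
    {
        waiting.merge(resource, priority, Math::max);
    }

    /** Returns a lock, or null if the resource is held or a higher priority was announced. */
    public DistributedLock tryLock(String resource, String node, int priority)
    {
        if (waiting.getOrDefault(resource, Integer.MIN_VALUE) > priority)
            return null; // starvation guard: defer to a higher-priority contender
        if (held.putIfAbsent(resource, node) != null)
            return null; // already locked by someone else
        waiting.remove(resource);
        return () -> held.remove(resource, node);
    }
}
```

Used in a try-with-resources block, a null return means "back off and retry later", while a non-null lock is released automatically on close.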





[jira] [Commented] (CASSANDRA-10805) Additional Compaction Logging

2016-04-15 Thread Carl Yeksigian (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10805?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15242914#comment-15242914
 ] 

Carl Yeksigian commented on CASSANDRA-10805:


I pushed a new update that includes a fix, and then also pushed a test to make 
sure that we activate the compaction logger for all column families. 
[utest|http://cassci.datastax.com/job/carlyeks-ticket-10805-logall-testall/] 
[dtest|http://cassci.datastax.com/job/carlyeks-ticket-10805-logall-dtest/]

I need to dig into the dtest results to figure out whether they are being 
caused by the new logging.

> Additional Compaction Logging
> -
>
> Key: CASSANDRA-10805
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10805
> Project: Cassandra
>  Issue Type: New Feature
>  Components: Compaction, Observability
>Reporter: Carl Yeksigian
>Assignee: Carl Yeksigian
>Priority: Minor
> Fix For: 3.x
>
>
> Currently, viewing the results of past compactions requires parsing the log 
> and looking at the compaction history system table, which doesn't have 
> information about, for example, flushed sstables not previously compacted.
> This is a proposal to extend the information captured for compaction. 
> Initially, this would be done through a JMX call, but if it proves to be 
> useful and not much overhead, it might be a feature that could be enabled for 
> the compaction strategy all the time.
> Initial log information would include:
> - The compaction strategy type controlling each column family
> - The set of sstables included in each compaction strategy
> - Information about flushes and compactions, including times and all involved 
> sstables
> - Information about sstables, including generation, size, and tokens
> - Any additional metadata the strategy wishes to add to a compaction or an 
> sstable, like the level of an sstable or the type of compaction being 
> performed





[jira] [Issue Comment Deleted] (CASSANDRA-11582) Slow table creation

2016-04-15 Thread Chris Lohfink (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11582?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Lohfink updated CASSANDRA-11582:
--
Comment: was deleted

(was: Any chance you're hitting a driver issue with a node down (includes 
cqlsh)? https://datastax-oss.atlassian.net/browse/PYTHON-531)

> Slow table creation
> ---
>
> Key: CASSANDRA-11582
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11582
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
> Environment: C* 3.5
> OpenSUSE 42.1
> JDK 1.8u66
>Reporter: Jaroslav Kamenik
>
> In recent versions of Cassandra we have experienced much slower creation of 
> tables. It happens even on a single PC, where there is no need to broadcast 
> schema changes etc. It works, but it is a little annoying to wait when you 
> have to recreate lots of tables... 





[jira] [Commented] (CASSANDRA-11582) Slow table creation

2016-04-15 Thread Chris Lohfink (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11582?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15242902#comment-15242902
 ] 

Chris Lohfink commented on CASSANDRA-11582:
---

Any chance you're hitting a driver issue with a node down (includes cqlsh)? 
https://datastax-oss.atlassian.net/browse/PYTHON-531

> Slow table creation
> ---
>
> Key: CASSANDRA-11582
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11582
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
> Environment: C* 3.5
> OpenSUSE 42.1
> JDK 1.8u66
>Reporter: Jaroslav Kamenik
>
> In recent versions of Cassandra we have experienced much slower creation of 
> tables. It happens even on a single PC, where there is no need to broadcast 
> schema changes etc. It works, but it is a little annoying to wait when you 
> have to recreate lots of tables... 





[jira] [Commented] (CASSANDRA-10756) Timeout failures in NativeTransportService.testConcurrentDestroys unit test

2016-04-15 Thread Alex Petrov (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10756?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15242893#comment-15242893
 ] 

Alex Petrov commented on CASSANDRA-10756:
-

In short, here is what is happening: when {{Server::close()}} is called, it 
calls out to {{ConnectionTracker::closeAll}}, which in turn calls 
{{DefaultChannelGroup.close}}. Although {{DefaultChannelGroup}} has its own 
executor, when the {{Futures}} for {{waitUninterruptibly}} are created within 
{{ConnectionTracker::closeAll}}, the {{Channel}}'s executor is used, which is 
the {{workerGroup}}/{{NioExecutor}} in our case. 

So adding a guard will indeed fix the test, since the call to {{close}} is 
synchronous. Alternatively, we can shut down the {{workerGroup}} gracefully 
with a quiet period and timeout. 
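The graceful-shutdown alternative can be illustrated with the java.util.concurrent analogue (class and method names below are invented for the sketch; Netty's own {{EventExecutorGroup.shutdownGracefully(quietPeriod, timeout, unit)}} serves the same purpose for event loops): stop accepting new tasks, then wait a bounded time for in-flight tasks to drain before forcing termination.

```java
// Sketch of bounded graceful shutdown using the JDK ExecutorService API.
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class GracefulShutdown
{
    public static boolean shutdownGracefully(ExecutorService pool, long timeoutMillis)
    {
        pool.shutdown(); // reject new submissions, like the proposed guard
        try
        {
            // Wait a bounded time for in-flight tasks (e.g. pending close futures).
            if (pool.awaitTermination(timeoutMillis, TimeUnit.MILLISECONDS))
                return true;
        }
        catch (InterruptedException e)
        {
            Thread.currentThread().interrupt(); // preserve interrupt status
        }
        pool.shutdownNow(); // give up: interrupt stragglers
        return false;
    }

    public static void main(String[] args)
    {
        ExecutorService pool = Executors.newFixedThreadPool(2);
        pool.submit(() -> {}); // short task, completes well within the timeout
        System.out.println(shutdownGracefully(pool, 1000)); // true
    }
}
```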

> Timeout failures in NativeTransportService.testConcurrentDestroys unit test
> ---
>
> Key: CASSANDRA-10756
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10756
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Joel Knighton
>Assignee: Alex Petrov
>
> History of test on trunk 
> [here|http://cassci.datastax.com/job/trunk_testall/lastCompletedBuild/testReport/org.apache.cassandra.service/NativeTransportServiceTest/testConcurrentDestroys/history/].
> I've seen these failures across 3.0/trunk for a while. I ran the test looping 
> locally for a while and the timeout is fairly easy to reproduce. The timeout 
> appears to be an indefinite hang and not a timing issue.
> When the timeout occurs, the following stack trace is at the end of the logs 
> for the unit test.
> {code}
> ERROR [ForkJoinPool.commonPool-worker-1] 2015-11-22 21:30:53,635 Failed to 
> submit a listener notification task. Event loop shut down?
> java.util.concurrent.RejectedExecutionException: event executor terminated
>   at 
> io.netty.util.concurrent.SingleThreadEventExecutor.reject(SingleThreadEventExecutor.java:745)
>  ~[netty-all-4.0.23.Final.jar:4.0.23.Final]
>   at 
> io.netty.util.concurrent.SingleThreadEventExecutor.addTask(SingleThreadEventExecutor.java:322)
>  ~[netty-all-4.0.23.Final.jar:4.0.23.Final]
>   at 
> io.netty.util.concurrent.SingleThreadEventExecutor.execute(SingleThreadEventExecutor.java:728)
>  ~[netty-all-4.0.23.Final.jar:4.0.23.Final]
>   at 
> io.netty.util.concurrent.DefaultPromise.execute(DefaultPromise.java:671) 
> [netty-all-4.0.23.Final.jar:4.0.23.Final]
>   at 
> io.netty.util.concurrent.DefaultPromise.notifyLateListener(DefaultPromise.java:641)
>  [netty-all-4.0.23.Final.jar:4.0.23.Final]
>   at 
> io.netty.util.concurrent.DefaultPromise.addListener(DefaultPromise.java:138) 
> [netty-all-4.0.23.Final.jar:4.0.23.Final]
>   at 
> io.netty.channel.DefaultChannelPromise.addListener(DefaultChannelPromise.java:93)
>  [netty-all-4.0.23.Final.jar:4.0.23.Final]
>   at 
> io.netty.channel.DefaultChannelPromise.addListener(DefaultChannelPromise.java:28)
>  [netty-all-4.0.23.Final.jar:4.0.23.Final]
>   at 
> io.netty.channel.group.DefaultChannelGroupFuture.(DefaultChannelGroupFuture.java:116)
>  [netty-all-4.0.23.Final.jar:4.0.23.Final]
>   at 
> io.netty.channel.group.DefaultChannelGroup.close(DefaultChannelGroup.java:275)
>  [netty-all-4.0.23.Final.jar:4.0.23.Final]
>   at 
> io.netty.channel.group.DefaultChannelGroup.close(DefaultChannelGroup.java:167)
>  [netty-all-4.0.23.Final.jar:4.0.23.Final]
>   at 
> org.apache.cassandra.transport.Server$ConnectionTracker.closeAll(Server.java:277)
>  [main/:na]
>   at org.apache.cassandra.transport.Server.close(Server.java:180) 
> [main/:na]
>   at org.apache.cassandra.transport.Server.stop(Server.java:116) 
> [main/:na]
>   at java.util.Collections$SingletonSet.forEach(Collections.java:4767) 
> ~[na:1.8.0_60]
>   at 
> org.apache.cassandra.service.NativeTransportService.stop(NativeTransportService.java:136)
>  ~[main/:na]
>   at 
> org.apache.cassandra.service.NativeTransportService.destroy(NativeTransportService.java:144)
>  ~[main/:na]
>   at 
> org.apache.cassandra.service.NativeTransportServiceTest.lambda$withService$102(NativeTransportServiceTest.java:201)
>  ~[classes/:na]
>   at java.util.stream.IntPipeline$3$1.accept(IntPipeline.java:233) 
> ~[na:1.8.0_60]
>   at 
> java.util.stream.Streams$RangeIntSpliterator.forEachRemaining(Streams.java:110)
>  ~[na:1.8.0_60]
>   at java.util.Spliterator$OfInt.forEachRemaining(Spliterator.java:693) 
> ~[na:1.8.0_60]
>   at 
> java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:481) 
> ~[na:1.8.0_60]
>   at 
> java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:471) 
> ~[na:1.8.0_60]
>   at java.util.stream.ReduceOps$ReduceTask.doLeaf(ReduceOps.java:747) 
> ~[na:1.8.0_60]
>   at java.util.stream.ReduceOps$ReduceT

[jira] [Comment Edited] (CASSANDRA-10058) Close Java driver Client object in Hadoop and Pig classes

2016-04-15 Thread James Howe (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10058?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15242791#comment-15242791
 ] 

James Howe edited comment on CASSANDRA-10058 at 4/15/16 11:09 AM:
--

Still seeing this LEAK message from the Datastax driver (2.1.9) using Cassandra 
2.2.4.
We're not running on Hadoop or Pig at all.
Does it require a driver upgrade too?


was (Author: jameshowe):
Still seeing this LEAK message from the Datastax driver (2.1.9) using Cassandra 
2.2.4.
Does it require a driver upgrade too?

> Close Java driver Client object in Hadoop and Pig classes
> -
>
> Key: CASSANDRA-10058
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10058
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Alex Liu
>Assignee: Alex Liu
> Fix For: 2.2.4, 3.0.1, 3.1
>
> Attachments: CASSANDRA-10058-2.2.txt
>
>
> I found that some Hadoop and Pig code in Cassandra doesn't close the Client 
> object; that causes the following errors in Java driver 2.2.0-rc1.
> {code}
> ERROR 11:37:45 LEAK: You are creating too many HashedWheelTimer instances.  
> HashedWheelTimer is a shared resource that must be reused across the JVM, so 
> that only a few instances are created.
> {code}
> We should close the Client objects.
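The fix amounts to closing the driver {{Client}} once work is done. As a hedged sketch (the {{DriverClient}} class below is a hypothetical stand-in for the driver's closeable client objects, not the real API), try-with-resources guarantees the close even when queries throw:

```java
// Sketch only: DriverClient stands in for the Java driver's closeable client
// objects. Hadoop/Pig code should close them the same way so shared resources
// such as the HashedWheelTimer are released.
public class CloseClientSketch {
    static class DriverClient implements AutoCloseable {
        boolean closed = false;
        @Override public void close() { closed = true; } // releases timers, pools
    }

    public static void main(String[] args) {
        DriverClient ref;
        // try-with-resources guarantees close() runs even if queries throw
        try (DriverClient client = new DriverClient()) {
            ref = client;
            // ... run queries ...
        }
        System.out.println(ref.closed); // prints "true"
    }
}
```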



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (CASSANDRA-10058) Close Java driver Client object in Hadoop and Pig classes

2016-04-15 Thread James Howe (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10058?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15242791#comment-15242791
 ] 

James Howe edited comment on CASSANDRA-10058 at 4/15/16 11:07 AM:
--

Still seeing this LEAK message from the Datastax driver (2.1.9) using Cassandra 
2.2.4.
Does it require a driver upgrade too?


was (Author: jameshowe):
Still seeing this LEAK message from the Datastax driver (2.1.9) using Cassandra 
2.2.4.



[jira] [Comment Edited] (CASSANDRA-10058) Close Java driver Client object in Hadoop and Pig classes

2016-04-15 Thread James Howe (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10058?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15242791#comment-15242791
 ] 

James Howe edited comment on CASSANDRA-10058 at 4/15/16 10:48 AM:
--

Still seeing this LEAK message from the Datastax driver (2.1.9) using Cassandra 
2.2.4.


was (Author: jameshowe):
Still seeing this LEAK message using Cassandra 2.2.4 using Datastax driver 
2.1.9.


