[jira] [Created] (CASSANDRA-11109) Cassandra process killed by OS due to out of memory issue

2016-02-02 Thread Tony Xu (JIRA)
Tony Xu created CASSANDRA-11109:
---

 Summary: Cassandra process killed by OS due to out of memory issue
 Key: CASSANDRA-11109
 URL: https://issues.apache.org/jira/browse/CASSANDRA-11109
 Project: Cassandra
  Issue Type: Bug
 Environment: Operating System: 
Amazon Linux AMI release 2015.03
Kernel \r on an \m

Instance type: m3.2xlarge
vCPUs: 8
Memory: 30G
Commitlog storage: 1 x 80 SSD Storage
Data disks: EBS io1 300GB with 1000 IOPS

Reporter: Tony Xu
 Fix For: 2.2.4
 Attachments: cassandra-env.txt, cassandra-system-log.txt, 
cassandra.yaml, system-messages.txt

After we upgraded Cassandra from 2.1.12 to 2.2.4 on one of our nodes (a three-node 
Cassandra cluster), we've been experiencing an ongoing issue: the cassandra 
process's memory usage keeps growing until the process is killed by the OOM 
killer.

{quote}
Feb  1 23:53:10 kernel: [24135455.025185] [19862]   494 19862 133728623  
7379077  1390680 0 java
Feb  1 23:53:10 kernel: [24135455.029678] Out of memory: Kill process 19862 
(java) score 973 or sacrifice child
Feb  1 23:53:10 kernel: [24135455.035434] Killed process 19862 (java) 
total-vm:534918588kB, anon-rss:29413728kB, file-rss:102940kB
{quote}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-11110) Parser improvements for SASI

2016-02-02 Thread Pavel Yaskevich (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11110?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15128900#comment-15128900
 ] 

Pavel Yaskevich commented on CASSANDRA-11110:
-

I think this shouldn't be a big deal since it only changes the CQL backend 
logic, but it would be nice to push CASSANDRA-11067 forward first, though...

> Parser improvements for SASI
> 
>
> Key: CASSANDRA-11110
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11110
> Project: Cassandra
>  Issue Type: New Feature
>  Components: CQL
>Reporter: Jonathan Ellis
>Assignee: Pavel Yaskevich
>
> Shouldn't require ALLOW FILTERING for SASI inequalities.





[jira] [Commented] (CASSANDRA-11067) Improve SASI syntax

2016-02-02 Thread Jonathan Ellis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11067?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15128923#comment-15128923
 ] 

Jonathan Ellis commented on CASSANDRA-11067:


I still think LIKE is a better fit here, because we're asking for a tokenized 
match, not full equality.  (I'm fine with using LIKE without wildcards, though.)
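As an illustrative sketch (the table and column names here are hypothetical, and the final syntax was still under discussion at this point), LIKE without wildcards against an analyzed SASI column would read:

{code}
-- hypothetical schema: an analyzed SASI index on a text column
CREATE CUSTOM INDEX bio_idx ON users (bio)
USING 'org.apache.cassandra.index.sasi.SASIIndex'
WITH OPTIONS = {'analyzed': 'true'};

-- LIKE without wildcards: a tokenized match, not full equality
SELECT * FROM users WHERE bio LIKE 'distributed';
{code}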

> Improve SASI syntax
> ---
>
> Key: CASSANDRA-11067
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11067
> Project: Cassandra
>  Issue Type: Task
>  Components: CQL
>Reporter: Jonathan Ellis
>Assignee: Pavel Yaskevich
> Fix For: 3.4
>
>
> I think everyone agrees that a LIKE operator would be ideal, but that's 
> probably not in scope for an initial 3.4 release.
> Still, I'm uncomfortable with the initial approach of overloading = to mean 
> "satisfies index expression."  The problem is that it will be very difficult 
> to back out of this behavior once people are using it.
> I propose adding a new operator in the interim instead.  Call it MATCHES, 
> maybe.  With the exact same behavior that SASI currently exposes, just with a 
> separate operator rather than being rolled into =.





[jira] [Commented] (CASSANDRA-11110) Parser improvements for SASI

2016-02-02 Thread Jonathan Ellis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11110?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15128851#comment-15128851
 ] 

Jonathan Ellis commented on CASSANDRA-11110:


Yes, if we can get this done for 3.4 it will provide an easy way to explain one 
of the core differences between SASI and "classic" 2i.

> Parser improvements for SASI
> 
>
> Key: CASSANDRA-11110
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11110
> Project: Cassandra
>  Issue Type: New Feature
>  Components: CQL
>Reporter: Jonathan Ellis
>Assignee: Pavel Yaskevich
>
> Shouldn't require ALLOW FILTERING for SASI inequalities.





[jira] [Commented] (CASSANDRA-11110) Parser improvements for SASI

2016-02-02 Thread Jonathan Ellis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11110?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15128782#comment-15128782
 ] 

Jonathan Ellis commented on CASSANDRA-11110:


Pavel, in case this is something that can be addressed independently I thought 
it would be worth breaking out.

> Parser improvements for SASI
> 
>
> Key: CASSANDRA-11110
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11110
> Project: Cassandra
>  Issue Type: New Feature
>  Components: CQL
>Reporter: Jonathan Ellis
>Assignee: Pavel Yaskevich
>
> Shouldn't require ALLOW FILTERING for SASI inequalities.





[jira] [Commented] (CASSANDRA-11067) Improve SASI syntax

2016-02-02 Thread Pavel Yaskevich (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11067?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15128896#comment-15128896
 ] 

Pavel Yaskevich commented on CASSANDRA-11067:
-

[~jbellis] It was actually intentional: since stemming is a splitting analyzer, 
it's going to split a sentence into individual words and stem them, so "=" is 
a valid use case there. 
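As a sketch of the stemming case under discussion (the table and column names are illustrative; the option names come from SASI's StandardAnalyzer):

{code}
CREATE CUSTOM INDEX stemmed_bio_idx ON users (bio)
USING 'org.apache.cassandra.index.sasi.SASIIndex'
WITH OPTIONS = {
  'mode': 'CONTAINS',
  'analyzed': 'true',
  'analyzer_class': 'org.apache.cassandra.index.sasi.analyzer.StandardAnalyzer',
  'tokenization_enable_stemming': 'true'
};
-- the analyzer splits the indexed text into words and stems them, so
-- 'distributing' and 'distribution' reduce to the same indexed term
{code}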

> Improve SASI syntax
> ---
>
> Key: CASSANDRA-11067
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11067
> Project: Cassandra
>  Issue Type: Task
>  Components: CQL
>Reporter: Jonathan Ellis
>Assignee: Pavel Yaskevich
> Fix For: 3.4
>
>
> I think everyone agrees that a LIKE operator would be ideal, but that's 
> probably not in scope for an initial 3.4 release.
> Still, I'm uncomfortable with the initial approach of overloading = to mean 
> "satisfies index expression."  The problem is that it will be very difficult 
> to back out of this behavior once people are using it.
> I propose adding a new operator in the interim instead.  Call it MATCHES, 
> maybe.  With the exact same behavior that SASI currently exposes, just with a 
> separate operator rather than being rolled into =.





[jira] [Commented] (CASSANDRA-11110) Parser improvements for SASI

2016-02-02 Thread Brandon Williams (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11110?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15128799#comment-15128799
 ] 

Brandon Williams commented on CASSANDRA-11110:
--

+1 on this, we'd have to start sending a mixed signal about AF if it's required 
here.

> Parser improvements for SASI
> 
>
> Key: CASSANDRA-11110
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11110
> Project: Cassandra
>  Issue Type: New Feature
>  Components: CQL
>Reporter: Jonathan Ellis
>Assignee: Pavel Yaskevich
>
> Shouldn't require ALLOW FILTERING for SASI inequalities.





[jira] [Comment Edited] (CASSANDRA-11067) Improve SASI syntax

2016-02-02 Thread Jonathan Ellis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11067?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15128742#comment-15128742
 ] 

Jonathan Ellis edited comment on CASSANDRA-11067 at 2/2/16 6:32 PM:


The example of stemming still uses the equality operator 
({{bio='distributing'}}).  Just an oversight in the doc?


was (Author: jbellis):
The example of stemming still uses the equality operator 
({{bio='distribution'}}).  Just an oversight in the doc?

> Improve SASI syntax
> ---
>
> Key: CASSANDRA-11067
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11067
> Project: Cassandra
>  Issue Type: Task
>  Components: CQL
>Reporter: Jonathan Ellis
>Assignee: Pavel Yaskevich
> Fix For: 3.4
>
>
> I think everyone agrees that a LIKE operator would be ideal, but that's 
> probably not in scope for an initial 3.4 release.
> Still, I'm uncomfortable with the initial approach of overloading = to mean 
> "satisfies index expression."  The problem is that it will be very difficult 
> to back out of this behavior once people are using it.
> I propose adding a new operator in the interim instead.  Call it MATCHES, 
> maybe.  With the exact same behavior that SASI currently exposes, just with a 
> separate operator rather than being rolled into =.





[jira] [Created] (CASSANDRA-11112) Tracing should denote when a request is sent due to speculative retry

2016-02-02 Thread Brandon Williams (JIRA)
Brandon Williams created CASSANDRA-11112:


 Summary: Tracing should denote when a request is sent due to 
speculative retry
 Key: CASSANDRA-11112
 URL: https://issues.apache.org/jira/browse/CASSANDRA-11112
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: Brandon Williams


Currently there is no way to tell from a trace if a request is being sent by SR 
or not, which can lead to confusion and make it hard to reason about why a 
query took a given path.
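For context, tracing is enabled per-session in cqlsh (the keyspace and table names here are illustrative); the request triggered by speculative retry currently looks no different from any other in the output:

{code}
TRACING ON;
SELECT * FROM ks.tbl WHERE id = 1;
-- the resulting trace lists every internode request, but nothing in it
-- distinguishes a speculative-retry read from a normal one
{code}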





[jira] [Resolved] (CASSANDRA-11112) Tracing should denote when a request is sent due to speculative retry

2016-02-02 Thread Brandon Williams (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11112?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brandon Williams resolved CASSANDRA-11112.
--
Resolution: Implemented

I missed this:
{noformat}
 speculating read retry on /10.208.8.123 [SharedPool-Worker-1] | 2016-02-02 
22:09:23.017000 | 10.208.8.63 |  10079
{noformat}

> Tracing should denote when a request is sent due to speculative retry
> -
>
> Key: CASSANDRA-11112
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11112
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Core
>Reporter: Brandon Williams
>
> Currently there is no way to tell from a trace if a request is being sent by 
> SR or not, which can lead to confusion and make it hard to reason about why a 
> query took a given path.





[jira] [Commented] (CASSANDRA-11067) Improve SASI syntax

2016-02-02 Thread Pavel Yaskevich (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11067?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15128977#comment-15128977
 ] 

Pavel Yaskevich commented on CASSANDRA-11067:
-

[~jbellis] Ok sure, I changed bio to LIKE without wildcards (which is 
effectively the same thing as equals) everywhere and rebased with the latest 
trunk.

> Improve SASI syntax
> ---
>
> Key: CASSANDRA-11067
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11067
> Project: Cassandra
>  Issue Type: Task
>  Components: CQL
>Reporter: Jonathan Ellis
>Assignee: Pavel Yaskevich
> Fix For: 3.4
>
>
> I think everyone agrees that a LIKE operator would be ideal, but that's 
> probably not in scope for an initial 3.4 release.
> Still, I'm uncomfortable with the initial approach of overloading = to mean 
> "satisfies index expression."  The problem is that it will be very difficult 
> to back out of this behavior once people are using it.
> I propose adding a new operator in the interim instead.  Call it MATCHES, 
> maybe.  With the exact same behavior that SASI currently exposes, just with a 
> separate operator rather than being rolled into =.





[jira] [Updated] (CASSANDRA-11111) Please delete old releases from mirroring system

2016-02-02 Thread Sebb (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11111?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sebb updated CASSANDRA-11111:
-
Environment: https://dist.apache.org/repos/dist/release/cassandra/

> Please delete old releases from mirroring system
> 
>
> Key: CASSANDRA-11111
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11111
> Project: Cassandra
>  Issue Type: Bug
> Environment: https://dist.apache.org/repos/dist/release/cassandra/
>Reporter: Sebb
>
> To reduce the load on the ASF mirrors, projects are required to delete old 
> releases [1]
> Please can you remove all non-current releases?
> i.e. 2.0.17 3.2 3.1.1
> Thanks!
> [1] http://www.apache.org/dev/release.html#when-to-archive





[jira] [Commented] (CASSANDRA-7381) Snappy Compression does not work with PowerPC 64-bit, Little Endian

2016-02-02 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7381?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15129006#comment-15129006
 ] 

ASF GitHub Bot commented on CASSANDRA-7381:
---

Github user iheanyi closed the pull request at:

https://github.com/apache/cassandra/pull/37


> Snappy Compression does not work with PowerPC 64-bit, Little Endian
> ---
>
> Key: CASSANDRA-7381
> URL: https://issues.apache.org/jira/browse/CASSANDRA-7381
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Iheanyi Ekechukwu
>Assignee: Iheanyi Ekechukwu
>Priority: Minor
> Fix For: 2.1 rc2
>
> Attachments: trunk-7381.txt
>
>
> In PowerPC 64-bit, Little Endian, CompressedRandomAccessReaderTest, 
> CompressedInputStreamTest, and CFMetaDataTest fail due to the included 
> snappy-java JAR missing the ppc64le native library.
> Testing on Ubuntu 14.04, ppc64le.
> The specific fix for Snappy-Java and adding the native library can be found 
> at https://github.com/xerial/snappy-java/pull/67.





[jira] [Created] (CASSANDRA-11111) Please delete old releases from mirroring system

2016-02-02 Thread Sebb (JIRA)
Sebb created CASSANDRA-11111:


 Summary: Please delete old releases from mirroring system
 Key: CASSANDRA-11111
 URL: https://issues.apache.org/jira/browse/CASSANDRA-11111
 Project: Cassandra
  Issue Type: Bug
Reporter: Sebb


To reduce the load on the ASF mirrors, projects are required to delete old 
releases [1]

Please can you remove all non-current releases?

i.e. 2.0.17 3.2 3.1.1

Thanks!

[1] http://www.apache.org/dev/release.html#when-to-archive






[jira] [Assigned] (CASSANDRA-11111) Please delete old releases from mirroring system

2016-02-02 Thread Brandon Williams (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11111?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brandon Williams reassigned CASSANDRA-11111:


Assignee: T Jake Luciani

> Please delete old releases from mirroring system
> 
>
> Key: CASSANDRA-11111
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11111
> Project: Cassandra
>  Issue Type: Bug
> Environment: https://dist.apache.org/repos/dist/release/cassandra/
>Reporter: Sebb
>Assignee: T Jake Luciani
>
> To reduce the load on the ASF mirrors, projects are required to delete old 
> releases [1]
> Please can you remove all non-current releases?
> i.e. 2.0.17 3.2 3.1.1
> Thanks!
> [1] http://www.apache.org/dev/release.html#when-to-archive





[jira] [Updated] (CASSANDRA-11106) Experiment with strategies for picking compaction candidates in LCS

2016-02-02 Thread Wei Deng (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11106?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei Deng updated CASSANDRA-11106:
-
Labels: lcs  (was: )

> Experiment with strategies for picking compaction candidates in LCS
> ---
>
> Key: CASSANDRA-11106
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11106
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Marcus Eriksson
>  Labels: lcs
>
> Ideas taken here: http://rocksdb.org/blog/2921/compaction_pri/
> Current strategy in LCS is that we keep track of the token that was last 
> compacted and then we start a compaction with the sstable containing the next 
> token (kOldestSmallestSeqFirst in the blog post above)
> The rocksdb blog post above introduces a few ideas how this could be improved:
> * pick the 'coldest' sstable (sstable with the oldest max timestamp) - we 
> want to keep the hot data (recently updated) in the lower levels to avoid 
> write amplification
> * pick the sstable with the highest tombstone ratio, we want to get 
> tombstones to the top level as quickly as possible.





[jira] [Created] (CASSANDRA-11113) DateTieredCompactionStrategy.getMaximalTask compacts repaired and unrepaired sstables together

2016-02-02 Thread Blake Eggleston (JIRA)
Blake Eggleston created CASSANDRA-11113:
---

 Summary: DateTieredCompactionStrategy.getMaximalTask compacts 
repaired and unrepaired sstables together
 Key: CASSANDRA-11113
 URL: https://issues.apache.org/jira/browse/CASSANDRA-11113
 Project: Cassandra
  Issue Type: Bug
Reporter: Blake Eggleston
Assignee: Blake Eggleston
 Fix For: 3.0.x


[DateTieredCompactionStrategy.getMaximalTask|https://github.com/apache/cassandra/blob/cassandra-3.0/src/java/org/apache/cassandra/db/compaction/DateTieredCompactionStrategy.java#L393-393]
 creates a compaction task for all of a table's sstables, instead of just the 
repaired/unrepaired subset it's responsible for.

This compacts repaired and unrepaired sstables together, effectively demoting 
repaired data to unrepaired. Also, since both the repaired and unrepaired 
strategy instances are trying to  compact the same sstables, there's a 1 minute 
delay waiting for {{CompactionManager.waitForCessation}} to time out before 
anything happens. 

Here's the script I used to duplicate: 
https://gist.github.com/bdeggleston/324f4f0df1b7273d8fd5





[jira] [Updated] (CASSANDRA-11089) cassandra-stress should allow specifying the Java driver's protocol version to be used

2016-02-02 Thread Jeremy Hanna (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11089?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeremy Hanna updated CASSANDRA-11089:
-
Labels: stress  (was: )

> cassandra-stress should allow specifying the Java driver's protocol version 
> to be used
> --
>
> Key: CASSANDRA-11089
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11089
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Tools
>Reporter: Eduard Tudenhoefner
>Assignee: Eduard Tudenhoefner
>  Labels: stress
> Fix For: 3.x
>
>
> It would be useful to use *cassandra-stress* that is coming with C* 3.x 
> against a C* 2.x cluster. In order for that to work, we should allow 
> specifying the Java driver's protocol version to be used for the connection.
> See also 
> https://github.com/apache/cassandra/blob/cassandra-3.0/tools/stress/src/org/apache/cassandra/stress/util/JavaDriverClient.java#L118-118





[jira] [Updated] (CASSANDRA-11069) Materialised views require all collections to be selected.

2016-02-02 Thread Wei Deng (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11069?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei Deng updated CASSANDRA-11069:
-
Labels: materializedviews  (was: )

> Materialised views require all collections to be selected.
> --
>
> Key: CASSANDRA-11069
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11069
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Vassil Lunchev
>  Labels: materializedviews
>
> Running Cassandra 3.0.2
> Using the official example from: 
> http://www.datastax.com/dev/blog/new-in-cassandra-3-0-materialized-views
> The only difference is that I have added a map column to the base table.
> {code:cql}
> CREATE TABLE scores
> (
>   user TEXT,
>   game TEXT,
>   year INT,
>   month INT,
>   day INT,
>   score INT,
> a_map map<int, text>,
>   PRIMARY KEY (user, game, year, month, day)
> );
> CREATE MATERIALIZED VIEW alltimehigh AS
>SELECT user FROM scores
>WHERE game IS NOT NULL AND score IS NOT NULL AND user IS NOT NULL AND 
> year IS NOT NULL AND month IS NOT NULL AND day IS NOT NULL
>PRIMARY KEY (game, score, user, year, month, day)
>WITH CLUSTERING ORDER BY (score desc);
> INSERT INTO scores (user, game, year, month, day, score) VALUES ('pcmanus', 
> 'Coup', 2015, 06, 02, 2000);
> SELECT * FROM scores;
> SELECT * FROM alltimehigh;
> {code}
> All of the above works perfectly fine. Until you insert a row where the 
> 'a_map' column is not null.
> {code:cql}
> INSERT INTO scores (user, game, year, month, day, score, a_map) VALUES 
> ('pcmanus_2', 'Coup', 2015, 06, 02, 2000, {1: 'text'});
> {code}
> This results in:
> {code}
> Traceback (most recent call last):
>   File "/Users/vassil/apache-cassandra-3.0.2/bin/cqlsh.py", line 1258, in 
> perform_simple_statement
> result = future.result()
>   File 
> "/Users/vassil/apache-cassandra-3.0.2/bin/../lib/cassandra-driver-internal-only-3.0.0-6af642d.zip/cassandra-driver-3.0.0-6af642d/cassandra/cluster.py",
>  line 3122, in result
> raise self._final_exception
> WriteFailure: code=1500 [Replica(s) failed to execute write] 
> message="Operation failed - received 0 responses and 1 failures" 
> info={'failures': 1, 'received_responses': 0, 'required_responses': 1, 
> 'consistency': 'ONE'}
> {code}
> Selecting the base table and the materialised view is also interesting:
> {code}
> SELECT * FROM scores;
> SELECT * FROM alltimehigh;
> {code}
> The result is:
> {code}
> cqlsh:tests> SELECT * FROM scores;
>  user| game | year | month | day | a_map | score
> -+--+--+---+-+---+---
>  pcmanus | Coup | 2015 | 6 |   2 |  null |  2000
> (1 rows)
> cqlsh:tests> SELECT * FROM alltimehigh;
>  game | score | user  | year | month | day
> --+---+---+--+---+-
>  Coup |  2000 |   pcmanus | 2015 | 6 |   2
>  Coup |  2000 | pcmanus_2 | 2015 | 6 |   2
> (2 rows)
> {code}
> In the logs you can see:
> {code:java}
> ERROR [SharedPool-Worker-2] 2016-01-26 03:25:27,456 Keyspace.java:484 - 
> Unknown exception caught while attempting to update MaterializedView! 
> tests.scores
> java.lang.IllegalStateException: [ColumnDefinition{name=a_map, 
> type=org.apache.cassandra.db.marshal.MapType(org.apache.cassandra.db.marshal.Int32Type,org.apache.cassandra.db.marshal.UTF8Type),
>  kind=REGULAR, position=-1}] is not a subset of []
>   at 
> org.apache.cassandra.db.Columns$Serializer.encodeBitmap(Columns.java:531) 
> ~[apache-cassandra-3.0.2.jar:3.0.2]
>   at 
> org.apache.cassandra.db.Columns$Serializer.serializedSubsetSize(Columns.java:483)
>  ~[apache-cassandra-3.0.2.jar:3.0.2]
>   at 
> org.apache.cassandra.db.rows.UnfilteredSerializer.serializedRowBodySize(UnfilteredSerializer.java:275)
>  ~[apache-cassandra-3.0.2.jar:3.0.2]
>   at 
> org.apache.cassandra.db.rows.UnfilteredSerializer.serializedSize(UnfilteredSerializer.java:247)
>  ~[apache-cassandra-3.0.2.jar:3.0.2]
>   at 
> org.apache.cassandra.db.rows.UnfilteredSerializer.serializedSize(UnfilteredSerializer.java:234)
>  ~[apache-cassandra-3.0.2.jar:3.0.2]
>   at 
> org.apache.cassandra.db.rows.UnfilteredSerializer.serializedSize(UnfilteredSerializer.java:227)
>  ~[apache-cassandra-3.0.2.jar:3.0.2]
>   at 
> org.apache.cassandra.db.rows.UnfilteredRowIteratorSerializer.serializedSize(UnfilteredRowIteratorSerializer.java:169)
>  ~[apache-cassandra-3.0.2.jar:3.0.2]
>   at 
> org.apache.cassandra.db.partitions.PartitionUpdate$PartitionUpdateSerializer.serializedSize(PartitionUpdate.java:683)
>  ~[apache-cassandra-3.0.2.jar:3.0.2]
>   at 
> org.apache.cassandra.db.Mutation$MutationSerializer.serializedSize(Mutation.java:354)
>  ~[apache-cassandra-3.0.2.jar:3.0.2]
>   at org.apache.cassandra.db.commitlog.CommitLog.add(CommitLog.java:259) 
> 

[jira] [Updated] (CASSANDRA-11113) DateTieredCompactionStrategy.getMaximalTask compacts repaired and unrepaired sstables together

2016-02-02 Thread Wei Deng (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11113?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei Deng updated CASSANDRA-11113:
-
Labels: dtcs  (was: )

> DateTieredCompactionStrategy.getMaximalTask compacts repaired and unrepaired 
> sstables together
> --
>
> Key: CASSANDRA-11113
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11113
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Blake Eggleston
>Assignee: Blake Eggleston
>  Labels: dtcs
> Fix For: 3.0.x
>
>
> [DateTieredCompactionStrategy.getMaximalTask|https://github.com/apache/cassandra/blob/cassandra-3.0/src/java/org/apache/cassandra/db/compaction/DateTieredCompactionStrategy.java#L393-393]
>  creates a compaction task for all of a table's sstables, instead of just the 
> repaired/unrepaired subset it's responsible for.
> This compacts repaired and unrepaired sstables together, effectively demoting 
> repaired data to unrepaired. Also, since both the repaired and unrepaired 
> strategy instances are trying to  compact the same sstables, there's a 1 
> minute delay waiting for {{CompactionManager.waitForCessation}} to time out 
> before anything happens. 
> Here's the script I used to duplicate: 
> https://gist.github.com/bdeggleston/324f4f0df1b7273d8fd5





[jira] [Created] (CASSANDRA-11110) Parser improvements for SASI

2016-02-02 Thread Jonathan Ellis (JIRA)
Jonathan Ellis created CASSANDRA-11110:
--

 Summary: Parser improvements for SASI
 Key: CASSANDRA-11110
 URL: https://issues.apache.org/jira/browse/CASSANDRA-11110
 Project: Cassandra
  Issue Type: New Feature
  Components: CQL
Reporter: Jonathan Ellis
Assignee: Pavel Yaskevich


Shouldn't require ALLOW FILTERING for SASI inequalities.
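As an illustrative sketch of the inequality case in question (the schema here is hypothetical):

{code}
-- hypothetical schema: a SASI index on a numeric column
CREATE CUSTOM INDEX age_idx ON users (age)
USING 'org.apache.cassandra.index.sasi.SASIIndex';

-- SASI can serve this range predicate from the index itself,
-- so forcing ALLOW FILTERING here sends the wrong signal
SELECT * FROM users WHERE age > 30;
{code}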





[jira] [Commented] (CASSANDRA-11067) Improve SASI syntax

2016-02-02 Thread Jonathan Ellis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11067?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15128742#comment-15128742
 ] 

Jonathan Ellis commented on CASSANDRA-11067:


The example of stemming still uses the equality operator 
({{bio='distribution'}}).  Just an oversight in the doc?

> Improve SASI syntax
> ---
>
> Key: CASSANDRA-11067
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11067
> Project: Cassandra
>  Issue Type: Task
>  Components: CQL
>Reporter: Jonathan Ellis
>Assignee: Pavel Yaskevich
> Fix For: 3.4
>
>
> I think everyone agrees that a LIKE operator would be ideal, but that's 
> probably not in scope for an initial 3.4 release.
> Still, I'm uncomfortable with the initial approach of overloading = to mean 
> "satisfies index expression."  The problem is that it will be very difficult 
> to back out of this behavior once people are using it.
> I propose adding a new operator in the interim instead.  Call it MATCHES, 
> maybe.  With the exact same behavior that SASI currently exposes, just with a 
> separate operator rather than being rolled into =.





[jira] [Commented] (CASSANDRA-11043) Secondary indexes doesn't properly validate custom expressions

2016-02-02 Thread Sam Tunnicliffe (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11043?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15128755#comment-15128755
 ] 

Sam Tunnicliffe commented on CASSANDRA-11043:
-

[~adelapena], sorry about the slow response - you're right though, the 
intention is to validate those expressions earlier and reject them before 
getting to the point of execution. Unfortunately, the unit test covering that is 
not properly representative of running on a real server, so it didn't catch the 
problem you described. I think we also need to add back the early validation of 
non-custom expressions: although the custom expression syntax is a better 
fit for indexes which don't map to a specific column, for backwards 
compatibility it's still possible to do things in the old style by creating a 
fake column & adding an index on it. The fix should be reasonably 
straightforward, but I'd like to have a think about how to test this better 
(using custom indexes etc. in dtests is not so straightforward). I'll aim to 
have a patch ready sometime this week.

> Secondary indexes doesn't properly validate custom expressions
> --
>
> Key: CASSANDRA-11043
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11043
> Project: Cassandra
>  Issue Type: Bug
>  Components: CQL, Local Write-Read Paths
>Reporter: Andrés de la Peña
>  Labels: 21, index, validation
> Attachments: test-index.zip
>
>
> It seems that 
> [CASSANDRA-7575|https://issues.apache.org/jira/browse/CASSANDRA-7575] is 
> broken in Cassandra 3.x. As stated in the secondary indexes' API 
> documentation, custom index implementations should perform any validation of 
> query expressions at {{Index#searcherFor(ReadCommand)}}, throwing an 
> {{InvalidRequestException}} if the expressions are not valid. I assume these 
> validation errors should produce an {{InvalidRequest}} error on cqlsh, or 
> raise an {{InvalidQueryException}} on Java driver. However, when 
> {{Index#searcherFor(ReadCommand)}} throws its {{InvalidRequestException}}, I 
> get this cqlsh output:
> {noformat}
> Traceback (most recent call last):
>   File "bin/cqlsh.py", line 1246, in perform_simple_statement
> result = future.result()
>   File 
> "/Users/adelapena/stratio/platform/src/cassandra-3.2.1/bin/../lib/cassandra-driver-internal-only-3.0.0-6af642d.zip/cassandra-driver-3.0.0-6af642d/cassandra/cluster.py",
>  line 3122, in result
> raise self._final_exception
> ReadFailure: code=1300 [Replica(s) failed to execute read] message="Operation 
> failed - received 0 responses and 1 failures" info={'failures': 1, 
> 'received_responses': 0, 'required_responses': 1, 'consistency': 'ONE'}
> {noformat}
> I attach a dummy index implementation to reproduce the error:
> {noformat}
> CREATE KEYSPACE test with replication = {'class' : 'SimpleStrategy', 
> 'replication_factor' : '1' }; 
> CREATE TABLE test.test (id int PRIMARY KEY, value varchar); 
> CREATE CUSTOM INDEX test_index ON test.test() USING 'com.stratio.TestIndex'; 
> SELECT * FROM test.test WHERE expr(test_index,'ok');
> SELECT * FROM test.test WHERE expr(test_index,'error');
> {noformat}
> This is especially problematic when using the Cassandra Java Driver, because 
> one of these server exceptions can cause subsequent queries to fail (even if 
> they are valid) with a no host available exception.
> Maybe the validation method added with 
> [CASSANDRA-7575|https://issues.apache.org/jira/browse/CASSANDRA-7575] should 
> be restored, unless there is a way to properly manage the exception.





[jira] [Commented] (CASSANDRA-11110) Parser improvements for SASI

2016-02-02 Thread Jonathan Ellis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11110?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15128929#comment-15128929
 ] 

Jonathan Ellis commented on CASSANDRA-11110:


Agreed, and reviewing that is next on Sam's list.

> Parser improvements for SASI
> 
>
> Key: CASSANDRA-11110
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11110
> Project: Cassandra
>  Issue Type: New Feature
>  Components: CQL
>Reporter: Jonathan Ellis
>Assignee: Pavel Yaskevich
>
> Shouldn't require ALLOW FILTERING for SASI inequalities.





[jira] [Updated] (CASSANDRA-11053) COPY FROM on large datasets: fix progress report and debug performance

2016-02-02 Thread Stefania (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11053?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stefania updated CASSANDRA-11053:
-
Attachment: parent_profile_2.txt
worker_profiles_2.txt
copy_from_large_benchmark_with_latest_results.txt

> COPY FROM on large datasets: fix progress report and debug performance
> --
>
> Key: CASSANDRA-11053
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11053
> Project: Cassandra
>  Issue Type: Bug
>  Components: Tools
>Reporter: Stefania
>Assignee: Stefania
> Fix For: 2.1.x, 2.2.x, 3.0.x, 3.x
>
> Attachments: copy_from_large_benchmark.txt, 
> copy_from_large_benchmark_with_latest_results.txt, parent_profile.txt, 
> parent_profile_2.txt, worker_profiles.txt, worker_profiles_2.txt
>
>
> Running COPY FROM on a large dataset (20G divided into 20M records) revealed 
> two issues:
> * The progress report is incorrect: it is very slow until almost the end of 
> the test, at which point it catches up extremely quickly.
> * The performance in rows per second is similar to running smaller tests with 
> a smaller cluster locally (approx 35,000 rows per second). As a comparison, 
> cassandra-stress manages 50,000 rows per second under the same set-up, 
> making it 1.5 times faster. 
> See attached file _copy_from_large_benchmark.txt_ for the benchmark details.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-11069) Materialised views require all collections to be selected.

2016-02-02 Thread Alex Petrov (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11069?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alex Petrov updated CASSANDRA-11069:

Description: 
Running Cassandra 3.0.2

Using the official example from: 
http://www.datastax.com/dev/blog/new-in-cassandra-3-0-materialized-views
The only difference is that I have added a map column to the base table.

{code}
CREATE TABLE scores
(
  user TEXT,
  game TEXT,
  year INT,
  month INT,
  day INT,
  score INT,
  a_map map<int, text>,
  PRIMARY KEY (user, game, year, month, day)
);

CREATE MATERIALIZED VIEW alltimehigh AS
   SELECT user FROM scores
   WHERE game IS NOT NULL AND score IS NOT NULL AND user IS NOT NULL AND 
year IS NOT NULL AND month IS NOT NULL AND day IS NOT NULL
   PRIMARY KEY (game, score, user, year, month, day)
   WITH CLUSTERING ORDER BY (score desc);

INSERT INTO scores (user, game, year, month, day, score) VALUES ('pcmanus', 
'Coup', 2015, 06, 02, 2000);
SELECT * FROM scores;
SELECT * FROM alltimehigh;
{code}

All of the above works perfectly fine. Until you insert a row where the 'a_map' 
column is not null.

{code}
INSERT INTO scores (user, game, year, month, day, score, a_map) VALUES 
('pcmanus_2', 'Coup', 2015, 06, 02, 2000, {1: 'text'});
{code}

This results in:
{code}
Traceback (most recent call last):
  File "/Users/vassil/apache-cassandra-3.0.2/bin/cqlsh.py", line 1258, in 
perform_simple_statement
result = future.result()
  File 
"/Users/vassil/apache-cassandra-3.0.2/bin/../lib/cassandra-driver-internal-only-3.0.0-6af642d.zip/cassandra-driver-3.0.0-6af642d/cassandra/cluster.py",
 line 3122, in result
raise self._final_exception
WriteFailure: code=1500 [Replica(s) failed to execute write] message="Operation 
failed - received 0 responses and 1 failures" info={'failures': 1, 
'received_responses': 0, 'required_responses': 1, 'consistency': 'ONE'}
{code}

Selecting the base table and the materialised view is also interesting:
{code}
SELECT * FROM scores;
SELECT * FROM alltimehigh;
{code}

The result is:
{code}
cqlsh:tests> SELECT * FROM scores;

 user    | game | year | month | day | a_map | score
---------+------+------+-------+-----+-------+-------
 pcmanus | Coup | 2015 |     6 |   2 |  null |  2000

(1 rows)
cqlsh:tests> SELECT * FROM alltimehigh;

 game | score | user      | year | month | day
------+-------+-----------+------+-------+-----
 Coup |  2000 |   pcmanus | 2015 |     6 |   2
 Coup |  2000 | pcmanus_2 | 2015 |     6 |   2

(2 rows)
{code}

In the logs you can see:
{code:java}
ERROR [SharedPool-Worker-2] 2016-01-26 03:25:27,456 Keyspace.java:484 - Unknown 
exception caught while attempting to update MaterializedView! tests.scores
java.lang.IllegalStateException: [ColumnDefinition{name=a_map, 
type=org.apache.cassandra.db.marshal.MapType(org.apache.cassandra.db.marshal.Int32Type,org.apache.cassandra.db.marshal.UTF8Type),
 kind=REGULAR, position=-1}] is not a subset of []
at 
org.apache.cassandra.db.Columns$Serializer.encodeBitmap(Columns.java:531) 
~[apache-cassandra-3.0.2.jar:3.0.2]
at 
org.apache.cassandra.db.Columns$Serializer.serializedSubsetSize(Columns.java:483)
 ~[apache-cassandra-3.0.2.jar:3.0.2]
at 
org.apache.cassandra.db.rows.UnfilteredSerializer.serializedRowBodySize(UnfilteredSerializer.java:275)
 ~[apache-cassandra-3.0.2.jar:3.0.2]
at 
org.apache.cassandra.db.rows.UnfilteredSerializer.serializedSize(UnfilteredSerializer.java:247)
 ~[apache-cassandra-3.0.2.jar:3.0.2]
at 
org.apache.cassandra.db.rows.UnfilteredSerializer.serializedSize(UnfilteredSerializer.java:234)
 ~[apache-cassandra-3.0.2.jar:3.0.2]
at 
org.apache.cassandra.db.rows.UnfilteredSerializer.serializedSize(UnfilteredSerializer.java:227)
 ~[apache-cassandra-3.0.2.jar:3.0.2]
at 
org.apache.cassandra.db.rows.UnfilteredRowIteratorSerializer.serializedSize(UnfilteredRowIteratorSerializer.java:169)
 ~[apache-cassandra-3.0.2.jar:3.0.2]
at 
org.apache.cassandra.db.partitions.PartitionUpdate$PartitionUpdateSerializer.serializedSize(PartitionUpdate.java:683)
 ~[apache-cassandra-3.0.2.jar:3.0.2]
at 
org.apache.cassandra.db.Mutation$MutationSerializer.serializedSize(Mutation.java:354)
 ~[apache-cassandra-3.0.2.jar:3.0.2]
at org.apache.cassandra.db.commitlog.CommitLog.add(CommitLog.java:259) 
~[apache-cassandra-3.0.2.jar:3.0.2]
at org.apache.cassandra.db.Keyspace.apply(Keyspace.java:461) 
[apache-cassandra-3.0.2.jar:3.0.2]
at org.apache.cassandra.db.Keyspace.apply(Keyspace.java:384) 
[apache-cassandra-3.0.2.jar:3.0.2]
at org.apache.cassandra.db.Mutation.apply(Mutation.java:210) 
[apache-cassandra-3.0.2.jar:3.0.2]
at 
org.apache.cassandra.service.StorageProxy.mutateMV(StorageProxy.java:703) 
~[apache-cassandra-3.0.2.jar:3.0.2]
at 
org.apache.cassandra.db.view.ViewManager.pushViewReplicaUpdates(ViewManager.java:149)
 

[jira] [Commented] (CASSANDRA-11053) COPY FROM on large datasets: fix progress report and debug performance

2016-02-02 Thread Stefania (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11053?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15127864#comment-15127864
 ] 

Stefania commented on CASSANDRA-11053:
--

Reading from standard input, and parameters like {{maxrows}} and {{skiprows}}, 
also become problematic: {{skiprows}} can be relative to a file, but 
{{maxrows}}, as well as other max parameters, would require synchronization 
across processes.



> COPY FROM on large datasets: fix progress report and debug performance
> --
>
> Key: CASSANDRA-11053
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11053
> Project: Cassandra
>  Issue Type: Bug
>  Components: Tools
>Reporter: Stefania
>Assignee: Stefania
> Fix For: 2.1.x, 2.2.x, 3.0.x, 3.x
>
> Attachments: copy_from_large_benchmark.txt, parent_profile.txt, 
> worker_profiles.txt
>
>
> Running COPY FROM on a large dataset (20G divided into 20M records) revealed 
> two issues:
> * The progress report is incorrect: it is very slow until almost the end of 
> the test, at which point it catches up extremely quickly.
> * The performance in rows per second is similar to running smaller tests with 
> a smaller cluster locally (approx 35,000 rows per second). As a comparison, 
> cassandra-stress manages 50,000 rows per second under the same set-up, 
> making it 1.5 times faster. 
> See attached file _copy_from_large_benchmark.txt_ for the benchmark details.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (CASSANDRA-11105) cassandra-stress tool - InvalidQueryException: Batch too large

2016-02-02 Thread Ralf Steppacher (JIRA)
Ralf Steppacher created CASSANDRA-11105:
---

 Summary: cassandra-stress tool - InvalidQueryException: Batch too 
large
 Key: CASSANDRA-11105
 URL: https://issues.apache.org/jira/browse/CASSANDRA-11105
 Project: Cassandra
  Issue Type: Bug
  Components: Tools
 Environment: Cassandra 2.2.4, Java 8, CentOS 6.5
Reporter: Ralf Steppacher
 Attachments: batch_too_large.yaml

I am using Cassandra 2.2.4 and I am struggling to get the cassandra-stress tool 
to work for my test scenario. I have followed the example on 
http://www.datastax.com/dev/blog/improved-cassandra-2-1-stress-tool-benchmark-any-schema
 to create a yaml file describing my test.

I am collecting events per user id (text, partition key). Events have a session 
type (text), event type (text), and creation time (timestamp) (clustering keys, 
in that order). Plus some more attributes required for rendering the events in 
a UI. For testing purposes I ended up with the following column spec and insert 
distribution:
{noformat}
columnspec:
  - name: created_at
cluster: uniform(10..1)
  - name: event_type
size: uniform(5..10)
population: uniform(1..30)
cluster: uniform(1..30)
  - name: session_type
size: fixed(5)
population: uniform(1..4)
cluster: uniform(1..4)
  - name: user_id
size: fixed(15)
population: uniform(1..100)
  - name: message
size: uniform(10..100)
population: uniform(1..100B)

insert:
  partitions: fixed(1)
  batchtype: UNLOGGED
  select: fixed(1)/120
{noformat}

Running the stress tool for just the insert prints 
{noformat}
Generating batches with [1..1] partitions and [0..1] rows (of [10..120] 
total rows in the partitions)
{noformat}
and then immediately starts flooding me with 
{{com.datastax.driver.core.exceptions.InvalidQueryException: Batch too large}}. 

I do not understand why I should be exceeding the 
{{batch_size_fail_threshold_in_kb: 50}} in {{cassandra.yaml}}. My understanding 
is that the stress tool should generate one row per batch. The size of a single 
row should not exceed {{8+10*3+5*3+15*3+100*3 = 398 bytes}}, assuming a worst 
case of all text characters being 3-byte Unicode characters. 
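The reporter's back-of-the-envelope arithmetic can be checked like this (a sketch that only mirrors the estimate above; the per-column sizes come from the columnspec, and the 3-bytes-per-character factor is the stated worst-case assumption):

```python
# Worst-case size estimate for one generated row, mirroring the arithmetic
# in the report above. Column sizes come from the columnspec; the 3x factor
# assumes every text character is a 3-byte UTF-8 code point.
UTF8_WORST = 3               # assumed worst-case bytes per text character
created_at = 8               # timestamp, fixed 8 bytes
event_type = 10              # size: uniform(5..10) -> max 10 chars
session_type = 5             # size: fixed(5)
user_id = 15                 # size: fixed(15)
message = 100                # size: uniform(10..100) -> max 100 chars

worst_case_row = created_at + UTF8_WORST * (event_type + session_type
                                            + user_id + message)
print(worst_case_row)  # 398 -- far below the 50 kB batch failure threshold
```

At 398 bytes per row, a single-row batch is roughly two orders of magnitude below the 50 kB threshold, which is why the "Batch too large" errors are surprising.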

This is how I start the attached user scenario:
{noformat}
[rsteppac@centos bin]$ ./cassandra-stress user profile=../batch_too_large.yaml 
ops\(insert=1\) -log level=verbose 
file=~/centos_event_by_patient_session_event_timestamp_insert_only.log -node 
10.211.55.8
INFO  08:00:07 Did not find Netty's native epoll transport in the classpath, 
defaulting to NIO.
INFO  08:00:08 Using data-center name 'datacenter1' for DCAwareRoundRobinPolicy 
(if this is incorrect, please provide the correct datacenter name with 
DCAwareRoundRobinPolicy constructor)
INFO  08:00:08 New Cassandra host /10.211.55.8:9042 added
Connected to cluster: Titan_DEV
Datatacenter: datacenter1; Host: /10.211.55.8; Rack: rack1
Created schema. Sleeping 1s for propagation.
Generating batches with [1..1] partitions and [0..1] rows (of [10..120] 
total rows in the partitions)
com.datastax.driver.core.exceptions.InvalidQueryException: Batch too large
at 
com.datastax.driver.core.exceptions.InvalidQueryException.copy(InvalidQueryException.java:35)
at 
com.datastax.driver.core.DefaultResultSetFuture.extractCauseFromExecutionException(DefaultResultSetFuture.java:271)
at 
com.datastax.driver.core.DefaultResultSetFuture.getUninterruptibly(DefaultResultSetFuture.java:185)
at 
com.datastax.driver.core.AbstractSession.execute(AbstractSession.java:55)
at 
org.apache.cassandra.stress.operations.userdefined.SchemaInsert$JavaDriverRun.run(SchemaInsert.java:87)
at 
org.apache.cassandra.stress.Operation.timeWithRetry(Operation.java:159)
at 
org.apache.cassandra.stress.operations.userdefined.SchemaInsert.run(SchemaInsert.java:119)
at 
org.apache.cassandra.stress.StressAction$Consumer.run(StressAction.java:309)
Caused by: com.datastax.driver.core.exceptions.InvalidQueryException: Batch too 
large
at 
com.datastax.driver.core.Responses$Error.asException(Responses.java:125)
at 
com.datastax.driver.core.DefaultResultSetFuture.onSet(DefaultResultSetFuture.java:120)
at 
com.datastax.driver.core.RequestHandler.setFinalResult(RequestHandler.java:186)
at 
com.datastax.driver.core.RequestHandler.access$2300(RequestHandler.java:45)
at 
com.datastax.driver.core.RequestHandler$SpeculativeExecution.setFinalResult(RequestHandler.java:752)
at 
com.datastax.driver.core.RequestHandler$SpeculativeExecution.onSet(RequestHandler.java:576)
at 
com.datastax.driver.core.Connection$Dispatcher.channelRead0(Connection.java:1003)
at 
com.datastax.driver.core.Connection$Dispatcher.channelRead0(Connection.java:925)
at 

[jira] [Updated] (CASSANDRA-10715) Filtering on NULL returns ReadFailure exception

2016-02-02 Thread Alex Petrov (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10715?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alex Petrov updated CASSANDRA-10715:

Description: 
This is an issue I first noticed through the C# driver, but I was able to repro 
on cqlsh, leading me to believe this is a Cassandra bug.

Given the following schema:
{noformat}
CREATE TABLE "TestKeySpace_4928dc892922"."coolMovies" (
unique_movie_title text,
movie_maker text,
director text,
list list,
"mainGuy" text,
"yearMade" int,
PRIMARY KEY ((unique_movie_title, movie_maker), director)
) WITH CLUSTERING ORDER BY (director ASC)
{noformat}

Executing a SELECT with FILTERING on a non-PK column, using a NULL as the 
argument:
{noformat}
SELECT "mainGuy", "movie_maker", "unique_movie_title", "list", "director", 
"yearMade" FROM "coolMovies" WHERE "mainGuy" = null ALLOW FILTERING
{noformat}

returns a ReadFailure exception:
{noformat}
cqlsh:TestKeySpace_4c8f2cf8d5cc> SELECT "mainGuy", "movie_maker", 
"unique_movie_title", "list", "director", "yearMade" FROM "coolMovies" WHERE 
"mainGuy" = null ALLOW FILTERING;
←[0;1;31mTraceback (most recent call last):
  File "C:\Users\Kishan\.ccm\repository\3.0.0\bin\\cqlsh.py", line 1216, in 
perform_simple_statement
result = future.result()
  File 
"C:\Users\Kishan\.ccm\repository\3.0.0\bin\..\lib\cassandra-driver-internal-only-3.0.0a3.post0-3f15725.zip\cassandra-driver-3.0.0a3.post0-3f15725\cassandra\cluster.py",
 line 3118, in result
raise self._final_exception
ReadFailure: code=1300 [Replica(s) failed to execute read] message="Operation 
failed - received 0 responses and 1 failures" info={'failures': 1, 
'received_responses': 0, 'required_responses': 1, 'consistency': 'ONE'}
←[0m
{noformat}

Cassandra log shows:
{noformat}
WARN  [SharedPool-Worker-2] 2015-11-16 13:51:00,259 
AbstractTracingAwareExecutorService.java:169 - Uncaught exception on thread 
Thread[SharedPool-Worker-2,10,main]: {}
java.lang.AssertionError: null
at 
org.apache.cassandra.db.filter.RowFilter$SimpleExpression.isSatisfiedBy(RowFilter.java:581)
 ~[apache-cassandra-3.0.0.jar:3.0.0]
at 
org.apache.cassandra.db.filter.RowFilter$CQLFilter$1IsSatisfiedFilter.applyToRow(RowFilter.java:243)
 ~[apache-cassandra-3.0.0.jar:3.0.0]
at 
org.apache.cassandra.db.transform.BaseRows.applyOne(BaseRows.java:95) 
~[apache-cassandra-3.0.0.jar:3.0.0]
at org.apache.cassandra.db.transform.BaseRows.add(BaseRows.java:86) 
~[apache-cassandra-3.0.0.jar:3.0.0]
at 
org.apache.cassandra.db.transform.UnfilteredRows.add(UnfilteredRows.java:21) 
~[apache-cassandra-3.0.0.jar:3.0.0]
at 
org.apache.cassandra.db.transform.Transformation.add(Transformation.java:136) 
~[apache-cassandra-3.0.0.jar:3.0.0]
at 
org.apache.cassandra.db.transform.Transformation.apply(Transformation.java:102) 
~[apache-cassandra-3.0.0.jar:3.0.0]
at 
org.apache.cassandra.db.filter.RowFilter$CQLFilter$1IsSatisfiedFilter.applyToPartition(RowFilter.java:233)
 ~[apache-cassandra-3.0.0.jar:3.0.0]
at 
org.apache.cassandra.db.filter.RowFilter$CQLFilter$1IsSatisfiedFilter.applyToPartition(RowFilter.java:227)
 ~[apache-cassandra-3.0.0.jar:3.0.0]
at 
org.apache.cassandra.db.transform.BasePartitions.hasNext(BasePartitions.java:76)
 ~[apache-cassandra-3.0.0.jar:3.0.0]
at 
org.apache.cassandra.db.partitions.UnfilteredPartitionIterators$Serializer.serialize(UnfilteredPartitionIterators.java:293)
 ~[apache-cassandra-3.0.0.jar:3.0.0]
at 
org.apache.cassandra.db.ReadResponse$LocalDataResponse.build(ReadResponse.java:136)
 ~[apache-cassandra-3.0.0.jar:3.0.0]
at 
org.apache.cassandra.db.ReadResponse$LocalDataResponse.<init>(ReadResponse.java:128)
 ~[apache-cassandra-3.0.0.jar:3.0.0]
at 
org.apache.cassandra.db.ReadResponse$LocalDataResponse.<init>(ReadResponse.java:123)
 ~[apache-cassandra-3.0.0.jar:3.0.0]
at 
org.apache.cassandra.db.ReadResponse.createDataResponse(ReadResponse.java:65) 
~[apache-cassandra-3.0.0.jar:3.0.0]
at 
org.apache.cassandra.db.ReadCommand.createResponse(ReadCommand.java:288) 
~[apache-cassandra-3.0.0.jar:3.0.0]
at 
org.apache.cassandra.service.StorageProxy$LocalReadRunnable.runMayThrow(StorageProxy.java:1692)
 ~[apache-cassandra-3.0.0.jar:3.0.0]
at 
org.apache.cassandra.service.StorageProxy$DroppableRunnable.run(StorageProxy.java:2346)
 ~[apache-cassandra-3.0.0.jar:3.0.0]
at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
~[na:1.8.0_60]
at 
org.apache.cassandra.concurrent.AbstractTracingAwareExecutorService$FutureTask.run(AbstractTracingAwareExecutorService.java:164)
 ~[apache-cassandra-3.0.0.jar:3.0.0]
at org.apache.cassandra.concurrent.SEPWorker.run(SEPWorker.java:105) 
[apache-cassandra-3.0.0.jar:3.0.0]
at java.lang.Thread.run(Thread.java:745) [na:1.8.0_60]
{noformat}
In C* < 3.0.0 (such as 2.2.3), this same 

[jira] [Updated] (CASSANDRA-10715) Filtering on NULL returns ReadFailure exception

2016-02-02 Thread Alex Petrov (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10715?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alex Petrov updated CASSANDRA-10715:

Attachment: 0001-Allow-null-values-in-filtered-searches-reuse-Operato.patch

Here's a (very rough) sketch: it reuses the null handling logic from 
ColumnCondition within Operator and removes the null checks and assertions for 
some types in RowFilter.

If you think the general idea is fine, I'll check the possible edge cases and 
proceed with it.

> Filtering on NULL returns ReadFailure exception
> ---
>
> Key: CASSANDRA-10715
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10715
> Project: Cassandra
>  Issue Type: Bug
>  Components: CQL
> Environment: C* 3.0.0 | cqlsh | C# driver 3.0.0beta2 | Windows 2012 R2
>Reporter: Kishan Karunaratne
>Assignee: Benjamin Lerer
> Attachments: 
> 0001-Allow-null-values-in-filtered-searches-reuse-Operato.patch
>
>
> This is an issue I first noticed through the C# driver, but I was able to 
> repro on cqlsh, leading me to believe this is a Cassandra bug.
> Given the following schema:
> {noformat}
> CREATE TABLE "TestKeySpace_4928dc892922"."coolMovies" (
> unique_movie_title text,
> movie_maker text,
> director text,
> list list,
> "mainGuy" text,
> "yearMade" int,
> PRIMARY KEY ((unique_movie_title, movie_maker), director)
> ) WITH CLUSTERING ORDER BY (director ASC)
> {noformat}
> Executing a SELECT with FILTERING on a non-PK column, using a NULL as the 
> argument:
> {noformat}
> SELECT "mainGuy", "movie_maker", "unique_movie_title", "list", "director", 
> "yearMade" FROM "coolMovies" WHERE "mainGuy" = null ALLOW FILTERING
> {noformat}
> returns a ReadFailure exception:
> {noformat}
> cqlsh:TestKeySpace_4c8f2cf8d5cc> SELECT "mainGuy", "movie_maker", 
> "unique_movie_title", "list", "director", "yearMade" FROM "coolMovies" WHERE 
> "mainGuy" = null ALLOW FILTERING;
> ←[0;1;31mTraceback (most recent call last):
>   File "C:\Users\Kishan\.ccm\repository\3.0.0\bin\\cqlsh.py", line 1216, in 
> perform_simple_statement
> result = future.result()
>   File 
> "C:\Users\Kishan\.ccm\repository\3.0.0\bin\..\lib\cassandra-driver-internal-only-3.0.0a3.post0-3f15725.zip\cassandra-driver-3.0.0a3.post0-3f15725\cassandra\cluster.py",
>  line 3118, in result
> raise self._final_exception
> ReadFailure: code=1300 [Replica(s) failed to execute read] message="Operation 
> failed - received 0 responses and 1 failures" info={'failures': 1, 
> 'received_responses': 0, 'required_responses': 1, 'consistency': 'ONE'}
> ←[0m
> {noformat}
> Cassandra log shows:
> {noformat}
> WARN  [SharedPool-Worker-2] 2015-11-16 13:51:00,259 
> AbstractTracingAwareExecutorService.java:169 - Uncaught exception on thread 
> Thread[SharedPool-Worker-2,10,main]: {}
> java.lang.AssertionError: null
>   at 
> org.apache.cassandra.db.filter.RowFilter$SimpleExpression.isSatisfiedBy(RowFilter.java:581)
>  ~[apache-cassandra-3.0.0.jar:3.0.0]
>   at 
> org.apache.cassandra.db.filter.RowFilter$CQLFilter$1IsSatisfiedFilter.applyToRow(RowFilter.java:243)
>  ~[apache-cassandra-3.0.0.jar:3.0.0]
>   at 
> org.apache.cassandra.db.transform.BaseRows.applyOne(BaseRows.java:95) 
> ~[apache-cassandra-3.0.0.jar:3.0.0]
>   at org.apache.cassandra.db.transform.BaseRows.add(BaseRows.java:86) 
> ~[apache-cassandra-3.0.0.jar:3.0.0]
>   at 
> org.apache.cassandra.db.transform.UnfilteredRows.add(UnfilteredRows.java:21) 
> ~[apache-cassandra-3.0.0.jar:3.0.0]
>   at 
> org.apache.cassandra.db.transform.Transformation.add(Transformation.java:136) 
> ~[apache-cassandra-3.0.0.jar:3.0.0]
>   at 
> org.apache.cassandra.db.transform.Transformation.apply(Transformation.java:102)
>  ~[apache-cassandra-3.0.0.jar:3.0.0]
>   at 
> org.apache.cassandra.db.filter.RowFilter$CQLFilter$1IsSatisfiedFilter.applyToPartition(RowFilter.java:233)
>  ~[apache-cassandra-3.0.0.jar:3.0.0]
>   at 
> org.apache.cassandra.db.filter.RowFilter$CQLFilter$1IsSatisfiedFilter.applyToPartition(RowFilter.java:227)
>  ~[apache-cassandra-3.0.0.jar:3.0.0]
>   at 
> org.apache.cassandra.db.transform.BasePartitions.hasNext(BasePartitions.java:76)
>  ~[apache-cassandra-3.0.0.jar:3.0.0]
>   at 
> org.apache.cassandra.db.partitions.UnfilteredPartitionIterators$Serializer.serialize(UnfilteredPartitionIterators.java:293)
>  ~[apache-cassandra-3.0.0.jar:3.0.0]
>   at 
> org.apache.cassandra.db.ReadResponse$LocalDataResponse.build(ReadResponse.java:136)
>  ~[apache-cassandra-3.0.0.jar:3.0.0]
>   at 
> org.apache.cassandra.db.ReadResponse$LocalDataResponse.<init>(ReadResponse.java:128)
>  ~[apache-cassandra-3.0.0.jar:3.0.0]
>   at 
> org.apache.cassandra.db.ReadResponse$LocalDataResponse.<init>(ReadResponse.java:123)
>  ~[apache-cassandra-3.0.0.jar:3.0.0]
>  

[jira] [Updated] (CASSANDRA-11105) cassandra-stress tool - InvalidQueryException: Batch too large

2016-02-02 Thread Ralf Steppacher (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11105?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ralf Steppacher updated CASSANDRA-11105:

Description: 
I am using Cassandra 2.2.4 and I am struggling to get the cassandra-stress tool 
to work for my test scenario. I have followed the example on 
http://www.datastax.com/dev/blog/improved-cassandra-2-1-stress-tool-benchmark-any-schema
 to create a yaml file describing my test (attached).

I am collecting events per user id (text, partition key). Events have a session 
type (text), event type (text), and creation time (timestamp) (clustering keys, 
in that order). Plus some more attributes required for rendering the events in 
a UI. For testing purposes I ended up with the following column spec and insert 
distribution:
{noformat}
columnspec:
  - name: created_at
cluster: uniform(10..1)
  - name: event_type
size: uniform(5..10)
population: uniform(1..30)
cluster: uniform(1..30)
  - name: session_type
size: fixed(5)
population: uniform(1..4)
cluster: uniform(1..4)
  - name: user_id
size: fixed(15)
population: uniform(1..100)
  - name: message
size: uniform(10..100)
population: uniform(1..100B)

insert:
  partitions: fixed(1)
  batchtype: UNLOGGED
  select: fixed(1)/120
{noformat}

Running the stress tool for just the insert prints 
{noformat}
Generating batches with [1..1] partitions and [0..1] rows (of [10..120] 
total rows in the partitions)
{noformat}
and then immediately starts flooding me with 
{{com.datastax.driver.core.exceptions.InvalidQueryException: Batch too large}}. 

I do not understand why I should be exceeding the 
{{batch_size_fail_threshold_in_kb: 50}} in {{cassandra.yaml}}. My understanding 
is that the stress tool should generate one row per batch. The size of a single 
row should not exceed {{8+10*3+5*3+15*3+100*3 = 398 bytes}}, assuming a worst 
case of all text characters being 3-byte Unicode characters. 

This is how I start the attached user scenario:
{noformat}
[rsteppac@centos bin]$ ./cassandra-stress user profile=../batch_too_large.yaml 
ops\(insert=1\) -log level=verbose 
file=~/centos_event_by_patient_session_event_timestamp_insert_only.log -node 
10.211.55.8
INFO  08:00:07 Did not find Netty's native epoll transport in the classpath, 
defaulting to NIO.
INFO  08:00:08 Using data-center name 'datacenter1' for DCAwareRoundRobinPolicy 
(if this is incorrect, please provide the correct datacenter name with 
DCAwareRoundRobinPolicy constructor)
INFO  08:00:08 New Cassandra host /10.211.55.8:9042 added
Connected to cluster: Titan_DEV
Datatacenter: datacenter1; Host: /10.211.55.8; Rack: rack1
Created schema. Sleeping 1s for propagation.
Generating batches with [1..1] partitions and [0..1] rows (of [10..120] 
total rows in the partitions)
com.datastax.driver.core.exceptions.InvalidQueryException: Batch too large
at 
com.datastax.driver.core.exceptions.InvalidQueryException.copy(InvalidQueryException.java:35)
at 
com.datastax.driver.core.DefaultResultSetFuture.extractCauseFromExecutionException(DefaultResultSetFuture.java:271)
at 
com.datastax.driver.core.DefaultResultSetFuture.getUninterruptibly(DefaultResultSetFuture.java:185)
at 
com.datastax.driver.core.AbstractSession.execute(AbstractSession.java:55)
at 
org.apache.cassandra.stress.operations.userdefined.SchemaInsert$JavaDriverRun.run(SchemaInsert.java:87)
at 
org.apache.cassandra.stress.Operation.timeWithRetry(Operation.java:159)
at 
org.apache.cassandra.stress.operations.userdefined.SchemaInsert.run(SchemaInsert.java:119)
at 
org.apache.cassandra.stress.StressAction$Consumer.run(StressAction.java:309)
Caused by: com.datastax.driver.core.exceptions.InvalidQueryException: Batch too 
large
at 
com.datastax.driver.core.Responses$Error.asException(Responses.java:125)
at 
com.datastax.driver.core.DefaultResultSetFuture.onSet(DefaultResultSetFuture.java:120)
at 
com.datastax.driver.core.RequestHandler.setFinalResult(RequestHandler.java:186)
at 
com.datastax.driver.core.RequestHandler.access$2300(RequestHandler.java:45)
at 
com.datastax.driver.core.RequestHandler$SpeculativeExecution.setFinalResult(RequestHandler.java:752)
at 
com.datastax.driver.core.RequestHandler$SpeculativeExecution.onSet(RequestHandler.java:576)
at 
com.datastax.driver.core.Connection$Dispatcher.channelRead0(Connection.java:1003)
at 
com.datastax.driver.core.Connection$Dispatcher.channelRead0(Connection.java:925)
at 
com.datastax.shaded.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105)
at 
com.datastax.shaded.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:339)
at 

[jira] [Comment Edited] (CASSANDRA-11053) COPY FROM on large datasets: fix progress report and debug performance

2016-02-02 Thread Stefania (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11053?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15127864#comment-15127864
 ] 

Stefania edited comment on CASSANDRA-11053 at 2/2/16 8:48 AM:
--

Reading from standard input, and parameters like {{maxrows}} and {{skiprows}}, 
also become problematic: {{skiprows}} can be relative to a file, but 
{{maxrows}}, as well as other max parameters, would require synchronization 
across processes.

I would also like to point out that the approaches of cassandra-loader and COPY 
FROM are currently quite different:

* cassandra-loader treats every file independently; options such as maxRows and 
so forth are on a per-file basis.

* COPY FROM treats files as a whole, so options are on a global basis, but this 
would have to change to achieve the same performance. 




was (Author: stefania):
Reading from standard input, and parameters like {{maxrows}} and {{skiprows}}, 
also become problematic: {{skiprows}} can be relative to a file, but 
{{maxrows}}, as well as other max parameters, would require synchronization 
across processes.



> COPY FROM on large datasets: fix progress report and debug performance
> --
>
> Key: CASSANDRA-11053
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11053
> Project: Cassandra
>  Issue Type: Bug
>  Components: Tools
>Reporter: Stefania
>Assignee: Stefania
> Fix For: 2.1.x, 2.2.x, 3.0.x, 3.x
>
> Attachments: copy_from_large_benchmark.txt, parent_profile.txt, 
> worker_profiles.txt
>
>
> Running COPY FROM on a large dataset (20G divided into 20M records) revealed 
> two issues:
> * The progress report is incorrect: it is very slow until almost the end of 
> the test, at which point it catches up extremely quickly.
> * The performance in rows per second is similar to running smaller tests with 
> a smaller cluster locally (approx 35,000 rows per second). As a comparison, 
> cassandra-stress manages 50,000 rows per second under the same set-up, 
> making it 1.5 times faster. 
> See attached file _copy_from_large_benchmark.txt_ for the benchmark details.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (CASSANDRA-9328) WriteTimeoutException thrown when LWT concurrency > 1, despite the query duration taking MUCH less than cas_contention_timeout_in_ms

2016-02-02 Thread Jonathan Ellis (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-9328?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis resolved CASSANDRA-9328.
---
   Resolution: Won't Fix
Fix Version/s: (was: 2.1.x)

bq. If this is a known issue, and there is no other ticket to represent this 
issue, then please tell me again why you want to close it?

Because there is no sense keeping open a bug report for something that is 
working as designed.  Closing as wontfix.

> WriteTimeoutException thrown when LWT concurrency > 1, despite the query 
> duration taking MUCH less than cas_contention_timeout_in_ms
> 
>
> Key: CASSANDRA-9328
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9328
> Project: Cassandra
>  Issue Type: Bug
>  Components: Coordination
>Reporter: Aaron Whiteside
> Attachments: CassandraLWTTest.java, CassandraLWTTest2.java
>
>
> WriteTimeoutException thrown when LWT concurrency > 1, despite the query 
> duration taking MUCH less than cas_contention_timeout_in_ms.
> Unit test attached, run against a 3 node cluster running 2.1.5.
> If you reduce the threadCount to 1, you never see a WriteTimeoutException. If 
> the WTE is due to not being able to communicate with other nodes, why does 
> the concurrency >1 cause inter-node communication to fail?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-11052) Cannot use Java 8 lambda expression inside UDF code body

2016-02-02 Thread DOAN DuyHai (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11052?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15128398#comment-15128398
 ] 

DOAN DuyHai commented on CASSANDRA-11052:
-

Since the server is running on JDK 8, I don't see any reason to ban the use of 
Java 8 streams and lambdas.

> Cannot use Java 8 lambda expression inside UDF code body
> 
>
> Key: CASSANDRA-11052
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11052
> Project: Cassandra
>  Issue Type: Bug
>  Components: CQL
>Reporter: DOAN DuyHai
>Assignee: Robert Stupp
> Fix For: 3.x
>
> Attachments: 11052.patch
>
>
> When creating the following **UDF** using Java 8 lambda syntax
> {code:sql}
>  CREATE FUNCTION IF NOT EXISTS music.udf(state map<text, bigint>, styles 
> list<text>)
>  RETURNS NULL ON NULL INPUT
>  RETURNS map<text, bigint>
>  LANGUAGE java
>  AS $$
>styles.forEach((Object o) -> {
>String style = (String)o;
>if(state.containsKey(style)) {
> state.put(style, (Long)state.get(style)+1);
>} else {
> state.put(style, 1L);   
>}
>});
>
>return state;
>  $$;
> {code}
>  I got the following exception:
> {code:java}
> Caused by: com.datastax.driver.core.exceptions.InvalidQueryException: Could 
> not compile function 'music.udf' from Java source: 
> org.apache.cassandra.exceptions.InvalidRequestException: Java source 
> compilation failed:
> Line 2: The type java.util.function.Consumer cannot be resolved. It is 
> indirectly referenced from required .class files
> Line 2: The method forEach(Consumer) from the type Iterable refers to the 
> missing type Consumer
> Line 2: The target type of this expression must be a functional interface
>   at 
> com.datastax.driver.core.Responses$Error.asException(Responses.java:136)
>   at 
> com.datastax.driver.core.DefaultResultSetFuture.onSet(DefaultResultSetFuture.java:179)
>   at 
> com.datastax.driver.core.RequestHandler.setFinalResult(RequestHandler.java:184)
>   at 
> com.datastax.driver.core.RequestHandler.access$2500(RequestHandler.java:43)
>   at 
> com.datastax.driver.core.RequestHandler$SpeculativeExecution.setFinalResult(RequestHandler.java:798)
>   at 
> com.datastax.driver.core.RequestHandler$SpeculativeExecution.onSet(RequestHandler.java:617)
>   at 
> com.datastax.driver.core.Connection$Dispatcher.channelRead0(Connection.java:1005)
>   at 
> com.datastax.driver.core.Connection$Dispatcher.channelRead0(Connection.java:928)
>   at 
> io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105)
>   at 
> io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:318)
>   at 
> io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:304)
>   at 
> io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:266)
>   at 
> io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:318)
>   at 
> io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:304)
>   at 
> io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:103)
>   at 
> io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:318)
>   at 
> io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:304)
>   at 
> io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:276)
>   at 
> io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:263)
>   at 
> io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:318)
>   at 
> io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:304)
>   at 
> io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:846)
>   at 
> io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:131)
>   at 
> io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:511)
>   at 
> io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:468)
>   at 
> io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:382)
>   at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:354)
>   at 
> io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:112)
>   ... 1 more
> {code}
>  It looks like the compiler requires importing java.util.function.Consumer, but I have 
> checked the source code and compiler options 
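For anyone hitting this before a fix lands, the same aggregation can be written without lambdas, so that no functional interface needs to be resolved by the UDF compiler. A minimal standalone sketch (the class and method names here are illustrative, not part of Cassandra's UDF API; inside an actual UDF body only the loop itself would be used):

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Illustrative only: the same aggregation as the UDF body above,
// written as a plain for-loop so no java.util.function type is needed.
public final class UdfBodySketch {
    public static Map<String, Long> updateCounts(Map<String, Long> state, List<String> styles) {
        for (String style : styles) {
            Long current = state.get(style);                 // null when the style is unseen
            state.put(style, current == null ? 1L : current + 1L);
        }
        return state;
    }

    public static void main(String[] args) {
        Map<String, Long> state = new HashMap<>();
        updateCounts(state, List.of("rock", "jazz", "rock"));
        System.out.println(state.get("rock") + " " + state.get("jazz")); // prints: 2 1
    }
}
```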

[jira] [Created] (CASSANDRA-11107) rpc_address is required for native protocol

2016-02-02 Thread n0rad (JIRA)
n0rad created CASSANDRA-11107:
-

 Summary: rpc_address is required for native protocol
 Key: CASSANDRA-11107
 URL: https://issues.apache.org/jira/browse/CASSANDRA-11107
 Project: Cassandra
  Issue Type: Bug
  Components: Configuration
Reporter: n0rad
Priority: Minor


I'm starting cassandra on a container with this /etc/hosts

{quote}
127.0.0.1   rkt-235c219a-f0dc-4958-9e03-5afe2581bbe1 localhost
::1  rkt-235c219a-f0dc-4958-9e03-5afe2581bbe1 localhost
{quote}

I have the default configuration except:
{quote}
 - seeds: "10.1.1.1"
listen_address: 10.1.1.1
{quote}

Cassandra will start listening on *127.0.0.1:9042*.

If I set *rpc_address: 10.1.1.1*, even with *start_rpc: false*, Cassandra will 
listen on 10.1.1.1.

Since RPC is not started, I assumed that *rpc_address* and 
*broadcast_rpc_address* would be ignored.

It took me a while to figure that out. There may be something to improve around this.
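For context, in current versions *rpc_address* is the bind address for all client traffic, including the native (CQL) protocol on port 9042; *start_rpc* only gates the Thrift server. A minimal illustrative cassandra.yaml fragment for the setup above:

```yaml
# Illustrative fragment: rpc_address binds ALL client listeners, so the
# native (CQL) transport on 9042 follows it even when start_rpc is false.
listen_address: 10.1.1.1   # inter-node (gossip/storage) traffic only
rpc_address: 10.1.1.1      # client traffic: Thrift AND native protocol
start_rpc: false           # disables only the Thrift server, not port 9042
```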








[2/6] cassandra git commit: Fix deserialization of read commands in mixed clusters

2016-02-02 Thread samt
Fix deserialization of read commands in mixed clusters

Patch by Sam Tunnicliffe; reviewed by Sylvain Lebresne for
CASSANDRA-11087


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/b21df5b7
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/b21df5b7
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/b21df5b7

Branch: refs/heads/cassandra-3.3
Commit: b21df5b70a22ed8bef61dd34d77d71bdfe475922
Parents: b087b4c
Author: Sam Tunnicliffe 
Authored: Thu Jan 28 17:23:23 2016 +
Committer: Sam Tunnicliffe 
Committed: Tue Feb 2 14:33:12 2016 +

--
 CHANGES.txt   |  1 +
 src/java/org/apache/cassandra/db/ReadCommand.java | 17 -
 2 files changed, 17 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/b21df5b7/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 47d2dc1..dcbce5b 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 3.0.3
+ * Fix deserialization of legacy read commands (CASSANDRA-11087)
  * Fix incorrect computation of deletion time in sstable metadata 
(CASSANDRA-11102)
  * Avoid memory leak when collecting sstable metadata (CASSANDRA-11026)
  * Mutations do not block for completion under view lock contention 
(CASSANDRA-10779)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/b21df5b7/src/java/org/apache/cassandra/db/ReadCommand.java
--
diff --git a/src/java/org/apache/cassandra/db/ReadCommand.java 
b/src/java/org/apache/cassandra/db/ReadCommand.java
index f21d100..97c3d07 100644
--- a/src/java/org/apache/cassandra/db/ReadCommand.java
+++ b/src/java/org/apache/cassandra/db/ReadCommand.java
@@ -1308,8 +1308,23 @@ public abstract class ReadCommand implements ReadQuery
 "Fill name in filter (hex): " + 
ByteBufferUtil.bytesToHex(buffer), metadata.cfId);
 }
 
-if (!cellName.clustering.equals(Clustering.STATIC_CLUSTERING))
+// If we're querying for a static column, we may also need to 
read it
+// as if it were a thrift dynamic column (because the column 
metadata,
+// which makes it a static column in 3.0+, may have been added 
*after*
+// some values were written). Note that all cql queries on 
non-compact
+// tables used slice & not name filters prior to 3.0 so this 
path is
+// not taken for non-compact tables. It is theoretically 
possible to
+// get here via thrift, hence the check on 
metadata.isStaticCompactTable.
+// See CASSANDRA-11087.
+if (metadata.isStaticCompactTable() && 
cellName.clustering.equals(Clustering.STATIC_CLUSTERING))
+{
+clusterings.add(new 
Clustering(cellName.column.name.bytes));
+selectionBuilder.add(metadata.compactValueColumn());
+}
+else
+{
 clusterings.add(cellName.clustering);
+}
 
 selectionBuilder.add(cellName.column);
 }



[4/6] cassandra git commit: Merge branch 'cassandra-3.0' into cassandra-3.3

2016-02-02 Thread samt
Merge branch 'cassandra-3.0' into cassandra-3.3


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/f70c3538
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/f70c3538
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/f70c3538

Branch: refs/heads/trunk
Commit: f70c353852fa29622d573a0824fcdb90c0742306
Parents: e96fae3 b21df5b
Author: Sam Tunnicliffe 
Authored: Tue Feb 2 14:34:51 2016 +
Committer: Sam Tunnicliffe 
Committed: Tue Feb 2 14:38:45 2016 +

--
 CHANGES.txt|  1 +
 src/java/org/apache/cassandra/db/ReadCommand.java  | 17 -
 .../cassandra/db/ReadCommandVerbHandler.java   |  2 +-
 3 files changed, 18 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/f70c3538/CHANGES.txt
--
diff --cc CHANGES.txt
index 53208c0,dcbce5b..b67fae2
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -1,8 -1,5 +1,9 @@@
 -3.0.3
 +3.3
 + * Avoid infinite loop if owned range is smaller than number of
 +   data dirs (CASSANDRA-11034)
 + * Avoid bootstrap hanging when existing nodes have no data to stream 
(CASSANDRA-11010)
 +Merged from 3.0:
+  * Fix deserialization of legacy read commands (CASSANDRA-11087)
   * Fix incorrect computation of deletion time in sstable metadata 
(CASSANDRA-11102)
   * Avoid memory leak when collecting sstable metadata (CASSANDRA-11026)
   * Mutations do not block for completion under view lock contention 
(CASSANDRA-10779)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/f70c3538/src/java/org/apache/cassandra/db/ReadCommand.java
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/f70c3538/src/java/org/apache/cassandra/db/ReadCommandVerbHandler.java
--
diff --cc src/java/org/apache/cassandra/db/ReadCommandVerbHandler.java
index 42cd309,9cde8dc..e2a9678
--- a/src/java/org/apache/cassandra/db/ReadCommandVerbHandler.java
+++ b/src/java/org/apache/cassandra/db/ReadCommandVerbHandler.java
@@@ -50,15 -47,9 +50,15 @@@ public class ReadCommandVerbHandler imp
  response = command.createResponse(iterator);
  }
  
 -MessageOut reply = new 
MessageOut<>(MessagingService.Verb.REQUEST_RESPONSE, response, serializer());
 +if (!command.complete())
 +{
 +Tracing.trace("Discarding partial response to {} (timed out)", 
message.from);
 +MessagingService.instance().incrementDroppedMessages(message, 
System.currentTimeMillis() - message.constructionTime.timestamp);
 +return;
 +}
  
  Tracing.trace("Enqueuing response to {}", message.from);
- MessageOut reply = new 
MessageOut<>(MessagingService.Verb.REQUEST_RESPONSE, response, 
ReadResponse.serializer);
++MessageOut reply = new 
MessageOut<>(MessagingService.Verb.REQUEST_RESPONSE, response, serializer());
  MessagingService.instance().sendReply(reply, id, message.from);
  }
  }



[3/6] cassandra git commit: Fix deserialization of read commands in mixed clusters

2016-02-02 Thread samt
Fix deserialization of read commands in mixed clusters

Patch by Sam Tunnicliffe; reviewed by Sylvain Lebresne for
CASSANDRA-11087


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/b21df5b7
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/b21df5b7
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/b21df5b7

Branch: refs/heads/trunk
Commit: b21df5b70a22ed8bef61dd34d77d71bdfe475922
Parents: b087b4c
Author: Sam Tunnicliffe 
Authored: Thu Jan 28 17:23:23 2016 +
Committer: Sam Tunnicliffe 
Committed: Tue Feb 2 14:33:12 2016 +

--
 CHANGES.txt   |  1 +
 src/java/org/apache/cassandra/db/ReadCommand.java | 17 -
 2 files changed, 17 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/b21df5b7/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 47d2dc1..dcbce5b 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 3.0.3
+ * Fix deserialization of legacy read commands (CASSANDRA-11087)
  * Fix incorrect computation of deletion time in sstable metadata 
(CASSANDRA-11102)
  * Avoid memory leak when collecting sstable metadata (CASSANDRA-11026)
  * Mutations do not block for completion under view lock contention 
(CASSANDRA-10779)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/b21df5b7/src/java/org/apache/cassandra/db/ReadCommand.java
--
diff --git a/src/java/org/apache/cassandra/db/ReadCommand.java 
b/src/java/org/apache/cassandra/db/ReadCommand.java
index f21d100..97c3d07 100644
--- a/src/java/org/apache/cassandra/db/ReadCommand.java
+++ b/src/java/org/apache/cassandra/db/ReadCommand.java
@@ -1308,8 +1308,23 @@ public abstract class ReadCommand implements ReadQuery
 "Fill name in filter (hex): " + 
ByteBufferUtil.bytesToHex(buffer), metadata.cfId);
 }
 
-if (!cellName.clustering.equals(Clustering.STATIC_CLUSTERING))
+// If we're querying for a static column, we may also need to 
read it
+// as if it were a thrift dynamic column (because the column 
metadata,
+// which makes it a static column in 3.0+, may have been added 
*after*
+// some values were written). Note that all cql queries on 
non-compact
+// tables used slice & not name filters prior to 3.0 so this 
path is
+// not taken for non-compact tables. It is theoretically 
possible to
+// get here via thrift, hence the check on 
metadata.isStaticCompactTable.
+// See CASSANDRA-11087.
+if (metadata.isStaticCompactTable() && 
cellName.clustering.equals(Clustering.STATIC_CLUSTERING))
+{
+clusterings.add(new 
Clustering(cellName.column.name.bytes));
+selectionBuilder.add(metadata.compactValueColumn());
+}
+else
+{
 clusterings.add(cellName.clustering);
+}
 
 selectionBuilder.add(cellName.column);
 }



[1/6] cassandra git commit: Fix deserialization of read commands in mixed clusters

2016-02-02 Thread samt
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-3.0 b087b4c7d -> b21df5b70
  refs/heads/cassandra-3.3 e96fae3ca -> f70c35385
  refs/heads/trunk 27d88ee33 -> 0a83e6aa8


Fix deserialization of read commands in mixed clusters

Patch by Sam Tunnicliffe; reviewed by Sylvain Lebresne for
CASSANDRA-11087


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/b21df5b7
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/b21df5b7
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/b21df5b7

Branch: refs/heads/cassandra-3.0
Commit: b21df5b70a22ed8bef61dd34d77d71bdfe475922
Parents: b087b4c
Author: Sam Tunnicliffe 
Authored: Thu Jan 28 17:23:23 2016 +
Committer: Sam Tunnicliffe 
Committed: Tue Feb 2 14:33:12 2016 +

--
 CHANGES.txt   |  1 +
 src/java/org/apache/cassandra/db/ReadCommand.java | 17 -
 2 files changed, 17 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/b21df5b7/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 47d2dc1..dcbce5b 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 3.0.3
+ * Fix deserialization of legacy read commands (CASSANDRA-11087)
  * Fix incorrect computation of deletion time in sstable metadata 
(CASSANDRA-11102)
  * Avoid memory leak when collecting sstable metadata (CASSANDRA-11026)
  * Mutations do not block for completion under view lock contention 
(CASSANDRA-10779)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/b21df5b7/src/java/org/apache/cassandra/db/ReadCommand.java
--
diff --git a/src/java/org/apache/cassandra/db/ReadCommand.java 
b/src/java/org/apache/cassandra/db/ReadCommand.java
index f21d100..97c3d07 100644
--- a/src/java/org/apache/cassandra/db/ReadCommand.java
+++ b/src/java/org/apache/cassandra/db/ReadCommand.java
@@ -1308,8 +1308,23 @@ public abstract class ReadCommand implements ReadQuery
 "Fill name in filter (hex): " + 
ByteBufferUtil.bytesToHex(buffer), metadata.cfId);
 }
 
-if (!cellName.clustering.equals(Clustering.STATIC_CLUSTERING))
+// If we're querying for a static column, we may also need to 
read it
+// as if it were a thrift dynamic column (because the column 
metadata,
+// which makes it a static column in 3.0+, may have been added 
*after*
+// some values were written). Note that all cql queries on 
non-compact
+// tables used slice & not name filters prior to 3.0 so this 
path is
+// not taken for non-compact tables. It is theoretically 
possible to
+// get here via thrift, hence the check on 
metadata.isStaticCompactTable.
+// See CASSANDRA-11087.
+if (metadata.isStaticCompactTable() && 
cellName.clustering.equals(Clustering.STATIC_CLUSTERING))
+{
+clusterings.add(new 
Clustering(cellName.column.name.bytes));
+selectionBuilder.add(metadata.compactValueColumn());
+}
+else
+{
 clusterings.add(cellName.clustering);
+}
 
 selectionBuilder.add(cellName.column);
 }



[5/6] cassandra git commit: Merge branch 'cassandra-3.0' into cassandra-3.3

2016-02-02 Thread samt
Merge branch 'cassandra-3.0' into cassandra-3.3


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/f70c3538
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/f70c3538
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/f70c3538

Branch: refs/heads/cassandra-3.3
Commit: f70c353852fa29622d573a0824fcdb90c0742306
Parents: e96fae3 b21df5b
Author: Sam Tunnicliffe 
Authored: Tue Feb 2 14:34:51 2016 +
Committer: Sam Tunnicliffe 
Committed: Tue Feb 2 14:38:45 2016 +

--
 CHANGES.txt|  1 +
 src/java/org/apache/cassandra/db/ReadCommand.java  | 17 -
 .../cassandra/db/ReadCommandVerbHandler.java   |  2 +-
 3 files changed, 18 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/f70c3538/CHANGES.txt
--
diff --cc CHANGES.txt
index 53208c0,dcbce5b..b67fae2
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -1,8 -1,5 +1,9 @@@
 -3.0.3
 +3.3
 + * Avoid infinite loop if owned range is smaller than number of
 +   data dirs (CASSANDRA-11034)
 + * Avoid bootstrap hanging when existing nodes have no data to stream 
(CASSANDRA-11010)
 +Merged from 3.0:
+  * Fix deserialization of legacy read commands (CASSANDRA-11087)
   * Fix incorrect computation of deletion time in sstable metadata 
(CASSANDRA-11102)
   * Avoid memory leak when collecting sstable metadata (CASSANDRA-11026)
   * Mutations do not block for completion under view lock contention 
(CASSANDRA-10779)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/f70c3538/src/java/org/apache/cassandra/db/ReadCommand.java
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/f70c3538/src/java/org/apache/cassandra/db/ReadCommandVerbHandler.java
--
diff --cc src/java/org/apache/cassandra/db/ReadCommandVerbHandler.java
index 42cd309,9cde8dc..e2a9678
--- a/src/java/org/apache/cassandra/db/ReadCommandVerbHandler.java
+++ b/src/java/org/apache/cassandra/db/ReadCommandVerbHandler.java
@@@ -50,15 -47,9 +50,15 @@@ public class ReadCommandVerbHandler imp
  response = command.createResponse(iterator);
  }
  
 -MessageOut reply = new 
MessageOut<>(MessagingService.Verb.REQUEST_RESPONSE, response, serializer());
 +if (!command.complete())
 +{
 +Tracing.trace("Discarding partial response to {} (timed out)", 
message.from);
 +MessagingService.instance().incrementDroppedMessages(message, 
System.currentTimeMillis() - message.constructionTime.timestamp);
 +return;
 +}
  
  Tracing.trace("Enqueuing response to {}", message.from);
- MessageOut reply = new 
MessageOut<>(MessagingService.Verb.REQUEST_RESPONSE, response, 
ReadResponse.serializer);
++MessageOut reply = new 
MessageOut<>(MessagingService.Verb.REQUEST_RESPONSE, response, serializer());
  MessagingService.instance().sendReply(reply, id, message.from);
  }
  }



[6/6] cassandra git commit: Merge branch 'cassandra-3.3' into trunk

2016-02-02 Thread samt
Merge branch 'cassandra-3.3' into trunk


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/0a83e6aa
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/0a83e6aa
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/0a83e6aa

Branch: refs/heads/trunk
Commit: 0a83e6aa8ebaf6adcc3e23235300d90c66731cf6
Parents: 27d88ee f70c353
Author: Sam Tunnicliffe 
Authored: Tue Feb 2 14:39:31 2016 +
Committer: Sam Tunnicliffe 
Committed: Tue Feb 2 14:47:38 2016 +

--
 CHANGES.txt|  1 +
 src/java/org/apache/cassandra/db/ReadCommand.java  | 17 -
 .../cassandra/db/ReadCommandVerbHandler.java   |  2 +-
 3 files changed, 18 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/0a83e6aa/CHANGES.txt
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/0a83e6aa/src/java/org/apache/cassandra/db/ReadCommand.java
--
diff --cc src/java/org/apache/cassandra/db/ReadCommand.java
index d1a3e3a,a0b3d55..3adee9f
--- a/src/java/org/apache/cassandra/db/ReadCommand.java
+++ b/src/java/org/apache/cassandra/db/ReadCommand.java
@@@ -1367,8 -1367,23 +1367,23 @@@ public abstract class ReadCommand exten
  "Fill name in filter (hex): " + 
ByteBufferUtil.bytesToHex(buffer), metadata.cfId);
  }
  
- if (!cellName.clustering.equals(Clustering.STATIC_CLUSTERING))
+ // If we're querying for a static column, we may also need to 
read it
+ // as if it were a thrift dynamic column (because the column 
metadata,
+ // which makes it a static column in 3.0+, may have been 
added *after*
+ // some values were written). Note that all cql queries on 
non-compact
+ // tables used slice & not name filters prior to 3.0 so this 
path is
+ // not taken for non-compact tables. It is theoretically 
possible to
+ // get here via thrift, hence the check on 
metadata.isStaticCompactTable.
+ // See CASSANDRA-11087.
+ if (metadata.isStaticCompactTable() && 
cellName.clustering.equals(Clustering.STATIC_CLUSTERING))
+ {
 -clusterings.add(new 
Clustering(cellName.column.name.bytes));
++
clusterings.add(Clustering.make(cellName.column.name.bytes));
+ selectionBuilder.add(metadata.compactValueColumn());
+ }
+ else
+ {
  clusterings.add(cellName.clustering);
+ }
  
  selectionBuilder.add(cellName.column);
  }



[jira] [Created] (CASSANDRA-11108) Fix failure of cql_tests.MiscellaneousCQLTester.large_collection_errors_test on 2.1 and 2.2

2016-02-02 Thread Sylvain Lebresne (JIRA)
Sylvain Lebresne created CASSANDRA-11108:


 Summary: Fix failure of 
cql_tests.MiscellaneousCQLTester.large_collection_errors_test on 2.1 and 2.2
 Key: CASSANDRA-11108
 URL: https://issues.apache.org/jira/browse/CASSANDRA-11108
 Project: Cassandra
  Issue Type: Bug
Reporter: Sylvain Lebresne


The aforementioned test fails on 2.1 and 2.2 (the only branches on which it is 
run) due to https://datastax-oss.atlassian.net/browse/PYTHON-459. That 
ticket has been fixed, but I don't think the version incorporating the fix has been 
released yet. This ticket is so we don't forget to act once said version is 
released.





[jira] [Updated] (CASSANDRA-10411) Add/drop multiple columns in one ALTER TABLE statement

2016-02-02 Thread Sylvain Lebresne (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10411?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sylvain Lebresne updated CASSANDRA-10411:
-
Reviewer: Robert Stupp

> Add/drop multiple columns in one ALTER TABLE statement
> --
>
> Key: CASSANDRA-10411
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10411
> Project: Cassandra
>  Issue Type: New Feature
>Reporter: Bryn Cooke
>Assignee: Amit Singh Chowdhery
>Priority: Minor
>  Labels: patch
> Attachments: Cassandra-10411-trunk.diff, cassandra-10411.diff
>
>
> Currently it is only possible to add one column at a time in an ALTER TABLE 
> statement. It would be great if we could add multiple columns at once.
> The primary reason for this is that adding each column individually seems to 
> take a significant amount of time (at least on my development machine). I 
> know all the columns I want to add, but only after the initial table has been 
> created.
> As a secondary consideration, it brings CQL slightly closer to SQL, where most 
> databases can handle adding multiple columns in one statement.
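For illustration, a grouped form along SQL lines might look like the following (hypothetical syntax; the exact grammar is up to the patch):

```sql
-- Hypothetical grouped syntax; today each column requires its own statement.
ALTER TABLE music.albums ADD (genre text, play_count bigint);
ALTER TABLE music.albums DROP (genre, play_count);
```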





[jira] [Commented] (CASSANDRA-11102) Data lost during compaction

2016-02-02 Thread Sylvain Lebresne (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11102?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15128037#comment-15128037
 ] 

Sylvain Lebresne commented on CASSANDRA-11102:
--

I think all we want is to make sure we always update the deletion time stats if 
the row {{LivenessInfo}} is not empty, so just removing the condition in 
{{MetadataCollection.update(LivenessInfo newInfo)}} as in [this 
patch|https://github.com/pcmanus/cassandra/commits/11102]. That's actually 
consistent with how we treat cells, which makes sense. I've started a CI run 
on this to check that it doesn't break something else, but the jobs seem to be 
queued, so I'll update once I get the results.

> Data lost during compaction
> ---
>
> Key: CASSANDRA-11102
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11102
> Project: Cassandra
>  Issue Type: Bug
>  Components: Compaction
> Environment: Cassandra 3.2.1 (single node, 5 node cluster)
> JDK 8
>Reporter: Jaroslav Kamenik
>Assignee: Marcus Eriksson
>Priority: Blocker
> Fix For: 3.0.3, 3.3
>
>
> We have experienced data loss in some tables during the few weeks since updating 
> to Cassandra 3.0. I think I have successfully found a test case now. 
> Step one - test table:
> CREATE TABLE aaa (
> r int,
> c1 int,
> c2 ascii,
> PRIMARY KEY (r, c1, c2));
> Step two - run few queries:
>   insert into aaa (r, c1, c2) values (1,2,'A');
>   delete from aaa where r=1 and c1=2 and c2='B';
>   insert into aaa (r, c1, c2) values (2,3,'A');
>   delete from aaa where r=2 and c1=3 and c2='B';
>   insert into aaa (r, c1, c2) values (3,4,'A');
>   delete from aaa where r=3 and c1=4 and c2='B';
>   insert into aaa (r, c1, c2) values (4,5,'A');
>   delete from aaa where r=4 and c1=5 and c2='B';
> It creates 4 rows (select count says 4) and 4 tombstones.
> Step 3 - Restart Cassandra
> You will see new files written into C* data folder. I tried sstable-tools to 
> print table structure, it shows 4 rows, data and tombstones are there.
> Step 4 - set GC grace to 1 to force tombstone removing during compaction.
> alter table aaa with GC_GRACE_SECONDS = 1;
> Step 5 - Compact tables
> ./nodetool compact
> The aaa files disappear during compaction. 
> select count(*) says 0
> compaction history says
> ... aaa  2016-02-01T14:24:01.433   329   0   {}





[jira] [Created] (CASSANDRA-11106) Experiment with strategies for picking compaction candidates in LCS

2016-02-02 Thread Marcus Eriksson (JIRA)
Marcus Eriksson created CASSANDRA-11106:
---

 Summary: Experiment with strategies for picking compaction 
candidates in LCS
 Key: CASSANDRA-11106
 URL: https://issues.apache.org/jira/browse/CASSANDRA-11106
 Project: Cassandra
  Issue Type: Improvement
Reporter: Marcus Eriksson


Ideas taken here: http://rocksdb.org/blog/2921/compaction_pri/

The current strategy in LCS is that we keep track of the token that was last 
compacted and then start a compaction with the sstable containing the next 
token (kOldestSmallestSeqFirst in the blog post above).

The rocksdb blog post above introduces a few ideas how this could be improved:
* pick the 'coldest' sstable (sstable with the oldest max timestamp) - we want 
to keep the hot data (recently updated) in the lower levels to avoid write 
amplification
* pick the sstable with the highest tombstone ratio, we want to get tombstones 
to the top level as quickly as possible.
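For illustration, the two ranking ideas can be sketched as simple selectors over per-sstable stats (the {{SstableStats}} record is a stand-in; Cassandra's real compaction code would read this from sstable metadata):

```java
import java.util.Comparator;
import java.util.List;

// Illustrative sketch of the two candidate-picking ideas above.
// SstableStats is a stand-in, not Cassandra's SSTableReader API.
public final class LcsCandidateSketch {
    record SstableStats(String name, long maxTimestamp, double tombstoneRatio) {}

    // "Coldest" first: the sstable whose newest data is oldest,
    // keeping recently updated (hot) data in the lower levels.
    static SstableStats coldest(List<SstableStats> level) {
        return level.stream().min(Comparator.comparingLong(SstableStats::maxTimestamp)).orElseThrow();
    }

    // Highest tombstone ratio first: push deletions to the top level quickly.
    static SstableStats mostTombstones(List<SstableStats> level) {
        return level.stream().max(Comparator.comparingDouble(SstableStats::tombstoneRatio)).orElseThrow();
    }

    public static void main(String[] args) {
        List<SstableStats> level = List.of(
            new SstableStats("a", 100L, 0.10),
            new SstableStats("b",  50L, 0.02),   // coldest: oldest max timestamp
            new SstableStats("c",  90L, 0.40));  // most tombstones
        System.out.println(coldest(level).name() + " " + mostTombstones(level).name()); // prints: b c
    }
}
```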





[jira] [Commented] (CASSANDRA-11053) COPY FROM on large datasets: fix progress report and debug performance

2016-02-02 Thread Stefania (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11053?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15127990#comment-15127990
 ] 

Stefania commented on CASSANDRA-11053:
--

Another approach I am looking at is to continue reading in the parent process, 
possibly via memory-mapped files, and to only move the CSV decoding to the 
worker processes. This would be less disruptive to the existing design. I also 
note that we will still need to improve worker process performance, since they 
only spend about 30 seconds receiving data; something else needs to improve. 
Since most of the time-consuming methods are in the driver, I would 
like to try to get the cythonized driver to work as well.

Sorry for the long chain of comments. However, I would really appreciate any 
further ideas, without taking too much of your time.

> COPY FROM on large datasets: fix progress report and debug performance
> --
>
> Key: CASSANDRA-11053
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11053
> Project: Cassandra
>  Issue Type: Bug
>  Components: Tools
>Reporter: Stefania
>Assignee: Stefania
> Fix For: 2.1.x, 2.2.x, 3.0.x, 3.x
>
> Attachments: copy_from_large_benchmark.txt, parent_profile.txt, 
> worker_profiles.txt
>
>
> Running COPY from on a large dataset (20G divided in 20M records) revealed 
> two issues:
> * The progress report is incorrect: it is very slow until almost the end of 
> the test, at which point it catches up extremely quickly.
> * The performance in rows per second is similar to running smaller tests with 
> a smaller cluster locally (approx. 35,000 rows per second). As a comparison, 
> cassandra-stress manages 50,000 rows per second under the same set-up, 
> making it roughly 1.5 times faster. 
> See attached file _copy_from_large_benchmark.txt_ for the benchmark details.





[jira] [Updated] (CASSANDRA-10819) Generic Java UDF types

2016-02-02 Thread Robert Stupp (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10819?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Stupp updated CASSANDRA-10819:
-
Reviewer: DOAN DuyHai

> Generic Java UDF types
> --
>
> Key: CASSANDRA-10819
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10819
> Project: Cassandra
>  Issue Type: Improvement
>  Components: CQL
>Reporter: Robert Stupp
>Assignee: Robert Stupp
>Priority: Minor
>  Labels: UDF, doc-impacting
> Fix For: 3.x
>
>
> At the moment we only generate raw type signatures for Java UDF methods. E.g. 
> a CQL argument type {{map<text, bigint>}} is just mapped to the raw type {{java.util.Map}} 
> but could be mapped to {{java.util.Map<String, Long>}}.
> It's probably a simple but nice improvement and feels like low-hanging fruit.
> Depending on the complexity it might be doable for 3.0.x, too.
> Thanks for the heads-up, [~doanduyhai]!





[jira] [Commented] (CASSANDRA-10922) Inconsistent query results

2016-02-02 Thread Sylvain Lebresne (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10922?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15128501#comment-15128501
 ] 

Sylvain Lebresne commented on CASSANDRA-10922:
--

bq.  we can run any commands and provide all info that can help you

The schema and the results of both tracing and scrubbing would be a good start.

bq. Could you please describe how to run scrub, or turn on tracing

For scrub, see 
https://docs.datastax.com/en/cassandra/2.2/cassandra/tools/toolsScrub.html. For 
tracing, just do {{TRACING ON}} in cqlsh before issuing the query and you'll 
get a trace printed with the result of the query.

> Inconsistent query results
> --
>
> Key: CASSANDRA-10922
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10922
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Maxim Podkolzine
>Priority: Critical
>
> I have a DB created with Cassandra 2.2.3, and currently I'm running it with 
> Cassandra 3.0.2.
> The value of a particular cell is returned or not depending on the query I run (in 
> cqlsh):
> - returned when iterate all columns, i.e.
> SELECT value FROM "3xupsource".Content WHERE databaseid=0x2112 LIMIT 2
> (I can see the columns 0x and 0x0100 there, the values seem 
> correct)
> - not returned when I specify a particular column
> SELECT value FROM "3xupsource".Content WHERE databaseid=0x2112 AND 
> columnid=0x0100
> Other queries like SELECT value FROM "3xupsource".Content WHERE 
> databaseid=0x2112 AND columnid=0x work consistently.
> There is nothing in Cassandra error log, so it does not look like a 
> corruption.





[jira] [Updated] (CASSANDRA-9318) Bound the number of in-flight requests at the coordinator

2016-02-02 Thread Aleksey Yeschenko (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-9318?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksey Yeschenko updated CASSANDRA-9318:
-
Fix Version/s: (was: 2.2.x)
   (was: 2.1.x)

> Bound the number of in-flight requests at the coordinator
> -
>
> Key: CASSANDRA-9318
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9318
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Local Write-Read Paths, Streaming and Messaging
>Reporter: Ariel Weisberg
>Assignee: Ariel Weisberg
>
> It's possible to somewhat bound the amount of load accepted into the cluster 
> by bounding the number of in-flight requests and request bytes.
> An implementation might do something like track the number of outstanding 
> bytes and requests and if it reaches a high watermark disable read on client 
> connections until it goes back below some low watermark.
> Need to make sure that disabling read on the client connection won't 
> introduce other issues.





[jira] [Commented] (CASSANDRA-10070) Automatic repair scheduling

2016-02-02 Thread Jonathan Ellis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10070?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15128529#comment-15128529
 ] 

Jonathan Ellis commented on CASSANDRA-10070:


[~devdazed], you had some great suggestions above.  Do you have time to look at 
the draft Marcus attached?

> Automatic repair scheduling
> ---
>
> Key: CASSANDRA-10070
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10070
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Marcus Olsson
>Assignee: Marcus Olsson
>Priority: Minor
> Fix For: 3.x
>
> Attachments: Distributed Repair Scheduling.doc
>
>
> Scheduling and running repairs in a Cassandra cluster is most often a
> required task, but it can be hard for new users and it also requires a bit of
> manual configuration. There are good tools out there that can be used
> to simplify things, but wouldn't this be a good feature to have inside of 
> Cassandra? To automatically schedule and run repairs, so that when you start 
> up your cluster it basically maintains itself in terms of normal 
> anti-entropy, with the possibility for manual configuration.





[jira] [Assigned] (CASSANDRA-11078) upgrade_supercolumns_test dtests failing on 2.1

2016-02-02 Thread Philip Thompson (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11078?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Philip Thompson reassigned CASSANDRA-11078:
---

Assignee: Philip Thompson  (was: DS Test Eng)

> upgrade_supercolumns_test dtests failing on 2.1
> ---
>
> Key: CASSANDRA-11078
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11078
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Jim Witschey
>Assignee: Philip Thompson
> Fix For: 3.x
>
>
> The tests in this module fail 
> [here|https://github.com/riptano/cassandra-dtest/blob/18647a3e167f127795e2fe63d73305dddf103716/upgrade_supercolumns_test.py#L213]
>  and 
> [here|https://github.com/riptano/cassandra-dtest/blob/529cd71ad5ac4c2f28ccb5560ddc068f604c7b28/upgrade_supercolumns_test.py#L106]
>  when a call to {{start}} with {{wait_other_notice=True}} times out. It 
> happens consistently on the upgrade path from cassandra-2.1 to 2.2. I haven't 
> seen clear evidence as to whether this is a test failure or a C* bug, so I'll 
> mark it as a test error for the TE team to debug.
> I don't have a CassCI link for this failure - the changes to the tests 
> haven't been merged yet.
> EDIT: changing the title of this ticket since there are multiple similar 
> failures. The failing tests are
> {code}
> upgrade_supercolumns_test.py:TestSCUpgrade.upgrade_with_counters_test failing
> upgrade_supercolumns_test.py:TestSCUpgrade.upgrade_with_index_creation_test
> {code}





[jira] [Commented] (CASSANDRA-10779) Mutations do not block for completion under view lock contention

2016-02-02 Thread Sylvain Lebresne (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10779?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15128424#comment-15128424
 ] 

Sylvain Lebresne commented on CASSANDRA-10779:
--

I don't know if that's why you re-opened, but this seems to have broken 
{{materialized_views_test.TestMaterializedViewsConsistency.single_partition_consistent_reads_after_write_test}}.

> Mutations do not block for completion under view lock contention
> 
>
> Key: CASSANDRA-10779
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10779
> Project: Cassandra
>  Issue Type: Bug
>  Components: Local Write-Read Paths
> Environment: Windows 7 64-bit, Cassandra v3.0.0, Java 1.8u60
>Reporter: Will Zhang
>Assignee: Carl Yeksigian
> Fix For: 3.0.3, 3.3
>
>
> Hi guys,
> I encountered the following warning message when I was testing an upgrade 
> from v2.2.2 to v3.0.0. 
> It looks like a write time-out but in an uncaught exception. Could this be an 
> easy fix?
> Log file section below. Thank you!
> {code}
>   WARN  [SharedPool-Worker-64] 2015-11-26 14:04:24,678 
> AbstractTracingAwareExecutorService.java:169 - Uncaught exception on thread 
> Thread[SharedPool-Worker-64,10,main]: {}
> org.apache.cassandra.exceptions.WriteTimeoutException: Operation timed out - 
> received only 0 responses.
>   at org.apache.cassandra.db.Keyspace.apply(Keyspace.java:427) 
> ~[apache-cassandra-3.0.0.jar:3.0.0]
>   at org.apache.cassandra.db.Keyspace.apply(Keyspace.java:386) 
> ~[apache-cassandra-3.0.0.jar:3.0.0]
>   at org.apache.cassandra.db.Mutation.apply(Mutation.java:205) 
> ~[apache-cassandra-3.0.0.jar:3.0.0]
>   at 
> org.apache.cassandra.db.Keyspace.lambda$apply$59(Keyspace.java:435) 
> ~[apache-cassandra-3.0.0.jar:3.0.0]
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
> ~[na:1.8.0_60]
>   at 
> org.apache.cassandra.concurrent.AbstractTracingAwareExecutorService$FutureTask.run(AbstractTracingAwareExecutorService.java:164)
>  ~[apache-cassandra-3.0.0.jar:3.0.0]
>   at 
> org.apache.cassandra.concurrent.SEPWorker.run(SEPWorker.java:105) 
> [apache-cassandra-3.0.0.jar:3.0.0]
>   at java.lang.Thread.run(Thread.java:745) [na:1.8.0_60]
>   INFO  [IndexSummaryManager:1] 2015-11-26 14:41:10,527 
> IndexSummaryManager.java:257 - Redistributing index summaries
> {code}





[4/6] cassandra git commit: Merge branch 'cassandra-3.0' into cassandra-3.3

2016-02-02 Thread samt
Merge branch 'cassandra-3.0' into cassandra-3.3


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/c5feeda6
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/c5feeda6
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/c5feeda6

Branch: refs/heads/trunk
Commit: c5feeda6a968284d15f2baaf95b871fcced32284
Parents: f70c353 f51e983
Author: Sam Tunnicliffe 
Authored: Tue Feb 2 15:44:50 2016 +
Committer: Sam Tunnicliffe 
Committed: Tue Feb 2 15:44:50 2016 +

--
 CHANGES.txt| 1 +
 .../org/apache/cassandra/index/internal/keys/KeysSearcher.java | 2 ++
 2 files changed, 3 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/c5feeda6/CHANGES.txt
--
diff --cc CHANGES.txt
index b67fae2,ef0da4c..5119626
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -1,8 -1,5 +1,9 @@@
 -3.0.3
 +3.3
 + * Avoid infinite loop if owned range is smaller than number of
 +   data dirs (CASSANDRA-11034)
 + * Avoid bootstrap hanging when existing nodes have no data to stream 
(CASSANDRA-11010)
 +Merged from 3.0:
+  * Filter keys searcher results by target range (CASSANDRA-11104)
   * Fix deserialization of legacy read commands (CASSANDRA-11087)
   * Fix incorrect computation of deletion time in sstable metadata 
(CASSANDRA-11102)
   * Avoid memory leak when collecting sstable metadata (CASSANDRA-11026)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/c5feeda6/src/java/org/apache/cassandra/index/internal/keys/KeysSearcher.java
--



[1/6] cassandra git commit: Filter keys searcher results by target range

2016-02-02 Thread samt
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-3.0 b21df5b70 -> f51e98399
  refs/heads/cassandra-3.3 f70c35385 -> c5feeda6a
  refs/heads/trunk 0a83e6aa8 -> eef0ddfab


Filter keys searcher results by target range

Patch by Sam Tunnicliffe; reviewed by Sylvain Lebresne for
CASSANDRA-11104


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/f51e9839
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/f51e9839
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/f51e9839

Branch: refs/heads/cassandra-3.0
Commit: f51e98399ea3b78bdea81c6fb8bd62fda14af43c
Parents: b21df5b
Author: Sam Tunnicliffe 
Authored: Mon Feb 1 21:01:26 2016 +
Committer: Sam Tunnicliffe 
Committed: Tue Feb 2 15:27:08 2016 +

--
 CHANGES.txt| 1 +
 .../org/apache/cassandra/index/internal/keys/KeysSearcher.java | 2 ++
 2 files changed, 3 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/f51e9839/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index dcbce5b..ef0da4c 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 3.0.3
+ * Filter keys searcher results by target range (CASSANDRA-11104)
  * Fix deserialization of legacy read commands (CASSANDRA-11087)
  * Fix incorrect computation of deletion time in sstable metadata 
(CASSANDRA-11102)
  * Avoid memory leak when collecting sstable metadata (CASSANDRA-11026)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/f51e9839/src/java/org/apache/cassandra/index/internal/keys/KeysSearcher.java
--
diff --git 
a/src/java/org/apache/cassandra/index/internal/keys/KeysSearcher.java 
b/src/java/org/apache/cassandra/index/internal/keys/KeysSearcher.java
index b60d2d9..f00bb27 100644
--- a/src/java/org/apache/cassandra/index/internal/keys/KeysSearcher.java
+++ b/src/java/org/apache/cassandra/index/internal/keys/KeysSearcher.java
@@ -86,6 +86,8 @@ public class KeysSearcher extends CassandraIndexSearcher
 {
 Row hit = indexHits.next();
 DecoratedKey key = 
index.baseCfs.decorateKey(hit.clustering().get(0));
+if (!command.selectsKey(key))
+continue;
 
 SinglePartitionReadCommand dataCmd = 
SinglePartitionReadCommand.create(isForThrift(),

index.baseCfs.metadata,
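
The effect of the added {{selectsKey}} check can be modelled in a few lines. This is a toy Python sketch, not the Java code above; the predicate stands in for {{command.selectsKey(key)}} and plain integers stand in for decorated keys:

```python
def filter_index_hits(index_hits, selects_key):
    """Toy model of the KeysSearcher fix for CASSANDRA-11104: drop index
    hits whose partition key falls outside the range the read command
    targets, before issuing the per-partition data read.

    index_hits: iterable of partition keys produced by the index lookup.
    selects_key: predicate playing the role of command.selectsKey(key).
    """
    for key in index_hits:
        if not selects_key(key):
            continue  # key belongs to a range this command does not cover
        yield key
```

For example, with a command covering the half-open range [4, 10), `filter_index_hits([1, 5, 9, 12], lambda k: 4 <= k < 10)` yields only 5 and 9; without the check, all four hits would be returned.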



[jira] [Updated] (CASSANDRA-11026) OOM due to HeapByteBuffer instances

2016-02-02 Thread Brandon Williams (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11026?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brandon Williams updated CASSANDRA-11026:
-
Summary: OOM due to HeapByteBuffer instances  (was: OOM)

> OOM due to HeapByteBuffer instances
> ---
>
> Key: CASSANDRA-11026
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11026
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Maxim Podkolzine
>Assignee: Sylvain Lebresne
> Fix For: 3.0.3, 3.3
>
> Attachments: Screenshot.png, dump.png
>
>
> Cassandra 3.0.2 fails with OOM. The heapdump shows large number of 
> HeapByteBuffer instances, each retaining 1Mb (see the details on the 
> screenshot). Overall retained size is ~2Gb.
> We can provide the additional info and the whole heapdump if necessary.





[jira] [Commented] (CASSANDRA-10972) File based hints don't implement backpressure and can OOM

2016-02-02 Thread Ariel Weisberg (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10972?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15128461#comment-15128461
 ] 

Ariel Weisberg commented on CASSANDRA-10972:


I could go either way WRT it being a feature or a bug. I suppose if you have a 
production cluster and nodes are OOMing it's a bug and if you don't it's a 
feature. If no one is beating down the door complaining then it's a feature.

I rebased the branches and added a 3.3 commit and started the tests in addition 
to the changes you mentioned.
|[3.0 
code|https://github.com/apache/cassandra/compare/cassandra-3.0...aweisberg:CASSANDRA-10972-3.0?expand=1]|[utests|http://cassci.datastax.com/view/Dev/view/aweisberg/job/aweisberg-CASSANDRA-10972-3.0-testall/]|[dtests|http://cassci.datastax.com/view/Dev/view/aweisberg/job/aweisberg-CASSANDRA-10972-3.0-dtest/]|
|[3.3 
code|https://github.com/apache/cassandra/compare/cassandra-3.3...aweisberg:CASSANDRA-10972-3.3?expand=1]|[utests|http://cassci.datastax.com/view/Dev/view/aweisberg/job/aweisberg-CASSANDRA-10972-3.3-testall/]|[dtests|http://cassci.datastax.com/view/Dev/view/aweisberg/job/aweisberg-CASSANDRA-10972-3.3-dtest/]|
|[trunk 
code|https://github.com/apache/cassandra/compare/trunk...aweisberg:CASSANDRA-10972-trunk?expand=1]|[utests|http://cassci.datastax.com/view/Dev/view/aweisberg/job/aweisberg-CASSANDRA-10972-trunk-testall/]|[dtests|http://cassci.datastax.com/view/Dev/view/aweisberg/job/aweisberg-CASSANDRA-10972-trunk-dtest/]|


> File based hints don't implement backpressure and can OOM
> -
>
> Key: CASSANDRA-10972
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10972
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Ariel Weisberg
>Assignee: Ariel Weisberg
> Fix For: 3.0.x, 3.x
>
>
> This is something I reproduced in practice. I have what I think is a 
> reasonable implementation of backpressure, but still need to put together a 
> unit test.





[jira] [Assigned] (CASSANDRA-8233) Additional file handling capabilities for COPY FROM

2016-02-02 Thread Jonathan Ellis (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8233?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis reassigned CASSANDRA-8233:
-

Assignee: Stefania  (was: Tyler Hobbs)

Stefania, was all of this covered in your tickets for 3.2?

> Additional file handling capabilities for COPY FROM
> ---
>
> Key: CASSANDRA-8233
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8233
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Tools
>Reporter: Robin Schumacher
>Assignee: Stefania
>Priority: Minor
> Fix For: 2.1.x
>
>
> To compete better with other RDBMS-styled loaders, COPY needs to include some 
> additional file handling capabilities: 
> - the ability to skip file errors, write out 'bad' rows to file, skip blank 
> lines, and set a max error count before a load is terminated. 
> - Allow for columns that have quotes, but strip off the quotes before load.
> - Set the end of record delimiter. 
> - Be able to ignore file header/other starting line(s). 
> - Have date and time format handling abilities (with date/time delimiters)  
> - Handle carriage returns in data
> - Skip column in file





[jira] [Commented] (CASSANDRA-11102) Data lost during compaction

2016-02-02 Thread Sylvain Lebresne (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11102?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15128129#comment-15128129
 ] 

Sylvain Lebresne commented on CASSANDRA-11102:
--

Hum, but that's because the test is wrong. That is, {{RowUpdateUpdater}} by 
default inserts a "row marker" (a non-empty {{LivenessInfo}}), so the delete 
done by that test is equivalent to doing both an insert for the partition key 
and a delete for just the {{col1}} column, which definitively means that the 
sstable should have a {{maxLocalDeletionTime}} of {{MAX_VALUE}}. I modified the 
test in my patch to do what it was meant to do and not insert any row marker, 
and the test passes with those changes.
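
The interaction between row markers and sstable metadata can be sketched as a toy model (an invented Python representation for illustration, not Cassandra's actual data structures):

```python
INT_MAX = 2**31 - 1  # sentinel: live, non-expiring data is present

def max_local_deletion_time(atoms):
    """Toy model of the maxLocalDeletionTime collected in sstable
    metadata. Each atom is either ("live", None) for a non-expiring cell
    or row marker, or ("tombstone", t) for a tombstone whose local
    deletion time is t. Any live, non-expiring atom forces the maximum
    to INT_MAX, so the sstable is never considered fully expired.
    """
    result = 0
    for kind, deletion_time in atoms:
        result = max(result, INT_MAX if kind == "live" else deletion_time)
    return result
```

A lone tombstone yields its own deletion time, but adding a row marker, as the test builder does by default, forces the metadata to INT_MAX, which is why the test behaved differently than intended.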

> Data lost during compaction
> ---
>
> Key: CASSANDRA-11102
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11102
> Project: Cassandra
>  Issue Type: Bug
>  Components: Compaction
> Environment: Cassandra 3.2.1 (single node, 5 node cluster)
> JDK 8
>Reporter: Jaroslav Kamenik
>Assignee: Marcus Eriksson
>Priority: Blocker
> Fix For: 3.0.3, 3.3
>
>
> We have experienced data losses in some tables during the few weeks since 
> updating to Cassandra 3.0. I think I have successfully found a test case now. 
> Step one - test table:
> CREATE TABLE aaa (
> r int,
> c1 int,
> c2 ascii,
> PRIMARY KEY (r, c1, c2));
> Step two - run few queries:
>   insert into aaa (r, c1, c2) values (1,2,'A');
>   delete from aaa where r=1 and c1=2 and c2='B';
>   insert into aaa (r, c1, c2) values (2,3,'A');
>   delete from aaa where r=2 and c1=3 and c2='B';
>   insert into aaa (r, c1, c2) values (3,4,'A');
>   delete from aaa where r=3 and c1=4 and c2='B';
>   insert into aaa (r, c1, c2) values (4,5,'A');
>   delete from aaa where r=4 and c1=5 and c2='B';
> It creates 4 rows (select count says 4) and 4 tombstones.
> Step 3 - Restart Cassandra
> You will see new files written into C* data folder. I tried sstable-tools to 
> print table structure, it shows 4 rows, data and tombstones are there.
> Step 4 - set GC grace to 1 to force tombstone removing during compaction.
> alter table aaa with GC_GRACE_SECONDS = 1;
> Step 5 - Compact tables
> ./nodetool compact
> aaa files disappear during compaction. 
> select count(*) says 0
> compaction history says
> ... aaa  2016-02-01T14:24:01.433   329   0   {}





[jira] [Updated] (CASSANDRA-9988) Introduce leaf-only iterator

2016-02-02 Thread Benedict (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-9988?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Benedict updated CASSANDRA-9988:

Reviewer:   (was: Benedict)

> Introduce leaf-only iterator
> 
>
> Key: CASSANDRA-9988
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9988
> Project: Cassandra
>  Issue Type: Sub-task
>Reporter: Benedict
>Priority: Minor
>  Labels: patch
> Fix For: 3.x
>
> Attachments: trunk-9988.txt
>
>
> In many cases we have small btrees, small enough to fit in a single leaf 
> page. In this case it _may_ be more efficient to specialise our iterator.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-10697) Leak detected while running offline scrub

2016-02-02 Thread Benedict (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10697?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Benedict updated CASSANDRA-10697:
-
Assignee: (was: Benedict)

> Leak detected while running offline scrub
> -
>
> Key: CASSANDRA-10697
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10697
> Project: Cassandra
>  Issue Type: Bug
>  Components: Tools
> Environment: C* 2.1.9 on Debian Wheezy
>Reporter: mlowicki
>Priority: Critical
>
> I got a couple of those:
> {code}
> ERROR 05:09:15 LEAK DETECTED: a reference 
> (org.apache.cassandra.utils.concurrent.Ref$State@3b60e162) to class 
> org.apache.cassandra.io.sstable.SSTableReader$InstanceTidier@1433208674:/var/lib/cassandra/data/sync/entity2-e24b5040199b11e5a30f75bb514ae072/sync-entity2-ka-405434
>  was not released before the reference was garbage collected
> {code}
> and then:
> {code}
> Exception in thread "main" java.lang.OutOfMemoryError: Java heap space
> at 
> org.apache.cassandra.io.compress.CompressedRandomAccessReader.decompressChunk(CompressedRandomAccessReader.java:99)
> at 
> org.apache.cassandra.io.compress.CompressedRandomAccessReader.reBuffer(CompressedRandomAccessReader.java:81)
> at 
> org.apache.cassandra.io.util.RandomAccessReader.read(RandomAccessReader.java:353)
> at java.io.RandomAccessFile.readFully(RandomAccessFile.java:444)
> at java.io.RandomAccessFile.readFully(RandomAccessFile.java:424)
> at 
> org.apache.cassandra.io.util.RandomAccessReader.readBytes(RandomAccessReader.java:378)
> at 
> org.apache.cassandra.utils.ByteBufferUtil.read(ByteBufferUtil.java:348)
> at 
> org.apache.cassandra.utils.ByteBufferUtil.readWithShortLength(ByteBufferUtil.java:327)
> at 
> org.apache.cassandra.db.composites.AbstractCType$Serializer.deserialize(AbstractCType.java:397)
> at 
> org.apache.cassandra.db.composites.AbstractCType$Serializer.deserialize(AbstractCType.java:381)
> at 
> org.apache.cassandra.db.OnDiskAtom$Serializer.deserializeFromSSTable(OnDiskAtom.java:75)
> at 
> org.apache.cassandra.db.AbstractCell$1.computeNext(AbstractCell.java:52)
> at 
> org.apache.cassandra.db.AbstractCell$1.computeNext(AbstractCell.java:46)
> at 
> com.google.common.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:143)
> at 
> com.google.common.collect.AbstractIterator.hasNext(AbstractIterator.java:138)
> at 
> org.apache.cassandra.io.sstable.SSTableIdentityIterator.hasNext(SSTableIdentityIterator.java:120)
> at 
> org.apache.cassandra.utils.MergeIterator$OneToOne.computeNext(MergeIterator.java:202)
> at 
> com.google.common.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:143)
> at 
> com.google.common.collect.AbstractIterator.hasNext(AbstractIterator.java:138)
> at 
> com.google.common.collect.Iterators$7.computeNext(Iterators.java:645)
> at 
> com.google.common.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:143)
> at 
> com.google.common.collect.AbstractIterator.hasNext(AbstractIterator.java:138)
> at 
> org.apache.cassandra.db.ColumnIndex$Builder.buildForCompaction(ColumnIndex.java:165)
> at 
> org.apache.cassandra.db.compaction.LazilyCompactedRow.write(LazilyCompactedRow.java:121)
> at 
> org.apache.cassandra.io.sstable.SSTableWriter.append(SSTableWriter.java:192)
> at 
> org.apache.cassandra.io.sstable.SSTableRewriter.append(SSTableRewriter.java:127)
> at 
> org.apache.cassandra.io.sstable.SSTableRewriter.tryAppend(SSTableRewriter.java:158)
> at 
> org.apache.cassandra.db.compaction.Scrubber.scrub(Scrubber.java:220)
> at 
> org.apache.cassandra.tools.StandaloneScrubber.main(StandaloneScrubber.java:116)
> {code}





[jira] [Commented] (CASSANDRA-11102) Data lost during compaction

2016-02-02 Thread Marcus Eriksson (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11102?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15128079#comment-15128079
 ] 

Marcus Eriksson commented on CASSANDRA-11102:
-

I did the same first, but it makes 
TTLExpiryTest.testCheckForExpiredSSTableBlockers fail: maxLocalDeletionTime for 
all sstables gets collected as Integer.MAX_VALUE. In this test we generate a 
single-cell tombstone, which makes the liveness info non-expiring even if the 
row has tombstones.

> Data lost during compaction
> ---
>
> Key: CASSANDRA-11102
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11102
> Project: Cassandra
>  Issue Type: Bug
>  Components: Compaction
> Environment: Cassandra 3.2.1 (single node, 5 node cluster)
> JDK 8
>Reporter: Jaroslav Kamenik
>Assignee: Marcus Eriksson
>Priority: Blocker
> Fix For: 3.0.3, 3.3
>
>
> We have experienced data losses in some tables during the few weeks since 
> updating to Cassandra 3.0. I think I have successfully found a test case now. 
> Step one - test table:
> CREATE TABLE aaa (
> r int,
> c1 int,
> c2 ascii,
> PRIMARY KEY (r, c1, c2));
> Step two - run few queries:
>   insert into aaa (r, c1, c2) values (1,2,'A');
>   delete from aaa where r=1 and c1=2 and c2='B';
>   insert into aaa (r, c1, c2) values (2,3,'A');
>   delete from aaa where r=2 and c1=3 and c2='B';
>   insert into aaa (r, c1, c2) values (3,4,'A');
>   delete from aaa where r=3 and c1=4 and c2='B';
>   insert into aaa (r, c1, c2) values (4,5,'A');
>   delete from aaa where r=4 and c1=5 and c2='B';
> It creates 4 rows (select count says 4) and 4 tombstones.
> Step 3 - Restart Cassandra
> You will see new files written into C* data folder. I tried sstable-tools to 
> print table structure, it shows 4 rows, data and tombstones are there.
> Step 4 - set GC grace to 1 to force tombstone removing during compaction.
> alter table aaa with GC_GRACE_SECONDS = 1;
> Step 5 - Compact tables
> ./nodetool compact
> aaa files disappear during compaction. 
> select count(*) says 0
> compaction history says
> ... aaa  2016-02-01T14:24:01.433   329   0   {}





[jira] [Updated] (CASSANDRA-9669) If sstable flushes complete out of order, on restart we can fail to replay necessary commit log records

2016-02-02 Thread Benedict (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-9669?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Benedict updated CASSANDRA-9669:

Assignee: (was: Benedict)

> If sstable flushes complete out of order, on restart we can fail to replay 
> necessary commit log records
> ---
>
> Key: CASSANDRA-9669
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9669
> Project: Cassandra
>  Issue Type: Bug
>  Components: Local Write-Read Paths
>Reporter: Benedict
>Priority: Critical
>  Labels: correctness
> Fix For: 2.2.x, 3.0.x, 3.x
>
>
> While {{postFlushExecutor}} ensures it never expires CL entries out-of-order, 
> on restart we simply take the maximum replay position of any sstable on disk, 
> and ignore anything prior. 
> It is quite possible for there to be two flushes triggered for a given table, 
> and for the second to finish first by virtue of containing a much smaller 
> quantity of live data (or perhaps the disk is just under less pressure). If 
> we crash before the first sstable has been written, then on restart the data 
> it would have represented will disappear, since we will not replay the CL 
> records.
> This looks to be a bug present since time immemorial, and also seems pretty 
> serious.
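
The failure mode in the description can be demonstrated with a small model. This is an illustrative Python sketch (the safer variant is one possible alternative, not the actual fix): commit log positions are plain integers, and flushes record the interval of positions they persisted.

```python
def buggy_replay_start(sstable_positions):
    """The restart behaviour described above: take the maximum replay
    position of any sstable on disk and ignore everything before it."""
    return max(sstable_positions, default=0)

def safe_replay_start(flushed_intervals):
    """Illustrative alternative: replay from the first commit log
    position not covered by a contiguous run of completed flushes.

    flushed_intervals: (start, end) commit log ranges persisted by
    flushes that actually completed, possibly out of order.
    """
    start = 0
    for lo, hi in sorted(flushed_intervals):
        if lo > start:
            break  # gap: an earlier flush never made it to disk
        start = max(start, hi)
    return start
```

Suppose two flushes were triggered, covering positions (0, 10) and (10, 20), and only the second, smaller one completed before a crash. Replaying from the maximum on-disk position (20) silently skips the lost (0, 10) data, while the gap-aware variant replays from 0.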





[jira] [Updated] (CASSANDRA-11072) Further unify Partition access methods, by removing searchIterator()

2016-02-02 Thread Benedict (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11072?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Benedict updated CASSANDRA-11072:
-
Assignee: (was: Benedict)

> Further unify Partition access methods, by removing searchIterator()
> 
>
> Key: CASSANDRA-11072
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11072
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Core
>Reporter: Benedict
>Priority: Minor
> Fix For: 3.x
>
>
> After CASSANDRA-9986, the access paths can be further incrementally 
> simplified.  In particular, the {{searchIterator}} method is only not 
> trivially replaced in one location, and by removing it we can simplify the 
> {{SearchIterator}} hierarchy, and merge the {{ClusteringIndexFilter}} 
> hierarchy, as both use {{Slices}} now.
> In the one remaining location, three approaches are possible (of which I have 
> implemented two in the attached patch; one in the first diff, one in the 
> last):
> # Apply a transformation to the UnfilteredRowIterator composed of a slices 
> query
> # Call {{getRow}} repeatedly
> # Provide access to an {{unfilteredSearchIterator}} for this method only, 
> since we do not need to perform the complex filtration for this access path
> These are in decreasing order of costliness (CPU-wise); I don't have a fixed 
> preference.
> This is just a step towards further necessary improvements. IMO, this should 
> be followed by:
> # Supporting efficient "slicing" of a SearchIterator, so that the internal 
> iteration of slices within {{unfilteredRowIterator}} is made cheap enough to 
> not warrant a separate path (this would help all slice queries)
> ## Merging Slice and Clustering hierarchies, perhaps by making Slice an 
> interface and having Clustering implement it.
> ## Specialising Slices when it contains only Clustering, so that it can 
> implement NavigableSet (most likely by having it backed by, or 
> extend, BTreeSet)
> ## Thus, saving a lot of shuffling and reconstruction costs around our 
> filters, and reducing the duplication of concepts in 
> {{ClusteringIndexNamesFilter}}





[jira] [Updated] (CASSANDRA-10972) File based hints don't implement backpressure and can OOM

2016-02-02 Thread Benedict (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10972?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Benedict updated CASSANDRA-10972:
-
Reviewer:   (was: Benedict)

> File based hints don't implement backpressure and can OOM
> -
>
> Key: CASSANDRA-10972
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10972
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Ariel Weisberg
>Assignee: Ariel Weisberg
> Fix For: 3.0.x, 3.x
>
>
> This is something I reproduced in practice. I have what I think is a 
> reasonable implementation of backpressure, but still need to put together a 
> unit test.





[jira] [Commented] (CASSANDRA-9167) Improve bloom-filter false-positive-ratio

2016-02-02 Thread Benedict (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9167?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15128161#comment-15128161
 ] 

Benedict commented on CASSANDRA-9167:
-

[~snazy]: sorry for completely letting this fester. It looks good to me; the 
only question is whether we _want_ to trade CPU and memory-indirection costs. 
In principle we do, since this is to prevent disk accesses, though for many of 
our benchmark in-memory workloads it probably only increases our costs. Still, 
the cost increase is likely minimal compared to our other costs, so on balance, 
with the present state of affairs, I'd say this is a pretty darn positive 
change.

That said, I think it would be neater if we could permit a degree of 
configurability, where we weigh the marginal cost increase versus the marginal 
false positive gain for any added hash function, and provide users the option 
of configuring the tradeoff between the two.

Then again, it's perhaps overthinking things for now, and this positive if 
unglamorous change should have been included a long time ago IMO.

Assuming it goes in roughly as-is, a few nits/suggestions:

# Add a 'd' suffix to the literals
# Re-calculate the buckets per-element after rounding the hash count, instead 
of adding a fudge-factor (after all, half the time that fudge factor is wasting 
space, the other half it may be insufficient)?

> Improve bloom-filter false-positive-ratio
> -
>
> Key: CASSANDRA-9167
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9167
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Robert Stupp
>Assignee: Robert Stupp
>Priority: Minor
>  Labels: perfomance
>
> {{org.apache.cassandra.utils.BloomCalculations}} performs some table lookups 
> to calculate the bloom filter specification (size, # of hashes). Using the 
> exact maths for that computation brings a better false-positive-ratio (the 
> maths usually returns higher numbers for hash-counts).
> TL;DR increasing the number of hash-rounds brings a nice improvement. Finally 
> it's a trade-off between CPU and I/O.
> ||false-positive-chance||elements||capacity||hash count 
> new||false-positive-ratio new||hash count current||false-positive-ratio 
> current||improvement
> |0.1|1|50048|3|0.0848|3|0.0848|0
> |0.1|10|500032|3|0.09203|3|0.09203|0
> |0.1|100|564|3|0.0919|3|0.0919|0
> |0.1|1000|5064|3|0.09182|3|0.09182|0
> |0.1|1|50064|3|0.091874|3|0.091874|0
> |0.01|1|100032|7|0.0092|5|0.0107|0.1630434783
> |0.01|10|164|7|0.00818|5|0.00931|0.1381418093
> |0.01|100|1064|7|0.008072|5|0.009405|0.1651387512
> |0.01|1000|10064|7|0.008174|5|0.009375|0.146929288
> |0.01|1|100064|7|0.008197|5|0.009428|0.150176894
> |0.001|1|150080|10|0.0008|7|0.001|0.25
> |0.001|10|1500032|10|0.0006|7|0.00094|0.57
> |0.001|100|1564|10|0.000717|7|0.000991|0.3821478382
> |0.001|1000|15064|10|0.000743|7|0.000992|0.33512786
> |0.001|1|150064|10|0.000741|7|0.001002|0.3522267206
> |0.0001|1|200064|13|0|10|0.0002|#DIV/0!
> |0.0001|10|264|13|0.4|10|0.0001|1.5
> |0.0001|100|2064|13|0.75|10|0.91|0.21
> |0.0001|1000|20064|13|0.69|10|0.87|0.2608695652
> |0.0001|1|200064|13|0.68|10|0.9|0.3235294118
> If we decide to allow more hash-rounds, it could be nicely back-ported even 
> to 2.0 without affecting existing sstables.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[4/6] cassandra git commit: Merge branch 'cassandra-3.0' into cassandra-3.3

2016-02-02 Thread slebresne
Merge branch 'cassandra-3.0' into cassandra-3.3


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/a7125d2a
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/a7125d2a
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/a7125d2a

Branch: refs/heads/trunk
Commit: a7125d2a2af02e68de53fa0e6b01383daac27140
Parents: 30d3b29 c83d108
Author: Sylvain Lebresne 
Authored: Tue Feb 2 14:20:33 2016 +0100
Committer: Sylvain Lebresne 
Committed: Tue Feb 2 14:20:33 2016 +0100

--
 CHANGES.txt |  1 +
 .../io/sstable/metadata/MetadataCollector.java  | 12 ++--
 2 files changed, 11 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/a7125d2a/CHANGES.txt
--
diff --cc CHANGES.txt
index bad296b,da6f5cc..2600d56
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -1,8 -1,5 +1,9 @@@
 -3.0.3
 +3.3
 + * Avoid infinite loop if owned range is smaller than number of
 +   data dirs (CASSANDRA-11034)
 + * Avoid bootstrap hanging when existing nodes have no data to stream 
(CASSANDRA-11010)
 +Merged from 3.0:
+  * Avoid memory leak when collecting sstable metadata (CASSANDRA-11026)
   * Mutations do not block for completion under view lock contention 
(CASSANDRA-10779)
   * Invalidate legacy schema tables when unloading them (CASSANDRA-11071)
   * (cqlsh) handle INSERT and UPDATE statements with LWT conditions correctly



[2/6] cassandra git commit: Minimize buffers when using them in sstable metadata

2016-02-02 Thread slebresne
Minimize buffers when using them in sstable metadata

patch by slebresne; reviewed by benedict for CASSANDRA-11026


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/c83d108a
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/c83d108a
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/c83d108a

Branch: refs/heads/cassandra-3.3
Commit: c83d108a30f3c6b670b32d94359df251bf931234
Parents: 839a5ba
Author: Sylvain Lebresne 
Authored: Wed Jan 27 17:30:14 2016 +0100
Committer: Sylvain Lebresne 
Committed: Tue Feb 2 14:18:48 2016 +0100

--
 CHANGES.txt |  1 +
 .../io/sstable/metadata/MetadataCollector.java  | 12 ++--
 2 files changed, 11 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/c83d108a/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index bed8703..da6f5cc 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 3.0.3
+ * Avoid memory leak when collecting sstable metadata (CASSANDRA-11026)
  * Mutations do not block for completion under view lock contention 
(CASSANDRA-10779)
  * Invalidate legacy schema tables when unloading them (CASSANDRA-11071)
  * (cqlsh) handle INSERT and UPDATE statements with LWT conditions correctly

http://git-wip-us.apache.org/repos/asf/cassandra/blob/c83d108a/src/java/org/apache/cassandra/io/sstable/metadata/MetadataCollector.java
--
diff --git 
a/src/java/org/apache/cassandra/io/sstable/metadata/MetadataCollector.java 
b/src/java/org/apache/cassandra/io/sstable/metadata/MetadataCollector.java
index 1c93f58..3947dc8 100644
--- a/src/java/org/apache/cassandra/io/sstable/metadata/MetadataCollector.java
+++ b/src/java/org/apache/cassandra/io/sstable/metadata/MetadataCollector.java
@@ -39,6 +39,7 @@ import org.apache.cassandra.io.sstable.Component;
 import org.apache.cassandra.io.sstable.SSTable;
 import org.apache.cassandra.io.sstable.format.SSTableReader;
 import org.apache.cassandra.service.ActiveRepairService;
+import org.apache.cassandra.utils.ByteBufferUtil;
 import org.apache.cassandra.utils.EstimatedHistogram;
 import org.apache.cassandra.utils.MurmurHash;
 import org.apache.cassandra.utils.StreamingHistogram;
@@ -232,12 +233,19 @@ public class MetadataCollector implements 
PartitionStatisticsCollector
 {
 AbstractType type = comparator.subtype(i);
 ByteBuffer newValue = clustering.get(i);
-minClusteringValues[i] = min(minClusteringValues[i], newValue, 
type);
-maxClusteringValues[i] = max(maxClusteringValues[i], newValue, 
type);
+minClusteringValues[i] = maybeMinimize(min(minClusteringValues[i], 
newValue, type));
+maxClusteringValues[i] = maybeMinimize(max(maxClusteringValues[i], 
newValue, type));
 }
 return this;
 }
 
+private static ByteBuffer maybeMinimize(ByteBuffer buffer)
+{
+// ByteBuffer.minimalBufferFor doesn't handle null, but we can get it 
in this case since it's possible
+// for some clustering values to be null
+return buffer == null ? null : ByteBufferUtil.minimalBufferFor(buffer);
+}
+
 private static ByteBuffer min(ByteBuffer b1, ByteBuffer b2, 
AbstractType comparator)
 {
 if (b1 == null)
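
The leak the patch fixes is that a small min/max clustering value can be a slice of a much larger buffer and keep that whole buffer alive; {{minimalBufferFor}} copies it into a right-sized buffer. A rough Python analogue (hypothetical names; the real logic is the Java above) uses {{memoryview}}, which pins its backing object the same way:

```python
def maybe_minimize(view):
    """Copy a slice (whose backing buffer may be huge) into a compact,
    standalone bytes object; pass None through unchanged, mirroring
    the null handling in the Java helper."""
    return None if view is None else bytes(view)

big = bytearray(1_000_000)        # large backing buffer, e.g. a read buffer
slice_view = memoryview(big)[:8]  # an 8-byte slice still pins all 1 MB
compact = maybe_minimize(slice_view)

print(slice_view.obj is big)      # the view keeps `big` alive
print(len(compact))               # the copy is only 8 bytes
```

Once the compact copy is stored in the metadata, the view (and with it the large buffer) can be dropped, which is exactly what retaining the minimized buffers in {{minClusteringValues}}/{{maxClusteringValues}} achieves.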



[5/6] cassandra git commit: Merge branch 'cassandra-3.0' into cassandra-3.3

2016-02-02 Thread slebresne
Merge branch 'cassandra-3.0' into cassandra-3.3


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/a7125d2a
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/a7125d2a
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/a7125d2a

Branch: refs/heads/cassandra-3.3
Commit: a7125d2a2af02e68de53fa0e6b01383daac27140
Parents: 30d3b29 c83d108
Author: Sylvain Lebresne 
Authored: Tue Feb 2 14:20:33 2016 +0100
Committer: Sylvain Lebresne 
Committed: Tue Feb 2 14:20:33 2016 +0100

--
 CHANGES.txt |  1 +
 .../io/sstable/metadata/MetadataCollector.java  | 12 ++--
 2 files changed, 11 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/a7125d2a/CHANGES.txt
--
diff --cc CHANGES.txt
index bad296b,da6f5cc..2600d56
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -1,8 -1,5 +1,9 @@@
 -3.0.3
 +3.3
 + * Avoid infinite loop if owned range is smaller than number of
 +   data dirs (CASSANDRA-11034)
 + * Avoid bootstrap hanging when existing nodes have no data to stream 
(CASSANDRA-11010)
 +Merged from 3.0:
+  * Avoid memory leak when collecting sstable metadata (CASSANDRA-11026)
   * Mutations do not block for completion under view lock contention 
(CASSANDRA-10779)
   * Invalidate legacy schema tables when unloading them (CASSANDRA-11071)
   * (cqlsh) handle INSERT and UPDATE statements with LWT conditions correctly



[6/6] cassandra git commit: Merge branch 'cassandra-3.3' into trunk

2016-02-02 Thread slebresne
Merge branch 'cassandra-3.3' into trunk


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/92242d78
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/92242d78
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/92242d78

Branch: refs/heads/trunk
Commit: 92242d782fdb583cf4e281bc1cd1f163de783b44
Parents: b24076d a7125d2
Author: Sylvain Lebresne 
Authored: Tue Feb 2 14:20:43 2016 +0100
Committer: Sylvain Lebresne 
Committed: Tue Feb 2 14:20:43 2016 +0100

--
 CHANGES.txt |  1 +
 .../io/sstable/metadata/MetadataCollector.java  | 12 ++--
 2 files changed, 11 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/92242d78/CHANGES.txt
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/92242d78/src/java/org/apache/cassandra/io/sstable/metadata/MetadataCollector.java
--



[1/6] cassandra git commit: Minimize buffers when using them in sstable metadata

2016-02-02 Thread slebresne
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-3.0 839a5bab2 -> c83d108a3
  refs/heads/cassandra-3.3 30d3b29ab -> a7125d2a2
  refs/heads/trunk b24076d11 -> 92242d782


Minimize buffers when using them in sstable metadata

patch by slebresne; reviewed by benedict for CASSANDRA-11026


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/c83d108a
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/c83d108a
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/c83d108a

Branch: refs/heads/cassandra-3.0
Commit: c83d108a30f3c6b670b32d94359df251bf931234
Parents: 839a5ba
Author: Sylvain Lebresne 
Authored: Wed Jan 27 17:30:14 2016 +0100
Committer: Sylvain Lebresne 
Committed: Tue Feb 2 14:18:48 2016 +0100

--
 CHANGES.txt |  1 +
 .../io/sstable/metadata/MetadataCollector.java  | 12 ++--
 2 files changed, 11 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/c83d108a/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index bed8703..da6f5cc 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 3.0.3
+ * Avoid memory leak when collecting sstable metadata (CASSANDRA-11026)
  * Mutations do not block for completion under view lock contention 
(CASSANDRA-10779)
  * Invalidate legacy schema tables when unloading them (CASSANDRA-11071)
  * (cqlsh) handle INSERT and UPDATE statements with LWT conditions correctly

http://git-wip-us.apache.org/repos/asf/cassandra/blob/c83d108a/src/java/org/apache/cassandra/io/sstable/metadata/MetadataCollector.java
--
diff --git 
a/src/java/org/apache/cassandra/io/sstable/metadata/MetadataCollector.java 
b/src/java/org/apache/cassandra/io/sstable/metadata/MetadataCollector.java
index 1c93f58..3947dc8 100644
--- a/src/java/org/apache/cassandra/io/sstable/metadata/MetadataCollector.java
+++ b/src/java/org/apache/cassandra/io/sstable/metadata/MetadataCollector.java
@@ -39,6 +39,7 @@ import org.apache.cassandra.io.sstable.Component;
 import org.apache.cassandra.io.sstable.SSTable;
 import org.apache.cassandra.io.sstable.format.SSTableReader;
 import org.apache.cassandra.service.ActiveRepairService;
+import org.apache.cassandra.utils.ByteBufferUtil;
 import org.apache.cassandra.utils.EstimatedHistogram;
 import org.apache.cassandra.utils.MurmurHash;
 import org.apache.cassandra.utils.StreamingHistogram;
@@ -232,12 +233,19 @@ public class MetadataCollector implements 
PartitionStatisticsCollector
 {
 AbstractType type = comparator.subtype(i);
 ByteBuffer newValue = clustering.get(i);
-minClusteringValues[i] = min(minClusteringValues[i], newValue, 
type);
-maxClusteringValues[i] = max(maxClusteringValues[i], newValue, 
type);
+minClusteringValues[i] = maybeMinimize(min(minClusteringValues[i], 
newValue, type));
+maxClusteringValues[i] = maybeMinimize(max(maxClusteringValues[i], 
newValue, type));
 }
 return this;
 }
 
+private static ByteBuffer maybeMinimize(ByteBuffer buffer)
+{
+// ByteBuffer.minimalBufferFor doesn't handle null, but we can get it 
in this case since it's possible
+// for some clustering values to be null
+return buffer == null ? null : ByteBufferUtil.minimalBufferFor(buffer);
+}
+
 private static ByteBuffer min(ByteBuffer b1, ByteBuffer b2, 
AbstractType comparator)
 {
 if (b1 == null)



[3/6] cassandra git commit: Minimize buffers when using them in sstable metadata

2016-02-02 Thread slebresne
Minimize buffers when using them in sstable metadata

patch by slebresne; reviewed by benedict for CASSANDRA-11026


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/c83d108a
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/c83d108a
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/c83d108a

Branch: refs/heads/trunk
Commit: c83d108a30f3c6b670b32d94359df251bf931234
Parents: 839a5ba
Author: Sylvain Lebresne 
Authored: Wed Jan 27 17:30:14 2016 +0100
Committer: Sylvain Lebresne 
Committed: Tue Feb 2 14:18:48 2016 +0100

--
 CHANGES.txt |  1 +
 .../io/sstable/metadata/MetadataCollector.java  | 12 ++--
 2 files changed, 11 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/c83d108a/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index bed8703..da6f5cc 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 3.0.3
+ * Avoid memory leak when collecting sstable metadata (CASSANDRA-11026)
  * Mutations do not block for completion under view lock contention 
(CASSANDRA-10779)
  * Invalidate legacy schema tables when unloading them (CASSANDRA-11071)
  * (cqlsh) handle INSERT and UPDATE statements with LWT conditions correctly

http://git-wip-us.apache.org/repos/asf/cassandra/blob/c83d108a/src/java/org/apache/cassandra/io/sstable/metadata/MetadataCollector.java
--
diff --git 
a/src/java/org/apache/cassandra/io/sstable/metadata/MetadataCollector.java 
b/src/java/org/apache/cassandra/io/sstable/metadata/MetadataCollector.java
index 1c93f58..3947dc8 100644
--- a/src/java/org/apache/cassandra/io/sstable/metadata/MetadataCollector.java
+++ b/src/java/org/apache/cassandra/io/sstable/metadata/MetadataCollector.java
@@ -39,6 +39,7 @@ import org.apache.cassandra.io.sstable.Component;
 import org.apache.cassandra.io.sstable.SSTable;
 import org.apache.cassandra.io.sstable.format.SSTableReader;
 import org.apache.cassandra.service.ActiveRepairService;
+import org.apache.cassandra.utils.ByteBufferUtil;
 import org.apache.cassandra.utils.EstimatedHistogram;
 import org.apache.cassandra.utils.MurmurHash;
 import org.apache.cassandra.utils.StreamingHistogram;
@@ -232,12 +233,19 @@ public class MetadataCollector implements 
PartitionStatisticsCollector
 {
 AbstractType type = comparator.subtype(i);
 ByteBuffer newValue = clustering.get(i);
-minClusteringValues[i] = min(minClusteringValues[i], newValue, 
type);
-maxClusteringValues[i] = max(maxClusteringValues[i], newValue, 
type);
+minClusteringValues[i] = maybeMinimize(min(minClusteringValues[i], 
newValue, type));
+maxClusteringValues[i] = maybeMinimize(max(maxClusteringValues[i], 
newValue, type));
 }
 return this;
 }
 
+private static ByteBuffer maybeMinimize(ByteBuffer buffer)
+{
+// ByteBuffer.minimalBufferFor doesn't handle null, but we can get it 
in this case since it's possible
+// for some clustering values to be null
+return buffer == null ? null : ByteBufferUtil.minimalBufferFor(buffer);
+}
+
 private static ByteBuffer min(ByteBuffer b1, ByteBuffer b2, 
AbstractType comparator)
 {
 if (b1 == null)



[5/6] cassandra git commit: Merge branch 'cassandra-3.0' into cassandra-3.3

2016-02-02 Thread slebresne
Merge branch 'cassandra-3.0' into cassandra-3.3


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/8996b64e
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/8996b64e
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/8996b64e

Branch: refs/heads/cassandra-3.3
Commit: 8996b64e43a3a89dd0648f584298e2e65383b0c3
Parents: a7125d2 df3d0b0
Author: Sylvain Lebresne 
Authored: Tue Feb 2 14:22:59 2016 +0100
Committer: Sylvain Lebresne 
Committed: Tue Feb 2 14:22:59 2016 +0100

--
 CHANGES.txt   | 1 +
 .../cassandra/io/sstable/metadata/MetadataCollector.java  | 7 ++-
 .../org/apache/cassandra/db/compaction/TTLExpiryTest.java | 2 ++
 3 files changed, 5 insertions(+), 5 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/8996b64e/CHANGES.txt
--
diff --cc CHANGES.txt
index 2600d56,3423927..c0acd3d
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -1,8 -1,5 +1,9 @@@
 -3.0.3
 +3.3
 + * Avoid infinite loop if owned range is smaller than number of
 +   data dirs (CASSANDRA-11034)
 + * Avoid bootstrap hanging when existing nodes have no data to stream 
(CASSANDRA-11010)
 +Merged from 3.0:
+  * Fix incorrect computation of deletion time in sstable metadata 
(CASSANDRA-11102)
   * Avoid memory leak when collecting sstable metadata (CASSANDRA-11026)
   * Mutations do not block for completion under view lock contention 
(CASSANDRA-10779)
   * Invalidate legacy schema tables when unloading them (CASSANDRA-11071)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/8996b64e/test/unit/org/apache/cassandra/db/compaction/TTLExpiryTest.java
--



[3/6] cassandra git commit: Inconditionally update the sstable deletion info for live LivenessInfo

2016-02-02 Thread slebresne
Inconditionally update the sstable deletion info for live LivenessInfo

patch by slebresne; reviewed by krummas for CASSANDRA-11102


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/df3d0b00
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/df3d0b00
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/df3d0b00

Branch: refs/heads/trunk
Commit: df3d0b00b0d9a0725b3f1681e6ce9ffe6d330de4
Parents: c83d108
Author: Sylvain Lebresne 
Authored: Tue Feb 2 10:27:49 2016 +0100
Committer: Sylvain Lebresne 
Committed: Tue Feb 2 14:22:20 2016 +0100

--
 CHANGES.txt   | 1 +
 .../cassandra/io/sstable/metadata/MetadataCollector.java  | 7 ++-
 .../org/apache/cassandra/db/compaction/TTLExpiryTest.java | 2 ++
 3 files changed, 5 insertions(+), 5 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/df3d0b00/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index da6f5cc..3423927 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 3.0.3
+ * Fix incorrect computation of deletion time in sstable metadata 
(CASSANDRA-11102)
  * Avoid memory leak when collecting sstable metadata (CASSANDRA-11026)
  * Mutations do not block for completion under view lock contention 
(CASSANDRA-10779)
  * Invalidate legacy schema tables when unloading them (CASSANDRA-11071)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/df3d0b00/src/java/org/apache/cassandra/io/sstable/metadata/MetadataCollector.java
--
diff --git 
a/src/java/org/apache/cassandra/io/sstable/metadata/MetadataCollector.java 
b/src/java/org/apache/cassandra/io/sstable/metadata/MetadataCollector.java
index 3947dc8..c2b0caf 100644
--- a/src/java/org/apache/cassandra/io/sstable/metadata/MetadataCollector.java
+++ b/src/java/org/apache/cassandra/io/sstable/metadata/MetadataCollector.java
@@ -169,11 +169,8 @@ public class MetadataCollector implements 
PartitionStatisticsCollector
 return;
 
 updateTimestamp(newInfo.timestamp());
-if (newInfo.isExpiring())
-{
-updateTTL(newInfo.ttl());
-updateLocalDeletionTime(newInfo.localExpirationTime());
-}
+updateTTL(newInfo.ttl());
+updateLocalDeletionTime(newInfo.localExpirationTime());
 }
 
 public void update(Cell cell)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/df3d0b00/test/unit/org/apache/cassandra/db/compaction/TTLExpiryTest.java
--
diff --git a/test/unit/org/apache/cassandra/db/compaction/TTLExpiryTest.java 
b/test/unit/org/apache/cassandra/db/compaction/TTLExpiryTest.java
index 7dd3da0..b264553 100644
--- a/test/unit/org/apache/cassandra/db/compaction/TTLExpiryTest.java
+++ b/test/unit/org/apache/cassandra/db/compaction/TTLExpiryTest.java
@@ -258,6 +258,7 @@ public class TTLExpiryTest
 cfs.metadata.gcGraceSeconds(0);
 
 new RowUpdateBuilder(cfs.metadata, System.currentTimeMillis(), "test")
+.noRowMarker()
 .add("col1", ByteBufferUtil.EMPTY_BYTE_BUFFER)
 .build()
 .applyUnsafe();
@@ -267,6 +268,7 @@ public class TTLExpiryTest
 for (int i = 0; i < 10; i++)
 {
 new RowUpdateBuilder(cfs.metadata, System.currentTimeMillis(), 
"test")
+.noRowMarker()
 .delete("col1")
 .build()
 .applyUnsafe();
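
Why the unconditional update matters can be shown with a toy model (hypothetical names; the real logic is in {{MetadataCollector}} and the test above): if live, non-expiring data is skipped when aggregating the max local deletion time, only tombstones contribute, so the sstable's metadata can claim everything is purgeable and compaction may drop live data:

```python
NO_EXPIRATION = 2**31 - 1  # sentinel meaning "never expires"

def max_local_deletion_time(cells, skip_live=False):
    """Aggregate the max local deletion time over (ldt, is_live) cells.
    skip_live=True models the pre-fix behavior of only recording
    deletion times for expiring/deleted data."""
    result = -(2**31)
    for ldt, is_live in cells:
        if is_live:
            if skip_live:
                continue             # old behavior: live cells ignored
            ldt = NO_EXPIRATION      # fixed behavior: live data never expires
        result = max(result, ldt)
    return result

# a live row marker plus two cells deleted at t=1000
cells = [(0, True), (1000, False), (1000, False)]
gc_before = 2000  # deletions older than this are purgeable

buggy = max_local_deletion_time(cells, skip_live=True)
fixed = max_local_deletion_time(cells, skip_live=False)
print(buggy < gc_before)  # sstable wrongly looks fully expired
print(fixed < gc_before)  # live cell keeps it from being dropped
```

This mirrors the regression test: a live row marker ({{noRowMarker()}} removed from one update) must prevent the sstable from being treated as fully expired.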



[2/6] cassandra git commit: Inconditionally update the sstable deletion info for live LivenessInfo

2016-02-02 Thread slebresne
Inconditionally update the sstable deletion info for live LivenessInfo

patch by slebresne; reviewed by krummas for CASSANDRA-11102


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/df3d0b00
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/df3d0b00
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/df3d0b00

Branch: refs/heads/cassandra-3.3
Commit: df3d0b00b0d9a0725b3f1681e6ce9ffe6d330de4
Parents: c83d108
Author: Sylvain Lebresne 
Authored: Tue Feb 2 10:27:49 2016 +0100
Committer: Sylvain Lebresne 
Committed: Tue Feb 2 14:22:20 2016 +0100

--
 CHANGES.txt   | 1 +
 .../cassandra/io/sstable/metadata/MetadataCollector.java  | 7 ++-
 .../org/apache/cassandra/db/compaction/TTLExpiryTest.java | 2 ++
 3 files changed, 5 insertions(+), 5 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/df3d0b00/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index da6f5cc..3423927 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 3.0.3
+ * Fix incorrect computation of deletion time in sstable metadata 
(CASSANDRA-11102)
  * Avoid memory leak when collecting sstable metadata (CASSANDRA-11026)
  * Mutations do not block for completion under view lock contention 
(CASSANDRA-10779)
  * Invalidate legacy schema tables when unloading them (CASSANDRA-11071)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/df3d0b00/src/java/org/apache/cassandra/io/sstable/metadata/MetadataCollector.java
--
diff --git 
a/src/java/org/apache/cassandra/io/sstable/metadata/MetadataCollector.java 
b/src/java/org/apache/cassandra/io/sstable/metadata/MetadataCollector.java
index 3947dc8..c2b0caf 100644
--- a/src/java/org/apache/cassandra/io/sstable/metadata/MetadataCollector.java
+++ b/src/java/org/apache/cassandra/io/sstable/metadata/MetadataCollector.java
@@ -169,11 +169,8 @@ public class MetadataCollector implements 
PartitionStatisticsCollector
 return;
 
 updateTimestamp(newInfo.timestamp());
-if (newInfo.isExpiring())
-{
-updateTTL(newInfo.ttl());
-updateLocalDeletionTime(newInfo.localExpirationTime());
-}
+updateTTL(newInfo.ttl());
+updateLocalDeletionTime(newInfo.localExpirationTime());
 }
 
 public void update(Cell cell)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/df3d0b00/test/unit/org/apache/cassandra/db/compaction/TTLExpiryTest.java
--
diff --git a/test/unit/org/apache/cassandra/db/compaction/TTLExpiryTest.java 
b/test/unit/org/apache/cassandra/db/compaction/TTLExpiryTest.java
index 7dd3da0..b264553 100644
--- a/test/unit/org/apache/cassandra/db/compaction/TTLExpiryTest.java
+++ b/test/unit/org/apache/cassandra/db/compaction/TTLExpiryTest.java
@@ -258,6 +258,7 @@ public class TTLExpiryTest
 cfs.metadata.gcGraceSeconds(0);
 
 new RowUpdateBuilder(cfs.metadata, System.currentTimeMillis(), "test")
+.noRowMarker()
 .add("col1", ByteBufferUtil.EMPTY_BYTE_BUFFER)
 .build()
 .applyUnsafe();
@@ -267,6 +268,7 @@ public class TTLExpiryTest
 for (int i = 0; i < 10; i++)
 {
 new RowUpdateBuilder(cfs.metadata, System.currentTimeMillis(), 
"test")
+.noRowMarker()
 .delete("col1")
 .build()
 .applyUnsafe();



[1/6] cassandra git commit: Inconditionally update the sstable deletion info for live LivenessInfo

2016-02-02 Thread slebresne
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-3.0 c83d108a3 -> df3d0b00b
  refs/heads/cassandra-3.3 a7125d2a2 -> 8996b64e4
  refs/heads/trunk 92242d782 -> be1efd283


Inconditionally update the sstable deletion info for live LivenessInfo

patch by slebresne; reviewed by krummas for CASSANDRA-11102


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/df3d0b00
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/df3d0b00
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/df3d0b00

Branch: refs/heads/cassandra-3.0
Commit: df3d0b00b0d9a0725b3f1681e6ce9ffe6d330de4
Parents: c83d108
Author: Sylvain Lebresne 
Authored: Tue Feb 2 10:27:49 2016 +0100
Committer: Sylvain Lebresne 
Committed: Tue Feb 2 14:22:20 2016 +0100

--
 CHANGES.txt   | 1 +
 .../cassandra/io/sstable/metadata/MetadataCollector.java  | 7 ++-
 .../org/apache/cassandra/db/compaction/TTLExpiryTest.java | 2 ++
 3 files changed, 5 insertions(+), 5 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/df3d0b00/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index da6f5cc..3423927 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 3.0.3
+ * Fix incorrect computation of deletion time in sstable metadata 
(CASSANDRA-11102)
  * Avoid memory leak when collecting sstable metadata (CASSANDRA-11026)
  * Mutations do not block for completion under view lock contention 
(CASSANDRA-10779)
  * Invalidate legacy schema tables when unloading them (CASSANDRA-11071)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/df3d0b00/src/java/org/apache/cassandra/io/sstable/metadata/MetadataCollector.java
--
diff --git 
a/src/java/org/apache/cassandra/io/sstable/metadata/MetadataCollector.java 
b/src/java/org/apache/cassandra/io/sstable/metadata/MetadataCollector.java
index 3947dc8..c2b0caf 100644
--- a/src/java/org/apache/cassandra/io/sstable/metadata/MetadataCollector.java
+++ b/src/java/org/apache/cassandra/io/sstable/metadata/MetadataCollector.java
@@ -169,11 +169,8 @@ public class MetadataCollector implements 
PartitionStatisticsCollector
 return;
 
 updateTimestamp(newInfo.timestamp());
-if (newInfo.isExpiring())
-{
-updateTTL(newInfo.ttl());
-updateLocalDeletionTime(newInfo.localExpirationTime());
-}
+updateTTL(newInfo.ttl());
+updateLocalDeletionTime(newInfo.localExpirationTime());
 }
 
 public void update(Cell cell)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/df3d0b00/test/unit/org/apache/cassandra/db/compaction/TTLExpiryTest.java
--
diff --git a/test/unit/org/apache/cassandra/db/compaction/TTLExpiryTest.java 
b/test/unit/org/apache/cassandra/db/compaction/TTLExpiryTest.java
index 7dd3da0..b264553 100644
--- a/test/unit/org/apache/cassandra/db/compaction/TTLExpiryTest.java
+++ b/test/unit/org/apache/cassandra/db/compaction/TTLExpiryTest.java
@@ -258,6 +258,7 @@ public class TTLExpiryTest
 cfs.metadata.gcGraceSeconds(0);
 
 new RowUpdateBuilder(cfs.metadata, System.currentTimeMillis(), "test")
+.noRowMarker()
 .add("col1", ByteBufferUtil.EMPTY_BYTE_BUFFER)
 .build()
 .applyUnsafe();
@@ -267,6 +268,7 @@ public class TTLExpiryTest
 for (int i = 0; i < 10; i++)
 {
 new RowUpdateBuilder(cfs.metadata, System.currentTimeMillis(), 
"test")
+.noRowMarker()
 .delete("col1")
 .build()
 .applyUnsafe();



[6/6] cassandra git commit: Merge branch 'cassandra-3.3' into trunk

2016-02-02 Thread slebresne
Merge branch 'cassandra-3.3' into trunk


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/be1efd28
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/be1efd28
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/be1efd28

Branch: refs/heads/trunk
Commit: be1efd28392fe9a4cc28be8d91eb6a685651a8f8
Parents: 92242d7 8996b64
Author: Sylvain Lebresne 
Authored: Tue Feb 2 14:23:07 2016 +0100
Committer: Sylvain Lebresne 
Committed: Tue Feb 2 14:23:07 2016 +0100

--
 CHANGES.txt   | 1 +
 .../cassandra/io/sstable/metadata/MetadataCollector.java  | 7 ++-
 .../org/apache/cassandra/db/compaction/TTLExpiryTest.java | 2 ++
 3 files changed, 5 insertions(+), 5 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/be1efd28/CHANGES.txt
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/be1efd28/src/java/org/apache/cassandra/io/sstable/metadata/MetadataCollector.java
--



[4/6] cassandra git commit: Merge branch 'cassandra-3.0' into cassandra-3.3

2016-02-02 Thread slebresne
Merge branch 'cassandra-3.0' into cassandra-3.3


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/8996b64e
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/8996b64e
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/8996b64e

Branch: refs/heads/trunk
Commit: 8996b64e43a3a89dd0648f584298e2e65383b0c3
Parents: a7125d2 df3d0b0
Author: Sylvain Lebresne 
Authored: Tue Feb 2 14:22:59 2016 +0100
Committer: Sylvain Lebresne 
Committed: Tue Feb 2 14:22:59 2016 +0100

--
 CHANGES.txt   | 1 +
 .../cassandra/io/sstable/metadata/MetadataCollector.java  | 7 ++-
 .../org/apache/cassandra/db/compaction/TTLExpiryTest.java | 2 ++
 3 files changed, 5 insertions(+), 5 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/8996b64e/CHANGES.txt
--
diff --cc CHANGES.txt
index 2600d56,3423927..c0acd3d
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -1,8 -1,5 +1,9 @@@
 -3.0.3
 +3.3
 + * Avoid infinite loop if owned range is smaller than number of
 +   data dirs (CASSANDRA-11034)
 + * Avoid bootstrap hanging when existing nodes have no data to stream 
(CASSANDRA-11010)
 +Merged from 3.0:
+  * Fix incorrect computation of deletion time in sstable metadata 
(CASSANDRA-11102)
   * Avoid memory leak when collecting sstable metadata (CASSANDRA-11026)
   * Mutations do not block for completion under view lock contention 
(CASSANDRA-10779)
   * Invalidate legacy schema tables when unloading them (CASSANDRA-11071)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/8996b64e/test/unit/org/apache/cassandra/db/compaction/TTLExpiryTest.java
--



[jira] [Commented] (CASSANDRA-11103) In CQL, can not create table with no predefined column

2016-02-02 Thread Sylvain Lebresne (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11103?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15128251#comment-15128251
 ] 

Sylvain Lebresne commented on CASSANDRA-11103:
--

The equivalent of having non-predefined columns in Thrift is to use a clustering 
column in CQL. And when I say "equivalent", I mean that they are internally 
exactly the same thing. You can also see 
http://www.datastax.com/dev/blog/cql3-for-cassandra-experts or 
http://www.datastax.com/dev/blog/thrift-to-cql3 for more details, and the user 
mailing list/IRC channel are there if you need more help, but there is no loss 
of functionality.

> In CQL, can not create table with no predefined column
> --
>
> Key: CASSANDRA-11103
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11103
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Robert Li
>
> We have a service layer that provides Cassandra access to our (thousands of) 
> edge and backend servers. The service provides a simple API to set/get data in 
> the form of List<Tag>, where Tag is a structure of (name, value, ttl, 
> timestamp) that maps to the data of a Cassandra column.
> This service layer acts as a connection pool proxy to Cassandra, providing 
> easy access, central usage / resource / performance monitoring, and access 
> control. Apps accessing this layer can create a column family through an admin 
> tool, which creates the CF using a Thrift client, and set/get data (as 
> List<Tag>) into/from the column family.
> With the latest CQL, it seems it is not possible to create a column family without 
> predetermined column names. One option for us is to create a table with a 
> column of type Map. However, a Map column has two unpleasant implications:
> 1. Every column has to be prefixed with the name of the map column, which is 
> unnatural and redundant. 
> 2. The data type of all columns has to be the same. The ability to store data 
> in its native format is lost.
> The fact that CQL cannot create a table without predefined columns 
> represents a loss of functionality that is available in the Thrift-based client. It's 
> almost a show stopper for us, preventing us from migrating from the 
> Thrift-based client to the new Java client.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (CASSANDRA-11103) In CQL, can not create table with no predefined column

2016-02-02 Thread Sylvain Lebresne (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11103?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sylvain Lebresne resolved CASSANDRA-11103.
--
Resolution: Not A Problem

> In CQL, can not create table with no predefined column
> --
>
> Key: CASSANDRA-11103
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11103
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Robert Li
>
> We have a service layer that provides Cassandra access to our (thousands of)
> edge and backend servers. The service provides a simple API to set/get data
> in the form of List<Tag>, where Tag is a structure of (name, value, ttl,
> timestamp) that maps to the data of a Cassandra column.
> This service layer acts as a connection-pool proxy to Cassandra and provides
> easy access plus central usage, resource, and performance monitoring and
> access control. Apps accessing this layer can create a column family through
> an admin tool, which creates the CF using the Thrift client, and set/get data
> (using List<Tag>) into/from the column family.
> With the latest CQL, it does not seem possible to create a column family
> without predetermined column names. One option for us is to create a table
> with a column of type Map. However, a Map column has two unpleasant
> implications:
> 1. Every column has to be prefixed with the name of the map column, which is
> unnatural and redundant.
> 2. The data type of all columns has to be the same. The ability to store data
> in its native format is lost.
> The fact that CQL cannot create a table without predefined columns represents
> a loss of function that is available in the Thrift-based client. It is almost
> a show stopper for us, preventing us from migrating from the Thrift-based
> client to the new Java client.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[3/6] cassandra git commit: Add regression test for CASSANDRA-11102

2016-02-02 Thread slebresne
Add regression test for CASSANDRA-11102


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/bc3ea669
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/bc3ea669
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/bc3ea669

Branch: refs/heads/trunk
Commit: bc3ea66925429b743b672d417700d17e9936b187
Parents: df3d0b0
Author: Sylvain Lebresne 
Authored: Tue Feb 2 14:39:28 2016 +0100
Committer: Sylvain Lebresne 
Committed: Tue Feb 2 14:39:28 2016 +0100

--
 .../cql3/validation/operations/DeleteTest.java  | 29 
 1 file changed, 29 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/bc3ea669/test/unit/org/apache/cassandra/cql3/validation/operations/DeleteTest.java
--
diff --git 
a/test/unit/org/apache/cassandra/cql3/validation/operations/DeleteTest.java 
b/test/unit/org/apache/cassandra/cql3/validation/operations/DeleteTest.java
index 4f35afa..be858e7 100644
--- a/test/unit/org/apache/cassandra/cql3/validation/operations/DeleteTest.java
+++ b/test/unit/org/apache/cassandra/cql3/validation/operations/DeleteTest.java
@@ -1008,6 +1008,35 @@ public class DeleteTest extends CQLTester
  "DELETE FROM %s WHERE values CONTAINS ?", 3);
 }
 
+@Test
+public void testDeleteWithOnlyPK() throws Throwable
+{
+// This is a regression test for CASSANDRA-11102
+
+createTable("CREATE TABLE %s (k int, v int, PRIMARY KEY (k, v)) WITH gc_grace_seconds=1");
+
+execute("INSERT INTO %s(k, v) VALUES (?, ?)", 1, 2);
+
+execute("DELETE FROM %s WHERE k = ? AND v = ?", 1, 2);
+execute("INSERT INTO %s(k, v) VALUES (?, ?)", 2, 3);
+
+Thread.sleep(500);
+
+execute("DELETE FROM %s WHERE k = ? AND v = ?", 2, 3);
+execute("INSERT INTO %s(k, v) VALUES (?, ?)", 1, 2);
+
+Thread.sleep(500);
+
+flush();
+
+assertRows(execute("SELECT * FROM %s"), row(1, 2));
+
+Thread.sleep(1000);
+compact();
+
+assertRows(execute("SELECT * FROM %s"), row(1, 2));
+}
+
 private void flush(boolean forceFlush)
 {
 if (forceFlush)



[1/6] cassandra git commit: Add regression test for CASSANDRA-11102

2016-02-02 Thread slebresne
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-3.0 df3d0b00b -> bc3ea6692
  refs/heads/cassandra-3.3 8996b64e4 -> fddace61e
  refs/heads/trunk be1efd283 -> 9b629d0dc


Add regression test for CASSANDRA-11102


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/bc3ea669
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/bc3ea669
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/bc3ea669

Branch: refs/heads/cassandra-3.0
Commit: bc3ea66925429b743b672d417700d17e9936b187
Parents: df3d0b0
Author: Sylvain Lebresne 
Authored: Tue Feb 2 14:39:28 2016 +0100
Committer: Sylvain Lebresne 
Committed: Tue Feb 2 14:39:28 2016 +0100

--
 .../cql3/validation/operations/DeleteTest.java  | 29 
 1 file changed, 29 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/bc3ea669/test/unit/org/apache/cassandra/cql3/validation/operations/DeleteTest.java
--
diff --git 
a/test/unit/org/apache/cassandra/cql3/validation/operations/DeleteTest.java 
b/test/unit/org/apache/cassandra/cql3/validation/operations/DeleteTest.java
index 4f35afa..be858e7 100644
--- a/test/unit/org/apache/cassandra/cql3/validation/operations/DeleteTest.java
+++ b/test/unit/org/apache/cassandra/cql3/validation/operations/DeleteTest.java
@@ -1008,6 +1008,35 @@ public class DeleteTest extends CQLTester
  "DELETE FROM %s WHERE values CONTAINS ?", 3);
 }
 
+@Test
+public void testDeleteWithOnlyPK() throws Throwable
+{
+// This is a regression test for CASSANDRA-11102
+
+createTable("CREATE TABLE %s (k int, v int, PRIMARY KEY (k, v)) WITH gc_grace_seconds=1");
+
+execute("INSERT INTO %s(k, v) VALUES (?, ?)", 1, 2);
+
+execute("DELETE FROM %s WHERE k = ? AND v = ?", 1, 2);
+execute("INSERT INTO %s(k, v) VALUES (?, ?)", 2, 3);
+
+Thread.sleep(500);
+
+execute("DELETE FROM %s WHERE k = ? AND v = ?", 2, 3);
+execute("INSERT INTO %s(k, v) VALUES (?, ?)", 1, 2);
+
+Thread.sleep(500);
+
+flush();
+
+assertRows(execute("SELECT * FROM %s"), row(1, 2));
+
+Thread.sleep(1000);
+compact();
+
+assertRows(execute("SELECT * FROM %s"), row(1, 2));
+}
+
 private void flush(boolean forceFlush)
 {
 if (forceFlush)



[2/6] cassandra git commit: Add regression test for CASSANDRA-11102

2016-02-02 Thread slebresne
Add regression test for CASSANDRA-11102


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/bc3ea669
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/bc3ea669
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/bc3ea669

Branch: refs/heads/cassandra-3.3
Commit: bc3ea66925429b743b672d417700d17e9936b187
Parents: df3d0b0
Author: Sylvain Lebresne 
Authored: Tue Feb 2 14:39:28 2016 +0100
Committer: Sylvain Lebresne 
Committed: Tue Feb 2 14:39:28 2016 +0100

--
 .../cql3/validation/operations/DeleteTest.java  | 29 
 1 file changed, 29 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/bc3ea669/test/unit/org/apache/cassandra/cql3/validation/operations/DeleteTest.java
--
diff --git 
a/test/unit/org/apache/cassandra/cql3/validation/operations/DeleteTest.java 
b/test/unit/org/apache/cassandra/cql3/validation/operations/DeleteTest.java
index 4f35afa..be858e7 100644
--- a/test/unit/org/apache/cassandra/cql3/validation/operations/DeleteTest.java
+++ b/test/unit/org/apache/cassandra/cql3/validation/operations/DeleteTest.java
@@ -1008,6 +1008,35 @@ public class DeleteTest extends CQLTester
  "DELETE FROM %s WHERE values CONTAINS ?", 3);
 }
 
+@Test
+public void testDeleteWithOnlyPK() throws Throwable
+{
+// This is a regression test for CASSANDRA-11102
+
+createTable("CREATE TABLE %s (k int, v int, PRIMARY KEY (k, v)) WITH gc_grace_seconds=1");
+
+execute("INSERT INTO %s(k, v) VALUES (?, ?)", 1, 2);
+
+execute("DELETE FROM %s WHERE k = ? AND v = ?", 1, 2);
+execute("INSERT INTO %s(k, v) VALUES (?, ?)", 2, 3);
+
+Thread.sleep(500);
+
+execute("DELETE FROM %s WHERE k = ? AND v = ?", 2, 3);
+execute("INSERT INTO %s(k, v) VALUES (?, ?)", 1, 2);
+
+Thread.sleep(500);
+
+flush();
+
+assertRows(execute("SELECT * FROM %s"), row(1, 2));
+
+Thread.sleep(1000);
+compact();
+
+assertRows(execute("SELECT * FROM %s"), row(1, 2));
+}
+
 private void flush(boolean forceFlush)
 {
 if (forceFlush)



[4/6] cassandra git commit: Merge branch 'cassandra-3.0' into cassandra-3.3

2016-02-02 Thread slebresne
Merge branch 'cassandra-3.0' into cassandra-3.3


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/fddace61
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/fddace61
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/fddace61

Branch: refs/heads/trunk
Commit: fddace61e53e1c2f8d221db541f183b8acf1cbe4
Parents: 8996b64 bc3ea66
Author: Sylvain Lebresne 
Authored: Tue Feb 2 14:39:52 2016 +0100
Committer: Sylvain Lebresne 
Committed: Tue Feb 2 14:39:52 2016 +0100

--
 .../cql3/validation/operations/DeleteTest.java  | 29 
 1 file changed, 29 insertions(+)
--




[6/6] cassandra git commit: Merge branch 'cassandra-3.3' into trunk

2016-02-02 Thread slebresne
Merge branch 'cassandra-3.3' into trunk


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/9b629d0d
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/9b629d0d
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/9b629d0d

Branch: refs/heads/trunk
Commit: 9b629d0dca3847e77143eb7d0734626d24bf798a
Parents: be1efd2 fddace6
Author: Sylvain Lebresne 
Authored: Tue Feb 2 14:39:59 2016 +0100
Committer: Sylvain Lebresne 
Committed: Tue Feb 2 14:39:59 2016 +0100

--
 .../cql3/validation/operations/DeleteTest.java  | 29 
 1 file changed, 29 insertions(+)
--




[5/6] cassandra git commit: Merge branch 'cassandra-3.0' into cassandra-3.3

2016-02-02 Thread slebresne
Merge branch 'cassandra-3.0' into cassandra-3.3


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/fddace61
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/fddace61
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/fddace61

Branch: refs/heads/cassandra-3.3
Commit: fddace61e53e1c2f8d221db541f183b8acf1cbe4
Parents: 8996b64 bc3ea66
Author: Sylvain Lebresne 
Authored: Tue Feb 2 14:39:52 2016 +0100
Committer: Sylvain Lebresne 
Committed: Tue Feb 2 14:39:52 2016 +0100

--
 .../cql3/validation/operations/DeleteTest.java  | 29 
 1 file changed, 29 insertions(+)
--




[jira] [Commented] (CASSANDRA-10411) Add/drop multiple columns in one ALTER TABLE statement

2016-02-02 Thread Robert Stupp (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10411?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15128591#comment-15128591
 ] 

Robert Stupp commented on CASSANDRA-10411:
--

I’d prefer to have just one {{List}} passed into {{AlterTypeStatement}} instead 
of three lists. That one list could contain one object per column, like 
{{AlterTableStatementColumn}} or so. Having that, the code in Cql.g should 
become simpler, too.

Can you add unit tests for these duplicate column names (the implementation 
works fine; this is just for safety)?
* {{ALTER TABLE foo ADD (colname int, colname int)}}
* {{ALTER TABLE foo DROP (colname, colname)}}

Unit tests:
* the {{assertRows}} and {{INSERT INTO}} statements basically test nothing. 
However, it would be good to insert data after the {{ALTER TABLE ADD}} / 
{{DROP}} statements to check whether the correct columns/data are returned.

Code style:
* {{Cql.g}}: the lines below {{K_ADD}} and {{K_DROP}} should be indented
* tabs are used for indentation (must be spaces)
* superfluous {{@Override}} annotations in {{AlterTableStatement}} (we only add 
these for overridden non-abstract methods)
* no-op changes to the import section in {{AlterTableStatement}}
* indentation of the loops in {{AlterTableStatement}} is off (sometimes one 
space, sometimes two); maybe my previous comment about indentation caused this
* annotations (like {{@Test}}) should be placed directly before the method, 
after the javadoc

Besides that, the code does what it is supposed to do, so I'd commit it as soon 
as the issues above are addressed.

> Add/drop multiple columns in one ALTER TABLE statement
> --
>
> Key: CASSANDRA-10411
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10411
> Project: Cassandra
>  Issue Type: New Feature
>Reporter: Bryn Cooke
>Assignee: Amit Singh Chowdhery
>Priority: Minor
>  Labels: patch
> Attachments: Cassandra-10411-trunk.diff, cassandra-10411.diff
>
>
> Currently it is only possible to add one column at a time in an ALTER TABLE 
> statement. It would be great if we could add multiple columns at once.
> The primary reason for this is that adding each column individually seems to 
> take a significant amount of time (at least on my development machine); I 
> know all the columns I want to add, but not until after the initial table is 
> created.
> As a secondary consideration, it brings CQL slightly closer to SQL, where 
> most databases can handle adding multiple columns in one statement.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (CASSANDRA-11001) Hadoop integration is incompatible with Cassandra Driver 3.0.0

2016-02-02 Thread Robert Stupp (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11001?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Stupp reassigned CASSANDRA-11001:


Assignee: Robert Stupp  (was: Jacek Lewandowski)

> Hadoop integration is incompatible with Cassandra Driver 3.0.0
> --
>
> Key: CASSANDRA-11001
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11001
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Jacek Lewandowski
>Assignee: Robert Stupp
>
> When using Hadoop input format with SSL and Cassandra Driver 3.0.0-beta1, we 
> hit the following exception:
> {noformat}
> Exception in thread "main" java.lang.NoSuchFieldError: 
> DEFAULT_SSL_CIPHER_SUITES
>   at 
> org.apache.cassandra.hadoop.cql3.CqlConfigHelper.getSSLOptions(CqlConfigHelper.java:548)
>   at 
> org.apache.cassandra.hadoop.cql3.CqlConfigHelper.getCluster(CqlConfigHelper.java:315)
>   at 
> org.apache.cassandra.hadoop.cql3.CqlConfigHelper.getInputCluster(CqlConfigHelper.java:298)
>   at 
> org.apache.cassandra.hadoop.cql3.CqlInputFormat.getSplits(CqlInputFormat.java:131)
> {noformat}
> Should this be fixed with reflection so that the Hadoop input/output formats 
> are compatible with both the old and the new driver?
> [~jjordan], [~alexliu68] ?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[6/6] cassandra git commit: Merge branch 'cassandra-3.3' into trunk

2016-02-02 Thread samt
Merge branch 'cassandra-3.3' into trunk


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/eef0ddfa
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/eef0ddfa
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/eef0ddfa

Branch: refs/heads/trunk
Commit: eef0ddfabb5071fa658b879dbbe4007b65c823dd
Parents: 0a83e6a c5feeda
Author: Sam Tunnicliffe 
Authored: Tue Feb 2 15:46:39 2016 +
Committer: Sam Tunnicliffe 
Committed: Tue Feb 2 15:46:39 2016 +

--
 CHANGES.txt| 1 +
 .../org/apache/cassandra/index/internal/keys/KeysSearcher.java | 2 ++
 2 files changed, 3 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/eef0ddfa/CHANGES.txt
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/eef0ddfa/src/java/org/apache/cassandra/index/internal/keys/KeysSearcher.java
--



[2/6] cassandra git commit: Filter keys searcher results by target range

2016-02-02 Thread samt
Filter keys searcher results by target range

Patch by Sam Tunnicliffe; reviewed by Sylvain Lebresne for
CASSANDRA-11104


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/f51e9839
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/f51e9839
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/f51e9839

Branch: refs/heads/cassandra-3.3
Commit: f51e98399ea3b78bdea81c6fb8bd62fda14af43c
Parents: b21df5b
Author: Sam Tunnicliffe 
Authored: Mon Feb 1 21:01:26 2016 +
Committer: Sam Tunnicliffe 
Committed: Tue Feb 2 15:27:08 2016 +

--
 CHANGES.txt| 1 +
 .../org/apache/cassandra/index/internal/keys/KeysSearcher.java | 2 ++
 2 files changed, 3 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/f51e9839/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index dcbce5b..ef0da4c 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 3.0.3
+ * Filter keys searcher results by target range (CASSANDRA-11104)
  * Fix deserialization of legacy read commands (CASSANDRA-11087)
  * Fix incorrect computation of deletion time in sstable metadata 
(CASSANDRA-11102)
  * Avoid memory leak when collecting sstable metadata (CASSANDRA-11026)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/f51e9839/src/java/org/apache/cassandra/index/internal/keys/KeysSearcher.java
--
diff --git 
a/src/java/org/apache/cassandra/index/internal/keys/KeysSearcher.java 
b/src/java/org/apache/cassandra/index/internal/keys/KeysSearcher.java
index b60d2d9..f00bb27 100644
--- a/src/java/org/apache/cassandra/index/internal/keys/KeysSearcher.java
+++ b/src/java/org/apache/cassandra/index/internal/keys/KeysSearcher.java
@@ -86,6 +86,8 @@ public class KeysSearcher extends CassandraIndexSearcher
 {
 Row hit = indexHits.next();
DecoratedKey key = index.baseCfs.decorateKey(hit.clustering().get(0));
+if (!command.selectsKey(key))
+continue;
 
 SinglePartitionReadCommand dataCmd = 
SinglePartitionReadCommand.create(isForThrift(),

index.baseCfs.metadata,



[5/6] cassandra git commit: Merge branch 'cassandra-3.0' into cassandra-3.3

2016-02-02 Thread samt
Merge branch 'cassandra-3.0' into cassandra-3.3


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/c5feeda6
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/c5feeda6
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/c5feeda6

Branch: refs/heads/cassandra-3.3
Commit: c5feeda6a968284d15f2baaf95b871fcced32284
Parents: f70c353 f51e983
Author: Sam Tunnicliffe 
Authored: Tue Feb 2 15:44:50 2016 +
Committer: Sam Tunnicliffe 
Committed: Tue Feb 2 15:44:50 2016 +

--
 CHANGES.txt| 1 +
 .../org/apache/cassandra/index/internal/keys/KeysSearcher.java | 2 ++
 2 files changed, 3 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/c5feeda6/CHANGES.txt
--
diff --cc CHANGES.txt
index b67fae2,ef0da4c..5119626
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -1,8 -1,5 +1,9 @@@
 -3.0.3
 +3.3
 + * Avoid infinite loop if owned range is smaller than number of
 +   data dirs (CASSANDRA-11034)
 + * Avoid bootstrap hanging when existing nodes have no data to stream 
(CASSANDRA-11010)
 +Merged from 3.0:
+  * Filter keys searcher results by target range (CASSANDRA-11104)
   * Fix deserialization of legacy read commands (CASSANDRA-11087)
   * Fix incorrect computation of deletion time in sstable metadata 
(CASSANDRA-11102)
   * Avoid memory leak when collecting sstable metadata (CASSANDRA-11026)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/c5feeda6/src/java/org/apache/cassandra/index/internal/keys/KeysSearcher.java
--



[jira] [Resolved] (CASSANDRA-11107) rpc_address is required for native protocol

2016-02-02 Thread Brandon Williams (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11107?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brandon Williams resolved CASSANDRA-11107.
--
   Resolution: Invalid
Reproduced In: 3.0.2, 2.2.4  (was: 2.2.4, 3.0.2)

I'm not sure what you're expecting here.  You need to set 
start_native_transport to false if you don't want 9042 to bind at all.

> rpc_address is required for native protocol
> ---
>
> Key: CASSANDRA-11107
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11107
> Project: Cassandra
>  Issue Type: Bug
>  Components: Configuration
>Reporter: n0rad
>Priority: Minor
>
> I'm starting cassandra on a container with this /etc/hosts
> {quote}
> 127.0.0.1rkt-235c219a-f0dc-4958-9e03-5afe2581bbe1 localhost
> ::1  rkt-235c219a-f0dc-4958-9e03-5afe2581bbe1 localhost
> {quote}
> I have the default configuration except :
> {quote}
>  - seeds: "10.1.1.1"
> listen_address : 10.1.1.1
> {quote}
> Cassandra will start listening on *127.0.0.1:9042*.
> If I set *rpc_address: 10.1.1.1*, then even with *start_rpc: false*, 
> Cassandra will listen on 10.1.1.1.
> Since RPC is not started, I assumed that *rpc_address* and 
> *broadcast_rpc_address* would be ignored.
> It took me a while to figure that out. There may be something to improve here.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-11052) Cannot use Java 8 lambda expression inside UDF code body

2016-02-02 Thread Sylvain Lebresne (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11052?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15128437#comment-15128437
 ] 

Sylvain Lebresne commented on CASSANDRA-11052:
--

I'm not pronouncing on the solution, but can you avoid making so many 
style-related changes when submitting a patch? Most of the diff is the addition 
of {{@Override}} annotations, and at least some of them (if not all) violate 
the code style at https://wiki.apache.org/cassandra/CodeStyle.

> Cannot use Java 8 lambda expression inside UDF code body
> 
>
> Key: CASSANDRA-11052
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11052
> Project: Cassandra
>  Issue Type: Bug
>  Components: CQL
>Reporter: DOAN DuyHai
>Assignee: Robert Stupp
> Fix For: 3.x
>
> Attachments: 11052.patch
>
>
> When creating the following **UDF** using Java 8 lambda syntax
> {code:sql}
>  CREATE FUNCTION IF NOT EXISTS music.udf(state map<text, bigint>, styles 
> list<text>)
>  RETURNS NULL ON NULL INPUT
>  RETURNS map<text, bigint>
>  LANGUAGE java
>  AS $$
>styles.forEach((Object o) -> {
>String style = (String)o;
>if(state.containsKey(style)) {
> state.put(style, (Long)state.get(style)+1);
>} else {
> state.put(style, 1L);   
>}
>});
>
>return state;
>  $$;
> {code}
>  I got the following exception:
> {code:java}
> Caused by: com.datastax.driver.core.exceptions.InvalidQueryException: Could 
> not compile function 'music.udf' from Java source: 
> org.apache.cassandra.exceptions.InvalidRequestException: Java source 
> compilation failed:
> Line 2: The type java.util.function.Consumer cannot be resolved. It is 
> indirectly referenced from required .class files
> Line 2: The method forEach(Consumer) from the type Iterable refers to the 
> missing type Consumer
> Line 2: The target type of this expression must be a functional interface
>   at 
> com.datastax.driver.core.Responses$Error.asException(Responses.java:136)
>   at 
> com.datastax.driver.core.DefaultResultSetFuture.onSet(DefaultResultSetFuture.java:179)
>   at 
> com.datastax.driver.core.RequestHandler.setFinalResult(RequestHandler.java:184)
>   at 
> com.datastax.driver.core.RequestHandler.access$2500(RequestHandler.java:43)
>   at 
> com.datastax.driver.core.RequestHandler$SpeculativeExecution.setFinalResult(RequestHandler.java:798)
>   at 
> com.datastax.driver.core.RequestHandler$SpeculativeExecution.onSet(RequestHandler.java:617)
>   at 
> com.datastax.driver.core.Connection$Dispatcher.channelRead0(Connection.java:1005)
>   at 
> com.datastax.driver.core.Connection$Dispatcher.channelRead0(Connection.java:928)
>   at 
> io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105)
>   at 
> io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:318)
>   at 
> io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:304)
>   at 
> io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:266)
>   at 
> io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:318)
>   at 
> io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:304)
>   at 
> io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:103)
>   at 
> io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:318)
>   at 
> io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:304)
>   at 
> io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:276)
>   at 
> io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:263)
>   at 
> io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:318)
>   at 
> io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:304)
>   at 
> io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:846)
>   at 
> io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:131)
>   at 
> io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:511)
>   at 
> io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:468)
>   at 
> io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:382)
>   at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:354)
>   at 
> 

[3/6] cassandra git commit: Filter keys searcher results by target range

2016-02-02 Thread samt
Filter keys searcher results by target range

Patch by Sam Tunnicliffe; reviewed by Sylvain Lebresne for
CASSANDRA-11104


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/f51e9839
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/f51e9839
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/f51e9839

Branch: refs/heads/trunk
Commit: f51e98399ea3b78bdea81c6fb8bd62fda14af43c
Parents: b21df5b
Author: Sam Tunnicliffe 
Authored: Mon Feb 1 21:01:26 2016 +
Committer: Sam Tunnicliffe 
Committed: Tue Feb 2 15:27:08 2016 +

--
 CHANGES.txt| 1 +
 .../org/apache/cassandra/index/internal/keys/KeysSearcher.java | 2 ++
 2 files changed, 3 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/f51e9839/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index dcbce5b..ef0da4c 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 3.0.3
+ * Filter keys searcher results by target range (CASSANDRA-11104)
  * Fix deserialization of legacy read commands (CASSANDRA-11087)
  * Fix incorrect computation of deletion time in sstable metadata 
(CASSANDRA-11102)
  * Avoid memory leak when collecting sstable metadata (CASSANDRA-11026)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/f51e9839/src/java/org/apache/cassandra/index/internal/keys/KeysSearcher.java
--
diff --git 
a/src/java/org/apache/cassandra/index/internal/keys/KeysSearcher.java 
b/src/java/org/apache/cassandra/index/internal/keys/KeysSearcher.java
index b60d2d9..f00bb27 100644
--- a/src/java/org/apache/cassandra/index/internal/keys/KeysSearcher.java
+++ b/src/java/org/apache/cassandra/index/internal/keys/KeysSearcher.java
@@ -86,6 +86,8 @@ public class KeysSearcher extends CassandraIndexSearcher
 {
 Row hit = indexHits.next();
DecoratedKey key = index.baseCfs.decorateKey(hit.clustering().get(0));
+if (!command.selectsKey(key))
+continue;
 
 SinglePartitionReadCommand dataCmd = 
SinglePartitionReadCommand.create(isForThrift(),

index.baseCfs.metadata,



[jira] [Commented] (CASSANDRA-11058) largecolumn_test.TestLargeColumn.cleanup_test is failing

2016-02-02 Thread Sylvain Lebresne (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11058?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15128414#comment-15128414
 ] 

Sylvain Lebresne commented on CASSANDRA-11058:
--

Pretty sure the problem is that the output of gcstats can have negative numbers 
and {{isdigit}} doesn't work for those. Simple pull request 
[here|https://github.com/riptano/cassandra-dtest/pull/781].
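
A minimal sketch of the point above (illustrative only; `is_numeric_field` is 
an invented helper, not the actual dtest code): `str.isdigit` rejects the 
minus sign, so negative gcstats values need a signed-integer check such as a 
regex:

```python
# Illustrative sketch, not the actual dtest code: str.isdigit() rejects
# the sign character, so negative gcstats fields fail a naive check.
import re

def is_numeric_field(field):
    """Accept optionally signed integers such as '42' or '-1'."""
    return re.fullmatch(r"-?\d+", field) is not None

assert not "-1".isdigit()       # the old check fails on negatives
assert is_numeric_field("-1")   # the regex accepts them
assert is_numeric_field("42")
assert not is_numeric_field("NaN")
```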

> largecolumn_test.TestLargeColumn.cleanup_test is failing
> 
>
> Key: CASSANDRA-11058
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11058
> Project: Cassandra
>  Issue Type: Bug
>  Components: Testing
>Reporter: Philip Thompson
>Assignee: DS Test Eng
>  Labels: dtest
>
> This is absolutely a test issue. 
> {{largecolumn_test.TestLargeColumn.cleanup_test}} fails on a few versions, as 
> seen 
> [here|http://cassci.datastax.com/job/cassandra-2.2_dtest/488/testReport/largecolumn_test/TestLargeColumn/cleanup_test/].
>  The nominal complaint is 
> {{Expected numeric from fields from nodetool gcstats}}
> except as we can see from the new debug output I added, gcstats is printing 
> out numeric fields. So the regex is wrong somehow.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (CASSANDRA-9318) Bound the number of in-flight requests at the coordinator

2016-02-02 Thread Ariel Weisberg (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-9318?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ariel Weisberg resolved CASSANDRA-9318.
---
Resolution: Won't Fix

This ticket was specifically scoped to an implementation strategy that isn't 
going to solve the issue of clients being able to submit more work than a 
cluster can handle, resulting in timeouts and nodes appearing unresponsive 
because they can't do the work in time. We can stop the server from running out 
of memory and crashing, but we can't stop the client from submitting more 
requests than the server can handle, because we need nodes to effectively 
operate as write buffers for slow nodes to maintain availability.

At this point I am kind of with [Jonathan 
Shook|https://issues.apache.org/jira/browse/CASSANDRA-9318?focusedCommentId=14536846=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14536846]
 that shedding load (and writing hints) inside the DB is less useful for 
dealing with overload. I think it is useful for dealing with temporarily slow 
ranges on the hash ring, and it's part of the overall nodes-as-write-buffers 
strategy C* uses to maintain availability.

I found some ways to OOM the server (CASSANDRA-10971 and CASSANDRA-10972) and 
have patches out for those.

The number of in-flight requests already has bounds, depending on the 
bottleneck, that prevent the server from crashing, so adding an explicit one 
isn't useful right now. When TPC is implemented we will have to implement a 
bound, since there will be no thread pool to exhaust, but that is later work.

> Bound the number of in-flight requests at the coordinator
> -
>
> Key: CASSANDRA-9318
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9318
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Local Write-Read Paths, Streaming and Messaging
>Reporter: Ariel Weisberg
>Assignee: Ariel Weisberg
> Fix For: 2.1.x, 2.2.x
>
>
> It's possible to somewhat bound the amount of load accepted into the cluster 
> by bounding the number of in-flight requests and request bytes.
> An implementation might do something like track the number of outstanding 
> bytes and requests and if it reaches a high watermark disable read on client 
> connections until it goes back below some low watermark.
> Need to make sure that disabling read on the client connection won't 
> introduce other issues.
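
The watermark scheme sketched in the description could look roughly like this 
(a hypothetical illustration, not Cassandra code; the class and method names 
are invented):

```python
# Hypothetical sketch of the high/low watermark idea: stop reading from
# client connections once in-flight bytes cross a high watermark, and
# resume once completions bring them back below a low watermark.
class InFlightLimiter:
    def __init__(self, high_watermark, low_watermark):
        self.high = high_watermark
        self.low = low_watermark
        self.in_flight_bytes = 0
        self.read_enabled = True

    def on_request(self, size):
        """Account for a newly accepted request; may pause reads."""
        self.in_flight_bytes += size
        if self.in_flight_bytes >= self.high:
            self.read_enabled = False  # stop reading client sockets

    def on_complete(self, size):
        """Account for a completed request; may resume reads."""
        self.in_flight_bytes -= size
        if self.in_flight_bytes <= self.low:
            self.read_enabled = True   # resume reading
```

Using two watermarks instead of one avoids flapping: reads stay paused until 
enough work drains, rather than toggling on every request boundary.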



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-11105) cassandra-stress tool - InvalidQueryException: Batch too large

2016-02-02 Thread Eric Evans (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11105?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15128631#comment-15128631
 ] 

Eric Evans commented on CASSANDRA-11105:


FWIW, I'm seeing the same thing (2.1.12), [yaml gist 
here|https://gist.github.com/eevans/1babf3fab9206951d7e6].  When I run this 
config with {{n=1}}, I can see that 50 CQL rows are added, all with the same 
partition key, with two unique {{rev}} columns (25 each).

> cassandra-stress tool - InvalidQueryException: Batch too large
> --
>
> Key: CASSANDRA-11105
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11105
> Project: Cassandra
>  Issue Type: Bug
>  Components: Tools
> Environment: Cassandra 2.2.4, Java 8, CentOS 6.5
>Reporter: Ralf Steppacher
> Attachments: batch_too_large.yaml
>
>
> I am using Cassandra 2.2.4 and I am struggling to get the cassandra-stress 
> tool to work for my test scenario. I have followed the example on 
> http://www.datastax.com/dev/blog/improved-cassandra-2-1-stress-tool-benchmark-any-schema
>  to create a yaml file describing my test (attached).
> I am collecting events per user id (text, partition key). Events have a 
> session type (text), event type (text), and creation time (timestamp) 
> (clustering keys, in that order). Plus some more attributes required for 
> rendering the events in a UI. For testing purposes I ended up with the 
> following column spec and insert distribution:
> {noformat}
> columnspec:
>   - name: created_at
> cluster: uniform(10..1)
>   - name: event_type
> size: uniform(5..10)
> population: uniform(1..30)
> cluster: uniform(1..30)
>   - name: session_type
> size: fixed(5)
> population: uniform(1..4)
> cluster: uniform(1..4)
>   - name: user_id
> size: fixed(15)
> population: uniform(1..100)
>   - name: message
> size: uniform(10..100)
> population: uniform(1..100B)
> insert:
>   partitions: fixed(1)
>   batchtype: UNLOGGED
>   select: fixed(1)/120
> {noformat}
> Running stress tool for just the insert prints 
> {noformat}
> Generating batches with [1..1] partitions and [0..1] rows (of [10..120] 
> total rows in the partitions)
> {noformat}
> and then immediately starts flooding me with 
> {{com.datastax.driver.core.exceptions.InvalidQueryException: Batch too 
> large}}. 
> I do not understand why I should be exceeding the 
> {{batch_size_fail_threshold_in_kb: 50}} set in {{cassandra.yaml}}. My 
> understanding is that the stress tool should generate one row per batch. The 
> size of a single row should not exceed {{8+10*3+5*3+15*3+100*3 = 398 bytes}}, 
> assuming a worst case of all text characters being 3-byte Unicode characters. 
> This is how I start the attached user scenario:
> {noformat}
> [rsteppac@centos bin]$ ./cassandra-stress user 
> profile=../batch_too_large.yaml ops\(insert=1\) -log level=verbose 
> file=~/centos_event_by_patient_session_event_timestamp_insert_only.log -node 
> 10.211.55.8
> INFO  08:00:07 Did not find Netty's native epoll transport in the classpath, 
> defaulting to NIO.
> INFO  08:00:08 Using data-center name 'datacenter1' for 
> DCAwareRoundRobinPolicy (if this is incorrect, please provide the correct 
> datacenter name with DCAwareRoundRobinPolicy constructor)
> INFO  08:00:08 New Cassandra host /10.211.55.8:9042 added
> Connected to cluster: Titan_DEV
> Datatacenter: datacenter1; Host: /10.211.55.8; Rack: rack1
> Created schema. Sleeping 1s for propagation.
> Generating batches with [1..1] partitions and [0..1] rows (of [10..120] 
> total rows in the partitions)
> com.datastax.driver.core.exceptions.InvalidQueryException: Batch too large
>   at 
> com.datastax.driver.core.exceptions.InvalidQueryException.copy(InvalidQueryException.java:35)
>   at 
> com.datastax.driver.core.DefaultResultSetFuture.extractCauseFromExecutionException(DefaultResultSetFuture.java:271)
>   at 
> com.datastax.driver.core.DefaultResultSetFuture.getUninterruptibly(DefaultResultSetFuture.java:185)
>   at 
> com.datastax.driver.core.AbstractSession.execute(AbstractSession.java:55)
>   at 
> org.apache.cassandra.stress.operations.userdefined.SchemaInsert$JavaDriverRun.run(SchemaInsert.java:87)
>   at 
> org.apache.cassandra.stress.Operation.timeWithRetry(Operation.java:159)
>   at 
> org.apache.cassandra.stress.operations.userdefined.SchemaInsert.run(SchemaInsert.java:119)
>   at 
> org.apache.cassandra.stress.StressAction$Consumer.run(StressAction.java:309)
> Caused by: com.datastax.driver.core.exceptions.InvalidQueryException: Batch 
> too large
>   at 
> com.datastax.driver.core.Responses$Error.asException(Responses.java:125)
>   at 
> com.datastax.driver.core.DefaultResultSetFuture.onSet(DefaultResultSetFuture.java:120)
>   at 
> 
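For reference, the worst-case row-size arithmetic from the description above can be reproduced directly from the column spec, assuming (as the reporter does) that every text character encodes to 3 bytes:

```python
# Worst-case bytes per generated row, per the column spec in the report,
# assuming every text character is a 3-byte Unicode character.
created_at   = 8        # timestamp
event_type   = 10 * 3   # size: uniform(5..10), worst case 10 chars
session_type = 5 * 3    # size: fixed(5)
user_id      = 15 * 3   # size: fixed(15)
message      = 100 * 3  # size: uniform(10..100), worst case 100 chars

row_bytes = created_at + event_type + session_type + user_id + message
print(row_bytes)  # 398, nowhere near batch_size_fail_threshold_in_kb: 50
```

So a single row is indeed far below the 50 KB threshold; the question is why the generated batches contain more than one row.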

[jira] [Updated] (CASSANDRA-11076) LEAK detected after bootstrapping a new node

2016-02-02 Thread Philip Thompson (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11076?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Philip Thompson updated CASSANDRA-11076:

Fix Version/s: 3.0.x

> LEAK detected after bootstrapping a new node
> 
>
> Key: CASSANDRA-11076
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11076
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Eduard Tudenhoefner
> Fix For: 3.0.x
>
>
> Sequence of events:
> * start up a 2 node cluster
> * bootstrap one additional node so that the cluster consists of 3 nodes in 
> total
> * the bootstrapped node will contain the LEAK error in the log file
> {code}
> INFO  [main] 2016-01-26 10:59:06,206  Server.java:162 - Starting listening 
> for CQL clients on /0.0.0.0:9042 (unencrypted)...
> INFO  [main] 2016-01-26 10:59:06,269  ThriftServer.java:119 - Binding thrift 
> service to /0.0.0.0:9160
> INFO  [Thread-6] 2016-01-26 10:59:06,280  ThriftServer.java:136 - Listening 
> for thrift clients...
> INFO  [HANDSHAKE-/10.200.178.183] 2016-01-26 10:59:06,703  
> OutboundTcpConnection.java:503 - Handshaking version with /10.200.178.183
> INFO  [RMI TCP Connection(4)-10.200.178.183] 2016-01-26 10:59:20,079  
> StorageService.java:1099 - rebuild from dc: (any dc)
> INFO  [RMI TCP Connection(4)-10.200.178.183] 2016-01-26 10:59:20,090  
> RangeStreamer.java:339 - Some ranges of 
> [(-9223372036854775808,-3074457345618258603], 
> (-3074457345618258603,3074457345618258602]] are already available. Skipping 
> streaming those ranges.
> INFO  [RMI TCP Connection(4)-10.200.178.183] 2016-01-26 10:59:20,091  
> RangeStreamer.java:339 - Some ranges of 
> [(-9223372036854775808,-3074457345618258603], 
> (-3074457345618258603,3074457345618258602]] are already available. Skipping 
> streaming those ranges.
> INFO  [RMI TCP Connection(4)-10.200.178.183] 2016-01-26 10:59:20,092  
> RangeStreamer.java:339 - Some ranges of 
> [(3074457345618258602,-9223372036854775808], 
> (-9223372036854775808,-3074457345618258603], 
> (-3074457345618258603,3074457345618258602]] are already available. Skipping 
> streaming those ranges.
> INFO  [RMI TCP Connection(4)-10.200.178.183] 2016-01-26 10:59:20,093  
> RangeStreamer.java:339 - Some ranges of 
> [(3074457345618258602,-9223372036854775808], 
> (-9223372036854775808,-3074457345618258603], 
> (-3074457345618258603,3074457345618258602]] are already available. Skipping 
> streaming those ranges.
> INFO  [RMI TCP Connection(4)-10.200.178.183] 2016-01-26 10:59:20,094  
> StreamResultFuture.java:86 - [Stream #d9bbe900-c41b-11e5-8540-d1e65b596c03] 
> Executing streaming plan for Rebuild
> INFO  [StreamConnectionEstablisher:3] 2016-01-26 10:59:20,095  
> StreamSession.java:238 - [Stream #d9bbe900-c41b-11e5-8540-d1e65b596c03] 
> Starting streaming to /10.200.178.185
> INFO  [StreamConnectionEstablisher:4] 2016-01-26 10:59:20,096  
> StreamSession.java:238 - [Stream #d9bbe900-c41b-11e5-8540-d1e65b596c03] 
> Starting streaming to /10.200.178.193
> INFO  [StreamConnectionEstablisher:4] 2016-01-26 10:59:20,097  
> StreamCoordinator.java:213 - [Stream #d9bbe900-c41b-11e5-8540-d1e65b596c03, 
> ID#0] Beginning stream session with /10.200.178.193
> INFO  [StreamConnectionEstablisher:3] 2016-01-26 10:59:20,098  
> StreamCoordinator.java:213 - [Stream #d9bbe900-c41b-11e5-8540-d1e65b596c03, 
> ID#0] Beginning stream session with /10.200.178.185
> INFO  [STREAM-IN-/10.200.178.185] 2016-01-26 10:59:20,102  
> StreamResultFuture.java:182 - [Stream #d9bbe900-c41b-11e5-8540-d1e65b596c03] 
> Session with /10.200.178.185 is complete
> INFO  [STREAM-IN-/10.200.178.193] 2016-01-26 10:59:20,359  
> StreamResultFuture.java:182 - [Stream #d9bbe900-c41b-11e5-8540-d1e65b596c03] 
> Session with /10.200.178.193 is complete
> INFO  [STREAM-IN-/10.200.178.193] 2016-01-26 10:59:20,362  
> StreamResultFuture.java:214 - [Stream #d9bbe900-c41b-11e5-8540-d1e65b596c03] 
> All sessions completed
> ERROR [Reference-Reaper:1] 2016-01-26 11:00:39,410  Ref.java:197 - LEAK 
> DETECTED: a reference 
> (org.apache.cassandra.utils.concurrent.Ref$State@1c7d1dcf) to @2011417651 was 
> not released before the reference was garbage collected
> ERROR [Reference-Reaper:1] 2016-01-26 11:00:39,411  Ref.java:228 - Allocate 
> trace org.apache.cassandra.utils.concurrent.Ref$State@1c7d1dcf:
> Thread[SharedPool-Worker-6,5,main]
>   at java.lang.Thread.getStackTrace(Thread.java:1552)
>   at org.apache.cassandra.utils.concurrent.Ref$Debug.<init>(Ref.java:218)
>   at org.apache.cassandra.utils.concurrent.Ref$State.<init>(Ref.java:148)
>   at org.apache.cassandra.utils.concurrent.Ref.<init>(Ref.java:70)
>   at 
> org.apache.cassandra.utils.memory.BufferPool$Chunk.setAttachment(BufferPool.java:646)
>   at 
> org.apache.cassandra.utils.memory.BufferPool$Chunk.get(BufferPool.java:786)
>   at 
> 

[jira] [Commented] (CASSANDRA-8233) Additional file handling capabilities for COPY FROM

2016-02-02 Thread Stefania (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8233?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15129529#comment-15129529
 ] 

Stefania commented on CASSANDRA-8233:
-

Almost everything should be covered:

bq. the ability to skip file errors, write out 'bad' rows to file, skip blank 
lines, and set a max error count before a load is terminated.
All added by CASSANDRA-9303 except for skipping blank lines (unless the csv 
parser handles it for us)

bq. Allow for columns that have quotes, but strip off the quotes before load.
It should be handled by the csv parser but needs testing

bq. Set the end of record delimiter.
Missing

bq. Be able to ignore file header/other starting line(s).
Added by CASSANDRA-9303 but we may need to review how we define the number of 
rows to be skipped for multiple files.

bq. Have date and time format handling abilities (with date/time delimiters)
Added by CASSANDRA-9303

bq. Handle carriage returns in data
It should be handled by the csv parser but needs testing

bq. Skip column in file
Added by CASSANDRA-9303
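The two "should be handled by the csv parser but needs testing" items above (quote stripping and embedded carriage returns) can be smoke-tested against Python's csv module, which cqlsh's COPY relies on. This is an illustrative check with made-up data, not the actual COPY test suite:

```python
import csv
import io

# Quoted fields: the parser strips the quotes before load, and a quoted
# field may contain the delimiter, doubled quotes, and an embedded
# carriage return.
data = 'id,note\r\n1,"hello, ""world"""\r\n2,"line one\rline two"\r\n'
rows = list(csv.reader(io.StringIO(data, newline='')))

print(rows)
# [['id', 'note'], ['1', 'hello, "world"'], ['2', 'line one\rline two']]
```

Note the `newline=''`: the csv module expects the underlying stream to do no newline translation, otherwise carriage returns inside quoted fields can be mangled.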


> Additional file handling capabilities for COPY FROM
> ---
>
> Key: CASSANDRA-8233
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8233
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Tools
>Reporter: Robin Schumacher
>Assignee: Stefania
>Priority: Minor
> Fix For: 2.1.x
>
>
> To compete better with other RDBMS-styled loaders, COPY needs to include some 
> additional file handling capabilities: 
> - the ability to skip file errors, write out 'bad' rows to file, skip blank 
> lines, and set a max error count before a load is terminated. 
> - Allow for columns that have quotes, but strip off the quotes before load.
> - Set the end of record delimiter. 
> - Be able to ignore file header/other starting line(s). 
> - Have date and time format handling abilities (with date/time delimiters)  
> - Handle carriage returns in data
> - Skip column in file



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-11041) Make it clear what timestamp_resolution is used for with DTCS

2016-02-02 Thread Jeff Jirsa (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11041?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15129734#comment-15129734
 ] 

Jeff Jirsa commented on CASSANDRA-11041:


+1 



> Make it clear what timestamp_resolution is used for with DTCS
> -
>
> Key: CASSANDRA-11041
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11041
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Marcus Eriksson
>Assignee: Marcus Eriksson
>  Labels: docs-impacting, dtcs
> Fix For: 2.1.x, 2.2.x, 3.0.x, 3.x
>
>
> We have had a few cases lately where users misunderstand what 
> timestamp_resolution does; we should:
> * make the option not autocomplete in cqlsh
> * update documentation
> * log a warning



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-11106) Experiment with strategies for picking compaction candidates in LCS

2016-02-02 Thread sankalp kohli (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11106?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15129605#comment-15129605
 ] 

sankalp kohli commented on CASSANDRA-11106:
---

Similar to CASSANDRA-6216. 

> Experiment with strategies for picking compaction candidates in LCS
> ---
>
> Key: CASSANDRA-11106
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11106
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Marcus Eriksson
>  Labels: lcs
>
> Ideas taken here: http://rocksdb.org/blog/2921/compaction_pri/
> Current strategy in LCS is that we keep track of the token that was last 
> compacted and then we start a compaction with the sstable containing the next 
> token (kOldestSmallestSeqFirst in the blog post above)
> The rocksdb blog post above introduces a few ideas how this could be improved:
> * pick the 'coldest' sstable (sstable with the oldest max timestamp) - we 
> want to keep the hot data (recently updated) in the lower levels to avoid 
> write amplification
> * pick the sstable with the highest tombstone ratio, we want to get 
> tombstones to the top level as quickly as possible.
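For comparison, the current strategy and the two proposed ones can be sketched as a toy model. The attribute names below are hypothetical and do not correspond to Cassandra's actual SSTable API; this only illustrates the selection criteria:

```python
from dataclasses import dataclass

@dataclass
class SSTable:
    first_token: int       # smallest token covered by the sstable
    max_timestamp: int     # newest write timestamp it contains
    tombstone_ratio: float # fraction of cells that are tombstones

def next_token_first(tables, last_compacted_token):
    """Current LCS behaviour: pick the sstable holding the next token
    after the last compacted one, wrapping around at the end."""
    return min(tables,
               key=lambda t: (t.first_token <= last_compacted_token,
                              t.first_token))

def coldest_first(tables):
    """Proposed: pick the 'coldest' sstable (oldest max timestamp), keeping
    hot, recently updated data in the lower levels."""
    return min(tables, key=lambda t: t.max_timestamp)

def most_tombstones_first(tables):
    """Proposed: pick the sstable with the highest tombstone ratio, pushing
    tombstones to the top level as quickly as possible."""
    return max(tables, key=lambda t: t.tombstone_ratio)
```

With the same set of candidate sstables, the three strategies can pick three different victims, which is what makes them worth benchmarking against each other.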



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

