[jira] [Created] (CASSANDRA-13695) ReadStage threads have no timeout

2017-07-14 Thread Vladimir Yudovin (JIRA)
Vladimir Yudovin created CASSANDRA-13695:


 Summary: ReadStage threads have no timeout
 Key: CASSANDRA-13695
 URL: https://issues.apache.org/jira/browse/CASSANDRA-13695
 Project: Cassandra
  Issue Type: Bug
Reporter: Vladimir Yudovin


Following this discussion: [High CPU after read 
timeout|https://lists.apache.org/thread.html/e22a2a77634f9228bf1d5474cc77ea461262f2e125cd2fa21a17f7a2@%3Cdev.cassandra.apache.org%3E]

Currently ReadStage threads have no timeout and continue to run without 
limit after xxx_request_timeout_in_ms has expired. Thus a single bad request 
like SELECT ... ALLOW FILTERING can paralyze the whole cluster for hours or 
even longer.

I suggest that read requests should include a kind of *timeout* or 
*expired_at* parameter; the handling thread would check it and stop 
processing once the expiration time has passed.
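A minimal sketch of the idea (the names {{expired_at}} and {{process_read}} are hypothetical, not an existing Cassandra API): the coordinator stamps each read with a deadline, and the worker aborts once it passes.

```python
import time

def process_read(partitions, expired_at):
    """Scan partitions, but give up once the request deadline has passed."""
    results = []
    for p in partitions:
        if time.monotonic() > expired_at:
            # analogous to abandoning work after xxx_request_timeout_in_ms
            raise TimeoutError("read abandoned: request timeout expired")
        results.append(p)
    return results

# the request would carry expired_at = arrival_time + read_request_timeout
deadline = time.monotonic() + 0.5   # e.g. read_request_timeout_in_ms = 500
rows = process_read(range(3), deadline)
```

A request whose deadline has already passed fails fast instead of running for hours.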



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



cassandra git commit: use parameterized logging

2017-07-14 Thread dbrosius
Repository: cassandra
Updated Branches:
  refs/heads/trunk 71bfab64b -> c3373012d


use parameterized logging


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/c3373012
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/c3373012
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/c3373012

Branch: refs/heads/trunk
Commit: c3373012d4cf0f574bad03ce179b137f4f2aa29e
Parents: 71bfab6
Author: Dave Brosius 
Authored: Fri Jul 14 20:41:31 2017 -0400
Committer: Dave Brosius 
Committed: Fri Jul 14 20:41:31 2017 -0400

--
 src/java/org/apache/cassandra/repair/RepairSession.java | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/c3373012/src/java/org/apache/cassandra/repair/RepairSession.java
--
diff --git a/src/java/org/apache/cassandra/repair/RepairSession.java 
b/src/java/org/apache/cassandra/repair/RepairSession.java
index 98ed1a3..d00e1b2 100644
--- a/src/java/org/apache/cassandra/repair/RepairSession.java
+++ b/src/java/org/apache/cassandra/repair/RepairSession.java
@@ -156,7 +156,7 @@ public class RepairSession extends 
AbstractFuture implement
 {
 if (!FailureDetector.instance.isAlive(endpoint))
 {
-logger.info("Removing a dead node from Repair due to -force " + endpoint);
+logger.info("Removing a dead node from Repair due to -force {}", endpoint);
 removeCandidates.add(endpoint);
 }
 }
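The change swaps string concatenation for SLF4J-style parameterized logging, so the message is only formatted if the record is actually emitted. A rough Python analogue of the same idea, using the standard logging module's lazy %-style arguments:

```python
import io
import logging

buf = io.StringIO()
log = logging.getLogger("repair")
log.handlers = []                       # isolate this sketch from other config
handler = logging.StreamHandler(buf)
handler.setFormatter(logging.Formatter("%(message)s"))
log.addHandler(handler)
log.setLevel(logging.INFO)
log.propagate = False

endpoint = "10.0.0.1"
# Concatenation builds the string even when the level is disabled:
#   log.info("Removing a dead node from Repair due to -force " + endpoint)
# The parameterized form defers formatting, like SLF4J's "{}" in the diff:
log.info("Removing a dead node from Repair due to -force %s", endpoint)

message = buf.getvalue().strip()
```

The formatting cost is skipped entirely when the logger level filters the record out.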





cassandra git commit: remove dead assignment

2017-07-14 Thread dbrosius
Repository: cassandra
Updated Branches:
  refs/heads/trunk 834031cc0 -> 71bfab64b


remove dead assignment


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/71bfab64
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/71bfab64
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/71bfab64

Branch: refs/heads/trunk
Commit: 71bfab64ba4b7fbc923accb7a99a418e6043ea6b
Parents: 834031c
Author: Dave Brosius 
Authored: Fri Jul 14 20:37:56 2017 -0400
Committer: Dave Brosius 
Committed: Fri Jul 14 20:37:56 2017 -0400

--
 src/java/org/apache/cassandra/index/SecondaryIndexManager.java | 1 -
 1 file changed, 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/71bfab64/src/java/org/apache/cassandra/index/SecondaryIndexManager.java
--
diff --git a/src/java/org/apache/cassandra/index/SecondaryIndexManager.java 
b/src/java/org/apache/cassandra/index/SecondaryIndexManager.java
index c2ed134..e253f3b 100644
--- a/src/java/org/apache/cassandra/index/SecondaryIndexManager.java
+++ b/src/java/org/apache/cassandra/index/SecondaryIndexManager.java
@@ -202,7 +202,6 @@ public class SecondaryIndexManager implements 
IndexRegistry, INotificationConsum
 private synchronized Future<?> createIndex(IndexMetadata indexDef)
 {
 final Index index = createInstance(indexDef);
-String indexName = index.getIndexMetadata().name;
 index.register(this);
 
 // now mark as building prior to initializing





[jira] [Commented] (CASSANDRA-13691) Fix incorrect [2.1 <— 3.0] serialization of counter cells with pre-2.1 local shards

2017-07-14 Thread Aleksey Yeschenko (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13691?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16088314#comment-16088314
 ] 

Aleksey Yeschenko commented on CASSANDRA-13691:
---

Ended up using a special clock id for counter update contexts, so that we can 
check for that clock id instead of merely looking at the shard type, to 
unambiguously tell regular counter values from counter updates in 3.0.
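A toy model of the approach (the shard layout and sentinel clock value below are illustrative, not Cassandra's actual encoding): reserving a clock id for update contexts removes the ambiguity that the first-shard heuristic has with legacy values.

```python
UPDATE_CLOCK_ID = -9223372036854775808  # illustrative sentinel, not the real value

def is_counter_update_old(context):
    """2.1-compat heuristic: first shard is local => unapplied update."""
    kind, clock, count = context[0]
    return kind == "local"

def is_counter_update_new(context):
    """Fixed check: a reserved clock id marks update contexts unambiguously."""
    kind, clock, count = context[0]
    return kind == "local" and clock == UPDATE_CLOCK_ID

update = [("local", UPDATE_CLOCK_ID, 1)]        # unapplied increment
legacy = [("local", 5, 10), ("remote", 3, 2)]   # 2.0-era value, first shard local

# the old heuristic misclassifies the legacy value; the clock check does not
```

This is what the unit test mentioned above verifies: a regular context with an old local shard in first spot is no longer recognized as a counter update.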

Branches with the fix: 
[3.0|https://github.com/iamaleksey/cassandra/tree/13691-3.0], 
[3.11|https://github.com/iamaleksey/cassandra/tree/13691-3.11], 
[4.0|https://github.com/iamaleksey/cassandra/tree/13691-4.0].

The commits include a small unit test to confirm that regular counter contexts 
with old local shards in the first spot are no longer recognized as counter 
updates. Unfortunately, given our existing test framework, it is a lot more 
painful to create upgrade tests spanning the whole range from 2.0 to 3.0, due 
to differences in supported driver protocol versions.

Steps for manual reproduction and fix verification:

1. generate some 2.0 counter columns with local shards

{code}
ccm create test -n 3 -s -v 2.0.17

ccm node1 cqlsh
cqlsh> CREATE KEYSPACE test WITH replication = {'class': 'SimpleStrategy', 
'replication_factor': 3};
cqlsh> CREATE TABLE test.test (id int PRIMARY KEY, v1 counter, v2 counter, v3 
counter);

python
>>> from cassandra.cluster import Cluster
>>> cluster = Cluster(['127.0.0.1', '127.0.0.2', '127.0.0.3'])
>>> session = cluster.connect()
>>> query = "UPDATE test.test SET v1 = v1 + 1, v2 = v2 + 1, v3 = v3 + 1 where id = ?"
>>> prepared = session.prepare(query)
>>> for i in range(0, 1000):
... session.execute(prepared, [i])
...

ccm flush
{code}

2. upgrade cluster to 2.1

{code}
ccm stop
ccm setdir -v 2.1.17
ccm start

ccm node1 nodetool upgradesstables
ccm node2 nodetool upgradesstables
ccm node3 nodetool upgradesstables
{code}

3. upgrade node3 to 3.0

{code}
ccm node3 stop
ccm node3 setdir -v 3.0.14
ccm node3 start
{code}

4. with a 2.1 coordinator, try to read the table with CL.ALL

{code}
ccm node1 cqlsh
cqlsh> CONSISTENCY ALL;
cqlsh> SELECT COUNT(*) FROM test.test;
ServerError: 
{code}

5. upgrade node3 to patched 3.0

{code}
ccm node3 stop
ccm node3 setdir --install-dir=/Users/aleksey/Code/cassandra
ccm node3 start
{code}

6. with a 2.1 coordinator, try again to read the table with CL.ALL

{code}
ccm node1 cqlsh
cqlsh> CONSISTENCY ALL;
cqlsh> SELECT COUNT(*) FROM test.test;

 count
---
  1000

(1 rows)
{code}

I'll try to automate it as an upgrade test, too, but for now the manual steps 
and the unit test will have to do.

> Fix incorrect [2.1 <— 3.0] serialization of counter cells with pre-2.1 local 
> shards
> ---
>
> Key: CASSANDRA-13691
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13691
> Project: Cassandra
>  Issue Type: Bug
>  Components: Coordination
>Reporter: Aleksey Yeschenko
>Assignee: Aleksey Yeschenko
>  Labels: counters, upgrade
> Fix For: 3.0.x, 3.11.x
>
>
> We stopped generating local shards in C* 2.1, after CASSANDRA-6504 (Counters 
> 2.0). But it’s still possible to have counter cell values
> around, remaining from 2.0 times, on 2.1, 3.0, 3.11, and even trunk nodes, if 
> they’ve never been overwritten.
> In 2.1, we used two classes for two kinds of counter columns:
> {{CounterCell}} class to store counters - internally as collections of 
> {{CounterContext}} blobs, encoding collections of (host id, count, clock) 
> tuples
> {{CounterUpdateCell}} class to represent unapplied increments - essentially a 
> single long value; this class was never written to commit log, memtables, or 
> sstables, and was only used inside {{Mutation}} object graph - in memory, and 
> marshalled over network in cases when counter write coordinator and counter 
> write leader were different nodes
> 3.0 got rid of {{CounterCell}} and {{CounterUpdateCell}}, among other 
> {{Cell}} classes. In order to represent these unapplied increments - 
> equivalents of 2.1 {{CounterUpdateCell}} - in 3.0 we encode them as regular 
> counter columns, with a ‘special’ {{CounterContext}} value. I.e. a counter 
> context with a single local shard. We do that so that we can reuse local 
> shard reconcile logic (summing up) to seamlessly support counters with same 
> names collapsing to single increments in batches. See 
> {{UpdateParameters.addCounter()}} method comments 
> [here|https://github.com/apache/cassandra/blob/cassandra-3.0.14/src/java/org/apache/cassandra/cql3/UpdateParameters.java#L157-L171]
>  for details. It also assumes that nothing else can generate a counter with 
> local shards.
> It works fine in pure 3.0 clusters, and in mixed 2.1/3.0 clusters, assuming 
> that there are no counters with legacy 

[jira] [Updated] (CASSANDRA-13691) Fix incorrect [2.1 <— 3.0] serialization of counter cells with pre-2.1 local shards

2017-07-14 Thread Aleksey Yeschenko (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13691?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksey Yeschenko updated CASSANDRA-13691:
--
Status: Patch Available  (was: Open)

> Fix incorrect [2.1 <— 3.0] serialization of counter cells with pre-2.1 local 
> shards
> ---
>
> Key: CASSANDRA-13691
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13691
> Project: Cassandra
>  Issue Type: Bug
>  Components: Coordination
>Reporter: Aleksey Yeschenko
>Assignee: Aleksey Yeschenko
>  Labels: counters, upgrade
> Fix For: 3.0.x, 3.11.x
>
>
> We stopped generating local shards in C* 2.1, after CASSANDRA-6504 (Counters 
> 2.0). But it’s still possible to have counter cell values
> around, remaining from 2.0 times, on 2.1, 3.0, 3.11, and even trunk nodes, if 
> they’ve never been overwritten.
> In 2.1, we used two classes for two kinds of counter columns:
> {{CounterCell}} class to store counters - internally as collections of 
> {{CounterContext}} blobs, encoding collections of (host id, count, clock) 
> tuples
> {{CounterUpdateCell}} class to represent unapplied increments - essentially a 
> single long value; this class was never written to commit log, memtables, or 
> sstables, and was only used inside {{Mutation}} object graph - in memory, and 
> marshalled over network in cases when counter write coordinator and counter 
> write leader were different nodes
> 3.0 got rid of {{CounterCell}} and {{CounterUpdateCell}}, among other 
> {{Cell}} classes. In order to represent these unapplied increments - 
> equivalents of 2.1 {{CounterUpdateCell}} - in 3.0 we encode them as regular 
> counter columns, with a ‘special’ {{CounterContext}} value. I.e. a counter 
> context with a single local shard. We do that so that we can reuse local 
> shard reconcile logic (summing up) to seamlessly support counters with same 
> names collapsing to single increments in batches. See 
> {{UpdateParameters.addCounter()}} method comments 
> [here|https://github.com/apache/cassandra/blob/cassandra-3.0.14/src/java/org/apache/cassandra/cql3/UpdateParameters.java#L157-L171]
>  for details. It also assumes that nothing else can generate a counter with 
> local shards.
> It works fine in pure 3.0 clusters, and in mixed 2.1/3.0 clusters, assuming 
> that there are no counters with legacy local shards remaining from 2.0 era. 
> It breaks down badly if there are.
> {{LegacyLayout.serializeAsLegacyPartition()}} and consequently 
> {{LegacyCell.isCounterUpdate()}} - classes responsible for serializing and 
> deserialising in 2.1 format for compatibility - use the following logic to 
> tell if a cell of {{COUNTER}} kind is a regular final counter or an unapplied 
> increment:
> {code}
> private boolean isCounterUpdate()
> {
> // See UpdateParameters.addCounter() for more details on this
> return isCounter() && CounterContext.instance().isLocal(value);
> }
> {code}
> {{CounterContext.isLocal()}} method here looks at the first shard of the 
> collection of tuples and returns true if it’s a local one.
> This method would correctly identify a cell generated by 
> {{UpdateParameters.addCounter()}} as a counter update and serialize it 
> correctly as a 2.1 {{CounterUpdateCell}}. However, it would also incorrectly 
> flag any regular counter cell that just so happens to have a local shard as 
> the first tuple of the counter context as a counter update. If a 2.1 node as 
> a coordinator of a read requests fetches such a value from a 3.0 node, during 
> a rolling upgrade, instead of the expected {{CounterCell}} object it will 
> receive a {{CounterUpdateCell}}, breaking all the things. In the best case 
> scenario it will cause an assert in {{AbstractCell.reconcileCounter()}} to be 
> raised.
> To fix the problem we must find an unambiguous way, without false positives 
> or false negatives, to represent and identify unapplied counter updates on 
> 3.0 side. 






[jira] [Comment Edited] (CASSANDRA-13656) Change default start_native_transport configuration option

2017-07-14 Thread Jeff Jirsa (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13656?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16088257#comment-16088257
 ] 

Jeff Jirsa edited comment on CASSANDRA-13656 at 7/14/17 11:05 PM:
--

Not asked, but I don't see why it would need to be visible in both, I'd expect 
it to only be in {{cassandra.yaml}} by default. 

I do think it should be SETTABLE via jvm options, but we don't need the example 
there.


was (Author: jjirsa):
Not asked, but I don't see why it would need to be in both, I'd expect it to 
only be in {{cassandra.yaml}}

> Change default start_native_transport configuration option
> --
>
> Key: CASSANDRA-13656
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13656
> Project: Cassandra
>  Issue Type: Wish
>  Components: Configuration
>Reporter: Tomas Repik
>Assignee: Tomas Repik
>Priority: Trivial
> Fix For: 4.x
>
> Attachments: update_default_config.patch
>
>
> When you don't specify the start_native_transport option in the 
> cassandra.yaml config file, the default value is false. So far I have not 
> found any good reason for setting it this way, so I'm proposing to set it 
> to true by default.






[jira] [Commented] (CASSANDRA-13656) Change default start_native_transport configuration option

2017-07-14 Thread Jeff Jirsa (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13656?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16088257#comment-16088257
 ] 

Jeff Jirsa commented on CASSANDRA-13656:


Not asked, but I don't see why it would need to be in both, I'd expect it to 
only be in {{cassandra.yaml}}

> Change default start_native_transport configuration option
> --
>
> Key: CASSANDRA-13656
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13656
> Project: Cassandra
>  Issue Type: Wish
>  Components: Configuration
>Reporter: Tomas Repik
>Assignee: Tomas Repik
>Priority: Trivial
> Fix For: 4.x
>
> Attachments: update_default_config.patch
>
>
> When you don't specify the start_native_transport option in the 
> cassandra.yaml config file, the default value is false. So far I have not 
> found any good reason for setting it this way, so I'm proposing to set it 
> to true by default.






[jira] [Updated] (CASSANDRA-13482) NPE on non-existing row read when row cache is enabled

2017-07-14 Thread Alex Petrov (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13482?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alex Petrov updated CASSANDRA-13482:

Fix Version/s: 3.11.x
   3.0.x
   4.0

> NPE on non-existing row read when row cache is enabled
> --
>
> Key: CASSANDRA-13482
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13482
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Alex Petrov
>Assignee: Alex Petrov
> Fix For: 4.0, 3.0.x, 3.11.x
>
>
> The problem is reproducible on 3.0 with:
> {code}
> -# row_cache_class_name: org.apache.cassandra.cache.OHCProvider
> +row_cache_class_name: org.apache.cassandra.cache.OHCProvider
> -row_cache_size_in_mb: 0
> +row_cache_size_in_mb: 100
> {code}
> Table setup:
> {code}
> CREATE TABLE cache_tables (pk int, v1 int, v2 int, v3 int, primary key (pk, 
> v1)) WITH CACHING = { 'keys': 'ALL', 'rows_per_partition': '1' } ;
> {code}
> No data is required, only a head query (or any pk/ck query but with full 
> partitions cached). 
> {code}
> select * from cross_page_queries where pk = 1 ;
> {code}
> {code}
> java.lang.AssertionError: null
> at 
> org.apache.cassandra.db.rows.UnfilteredRowIterators.concat(UnfilteredRowIterators.java:193)
>  ~[main/:na]
> at 
> org.apache.cassandra.db.SinglePartitionReadCommand.getThroughCache(SinglePartitionReadCommand.java:461)
>  ~[main/:na]
> at 
> org.apache.cassandra.db.SinglePartitionReadCommand.queryStorage(SinglePartitionReadCommand.java:358)
>  ~[main/:na]
> at 
> org.apache.cassandra.db.ReadCommand.executeLocally(ReadCommand.java:395) 
> ~[main/:na]
> at 
> org.apache.cassandra.service.StorageProxy$LocalReadRunnable.runMayThrow(StorageProxy.java:1794)
>  ~[main/:na]
> at 
> org.apache.cassandra.service.StorageProxy$DroppableRunnable.run(StorageProxy.java:2472)
>  ~[main/:na]
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
> ~[na:1.8.0_121]
> at 
> org.apache.cassandra.concurrent.AbstractLocalAwareExecutorService$FutureTask.run(AbstractLocalAwareExecutorService.java:164)
>  ~[main/:na]
> at 
> org.apache.cassandra.concurrent.AbstractLocalAwareExecutorService$LocalSessionFutureTask.run(AbstractLocalAwareExecutorService.java:136)
>  [main/:na]
> at org.apache.cassandra.concurrent.SEPWorker.run(SEPWorker.java:105) 
> [main/:na]
> at java.lang.Thread.run(Thread.java:745) [na:1.8.0_121]
> {code}






[jira] [Updated] (CASSANDRA-11223) Queries with LIMIT filtering on clustering columns can return less rows than expected

2017-07-14 Thread Benjamin Lerer (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11223?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Benjamin Lerer updated CASSANDRA-11223:
---
   Resolution: Fixed
Fix Version/s: 4.0
   3.11.1
   3.0.15
   2.2.11
   Status: Resolved  (was: Ready to Commit)

Committed into 2.2 at b08843de67b3c63fa9c0efe10bb9eda07c007f6c and merged into 
3.0, 3.11 and trunk.

> Queries with LIMIT filtering on clustering columns can return less rows than 
> expected
> -
>
> Key: CASSANDRA-11223
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11223
> Project: Cassandra
>  Issue Type: Bug
>  Components: Local Write-Read Paths
>Reporter: Benjamin Lerer
>Assignee: Benjamin Lerer
> Fix For: 2.2.11, 3.0.15, 3.11.1, 4.0
>
>
> A query like {{SELECT * FROM %s WHERE b = 1 LIMIT 2 ALLOW FILTERING}} can 
> return fewer rows than expected if the table has some static columns and 
> some of the partitions have no rows matching b = 1.
> The problem can be reproduced with the following unit test:
> {code}
> public void testFilteringOnClusteringColumnsWithLimitAndStaticColumns() 
> throws Throwable
> {
> createTable("CREATE TABLE %s (a int, b int, s int static, c int, 
> primary key (a, b))");
> for (int i = 0; i < 3; i++)
> {
> execute("INSERT INTO %s (a, s) VALUES (?, ?)", i, i);
> for (int j = 0; j < 3; j++)
> if (!(i == 0 && j == 1))
> execute("INSERT INTO %s (a, b, c) VALUES (?, ?, ?)", 
> i, j, i + j);
> }
> assertRows(execute("SELECT * FROM %s"),
>    row(1, 0, 1, 1),
>    row(1, 1, 1, 2),
>    row(1, 2, 1, 3),
>    row(0, 0, 0, 0),
>    row(0, 2, 0, 2),
>    row(2, 0, 2, 2),
>    row(2, 1, 2, 3),
>    row(2, 2, 2, 4));
> assertRows(execute("SELECT * FROM %s WHERE b = 1 ALLOW FILTERING"),
>    row(1, 1, 1, 2),
>    row(2, 1, 2, 3));
> assertRows(execute("SELECT * FROM %s WHERE b = 1 LIMIT 2 ALLOW 
> FILTERING"),
>    row(1, 1, 1, 2),
>    row(2, 1, 2, 3)); // < FAIL It returns only one 
> row because the static row of partition 0 is counted and filtered out in 
> SELECT statement
> }
> {code}
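The failure quoted above comes from counting the static row of partition 0 toward the LIMIT before filtering it out. A toy simulation of the two counting strategies (hypothetical helper, not Cassandra code):

```python
def filtered_limit(partitions, match, limit, count_static_before_filter):
    """partitions: ordered (static_row, rows) pairs; return matching rows up to limit."""
    out, counted = [], 0
    for static_row, rows in partitions:
        matching = [r for r in rows if match(r)]
        if count_static_before_filter and not matching and static_row is not None:
            # bug: the lone static row consumes the limit, then is filtered out
            counted += 1
        for r in matching:
            if counted == limit:
                return out
            out.append(r)
            counted += 1
    return out

# partition order from the test output above: 1, 0, 2; rows are (a, b) pairs
parts = [("s1", [(1, 1)]), ("s0", []), ("s2", [(2, 1)])]
match = lambda row: row[1] == 1

buggy = filtered_limit(parts, match, 2, True)    # one row instead of two
fixed = filtered_limit(parts, match, 2, False)   # both matching rows
```

The buggy path exhausts the limit on partition 0's static row and never reaches partition 2.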






[jira] [Updated] (CASSANDRA-11223) Queries with LIMIT filtering on clustering columns can return less rows than expected

2017-07-14 Thread Benjamin Lerer (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11223?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Benjamin Lerer updated CASSANDRA-11223:
---
Status: Ready to Commit  (was: Patch Available)

> Queries with LIMIT filtering on clustering columns can return less rows than 
> expected
> -
>
> Key: CASSANDRA-11223
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11223
> Project: Cassandra
>  Issue Type: Bug
>  Components: Local Write-Read Paths
>Reporter: Benjamin Lerer
>Assignee: Benjamin Lerer
>
> A query like {{SELECT * FROM %s WHERE b = 1 LIMIT 2 ALLOW FILTERING}} can 
> return fewer rows than expected if the table has some static columns and 
> some of the partitions have no rows matching b = 1.
> The problem can be reproduced with the following unit test:
> {code}
> public void testFilteringOnClusteringColumnsWithLimitAndStaticColumns() 
> throws Throwable
> {
> createTable("CREATE TABLE %s (a int, b int, s int static, c int, 
> primary key (a, b))");
> for (int i = 0; i < 3; i++)
> {
> execute("INSERT INTO %s (a, s) VALUES (?, ?)", i, i);
> for (int j = 0; j < 3; j++)
> if (!(i == 0 && j == 1))
> execute("INSERT INTO %s (a, b, c) VALUES (?, ?, ?)", 
> i, j, i + j);
> }
> assertRows(execute("SELECT * FROM %s"),
>    row(1, 0, 1, 1),
>    row(1, 1, 1, 2),
>    row(1, 2, 1, 3),
>    row(0, 0, 0, 0),
>    row(0, 2, 0, 2),
>    row(2, 0, 2, 2),
>    row(2, 1, 2, 3),
>    row(2, 2, 2, 4));
> assertRows(execute("SELECT * FROM %s WHERE b = 1 ALLOW FILTERING"),
>    row(1, 1, 1, 2),
>    row(2, 1, 2, 3));
> assertRows(execute("SELECT * FROM %s WHERE b = 1 LIMIT 2 ALLOW 
> FILTERING"),
>    row(1, 1, 1, 2),
>    row(2, 1, 2, 3)); // < FAIL It returns only one 
> row because the static row of partition 0 is counted and filtered out in 
> SELECT statement
> }
> {code}






[jira] [Resolved] (CASSANDRA-13679) Add option to customize badness_threshold in dynamic endpoint snitch

2017-07-14 Thread Simon Zhou (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13679?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Simon Zhou resolved CASSANDRA-13679.

Resolution: Not A Problem

Just realized there is a cassandra.yaml option so this ticket is not needed.

> Add option to customize badness_threshold in dynamic endpoint snitch
> 
>
> Key: CASSANDRA-13679
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13679
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Simon Zhou
>Assignee: Simon Zhou
> Attachments: Screen Shot 2017-07-07 at 5.01.48 PM.png
>
>
> I'm working on tuning dynamic endpoint snitch and looks like the default 
> value (0.1) for Config.dynamic_snitch_badness_threshold is too sensitive and 
> causes traffic imbalance among nodes, especially with my patch for 
> CASSANDRA-13577. So we should:
> 1. Revisit the default value.
> 2. Add an option to allow customizing badness_threshold during bootstrap.
> This ticket is to track #2. I attached a screenshot to show that, after 
> increasing badness_threshold from 0.1 to 1.0 by using patch from 
> CASSANDRA-12179, the traffic imbalance is gone.






[jira] [Commented] (CASSANDRA-13656) Change default start_native_transport configuration option

2017-07-14 Thread Stefan Podkowinski (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13656?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16087697#comment-16087697
 ] 

Stefan Podkowinski commented on CASSANDRA-13656:


Do you, [~mshuler], [~urandom] or anyone else see any reason to have the 
{{start_native_transport}} flag in both cassandra.yaml and jvm.options? May I 
suggest getting rid of the jvm.options setting while we're at it?
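For reference, the two places the switch can be expressed (the system-property name below is an assumption; verify against the shipped jvm.options):

```yaml
# cassandra.yaml
start_native_transport: true

# jvm.options (equivalent override via a system property, assumed name)
# -Dcassandra.start_native_transport=true
```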

> Change default start_native_transport configuration option
> --
>
> Key: CASSANDRA-13656
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13656
> Project: Cassandra
>  Issue Type: Wish
>  Components: Configuration
>Reporter: Tomas Repik
>Assignee: Tomas Repik
>Priority: Trivial
> Fix For: 4.x
>
> Attachments: update_default_config.patch
>
>
> When you don't specify the start_native_transport option in the 
> cassandra.yaml config file, the default value is false. So far I have not 
> found any good reason for setting it this way, so I'm proposing to set it 
> to true by default.






[jira] [Updated] (CASSANDRA-13643) converting expired ttl cells to tombstones causing unnecessary digest mismatches

2017-07-14 Thread Blake Eggleston (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13643?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Blake Eggleston updated CASSANDRA-13643:

Resolution: Fixed
  Reviewer: Sylvain Lebresne
Status: Resolved  (was: Ready to Commit)

Committed as {{a033f51651e1a990adca795f92d683999c474151}}

> converting expired ttl cells to tombstones causing unnecessary digest 
> mismatches
> 
>
> Key: CASSANDRA-13643
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13643
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Blake Eggleston
>Assignee: Blake Eggleston
>Priority: Minor
>
> In 
> [{{AbstractCell#purge}}|https://github.com/apache/cassandra/blob/26e025804c6777a0d124dbc257747cba85b18f37/src/java/org/apache/cassandra/db/rows/AbstractCell.java#L77]
>   , we convert expired ttl'd cells to tombstones, and set the local 
> deletion time to the cell's expiration time, less the ttl time. Depending on 
> the timing of the purge, this can cause purge to generate tombstones that are 
> otherwise purgeable. If compaction for a row with ttls isn't at the same 
> state between replicas, this will then cause digest mismatches between 
> logically identical rows, leading to unnecessary repair streaming and read 
> repairs.
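The deletion-time arithmetic described above, in a toy form (simplified sketch, not the actual {{AbstractCell#purge}} code):

```python
def purge_expired_cell(ttl, local_expiration_time):
    """An expired TTL cell becomes a tombstone whose local deletion time is
    the expiration time less the TTL, i.e. the original write time."""
    return ("tombstone", local_expiration_time - ttl)

write_time = 1000              # seconds, illustrative values
ttl = 10
local_expiration = write_time + ttl
gc_grace = 100
now = 2000

kind, deletion_time = purge_expired_cell(ttl, local_expiration)
# the synthesized tombstone is already past gc_grace, i.e. purgeable;
# whether it still exists depends on when each replica happened to compact,
# so logically identical rows can digest differently
purgeable = deletion_time < now - gc_grace
```

A replica that has already compacted drops the tombstone while one that has not still digests it, producing the mismatch.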






[jira] [Updated] (CASSANDRA-13643) converting expired ttl cells to tombstones causing unnecessary digest mismatches

2017-07-14 Thread Blake Eggleston (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13643?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Blake Eggleston updated CASSANDRA-13643:

Status: Ready to Commit  (was: Patch Available)

> converting expired ttl cells to tombstones causing unnecessary digest 
> mismatches
> 
>
> Key: CASSANDRA-13643
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13643
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Blake Eggleston
>Assignee: Blake Eggleston
>Priority: Minor
>
> In 
> [{{AbstractCell#purge}}|https://github.com/apache/cassandra/blob/26e025804c6777a0d124dbc257747cba85b18f37/src/java/org/apache/cassandra/db/rows/AbstractCell.java#L77]
>   , we convert expired ttl'd cells to tombstones, and set the local 
> deletion time to the cell's expiration time, less the ttl time. Depending on 
> the timing of the purge, this can cause purge to generate tombstones that are 
> otherwise purgeable. If compaction for a row with ttls isn't at the same 
> state between replicas, this will then cause digest mismatches between 
> logically identical rows, leading to unnecessary repair streaming and read 
> repairs.






[9/9] cassandra git commit: Merge branch 'cassandra-3.11' into trunk

2017-07-14 Thread bdeggleston
Merge branch 'cassandra-3.11' into trunk


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/834031cc
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/834031cc
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/834031cc

Branch: refs/heads/trunk
Commit: 834031cc01d555714c15341ddbbd25243118cf96
Parents: 965c774 6d0e95a
Author: Blake Eggleston 
Authored: Fri Jul 14 10:52:30 2017 -0700
Committer: Blake Eggleston 
Committed: Fri Jul 14 10:53:16 2017 -0700

--
 CHANGES.txt |  1 +
 .../apache/cassandra/db/rows/AbstractCell.java  |  2 +-
 test/unit/org/apache/cassandra/db/CellTest.java | 78 +++-
 3 files changed, 79 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/834031cc/CHANGES.txt
--
diff --cc CHANGES.txt
index 44b8ce8,8abf1f4..087bacc
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -99,6 -2,8 +99,7 @@@
   * Duplicate the buffer before passing it to analyser in SASI operation 
(CASSANDRA-13512)
   * Properly evict pstmts from prepared statements cache (CASSANDRA-13641)
  Merged from 3.0:
 -3.0.15
+  * Purge tombstones created by expired cells (CASSANDRA-13643)
   * Make concat work with iterators that have different subsets of columns 
(CASSANDRA-13482)
   * Set test.runners based on cores and memory size (CASSANDRA-13078)
   * Allow different NUMACTL_ARGS to be passed in (CASSANDRA-13557)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/834031cc/src/java/org/apache/cassandra/db/rows/AbstractCell.java
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/834031cc/test/unit/org/apache/cassandra/db/CellTest.java
--
diff --cc test/unit/org/apache/cassandra/db/CellTest.java
index febfa3c,d69617e..906da8a
--- a/test/unit/org/apache/cassandra/db/CellTest.java
+++ b/test/unit/org/apache/cassandra/db/CellTest.java
@@@ -325,15 -395,20 +396,20 @@@ public class CellTes
  return BufferCell.live(cdef, timestamp, ByteBufferUtil.bytes(value));
  }
  
 -private Cell expiring(CFMetaData cfm, String columnName, String value, long timestamp, int localExpirationTime)
 +private Cell expiring(TableMetadata cfm, String columnName, String value, long timestamp, int localExpirationTime)
  {
+ return expiring(cfm, columnName, value, timestamp, 1, localExpirationTime);
+ }
+ 
 -private Cell expiring(CFMetaData cfm, String columnName, String value, long timestamp, int ttl, int localExpirationTime)
++private Cell expiring(TableMetadata cfm, String columnName, String value, long timestamp, int ttl, int localExpirationTime)
+ {
 -ColumnDefinition cdef = cfm.getColumnDefinition(ByteBufferUtil.bytes(columnName));
 +ColumnMetadata cdef = cfm.getColumn(ByteBufferUtil.bytes(columnName));
- return new BufferCell(cdef, timestamp, 1, localExpirationTime, ByteBufferUtil.bytes(value), null);
+ return new BufferCell(cdef, timestamp, ttl, localExpirationTime, ByteBufferUtil.bytes(value), null);
  }
  
 -private Cell deleted(CFMetaData cfm, String columnName, int localDeletionTime, long timestamp)
 +private Cell deleted(TableMetadata cfm, String columnName, int localDeletionTime, long timestamp)
  {
 -ColumnDefinition cdef = cfm.getColumnDefinition(ByteBufferUtil.bytes(columnName));
 +ColumnMetadata cdef = cfm.getColumn(ByteBufferUtil.bytes(columnName));
  return BufferCell.tombstone(cdef, timestamp, localDeletionTime);
  }
  }


-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[4/9] cassandra git commit: Merge branch cassandra-2.2 into cassandra-3.0

2017-07-14 Thread bdeggleston
http://git-wip-us.apache.org/repos/asf/cassandra/blob/88d2ac4f/src/java/org/apache/cassandra/db/SinglePartitionReadCommand.java
--
diff --cc src/java/org/apache/cassandra/db/SinglePartitionReadCommand.java
index b4211bb,000..319eeb4
mode 100644,00..100644
--- a/src/java/org/apache/cassandra/db/SinglePartitionReadCommand.java
+++ b/src/java/org/apache/cassandra/db/SinglePartitionReadCommand.java
@@@ -1,1089 -1,0 +1,1106 @@@
 +/*
 + * Licensed to the Apache Software Foundation (ASF) under one
 + * or more contributor license agreements.  See the NOTICE file
 + * distributed with this work for additional information
 + * regarding copyright ownership.  The ASF licenses this file
 + * to you under the Apache License, Version 2.0 (the
 + * "License"); you may not use this file except in compliance
 + * with the License.  You may obtain a copy of the License at
 + *
 + * http://www.apache.org/licenses/LICENSE-2.0
 + *
 + * Unless required by applicable law or agreed to in writing, software
 + * distributed under the License is distributed on an "AS IS" BASIS,
 + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 + * See the License for the specific language governing permissions and
 + * limitations under the License.
 + */
 +package org.apache.cassandra.db;
 +
 +import java.io.IOException;
 +import java.nio.ByteBuffer;
 +import java.util.*;
 +
 +import com.google.common.collect.Iterables;
 +import com.google.common.collect.Sets;
 +
 +import org.apache.cassandra.cache.IRowCacheEntry;
 +import org.apache.cassandra.cache.RowCacheKey;
 +import org.apache.cassandra.cache.RowCacheSentinel;
 +import org.apache.cassandra.concurrent.Stage;
 +import org.apache.cassandra.concurrent.StageManager;
 +import org.apache.cassandra.config.CFMetaData;
 +import org.apache.cassandra.config.ColumnDefinition;
 +import org.apache.cassandra.config.DatabaseDescriptor;
 +import org.apache.cassandra.db.lifecycle.*;
 +import org.apache.cassandra.db.filter.*;
 +import org.apache.cassandra.db.partitions.*;
 +import org.apache.cassandra.db.rows.*;
 +import org.apache.cassandra.exceptions.RequestExecutionException;
 +import org.apache.cassandra.io.sstable.format.SSTableReader;
 +import org.apache.cassandra.io.sstable.format.SSTableReadsListener;
 +import org.apache.cassandra.io.util.DataInputPlus;
 +import org.apache.cassandra.io.util.DataOutputPlus;
 +import org.apache.cassandra.metrics.TableMetrics;
 +import org.apache.cassandra.net.MessageOut;
 +import org.apache.cassandra.net.MessagingService;
 +import org.apache.cassandra.schema.IndexMetadata;
 +import org.apache.cassandra.service.CacheService;
 +import org.apache.cassandra.service.ClientState;
 +import org.apache.cassandra.service.StorageProxy;
 +import org.apache.cassandra.service.pager.*;
 +import org.apache.cassandra.thrift.ThriftResultsMerger;
 +import org.apache.cassandra.tracing.Tracing;
 +import org.apache.cassandra.utils.FBUtilities;
 +import org.apache.cassandra.utils.SearchIterator;
 +import org.apache.cassandra.utils.btree.BTreeSet;
 +import org.apache.cassandra.utils.concurrent.OpOrder;
 +import org.apache.cassandra.utils.memory.HeapAllocator;
 +
 +
 +/**
 + * A read command that selects a (part of a) single partition.
 + */
 +public class SinglePartitionReadCommand extends ReadCommand
 +{
 +protected static final SelectionDeserializer selectionDeserializer = new Deserializer();
 +
 +private final DecoratedKey partitionKey;
 +private final ClusteringIndexFilter clusteringIndexFilter;
 +
 +private int oldestUnrepairedTombstone = Integer.MAX_VALUE;
 +
 +public SinglePartitionReadCommand(boolean isDigest,
 +  int digestVersion,
 +  boolean isForThrift,
 +  CFMetaData metadata,
 +  int nowInSec,
 +  ColumnFilter columnFilter,
 +  RowFilter rowFilter,
 +  DataLimits limits,
 +  DecoratedKey partitionKey,
 +  ClusteringIndexFilter clusteringIndexFilter)
 +{
 +super(Kind.SINGLE_PARTITION, isDigest, digestVersion, isForThrift, metadata, nowInSec, columnFilter, rowFilter, limits);
 +assert partitionKey.getPartitioner() == metadata.partitioner;
 +this.partitionKey = partitionKey;
 +this.clusteringIndexFilter = clusteringIndexFilter;
 +}
 +
 +/**
 + * Creates a new read command on a single partition.
 + *
 + * @param metadata the table to query.
 + * @param nowInSec the time in seconds to use are "now" for this query.
 + * @param columnFilter the column filter to use for the query.
 + * @param rowFilter the row filter to use for the query.
 + * @param limits the limits to 

[5/9] cassandra git commit: Merge branch cassandra-2.2 into cassandra-3.0

2017-07-14 Thread bdeggleston
Merge branch cassandra-2.2 into cassandra-3.0


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/88d2ac4f
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/88d2ac4f
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/88d2ac4f

Branch: refs/heads/trunk
Commit: 88d2ac4f2fadba44a9b72286ef924441014a97ba
Parents: 7de853b b08843d
Author: Benjamin Lerer 
Authored: Fri Jul 14 17:14:38 2017 +0200
Committer: Benjamin Lerer 
Committed: Fri Jul 14 17:26:34 2017 +0200

--
 CHANGES.txt |   7 +-
 .../apache/cassandra/db/ColumnFamilyStore.java  |   3 +-
 src/java/org/apache/cassandra/db/DataRange.java |   5 +
 .../cassandra/db/PartitionRangeReadCommand.java |   7 +-
 .../org/apache/cassandra/db/ReadCommand.java|   2 +-
 src/java/org/apache/cassandra/db/ReadQuery.java |  12 +
 .../db/SinglePartitionReadCommand.java  |  21 +-
 .../apache/cassandra/db/filter/DataLimits.java  |  63 +++--
 .../apache/cassandra/db/filter/RowFilter.java   |  15 ++
 .../apache/cassandra/service/CacheService.java  |   2 +-
 .../apache/cassandra/service/DataResolver.java  |   4 +-
 .../apache/cassandra/service/StorageProxy.java  |   8 +-
 .../service/pager/AbstractQueryPager.java   |   2 +-
 .../service/pager/MultiPartitionPager.java  |   9 +-
 .../cassandra/service/pager/QueryPagers.java|   2 +-
 .../org/apache/cassandra/cql3/CQLTester.java|   8 +-
 .../validation/operations/SelectLimitTest.java  | 256 ++-
 .../db/rows/UnfilteredRowIteratorsTest.java |  10 +-
 18 files changed, 382 insertions(+), 54 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/88d2ac4f/CHANGES.txt
--
diff --cc CHANGES.txt
index fffda7f,bda510f..4a823c9
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -1,65 -1,18 +1,66 @@@
 -2.2.11
 +3.0.15
 + * Make concat work with iterators that have different subsets of columns (CASSANDRA-13482)
 + * Set test.runners based on cores and memory size (CASSANDRA-13078)
 + * Allow different NUMACTL_ARGS to be passed in (CASSANDRA-13557)
 + * Allow native function calls in CQLSSTableWriter (CASSANDRA-12606)
 + * Fix secondary index queries on COMPACT tables (CASSANDRA-13627)
 + * Nodetool listsnapshots output is missing a newline, if there are no snapshots (CASSANDRA-13568)
 + Merged from 2.2:
-   * Fix potential NPE when resume bootstrap fails (CASSANDRA-13272)
-   * Fix toJSONString for the UDT, tuple and collection types (CASSANDRA-13592)
-   * Fix nested Tuples/UDTs validation (CASSANDRA-13646)
+  * Fix queries with LIMIT and filtering on clustering columns (CASSANDRA-11223)
+  * Fix potential NPE when resume bootstrap fails (CASSANDRA-13272)
+  * Fix toJSONString for the UDT, tuple and collection types (CASSANDRA-13592)
+  * Fix nested Tuples/UDTs validation (CASSANDRA-13646)
 - * Remove unused max_value_size_in_mb config setting from yaml (CASSANDRA-13625)
 
 -
 -2.2.10
 +3.0.14
 + * Ensure int overflow doesn't occur when calculating large partition warning size (CASSANDRA-13172)
 + * Ensure consistent view of partition columns between coordinator and replica in ColumnFilter (CASSANDRA-13004)
 + * Failed unregistering mbean during drop keyspace (CASSANDRA-13346)
 + * nodetool scrub/cleanup/upgradesstables exit code is wrong (CASSANDRA-13542)
 + * Fix the reported number of sstable data files accessed per read (CASSANDRA-13120)
 + * Fix schema digest mismatch during rolling upgrades from versions before 3.0.12 (CASSANDRA-13559)
 + * Upgrade JNA version to 4.4.0 (CASSANDRA-13072)
 + * Interned ColumnIdentifiers should use minimal ByteBuffers (CASSANDRA-13533)
 + * ReverseIndexedReader may drop rows during 2.1 to 3.0 upgrade (CASSANDRA-13525)
 + * Fix repair process violating start/end token limits for small ranges (CASSANDRA-13052)
 + * Add storage port options to sstableloader (CASSANDRA-13518)
 + * Properly handle quoted index names in cqlsh DESCRIBE output (CASSANDRA-12847)
 + * Avoid reading static row twice from old format sstables (CASSANDRA-13236)
 + * Fix NPE in StorageService.excise() (CASSANDRA-13163)
 + * Expire OutboundTcpConnection messages by a single Thread (CASSANDRA-13265)
 + * Fail repair if insufficient responses received (CASSANDRA-13397)
 + * Fix SSTableLoader fail when the loaded table contains dropped columns (CASSANDRA-13276)
 + * Avoid name clashes in CassandraIndexTest (CASSANDRA-13427)
 + * Handling partially written hint files (CASSANDRA-12728)
 + * Interrupt replaying hints on decommission (CASSANDRA-13308)
 + * Fix schema version calculation for rolling upgrades (CASSANDRA-13441)
 +Merged from 2.2:
  * Nodes started with join_ring=False should be able to serve requests 

[8/9] cassandra git commit: Merge branch 'cassandra-3.0' into cassandra-3.11

2017-07-14 Thread bdeggleston
Merge branch 'cassandra-3.0' into cassandra-3.11


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/6d0e95af
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/6d0e95af
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/6d0e95af

Branch: refs/heads/trunk
Commit: 6d0e95af7d68547bf07bf8961d5e8c1594acbf56
Parents: 7df942b a033f51
Author: Blake Eggleston 
Authored: Fri Jul 14 10:37:01 2017 -0700
Committer: Blake Eggleston 
Committed: Fri Jul 14 10:46:27 2017 -0700

--
 CHANGES.txt |  1 +
 .../apache/cassandra/db/rows/AbstractCell.java  |  2 +-
 test/unit/org/apache/cassandra/db/CellTest.java | 78 +++-
 3 files changed, 79 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/6d0e95af/CHANGES.txt
--
diff --cc CHANGES.txt
index 87b058e,9962f4b..8abf1f4
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -1,8 -1,5 +1,9 @@@
 +3.11.1
 + * Duplicate the buffer before passing it to analyser in SASI operation (CASSANDRA-13512)
 + * Properly evict pstmts from prepared statements cache (CASSANDRA-13641)
 +Merged from 3.0:
  3.0.15
+  * Purge tombstones created by expired cells (CASSANDRA-13643)
   * Make concat work with iterators that have different subsets of columns (CASSANDRA-13482)
   * Set test.runners based on cores and memory size (CASSANDRA-13078)
   * Allow different NUMACTL_ARGS to be passed in (CASSANDRA-13557)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/6d0e95af/src/java/org/apache/cassandra/db/rows/AbstractCell.java
--
diff --cc src/java/org/apache/cassandra/db/rows/AbstractCell.java
index ca83783,7e93c2e..54bd9e8
--- a/src/java/org/apache/cassandra/db/rows/AbstractCell.java
+++ b/src/java/org/apache/cassandra/db/rows/AbstractCell.java
@@@ -44,81 -40,6 +44,81 @@@ public abstract class AbstractCell exte
  super(column);
  }
 
 +public boolean isCounterCell()
 +{
 +return !isTombstone() && column.cellValueType().isCounter();
 +}
 +
 +public boolean isLive(int nowInSec)
 +{
 +return localDeletionTime() == NO_DELETION_TIME || (ttl() != NO_TTL && nowInSec < localDeletionTime());
 +}
 +
 +public boolean isTombstone()
 +{
 +return localDeletionTime() != NO_DELETION_TIME && ttl() == NO_TTL;
 +}
 +
 +public boolean isExpiring()
 +{
 +return ttl() != NO_TTL;
 +}
 +
 +public Cell markCounterLocalToBeCleared()
 +{
 +if (!isCounterCell())
 +return this;
 +
 +ByteBuffer value = value();
 +ByteBuffer marked = CounterContext.instance().markLocalToBeCleared(value);
 +return marked == value ? this : new BufferCell(column, timestamp(), ttl(), localDeletionTime(), marked, path());
 +}
 +
 +public Cell purge(DeletionPurger purger, int nowInSec)
 +{
 +if (!isLive(nowInSec))
 +{
 +if (purger.shouldPurge(timestamp(), localDeletionTime()))
 +return null;
 +
 +// We slightly hijack purging to convert expired but not purgeable columns to tombstones. The reason we do that is
 +// that once a column has expired it is equivalent to a tombstone but actually using a tombstone is more compact since
 +// we don't keep the column value. The reason we do it here is that 1) it's somewhat related to dealing with tombstones
 +// so hopefully not too surprising and 2) we want to this and purging at the same places, so it's simpler/more efficient
 +// to do both here.
 +if (isExpiring())
 +{
 +// Note that as long as the expiring column and the tombstone put together live longer than GC grace seconds,
 +// we'll fulfil our responsibility to repair. See discussion at
 +// http://cassandra-user-incubator-apache-org.3065146.n2.nabble.com/repair-compaction-and-tombstone-rows-td7583481.html
- return BufferCell.tombstone(column, timestamp(), localDeletionTime() - ttl(), path());
++return BufferCell.tombstone(column, timestamp(), localDeletionTime() - ttl(), path()).purge(purger, nowInSec);
 +}
 +}
 +return this;
 +}
 +
 +public Cell copy(AbstractAllocator allocator)
 +{
 +CellPath path = path();
 +return new BufferCell(column, timestamp(), ttl(), localDeletionTime(), allocator.clone(value()), path == null ? null : path.copy(allocator));
 +}
 +
 +// note: while the cell returned may be different, the value is the same, so if 
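The `purge` method in the diff above converts an expired cell into a tombstone whose local deletion time is the moment the cell was originally written, i.e. `localExpirationTime - ttl`, and the tombstone becomes purgeable once that time falls before `gcBefore`. A minimal standalone sketch of that arithmetic, using illustrative names rather than the actual Cassandra types:

```java
// Toy model of the expired-cell-to-tombstone conversion in AbstractCell.purge.
// Names are illustrative; this is not the real Cassandra API.
class PurgeSketch {
    // An expiring cell written at time t with a TTL expires at
    // localExpirationTime = t + ttl; the equivalent tombstone's deletion
    // time is the original write time: localExpirationTime - ttl.
    static int tombstoneDeletionTime(int localExpirationTime, int ttl) {
        return localExpirationTime - ttl;
    }

    // A tombstone may be purged once its deletion time is before gcBefore.
    static boolean shouldPurge(int localDeletionTime, int gcBefore) {
        return localDeletionTime < gcBefore;
    }

    public static void main(String[] args) {
        // Cell written at t=100 with ttl=10 expires at 110; the tombstone
        // it turns into carries deletion time 100.
        int dt = tombstoneDeletionTime(110, 10);
        System.out.println(dt);
        System.out.println(shouldPurge(dt, 101)); // gcBefore has passed it
        System.out.println(shouldPurge(dt, 99));  // gcBefore has not
    }
}
```

The point of modelling it this way: the generated tombstone inherits the cell's write time, so it ages against `gcBefore` exactly as a directly written tombstone would.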

[6/9] cassandra git commit: Purge tombstones created by expired cells

2017-07-14 Thread bdeggleston
Purge tombstones created by expired cells

Patch by Blake Eggleston; reviewed by Sylvain Lebresne for CASSANDRA-13643


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/a033f516
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/a033f516
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/a033f516

Branch: refs/heads/trunk
Commit: a033f51651e1a990adca795f92d683999c474151
Parents: 88d2ac4
Author: Blake Eggleston 
Authored: Tue Jun 27 08:41:17 2017 -0700
Committer: Blake Eggleston 
Committed: Fri Jul 14 10:02:07 2017 -0700

--
 CHANGES.txt |  1 +
 .../apache/cassandra/db/rows/BufferCell.java|  2 +-
 test/unit/org/apache/cassandra/db/CellTest.java | 78 +++-
 3 files changed, 79 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/a033f516/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 4a823c9..9962f4b 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 3.0.15
+ * Purge tombstones created by expired cells (CASSANDRA-13643)
 * Make concat work with iterators that have different subsets of columns (CASSANDRA-13482)
  * Set test.runners based on cores and memory size (CASSANDRA-13078)
  * Allow different NUMACTL_ARGS to be passed in (CASSANDRA-13557)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/a033f516/src/java/org/apache/cassandra/db/rows/BufferCell.java
--
diff --git a/src/java/org/apache/cassandra/db/rows/BufferCell.java 
b/src/java/org/apache/cassandra/db/rows/BufferCell.java
index db0ded5..e4ad7e6 100644
--- a/src/java/org/apache/cassandra/db/rows/BufferCell.java
+++ b/src/java/org/apache/cassandra/db/rows/BufferCell.java
@@ -176,7 +176,7 @@ public class BufferCell extends AbstractCell
 // Note that as long as the expiring column and the tombstone put together live longer than GC grace seconds,
 // we'll fulfil our responsibility to repair. See discussion at
 // http://cassandra-user-incubator-apache-org.3065146.n2.nabble.com/repair-compaction-and-tombstone-rows-td7583481.html
-return BufferCell.tombstone(column, timestamp, localDeletionTime - ttl, path);
+return BufferCell.tombstone(column, timestamp, localDeletionTime - ttl, path).purge(purger, nowInSec);
 }
 }
 return this;
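The one-line change above (CASSANDRA-13643) chains a second `.purge(...)` call onto the freshly created tombstone: without it, a tombstone generated from an expired cell was returned as-is even when its deletion time already fell before `gcBefore`, so it survived the compaction that should have dropped it. A hedged sketch of that control flow, with toy types standing in for Cassandra's:

```java
// Toy model of the CASSANDRA-13643 fix; types and names are illustrative.
class PurgeChainSketch {
    static final int NO_TTL = 0;

    // A stand-in for a cell; returning null models "purged away".
    record Cell(int localDeletionTime, int ttl) {}

    // Models DeletionPurger: drop anything deleted before gcBefore.
    static boolean shouldPurge(Cell c, int gcBefore) {
        return c.localDeletionTime() < gcBefore;
    }

    // Before the fix: an expired cell became a tombstone that was returned
    // without re-checking its own purgeability.
    static Cell purgeWithoutChain(Cell c, int gcBefore) {
        if (shouldPurge(c, gcBefore))
            return null;
        if (c.ttl() != NO_TTL) // expiring cell -> tombstone at write time
            return new Cell(c.localDeletionTime() - c.ttl(), NO_TTL);
        return c;
    }

    // After the fix: the generated tombstone goes through the purge check too.
    static Cell purgeWithChain(Cell c, int gcBefore) {
        Cell result = purgeWithoutChain(c, gcBefore);
        return (result != null && result != c) ? purgeWithoutChain(result, gcBefore) : result;
    }

    public static void main(String[] args) {
        // Cell expired at 110 with ttl 10 -> tombstone deleted at 100, which
        // is already before gcBefore = 105 and should disappear immediately.
        Cell expired = new Cell(110, 10);
        System.out.println(purgeWithoutChain(expired, 105)); // tombstone kept (the bug)
        System.out.println(purgeWithChain(expired, 105));    // null (the fix)
    }
}
```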

http://git-wip-us.apache.org/repos/asf/cassandra/blob/a033f516/test/unit/org/apache/cassandra/db/CellTest.java
--
diff --git a/test/unit/org/apache/cassandra/db/CellTest.java 
b/test/unit/org/apache/cassandra/db/CellTest.java
index 9072f98..cd6000f 100644
--- a/test/unit/org/apache/cassandra/db/CellTest.java
+++ b/test/unit/org/apache/cassandra/db/CellTest.java
@@ -174,6 +174,77 @@ public class CellTest
 Assert.assertEquals(-1, testExpiring("val", "b", 2, 1, null, "a", null, 2));
 }
 
+class SimplePurger implements DeletionPurger
+{
+private final int gcBefore;
+
+public SimplePurger(int gcBefore)
+{
+this.gcBefore = gcBefore;
+}
+
+public boolean shouldPurge(long timestamp, int localDeletionTime)
+{
+return localDeletionTime < gcBefore;
+}
+}
+
+/**
+ * tombstones shouldn't be purged if localDeletionTime is greater than gcBefore
+ */
+@Test
+public void testNonPurgableTombstone()
+{
+int now = 100;
+Cell cell = deleted(cfm, "val", now, now);
+Cell purged = cell.purge(new SimplePurger(now - 1), now + 1);
+Assert.assertEquals(cell, purged);
+}
+
+@Test
+public void testPurgeableTombstone()
+{
+int now = 100;
+Cell cell = deleted(cfm, "val", now, now);
+Cell purged = cell.purge(new SimplePurger(now + 1), now + 1);
+Assert.assertNull(purged);
+}
+
+@Test
+public void testLiveExpiringCell()
+{
+int now = 100;
+Cell cell = expiring(cfm, "val", "a", now, now + 10);
+Cell purged = cell.purge(new SimplePurger(now), now + 1);
+Assert.assertEquals(cell, purged);
+}
+
+/**
+ * cells that have expired should be converted to tombstones with an local deletion time
+ * of the cell's local expiration time, minus it's ttl
+ */
+@Test
+public void testExpiredTombstoneConversion()
+{
+int now = 100;
+Cell cell = expiring(cfm, "val", "a", now, 10, now + 10);
+Cell purged = cell.purge(new SimplePurger(now), now + 11);
+

[7/9] cassandra git commit: ninja fix CHANGES.txt

2017-07-14 Thread bdeggleston
ninja fix CHANGES.txt


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/7df942bc
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/7df942bc
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/7df942bc

Branch: refs/heads/trunk
Commit: 7df942bc5c2b50fccf39b06a96b654ca7840d80f
Parents: 7aa89a6
Author: Benjamin Lerer 
Authored: Fri Jul 14 19:17:59 2017 +0200
Committer: Benjamin Lerer 
Committed: Fri Jul 14 19:17:59 2017 +0200

--
 CHANGES.txt | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/7df942bc/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index edd66e2..87b058e 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -11,7 +11,8 @@ Merged from 3.0:
 Merged from 2.2:
  * Fix queries with LIMIT and filtering on clustering columns (CASSANDRA-11223)
  * Fix potential NPE when resume bootstrap fails (CASSANDRA-13272)
- * Fix toJSONString for the UDT, tuple and collection types (CASSANDRA-13592)  * Fix nested Tuples/UDTs validation (CASSANDRA-13646)
+ * Fix toJSONString for the UDT, tuple and collection types (CASSANDRA-13592)
+ * Fix nested Tuples/UDTs validation (CASSANDRA-13646)
 
 3.11.0
  * Allow native function calls in CQLSSTableWriter (CASSANDRA-12606)





[2/9] cassandra git commit: Merge branch cassandra-2.2 into cassandra-3.0

2017-07-14 Thread bdeggleston
http://git-wip-us.apache.org/repos/asf/cassandra/blob/88d2ac4f/src/java/org/apache/cassandra/service/DataResolver.java
--
diff --cc src/java/org/apache/cassandra/service/DataResolver.java
index 60cfbba,000..c96a893
mode 100644,00..100644
--- a/src/java/org/apache/cassandra/service/DataResolver.java
+++ b/src/java/org/apache/cassandra/service/DataResolver.java
@@@ -1,498 -1,0 +1,498 @@@
 +/*
 + * Licensed to the Apache Software Foundation (ASF) under one
 + * or more contributor license agreements.  See the NOTICE file
 + * distributed with this work for additional information
 + * regarding copyright ownership.  The ASF licenses this file
 + * to you under the Apache License, Version 2.0 (the
 + * "License"); you may not use this file except in compliance
 + * with the License.  You may obtain a copy of the License at
 + *
 + * http://www.apache.org/licenses/LICENSE-2.0
 + *
 + * Unless required by applicable law or agreed to in writing, software
 + * distributed under the License is distributed on an "AS IS" BASIS,
 + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 + * See the License for the specific language governing permissions and
 + * limitations under the License.
 + */
 +package org.apache.cassandra.service;
 +
 +import java.net.InetAddress;
 +import java.util.*;
 +import java.util.concurrent.TimeoutException;
 +
 +import com.google.common.annotations.VisibleForTesting;
 +
 +import org.apache.cassandra.concurrent.Stage;
 +import org.apache.cassandra.concurrent.StageManager;
 +import org.apache.cassandra.config.CFMetaData;
 +import org.apache.cassandra.config.ColumnDefinition;
 +import org.apache.cassandra.config.DatabaseDescriptor;
 +import org.apache.cassandra.db.*;
 +import org.apache.cassandra.db.filter.ClusteringIndexFilter;
 +import org.apache.cassandra.db.filter.DataLimits;
 +import org.apache.cassandra.db.partitions.*;
 +import org.apache.cassandra.db.rows.*;
 +import org.apache.cassandra.db.transform.MoreRows;
 +import org.apache.cassandra.db.transform.Transformation;
 +import org.apache.cassandra.exceptions.ReadTimeoutException;
 +import org.apache.cassandra.net.*;
 +import org.apache.cassandra.tracing.Tracing;
 +import org.apache.cassandra.utils.FBUtilities;
 +
 +public class DataResolver extends ResponseResolver
 +{
 +@VisibleForTesting
 +final List repairResults = Collections.synchronizedList(new ArrayList<>());
 +
 +public DataResolver(Keyspace keyspace, ReadCommand command, ConsistencyLevel consistency, int maxResponseCount)
 +{
 +super(keyspace, command, consistency, maxResponseCount);
 +}
 +
 +public PartitionIterator getData()
 +{
 +ReadResponse response = responses.iterator().next().payload;
 +return UnfilteredPartitionIterators.filter(response.makeIterator(command), command.nowInSec());
 +}
 +
 +public PartitionIterator resolve()
 +{
 +// We could get more responses while this method runs, which is ok (we're happy to ignore any response not here
 +// at the beginning of this method), so grab the response count once and use that through the method.
 +int count = responses.size();
 +List<UnfilteredPartitionIterator> iters = new ArrayList<>(count);
 +InetAddress[] sources = new InetAddress[count];
 +for (int i = 0; i < count; i++)
 +{
 +MessageIn<ReadResponse> msg = responses.get(i);
 +iters.add(msg.payload.makeIterator(command));
 +sources[i] = msg.from;
 +}
 +
 +// Even though every responses should honor the limit, we might have more than requested post reconciliation,
 +// so ensure we're respecting the limit.
- DataLimits.Counter counter = command.limits().newCounter(command.nowInSec(), true);
++DataLimits.Counter counter = command.limits().newCounter(command.nowInSec(), true, command.selectsFullPartition());
 +return counter.applyTo(mergeWithShortReadProtection(iters, sources, counter));
 +}
 +
 +public void compareResponses()
 +{
 +// We need to fully consume the results to trigger read repairs if 
appropriate
 +try (PartitionIterator iterator = resolve())
 +{
 +PartitionIterators.consume(iterator);
 +}
 +}
 +
 +private PartitionIterator mergeWithShortReadProtection(List<UnfilteredPartitionIterator> results, InetAddress[] sources, DataLimits.Counter resultCounter)
 +{
 +// If we have only one results, there is no read repair to do and we can't get short reads
 +if (results.size() == 1)
 +return UnfilteredPartitionIterators.filter(results.get(0), command.nowInSec());
 +
 +UnfilteredPartitionIterators.MergeListener listener = new RepairMergeListener(sources);
 +
 +// So-called "short reads" stems from nodes returning only a subset of the results they have for a partition due to the limit,
 +// but that 
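The truncated comment above is describing short-read protection: after reconciliation drops rows that some replica's response shadowed, a replica that returned exactly as many rows as the limit allowed may still be holding more matching rows, so only a response that came back under the limit is provably complete. A toy illustration of that completeness test (not the real `DataResolver` logic):

```java
// Toy model of the short-read ambiguity check; names are illustrative.
class ShortReadSketch {
    // A response that exhausted the limit is ambiguous: the replica may have
    // had more matching rows. One that came back short is provably complete,
    // because the replica would have sent more rows if it had them.
    static boolean mayHaveMoreRows(int rowsReturned, int limit) {
        return rowsReturned >= limit;
    }

    public static void main(String[] args) {
        System.out.println(mayHaveMoreRows(10, 10)); // ambiguous: re-query this source
        System.out.println(mayHaveMoreRows(3, 10));  // complete: no follow-up needed
    }
}
```

In the real resolver this decision drives a follow-up read against the suspect replica until the merged, post-reconciliation result satisfies the limit.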

[1/9] cassandra git commit: Merge branch cassandra-2.2 into cassandra-3.0

2017-07-14 Thread bdeggleston
Repository: cassandra
Updated Branches:
  refs/heads/trunk 965c7743d -> 834031cc0


http://git-wip-us.apache.org/repos/asf/cassandra/blob/88d2ac4f/test/unit/org/apache/cassandra/cql3/validation/operations/SelectLimitTest.java
--
diff --cc test/unit/org/apache/cassandra/cql3/validation/operations/SelectLimitTest.java
index aeb3d56,0ffb799..7e90c0a
--- a/test/unit/org/apache/cassandra/cql3/validation/operations/SelectLimitTest.java
+++ b/test/unit/org/apache/cassandra/cql3/validation/operations/SelectLimitTest.java
@@@ -26,14 -26,14 +26,15 @@@ import org.junit.Test
  import org.apache.cassandra.config.DatabaseDescriptor;
  import org.apache.cassandra.cql3.CQLTester;
  import org.apache.cassandra.dht.ByteOrderedPartitioner;
--import org.apache.cassandra.exceptions.InvalidRequestException;
++import org.apache.cassandra.service.StorageService;
  
  public class SelectLimitTest extends CQLTester
  {
  @BeforeClass
  public static void setUp()
  {
 -DatabaseDescriptor.setPartitioner(ByteOrderedPartitioner.instance);
++StorageService.instance.setPartitionerUnsafe(ByteOrderedPartitioner.instance);
 +DatabaseDescriptor.setPartitionerUnsafe(ByteOrderedPartitioner.instance);
  }
  
  /**
@@@ -125,43 -125,217 +126,296 @@@
 row(1, 1),
 row(1, 2),
 row(1, 3));
 +assertRows(execute("SELECT * FROM %s WHERE v > 1 AND v <= 3 LIMIT 6 ALLOW FILTERING"),
 +   row(0, 2),
 +   row(0, 3),
 +   row(1, 2),
 +   row(1, 3),
 +   row(2, 2),
 +   row(2, 3));
 +}
  
 -// strict bound (v > 1) over a range of partitions is not supported for compact storage if limit is provided
 -assertInvalidThrow(InvalidRequestException.class, "SELECT * FROM %s WHERE v > 1 AND v <= 3 LIMIT 6 ALLOW FILTERING");
 +@Test
 +public void testLimitWithDeletedRowsAndStaticColumns() throws Throwable
 +{
 +createTable("CREATE TABLE %s (pk int, c int, v int, s int static, PRIMARY KEY (pk, c))");
 +
 +execute("INSERT INTO %s (pk, c, v, s) VALUES (1, -1, 1, 1)");
 +execute("INSERT INTO %s (pk, c, v, s) VALUES (2, -1, 1, 1)");
 +execute("INSERT INTO %s (pk, c, v, s) VALUES (3, -1, 1, 1)");
 +execute("INSERT INTO %s (pk, c, v, s) VALUES (4, -1, 1, 1)");
 +execute("INSERT INTO %s (pk, c, v, s) VALUES (5, -1, 1, 1)");
 +
 +assertRows(execute("SELECT * FROM %s"),
 +   row(1, -1, 1, 1),
 +   row(2, -1, 1, 1),
 +   row(3, -1, 1, 1),
 +   row(4, -1, 1, 1),
 +   row(5, -1, 1, 1));
 +
 +execute("DELETE FROM %s WHERE pk = 2");
 +
 +assertRows(execute("SELECT * FROM %s"),
 +   row(1, -1, 1, 1),
 +   row(3, -1, 1, 1),
 +   row(4, -1, 1, 1),
 +   row(5, -1, 1, 1));
 +
 +assertRows(execute("SELECT * FROM %s LIMIT 2"),
 +   row(1, -1, 1, 1),
 +   row(3, -1, 1, 1));
  }
+ 
+ @Test
+ public void testFilteringOnClusteringColumnsWithLimitAndStaticColumns() throws Throwable
+ {
+ // With only one clustering column
+ createTable("CREATE TABLE %s (a int, b int, s int static, c int, primary key (a, b))"
+   + " WITH caching = {'keys': 'ALL', 'rows_per_partition' : 'ALL'}");
+ 
+ for (int i = 0; i < 4; i++)
+ {
+ execute("INSERT INTO %s (a, s) VALUES (?, ?)", i, i);
+ for (int j = 0; j < 3; j++)
+ if (!((i == 0 || i == 3) && j == 1))
+ execute("INSERT INTO %s (a, b, c) VALUES (?, ?, ?)", i, j, i + j);
+ }
+ 
 -for (boolean forceFlush : new boolean[]{false, true})
++beforeAndAfterFlush(() ->
+ {
 -if (forceFlush)
 -flush();
 -
+ assertRows(execute("SELECT * FROM %s"),
+row(0, 0, 0, 0),
+row(0, 2, 0, 2),
+row(1, 0, 1, 1),
+row(1, 1, 1, 2),
+row(1, 2, 1, 3),
+row(2, 0, 2, 2),
+row(2, 1, 2, 3),
+row(2, 2, 2, 4),
+row(3, 0, 3, 3),
+row(3, 2, 3, 5));
+ 
+ assertRows(execute("SELECT * FROM %s WHERE b = 1 ALLOW FILTERING"),
+row(1, 1, 1, 2),
+row(2, 1, 2, 3));
+ 
+ // The problem was that the static row of the partition 0 used to be only filtered in SelectStatement and was
+ // by consequence counted as a row. In which case the query was returning one row less.
+ assertRows(execute("SELECT * FROM %s WHERE 

[3/9] cassandra git commit: Merge branch cassandra-2.2 into cassandra-3.0

2017-07-14 Thread bdeggleston
http://git-wip-us.apache.org/repos/asf/cassandra/blob/88d2ac4f/src/java/org/apache/cassandra/db/filter/DataLimits.java
--
diff --cc src/java/org/apache/cassandra/db/filter/DataLimits.java
index 94f43dc,000..48ec06a
mode 100644,00..100644
--- a/src/java/org/apache/cassandra/db/filter/DataLimits.java
+++ b/src/java/org/apache/cassandra/db/filter/DataLimits.java
@@@ -1,814 -1,0 +1,827 @@@
 +/*
 + * Licensed to the Apache Software Foundation (ASF) under one
 + * or more contributor license agreements.  See the NOTICE file
 + * distributed with this work for additional information
 + * regarding copyright ownership.  The ASF licenses this file
 + * to you under the Apache License, Version 2.0 (the
 + * "License"); you may not use this file except in compliance
 + * with the License.  You may obtain a copy of the License at
 + *
 + * http://www.apache.org/licenses/LICENSE-2.0
 + *
 + * Unless required by applicable law or agreed to in writing, software
 + * distributed under the License is distributed on an "AS IS" BASIS,
 + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 + * See the License for the specific language governing permissions and
 + * limitations under the License.
 + */
 +package org.apache.cassandra.db.filter;
 +
 +import java.io.IOException;
 +import java.nio.ByteBuffer;
 +
 +import org.apache.cassandra.db.*;
 +import org.apache.cassandra.db.rows.*;
 +import org.apache.cassandra.db.partitions.*;
 +import org.apache.cassandra.db.transform.BasePartitions;
 +import org.apache.cassandra.db.transform.BaseRows;
 +import org.apache.cassandra.db.transform.StoppingTransformation;
 +import org.apache.cassandra.db.transform.Transformation;
 +import org.apache.cassandra.io.util.DataInputPlus;
 +import org.apache.cassandra.io.util.DataOutputPlus;
 +import org.apache.cassandra.utils.ByteBufferUtil;
 +
 +/**
 + * Object in charge of tracking if we have fetch enough data for a given query.
 + *
 + * The reason this is not just a simple integer is that Thrift and CQL3 count
 + * stuffs in different ways. This is what abstract those differences.
 + */
 +public abstract class DataLimits
 +{
 +public static final Serializer serializer = new Serializer();
 +
 +public static final int NO_LIMIT = Integer.MAX_VALUE;
 +
 +public static final DataLimits NONE = new CQLLimits(NO_LIMIT)
 +{
 +@Override
- public boolean hasEnoughLiveData(CachedPartition cached, int nowInSec)
++public boolean hasEnoughLiveData(CachedPartition cached, int nowInSec, boolean countPartitionsWithOnlyStaticData)
 +{
 +return false;
 +}
 +
 +@Override
- public UnfilteredPartitionIterator filter(UnfilteredPartitionIterator 
iter, int nowInSec)
++public UnfilteredPartitionIterator filter(UnfilteredPartitionIterator 
iter,
++  int nowInSec,
++  boolean 
countPartitionsWithOnlyStaticData)
 +{
 +return iter;
 +}
 +
 +@Override
- public UnfilteredRowIterator filter(UnfilteredRowIterator iter, int 
nowInSec)
++public UnfilteredRowIterator filter(UnfilteredRowIterator iter,
++int nowInSec,
++boolean 
countPartitionsWithOnlyStaticData)
 +{
 +return iter;
 +}
 +};
 +
 +// We currently deal with distinct queries by querying full partitions but limiting the result at 1 row per
 +// partition (see SelectStatement.makeFilter). So an "unbounded" distinct is still actually doing some filtering.
 +public static final DataLimits DISTINCT_NONE = new CQLLimits(NO_LIMIT, 1, true);
 +
 +public enum Kind { CQL_LIMIT, CQL_PAGING_LIMIT, THRIFT_LIMIT, SUPER_COLUMN_COUNTING_LIMIT }
 +
 +public static DataLimits cqlLimits(int cqlRowLimit)
 +{
 +return new CQLLimits(cqlRowLimit);
 +}
 +
 +public static DataLimits cqlLimits(int cqlRowLimit, int perPartitionLimit)
 +{
 +return new CQLLimits(cqlRowLimit, perPartitionLimit);
 +}
 +
 +public static DataLimits distinctLimits(int cqlRowLimit)
 +{
 +return CQLLimits.distinct(cqlRowLimit);
 +}
 +
 +public static DataLimits thriftLimits(int partitionLimit, int cellPerPartitionLimit)
 +{
 +return new ThriftLimits(partitionLimit, cellPerPartitionLimit);
 +}
 +
 +public static DataLimits superColumnCountingLimits(int partitionLimit, int cellPerPartitionLimit)
 +{
 +return new SuperColumnCountingLimits(partitionLimit, cellPerPartitionLimit);
 +}
 +
 +public abstract Kind kind();
 +
 +public abstract boolean isUnlimited();
 +public abstract boolean isDistinct();
 +
 +public abstract DataLimits forPaging(int pageSize);
 +public abstract 

[5/7] cassandra git commit: Merge branch cassandra-2.2 into cassandra-3.0

2017-07-14 Thread bdeggleston
Merge branch cassandra-2.2 into cassandra-3.0


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/88d2ac4f
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/88d2ac4f
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/88d2ac4f

Branch: refs/heads/cassandra-3.11
Commit: 88d2ac4f2fadba44a9b72286ef924441014a97ba
Parents: 7de853b b08843d
Author: Benjamin Lerer 
Authored: Fri Jul 14 17:14:38 2017 +0200
Committer: Benjamin Lerer 
Committed: Fri Jul 14 17:26:34 2017 +0200

--
 CHANGES.txt |   7 +-
 .../apache/cassandra/db/ColumnFamilyStore.java  |   3 +-
 src/java/org/apache/cassandra/db/DataRange.java |   5 +
 .../cassandra/db/PartitionRangeReadCommand.java |   7 +-
 .../org/apache/cassandra/db/ReadCommand.java|   2 +-
 src/java/org/apache/cassandra/db/ReadQuery.java |  12 +
 .../db/SinglePartitionReadCommand.java  |  21 +-
 .../apache/cassandra/db/filter/DataLimits.java  |  63 +++--
 .../apache/cassandra/db/filter/RowFilter.java   |  15 ++
 .../apache/cassandra/service/CacheService.java  |   2 +-
 .../apache/cassandra/service/DataResolver.java  |   4 +-
 .../apache/cassandra/service/StorageProxy.java  |   8 +-
 .../service/pager/AbstractQueryPager.java   |   2 +-
 .../service/pager/MultiPartitionPager.java  |   9 +-
 .../cassandra/service/pager/QueryPagers.java|   2 +-
 .../org/apache/cassandra/cql3/CQLTester.java|   8 +-
 .../validation/operations/SelectLimitTest.java  | 256 ++-
 .../db/rows/UnfilteredRowIteratorsTest.java |  10 +-
 18 files changed, 382 insertions(+), 54 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/88d2ac4f/CHANGES.txt
--
diff --cc CHANGES.txt
index fffda7f,bda510f..4a823c9
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -1,65 -1,18 +1,66 @@@
 -2.2.11
 +3.0.15
 + * Make concat work with iterators that have different subsets of columns (CASSANDRA-13482)
 + * Set test.runners based on cores and memory size (CASSANDRA-13078)
 + * Allow different NUMACTL_ARGS to be passed in (CASSANDRA-13557)
 + * Allow native function calls in CQLSSTableWriter (CASSANDRA-12606)
 + * Fix secondary index queries on COMPACT tables (CASSANDRA-13627)
 + * Nodetool listsnapshots output is missing a newline, if there are no snapshots (CASSANDRA-13568)
 + Merged from 2.2:
-   * Fix potential NPE when resume bootstrap fails (CASSANDRA-13272)
-   * Fix toJSONString for the UDT, tuple and collection types (CASSANDRA-13592)
-   * Fix nested Tuples/UDTs validation (CASSANDRA-13646)
+  * Fix queries with LIMIT and filtering on clustering columns (CASSANDRA-11223)
+  * Fix potential NPE when resume bootstrap fails (CASSANDRA-13272)
+  * Fix toJSONString for the UDT, tuple and collection types (CASSANDRA-13592)
+  * Fix nested Tuples/UDTs validation (CASSANDRA-13646)
 - * Remove unused max_value_size_in_mb config setting from yaml (CASSANDRA-13625)
  
 -
 -2.2.10
 +3.0.14
 + * Ensure int overflow doesn't occur when calculating large partition warning size (CASSANDRA-13172)
 + * Ensure consistent view of partition columns between coordinator and replica in ColumnFilter (CASSANDRA-13004)
 + * Failed unregistering mbean during drop keyspace (CASSANDRA-13346)
 + * nodetool scrub/cleanup/upgradesstables exit code is wrong (CASSANDRA-13542)
 + * Fix the reported number of sstable data files accessed per read (CASSANDRA-13120)
 + * Fix schema digest mismatch during rolling upgrades from versions before 3.0.12 (CASSANDRA-13559)
 + * Upgrade JNA version to 4.4.0 (CASSANDRA-13072)
 + * Interned ColumnIdentifiers should use minimal ByteBuffers (CASSANDRA-13533)
 + * ReverseIndexedReader may drop rows during 2.1 to 3.0 upgrade (CASSANDRA-13525)
 + * Fix repair process violating start/end token limits for small ranges (CASSANDRA-13052)
 + * Add storage port options to sstableloader (CASSANDRA-13518)
 + * Properly handle quoted index names in cqlsh DESCRIBE output (CASSANDRA-12847)
 + * Avoid reading static row twice from old format sstables (CASSANDRA-13236)
 + * Fix NPE in StorageService.excise() (CASSANDRA-13163)
 + * Expire OutboundTcpConnection messages by a single Thread (CASSANDRA-13265)
 + * Fail repair if insufficient responses received (CASSANDRA-13397)
 + * Fix SSTableLoader fail when the loaded table contains dropped columns (CASSANDRA-13276)
 + * Avoid name clashes in CassandraIndexTest (CASSANDRA-13427)
 + * Handling partially written hint files (CASSANDRA-12728)
 + * Interrupt replaying hints on decommission (CASSANDRA-13308)
 + * Fix schema version calculation for rolling upgrades (CASSANDRA-13441)
 +Merged from 2.2:
   * Nodes started with join_ring=False should be able to serve 

[6/7] cassandra git commit: Purge tombstones created by expired cells

2017-07-14 Thread bdeggleston
Purge tombstones created by expired cells

Patch by Blake Eggleston; reviewed by Sylvain Lebresne for CASSANDRA-13643


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/a033f516
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/a033f516
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/a033f516

Branch: refs/heads/cassandra-3.11
Commit: a033f51651e1a990adca795f92d683999c474151
Parents: 88d2ac4
Author: Blake Eggleston 
Authored: Tue Jun 27 08:41:17 2017 -0700
Committer: Blake Eggleston 
Committed: Fri Jul 14 10:02:07 2017 -0700

--
 CHANGES.txt |  1 +
 .../apache/cassandra/db/rows/BufferCell.java|  2 +-
 test/unit/org/apache/cassandra/db/CellTest.java | 78 +++-
 3 files changed, 79 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/a033f516/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 4a823c9..9962f4b 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 3.0.15
+ * Purge tombstones created by expired cells (CASSANDRA-13643)
  * Make concat work with iterators that have different subsets of columns (CASSANDRA-13482)
  * Set test.runners based on cores and memory size (CASSANDRA-13078)
  * Allow different NUMACTL_ARGS to be passed in (CASSANDRA-13557)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/a033f516/src/java/org/apache/cassandra/db/rows/BufferCell.java
--
diff --git a/src/java/org/apache/cassandra/db/rows/BufferCell.java b/src/java/org/apache/cassandra/db/rows/BufferCell.java
index db0ded5..e4ad7e6 100644
--- a/src/java/org/apache/cassandra/db/rows/BufferCell.java
+++ b/src/java/org/apache/cassandra/db/rows/BufferCell.java
@@ -176,7 +176,7 @@ public class BufferCell extends AbstractCell
  // Note that as long as the expiring column and the tombstone put together live longer than GC grace seconds,
 // we'll fulfil our responsibility to repair. See discussion at
 // 
http://cassandra-user-incubator-apache-org.3065146.n2.nabble.com/repair-compaction-and-tombstone-rows-td7583481.html
-return BufferCell.tombstone(column, timestamp, localDeletionTime - ttl, path);
+return BufferCell.tombstone(column, timestamp, localDeletionTime - ttl, path).purge(purger, nowInSec);
 }
 }
 return this;

http://git-wip-us.apache.org/repos/asf/cassandra/blob/a033f516/test/unit/org/apache/cassandra/db/CellTest.java
--
diff --git a/test/unit/org/apache/cassandra/db/CellTest.java b/test/unit/org/apache/cassandra/db/CellTest.java
index 9072f98..cd6000f 100644
--- a/test/unit/org/apache/cassandra/db/CellTest.java
+++ b/test/unit/org/apache/cassandra/db/CellTest.java
@@ -174,6 +174,77 @@ public class CellTest
  Assert.assertEquals(-1, testExpiring("val", "b", 2, 1, null, "a", null, 2));
 }
 
+class SimplePurger implements DeletionPurger
+{
+private final int gcBefore;
+
+public SimplePurger(int gcBefore)
+{
+this.gcBefore = gcBefore;
+}
+
+public boolean shouldPurge(long timestamp, int localDeletionTime)
+{
+return localDeletionTime < gcBefore;
+}
+}
+
+/**
+ * tombstones shouldn't be purged if localDeletionTime is greater than or equal to gcBefore
+ */
+@Test
+public void testNonPurgableTombstone()
+{
+int now = 100;
+Cell cell = deleted(cfm, "val", now, now);
+Cell purged = cell.purge(new SimplePurger(now - 1), now + 1);
+Assert.assertEquals(cell, purged);
+}
+
+@Test
+public void testPurgeableTombstone()
+{
+int now = 100;
+Cell cell = deleted(cfm, "val", now, now);
+Cell purged = cell.purge(new SimplePurger(now + 1), now + 1);
+Assert.assertNull(purged);
+}
+
+@Test
+public void testLiveExpiringCell()
+{
+int now = 100;
+Cell cell = expiring(cfm, "val", "a", now, now + 10);
+Cell purged = cell.purge(new SimplePurger(now), now + 1);
+Assert.assertEquals(cell, purged);
+}
+
+/**
+ * cells that have expired should be converted to tombstones with a local deletion time
+ * of the cell's local expiration time, minus its ttl
+ */
+@Test
+public void testExpiredTombstoneConversion()
+{
+int now = 100;
+Cell cell = expiring(cfm, "val", "a", now, 10, now + 10);
+Cell purged = cell.purge(new SimplePurger(now), now + 11);
+
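The tests above pin down CASSANDRA-13643: an expired cell becomes a tombstone whose local deletion time is the cell's expiration time minus its TTL, and that tombstone must itself be checked against `gcBefore`. The following is a minimal, framework-free sketch of that decision, not Cassandra's actual code path; the method name and the `-1` sentinel are illustrative only.

```java
// Minimal model of the purge decision exercised by the tests above.
// An expired cell turns into a tombstone whose local deletion time is
// localExpirationTime - ttl; that tombstone is then itself subject to
// purging against gcBefore (the behavior fixed by CASSANDRA-13643).
public class PurgeSketch
{
    // Returns the tombstone's surviving local deletion time, or -1 if purged.
    static int purgeExpiredCell(int localExpirationTime, int ttl, int gcBefore)
    {
        int tombstoneDeletionTime = localExpirationTime - ttl;
        return tombstoneDeletionTime < gcBefore ? -1 : tombstoneDeletionTime;
    }

    public static void main(String[] args)
    {
        // A cell written at t=100 with ttl=10 expires at 110; its tombstone
        // carries deletion time 100, so gcBefore=101 purges it, gcBefore=100 keeps it.
        System.out.println(purgeExpiredCell(110, 10, 101)); // -1 (purged)
        System.out.println(purgeExpiredCell(110, 10, 100)); // 100 (kept)
    }
}
```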

[1/7] cassandra git commit: Merge branch cassandra-2.2 into cassandra-3.0

2017-07-14 Thread bdeggleston
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-3.11 7df942bc5 -> 6d0e95af7


http://git-wip-us.apache.org/repos/asf/cassandra/blob/88d2ac4f/test/unit/org/apache/cassandra/cql3/validation/operations/SelectLimitTest.java
--
diff --cc test/unit/org/apache/cassandra/cql3/validation/operations/SelectLimitTest.java
index aeb3d56,0ffb799..7e90c0a
--- a/test/unit/org/apache/cassandra/cql3/validation/operations/SelectLimitTest.java
+++ b/test/unit/org/apache/cassandra/cql3/validation/operations/SelectLimitTest.java
@@@ -26,14 -26,14 +26,15 @@@ import org.junit.Test
  import org.apache.cassandra.config.DatabaseDescriptor;
  import org.apache.cassandra.cql3.CQLTester;
  import org.apache.cassandra.dht.ByteOrderedPartitioner;
--import org.apache.cassandra.exceptions.InvalidRequestException;
++import org.apache.cassandra.service.StorageService;
  
  public class SelectLimitTest extends CQLTester
  {
  @BeforeClass
  public static void setUp()
  {
 -DatabaseDescriptor.setPartitioner(ByteOrderedPartitioner.instance);
++StorageService.instance.setPartitionerUnsafe(ByteOrderedPartitioner.instance);
 +DatabaseDescriptor.setPartitionerUnsafe(ByteOrderedPartitioner.instance);
  }
  
  /**
@@@ -125,43 -125,217 +126,296 @@@
 row(1, 1),
 row(1, 2),
 row(1, 3));
 +assertRows(execute("SELECT * FROM %s WHERE v > 1 AND v <= 3 LIMIT 6 ALLOW FILTERING"),
 +   row(0, 2),
 +   row(0, 3),
 +   row(1, 2),
 +   row(1, 3),
 +   row(2, 2),
 +   row(2, 3));
 +}
  
 -// strict bound (v > 1) over a range of partitions is not supported for compact storage if limit is provided
 -assertInvalidThrow(InvalidRequestException.class, "SELECT * FROM %s WHERE v > 1 AND v <= 3 LIMIT 6 ALLOW FILTERING");
 +@Test
 +public void testLimitWithDeletedRowsAndStaticColumns() throws Throwable
 +{
 +createTable("CREATE TABLE %s (pk int, c int, v int, s int static, PRIMARY KEY (pk, c))");
 +
 +execute("INSERT INTO %s (pk, c, v, s) VALUES (1, -1, 1, 1)");
 +execute("INSERT INTO %s (pk, c, v, s) VALUES (2, -1, 1, 1)");
 +execute("INSERT INTO %s (pk, c, v, s) VALUES (3, -1, 1, 1)");
 +execute("INSERT INTO %s (pk, c, v, s) VALUES (4, -1, 1, 1)");
 +execute("INSERT INTO %s (pk, c, v, s) VALUES (5, -1, 1, 1)");
 +
 +assertRows(execute("SELECT * FROM %s"),
 +   row(1, -1, 1, 1),
 +   row(2, -1, 1, 1),
 +   row(3, -1, 1, 1),
 +   row(4, -1, 1, 1),
 +   row(5, -1, 1, 1));
 +
 +execute("DELETE FROM %s WHERE pk = 2");
 +
 +assertRows(execute("SELECT * FROM %s"),
 +   row(1, -1, 1, 1),
 +   row(3, -1, 1, 1),
 +   row(4, -1, 1, 1),
 +   row(5, -1, 1, 1));
 +
 +assertRows(execute("SELECT * FROM %s LIMIT 2"),
 +   row(1, -1, 1, 1),
 +   row(3, -1, 1, 1));
  }
+ 
+ @Test
+ public void testFilteringOnClusteringColumnsWithLimitAndStaticColumns() throws Throwable
+ {
+ // With only one clustering column
+ createTable("CREATE TABLE %s (a int, b int, s int static, c int, primary key (a, b))"
+   + " WITH caching = {'keys': 'ALL', 'rows_per_partition' : 'ALL'}");
+ 
+ for (int i = 0; i < 4; i++)
+ {
+ execute("INSERT INTO %s (a, s) VALUES (?, ?)", i, i);
+ for (int j = 0; j < 3; j++)
+ if (!((i == 0 || i == 3) && j == 1))
+ execute("INSERT INTO %s (a, b, c) VALUES (?, ?, ?)", i, j, i + j);
+ }
+ 
 -for (boolean forceFlush : new boolean[]{false, true})
++beforeAndAfterFlush(() ->
+ {
 -if (forceFlush)
 -flush();
 -
+ assertRows(execute("SELECT * FROM %s"),
+row(0, 0, 0, 0),
+row(0, 2, 0, 2),
+row(1, 0, 1, 1),
+row(1, 1, 1, 2),
+row(1, 2, 1, 3),
+row(2, 0, 2, 2),
+row(2, 1, 2, 3),
+row(2, 2, 2, 4),
+row(3, 0, 3, 3),
+row(3, 2, 3, 5));
+ 
+ assertRows(execute("SELECT * FROM %s WHERE b = 1 ALLOW FILTERING"),
+row(1, 1, 1, 2),
+row(2, 1, 2, 3));
+ 
+ // The problem was that the static row of partition 0 used to be filtered only in SelectStatement and was
+ // consequently counted as a row, in which case the query returned one row less.
+ assertRows(execute("SELECT * FROM 
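The comment in this hunk describes the CASSANDRA-11223 bug: a static row that is filtered out only after being counted toward LIMIT costs the query one result row. Below is a toy model of the buggy versus fixed counting, purely illustrative; `countStaticOnly` stands in for counting a soon-to-be-filtered static-only row, and nothing here is Cassandra's API.

```java
// Toy illustration of the counting bug: if a partition's static row is
// counted toward LIMIT but filtered out afterwards, the query returns
// one row fewer than requested. Names are illustrative, not Cassandra's.
import java.util.List;

public class LimitSketch
{
    // rows: true = real row, false = static-only placeholder later filtered out
    static int returned(List<Boolean> rows, int limit, boolean countStaticOnly)
    {
        int counted = 0, kept = 0;
        for (boolean real : rows)
        {
            if (real || countStaticOnly)
                counted++;          // counts toward LIMIT
            if (real)
                kept++;             // survives post-filtering
            if (counted == limit)
                break;
        }
        return kept;
    }

    public static void main(String[] args)
    {
        List<Boolean> rows = List.of(false, true, true, true);
        System.out.println(returned(rows, 2, true));  // 1: the static row ate a LIMIT slot
        System.out.println(returned(rows, 2, false)); // 2: counting fixed, LIMIT honored
    }
}
```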

[3/7] cassandra git commit: Merge branch cassandra-2.2 into cassandra-3.0

2017-07-14 Thread bdeggleston
http://git-wip-us.apache.org/repos/asf/cassandra/blob/88d2ac4f/src/java/org/apache/cassandra/db/filter/DataLimits.java
--
diff --cc src/java/org/apache/cassandra/db/filter/DataLimits.java
index 94f43dc,000..48ec06a
mode 100644,00..100644
--- a/src/java/org/apache/cassandra/db/filter/DataLimits.java
+++ b/src/java/org/apache/cassandra/db/filter/DataLimits.java
@@@ -1,814 -1,0 +1,827 @@@
 +/*
 + * Licensed to the Apache Software Foundation (ASF) under one
 + * or more contributor license agreements.  See the NOTICE file
 + * distributed with this work for additional information
 + * regarding copyright ownership.  The ASF licenses this file
 + * to you under the Apache License, Version 2.0 (the
 + * "License"); you may not use this file except in compliance
 + * with the License.  You may obtain a copy of the License at
 + *
 + * http://www.apache.org/licenses/LICENSE-2.0
 + *
 + * Unless required by applicable law or agreed to in writing, software
 + * distributed under the License is distributed on an "AS IS" BASIS,
 + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 + * See the License for the specific language governing permissions and
 + * limitations under the License.
 + */
 +package org.apache.cassandra.db.filter;
 +
 +import java.io.IOException;
 +import java.nio.ByteBuffer;
 +
 +import org.apache.cassandra.db.*;
 +import org.apache.cassandra.db.rows.*;
 +import org.apache.cassandra.db.partitions.*;
 +import org.apache.cassandra.db.transform.BasePartitions;
 +import org.apache.cassandra.db.transform.BaseRows;
 +import org.apache.cassandra.db.transform.StoppingTransformation;
 +import org.apache.cassandra.db.transform.Transformation;
 +import org.apache.cassandra.io.util.DataInputPlus;
 +import org.apache.cassandra.io.util.DataOutputPlus;
 +import org.apache.cassandra.utils.ByteBufferUtil;
 +
 +/**
 + * Object in charge of tracking whether we have fetched enough data for a given query.
 + *
 + * The reason this is not just a simple integer is that Thrift and CQL3 count
 + * things in different ways. This is what abstracts those differences.
 + */
 +public abstract class DataLimits
 +{
 +public static final Serializer serializer = new Serializer();
 +
 +public static final int NO_LIMIT = Integer.MAX_VALUE;
 +
 +public static final DataLimits NONE = new CQLLimits(NO_LIMIT)
 +{
 +@Override
- public boolean hasEnoughLiveData(CachedPartition cached, int nowInSec)
++public boolean hasEnoughLiveData(CachedPartition cached, int nowInSec, boolean countPartitionsWithOnlyStaticData)
 +{
 +return false;
 +}
 +
 +@Override
- public UnfilteredPartitionIterator filter(UnfilteredPartitionIterator iter, int nowInSec)
++public UnfilteredPartitionIterator filter(UnfilteredPartitionIterator iter,
++  int nowInSec,
++  boolean countPartitionsWithOnlyStaticData)
 +{
 +return iter;
 +}
 +
 +@Override
- public UnfilteredRowIterator filter(UnfilteredRowIterator iter, int nowInSec)
++public UnfilteredRowIterator filter(UnfilteredRowIterator iter,
++int nowInSec,
++boolean countPartitionsWithOnlyStaticData)
 +{
 +return iter;
 +}
 +};
 +
 +// We currently deal with distinct queries by querying full partitions but limiting the result at 1 row per
 +// partition (see SelectStatement.makeFilter). So an "unbounded" distinct is still actually doing some filtering.
 +public static final DataLimits DISTINCT_NONE = new CQLLimits(NO_LIMIT, 1, true);
 +
 +public enum Kind { CQL_LIMIT, CQL_PAGING_LIMIT, THRIFT_LIMIT, SUPER_COLUMN_COUNTING_LIMIT }
 +
 +public static DataLimits cqlLimits(int cqlRowLimit)
 +{
 +return new CQLLimits(cqlRowLimit);
 +}
 +
 +public static DataLimits cqlLimits(int cqlRowLimit, int perPartitionLimit)
 +{
 +return new CQLLimits(cqlRowLimit, perPartitionLimit);
 +}
 +
 +public static DataLimits distinctLimits(int cqlRowLimit)
 +{
 +return CQLLimits.distinct(cqlRowLimit);
 +}
 +
 +public static DataLimits thriftLimits(int partitionLimit, int cellPerPartitionLimit)
 +{
 +return new ThriftLimits(partitionLimit, cellPerPartitionLimit);
 +}
 +
 +public static DataLimits superColumnCountingLimits(int partitionLimit, int cellPerPartitionLimit)
 +{
 +return new SuperColumnCountingLimits(partitionLimit, cellPerPartitionLimit);
 +}
 +
 +public abstract Kind kind();
 +
 +public abstract boolean isUnlimited();
 +public abstract boolean isDistinct();
 +
 +public abstract DataLimits forPaging(int pageSize);
 +public abstract 
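The DataLimits factories in this hunk distinguish an overall row limit from a per-partition limit (DISTINCT queries use 1 row per partition), with `Integer.MAX_VALUE` as the `NO_LIMIT` sentinel. Here is a framework-free sketch of that counting under those assumptions; the class and method names are illustrative, not Cassandra's.

```java
// Minimal stand-in for CQL-style limit counting: stop after rowLimit rows
// overall and perPartitionLimit rows within each partition. NO_LIMIT
// mirrors the Integer.MAX_VALUE sentinel used in the diff.
public class CqlLimitsSketch
{
    static final int NO_LIMIT = Integer.MAX_VALUE;

    // partitionSizes: live row count of each partition scanned, in order.
    static int count(int[] partitionSizes, int rowLimit, int perPartitionLimit)
    {
        int total = 0;
        for (int size : partitionSizes)
        {
            int take = Math.min(size, perPartitionLimit); // per-partition cap
            take = Math.min(take, rowLimit - total);      // overall cap
            total += take;
            if (total >= rowLimit)
                break;
        }
        return total;
    }

    public static void main(String[] args)
    {
        int[] partitions = {5, 5, 5};
        System.out.println(count(partitions, NO_LIMIT, 1)); // 3: DISTINCT-style, 1 row per partition
        System.out.println(count(partitions, 7, NO_LIMIT)); // 7: plain LIMIT 7
    }
}
```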

[7/7] cassandra git commit: Merge branch 'cassandra-3.0' into cassandra-3.11

2017-07-14 Thread bdeggleston
Merge branch 'cassandra-3.0' into cassandra-3.11


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/6d0e95af
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/6d0e95af
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/6d0e95af

Branch: refs/heads/cassandra-3.11
Commit: 6d0e95af7d68547bf07bf8961d5e8c1594acbf56
Parents: 7df942b a033f51
Author: Blake Eggleston 
Authored: Fri Jul 14 10:37:01 2017 -0700
Committer: Blake Eggleston 
Committed: Fri Jul 14 10:46:27 2017 -0700

--
 CHANGES.txt |  1 +
 .../apache/cassandra/db/rows/AbstractCell.java  |  2 +-
 test/unit/org/apache/cassandra/db/CellTest.java | 78 +++-
 3 files changed, 79 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/6d0e95af/CHANGES.txt
--
diff --cc CHANGES.txt
index 87b058e,9962f4b..8abf1f4
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -1,8 -1,5 +1,9 @@@
 +3.11.1
 + * Duplicate the buffer before passing it to analyser in SASI operation (CASSANDRA-13512)
 + * Properly evict pstmts from prepared statements cache (CASSANDRA-13641)
 +Merged from 3.0:
  3.0.15
+  * Purge tombstones created by expired cells (CASSANDRA-13643)
   * Make concat work with iterators that have different subsets of columns (CASSANDRA-13482)
   * Set test.runners based on cores and memory size (CASSANDRA-13078)
   * Allow different NUMACTL_ARGS to be passed in (CASSANDRA-13557)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/6d0e95af/src/java/org/apache/cassandra/db/rows/AbstractCell.java
--
diff --cc src/java/org/apache/cassandra/db/rows/AbstractCell.java
index ca83783,7e93c2e..54bd9e8
--- a/src/java/org/apache/cassandra/db/rows/AbstractCell.java
+++ b/src/java/org/apache/cassandra/db/rows/AbstractCell.java
@@@ -44,81 -40,6 +44,81 @@@ public abstract class AbstractCell exte
  super(column);
  }
  
 +public boolean isCounterCell()
 +{
 +return !isTombstone() && column.cellValueType().isCounter();
 +}
 +
 +public boolean isLive(int nowInSec)
 +{
 +return localDeletionTime() == NO_DELETION_TIME || (ttl() != NO_TTL && nowInSec < localDeletionTime());
 +}
 +
 +public boolean isTombstone()
 +{
 +return localDeletionTime() != NO_DELETION_TIME && ttl() == NO_TTL;
 +}
 +
 +public boolean isExpiring()
 +{
 +return ttl() != NO_TTL;
 +}
 +
 +public Cell markCounterLocalToBeCleared()
 +{
 +if (!isCounterCell())
 +return this;
 +
 +ByteBuffer value = value();
 +ByteBuffer marked = CounterContext.instance().markLocalToBeCleared(value);
 +return marked == value ? this : new BufferCell(column, timestamp(), ttl(), localDeletionTime(), marked, path());
 +}
 +
 +public Cell purge(DeletionPurger purger, int nowInSec)
 +{
 +if (!isLive(nowInSec))
 +{
 +if (purger.shouldPurge(timestamp(), localDeletionTime()))
 +return null;
 +
 +// We slightly hijack purging to convert expired but not purgeable columns to tombstones. The reason we do that is
 +// that once a column has expired it is equivalent to a tombstone but actually using a tombstone is more compact since
 +// we don't keep the column value. The reason we do it here is that 1) it's somewhat related to dealing with tombstones
 +// so hopefully not too surprising and 2) we want to do this and purging at the same place, so it's simpler/more efficient
 +// to do both here.
 +if (isExpiring())
 +{
 +// Note that as long as the expiring column and the tombstone 
put together live longer than GC grace seconds,
 +// we'll fulfil our responsibility to repair. See discussion 
at
 +// 
http://cassandra-user-incubator-apache-org.3065146.n2.nabble.com/repair-compaction-and-tombstone-rows-td7583481.html
- return BufferCell.tombstone(column, timestamp(), localDeletionTime() - ttl(), path());
++return BufferCell.tombstone(column, timestamp(), localDeletionTime() - ttl(), path()).purge(purger, nowInSec);
 +}
 +}
 +return this;
 +}
 +
 +public Cell copy(AbstractAllocator allocator)
 +{
 +CellPath path = path();
 +return new BufferCell(column, timestamp(), ttl(), localDeletionTime(), allocator.clone(value()), path == null ? null : path.copy(allocator));
 +}
 +
 +// note: while the cell returned may be different, the value is the 

[4/7] cassandra git commit: Merge branch cassandra-2.2 into cassandra-3.0

2017-07-14 Thread bdeggleston
http://git-wip-us.apache.org/repos/asf/cassandra/blob/88d2ac4f/src/java/org/apache/cassandra/db/SinglePartitionReadCommand.java
--
diff --cc src/java/org/apache/cassandra/db/SinglePartitionReadCommand.java
index b4211bb,000..319eeb4
mode 100644,00..100644
--- a/src/java/org/apache/cassandra/db/SinglePartitionReadCommand.java
+++ b/src/java/org/apache/cassandra/db/SinglePartitionReadCommand.java
@@@ -1,1089 -1,0 +1,1106 @@@
 +/*
 + * Licensed to the Apache Software Foundation (ASF) under one
 + * or more contributor license agreements.  See the NOTICE file
 + * distributed with this work for additional information
 + * regarding copyright ownership.  The ASF licenses this file
 + * to you under the Apache License, Version 2.0 (the
 + * "License"); you may not use this file except in compliance
 + * with the License.  You may obtain a copy of the License at
 + *
 + * http://www.apache.org/licenses/LICENSE-2.0
 + *
 + * Unless required by applicable law or agreed to in writing, software
 + * distributed under the License is distributed on an "AS IS" BASIS,
 + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 + * See the License for the specific language governing permissions and
 + * limitations under the License.
 + */
 +package org.apache.cassandra.db;
 +
 +import java.io.IOException;
 +import java.nio.ByteBuffer;
 +import java.util.*;
 +
 +import com.google.common.collect.Iterables;
 +import com.google.common.collect.Sets;
 +
 +import org.apache.cassandra.cache.IRowCacheEntry;
 +import org.apache.cassandra.cache.RowCacheKey;
 +import org.apache.cassandra.cache.RowCacheSentinel;
 +import org.apache.cassandra.concurrent.Stage;
 +import org.apache.cassandra.concurrent.StageManager;
 +import org.apache.cassandra.config.CFMetaData;
 +import org.apache.cassandra.config.ColumnDefinition;
 +import org.apache.cassandra.config.DatabaseDescriptor;
 +import org.apache.cassandra.db.lifecycle.*;
 +import org.apache.cassandra.db.filter.*;
 +import org.apache.cassandra.db.partitions.*;
 +import org.apache.cassandra.db.rows.*;
 +import org.apache.cassandra.exceptions.RequestExecutionException;
 +import org.apache.cassandra.io.sstable.format.SSTableReader;
 +import org.apache.cassandra.io.sstable.format.SSTableReadsListener;
 +import org.apache.cassandra.io.util.DataInputPlus;
 +import org.apache.cassandra.io.util.DataOutputPlus;
 +import org.apache.cassandra.metrics.TableMetrics;
 +import org.apache.cassandra.net.MessageOut;
 +import org.apache.cassandra.net.MessagingService;
 +import org.apache.cassandra.schema.IndexMetadata;
 +import org.apache.cassandra.service.CacheService;
 +import org.apache.cassandra.service.ClientState;
 +import org.apache.cassandra.service.StorageProxy;
 +import org.apache.cassandra.service.pager.*;
 +import org.apache.cassandra.thrift.ThriftResultsMerger;
 +import org.apache.cassandra.tracing.Tracing;
 +import org.apache.cassandra.utils.FBUtilities;
 +import org.apache.cassandra.utils.SearchIterator;
 +import org.apache.cassandra.utils.btree.BTreeSet;
 +import org.apache.cassandra.utils.concurrent.OpOrder;
 +import org.apache.cassandra.utils.memory.HeapAllocator;
 +
 +
 +/**
 + * A read command that selects a (part of a) single partition.
 + */
 +public class SinglePartitionReadCommand extends ReadCommand
 +{
 +protected static final SelectionDeserializer selectionDeserializer = new Deserializer();
 +
 +private final DecoratedKey partitionKey;
 +private final ClusteringIndexFilter clusteringIndexFilter;
 +
 +private int oldestUnrepairedTombstone = Integer.MAX_VALUE;
 +
 +public SinglePartitionReadCommand(boolean isDigest,
 +  int digestVersion,
 +  boolean isForThrift,
 +  CFMetaData metadata,
 +  int nowInSec,
 +  ColumnFilter columnFilter,
 +  RowFilter rowFilter,
 +  DataLimits limits,
 +  DecoratedKey partitionKey,
 +  ClusteringIndexFilter clusteringIndexFilter)
 +{
 +super(Kind.SINGLE_PARTITION, isDigest, digestVersion, isForThrift, metadata, nowInSec, columnFilter, rowFilter, limits);
 +assert partitionKey.getPartitioner() == metadata.partitioner;
 +this.partitionKey = partitionKey;
 +this.clusteringIndexFilter = clusteringIndexFilter;
 +}
 +
 +/**
 + * Creates a new read command on a single partition.
 + *
 + * @param metadata the table to query.
 + * @param nowInSec the time in seconds to use as "now" for this query.
 + * @param columnFilter the column filter to use for the query.
 + * @param rowFilter the row filter to use for the query.
 + * @param limits the limits to 

[2/7] cassandra git commit: Merge branch cassandra-2.2 into cassandra-3.0

2017-07-14 Thread bdeggleston
http://git-wip-us.apache.org/repos/asf/cassandra/blob/88d2ac4f/src/java/org/apache/cassandra/service/DataResolver.java
--
diff --cc src/java/org/apache/cassandra/service/DataResolver.java
index 60cfbba,000..c96a893
mode 100644,00..100644
--- a/src/java/org/apache/cassandra/service/DataResolver.java
+++ b/src/java/org/apache/cassandra/service/DataResolver.java
@@@ -1,498 -1,0 +1,498 @@@
 +/*
 + * Licensed to the Apache Software Foundation (ASF) under one
 + * or more contributor license agreements.  See the NOTICE file
 + * distributed with this work for additional information
 + * regarding copyright ownership.  The ASF licenses this file
 + * to you under the Apache License, Version 2.0 (the
 + * "License"); you may not use this file except in compliance
 + * with the License.  You may obtain a copy of the License at
 + *
 + * http://www.apache.org/licenses/LICENSE-2.0
 + *
 + * Unless required by applicable law or agreed to in writing, software
 + * distributed under the License is distributed on an "AS IS" BASIS,
 + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 + * See the License for the specific language governing permissions and
 + * limitations under the License.
 + */
 +package org.apache.cassandra.service;
 +
 +import java.net.InetAddress;
 +import java.util.*;
 +import java.util.concurrent.TimeoutException;
 +
 +import com.google.common.annotations.VisibleForTesting;
 +
 +import org.apache.cassandra.concurrent.Stage;
 +import org.apache.cassandra.concurrent.StageManager;
 +import org.apache.cassandra.config.CFMetaData;
 +import org.apache.cassandra.config.ColumnDefinition;
 +import org.apache.cassandra.config.DatabaseDescriptor;
 +import org.apache.cassandra.db.*;
 +import org.apache.cassandra.db.filter.ClusteringIndexFilter;
 +import org.apache.cassandra.db.filter.DataLimits;
 +import org.apache.cassandra.db.partitions.*;
 +import org.apache.cassandra.db.rows.*;
 +import org.apache.cassandra.db.transform.MoreRows;
 +import org.apache.cassandra.db.transform.Transformation;
 +import org.apache.cassandra.exceptions.ReadTimeoutException;
 +import org.apache.cassandra.net.*;
 +import org.apache.cassandra.tracing.Tracing;
 +import org.apache.cassandra.utils.FBUtilities;
 +
 +public class DataResolver extends ResponseResolver
 +{
 +@VisibleForTesting
 +final List repairResults = Collections.synchronizedList(new ArrayList<>());
 +
 +public DataResolver(Keyspace keyspace, ReadCommand command, ConsistencyLevel consistency, int maxResponseCount)
 +{
 +super(keyspace, command, consistency, maxResponseCount);
 +}
 +
 +public PartitionIterator getData()
 +{
 +ReadResponse response = responses.iterator().next().payload;
 +return UnfilteredPartitionIterators.filter(response.makeIterator(command), command.nowInSec());
 +}
 +
 +public PartitionIterator resolve()
 +{
 +// We could get more responses while this method runs, which is ok (we're happy to ignore any response not here
 +// at the beginning of this method), so grab the response count once and use that throughout the method.
 +int count = responses.size();
 +List iters = new ArrayList<>(count);
 +InetAddress[] sources = new InetAddress[count];
 +for (int i = 0; i < count; i++)
 +{
 +MessageIn msg = responses.get(i);
 +iters.add(msg.payload.makeIterator(command));
 +sources[i] = msg.from;
 +}
 +
 +// Even though every response should honor the limit, we might have more than requested post-reconciliation,
 +// so ensure we're respecting the limit.
- DataLimits.Counter counter = command.limits().newCounter(command.nowInSec(), true);
++DataLimits.Counter counter = command.limits().newCounter(command.nowInSec(), true, command.selectsFullPartition());
 +return counter.applyTo(mergeWithShortReadProtection(iters, sources, counter));
 +}
 +
 +public void compareResponses()
 +{
 +// We need to fully consume the results to trigger read repairs if 
appropriate
 +try (PartitionIterator iterator = resolve())
 +{
 +PartitionIterators.consume(iterator);
 +}
 +}
 +
 +private PartitionIterator 
mergeWithShortReadProtection(List results, 
InetAddress[] sources, DataLimits.Counter resultCounter)
 +{
 +// If we have only one results, there is no read repair to do and we 
can't get short reads
 +if (results.size() == 1)
 +return UnfilteredPartitionIterators.filter(results.get(0), 
command.nowInSec());
 +
 +UnfilteredPartitionIterators.MergeListener listener = new 
RepairMergeListener(sources);
 +
 +// So-called "short reads" stems from nodes returning only a subset 
of the results they have for a partition due to the limit,
 +// but that 
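The limit-counting subtlety in resolve() above can be shown in isolation: each replica individually honors the limit, but after reconciliation (duplicate rows merged across replicas) the union of responses can still exceed it, so the counter is re-applied to the merged stream. A standalone sketch with plain collections (hypothetical names, not Cassandra's iterator types):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.TreeSet;

// Sketch: each replica honors the limit individually, but after the
// responses are reconciled (duplicates merged) the limit may still be
// exceeded, mirroring counter.applyTo(mergeWithShortReadProtection(...)).
public class LimitAfterMerge
{
    // Merge per-replica row keys, drop duplicates, then cap at 'limit'.
    public static List<Integer> resolve(List<List<Integer>> replicaRows, int limit)
    {
        TreeSet<Integer> merged = new TreeSet<>();    // reconciliation: sorted union of responses
        for (List<Integer> rows : replicaRows)
            merged.addAll(rows);

        List<Integer> result = new ArrayList<>();
        for (int row : merged)                         // re-apply the limit post-merge
        {
            if (result.size() == limit)
                break;
            result.add(row);
        }
        return result;
    }

    public static void main(String[] args)
    {
        // Two replicas each return 3 rows (their view of LIMIT 3), but the
        // union has 4 distinct rows, so the limit must be enforced again.
        List<List<Integer>> responses = List.of(List.of(1, 2, 3), List.of(2, 3, 4));
        List<Integer> rows = resolve(responses, 3);
        if (!rows.equals(List.of(1, 2, 3)))
            throw new AssertionError(rows);
    }
}
```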

[jira] [Updated] (CASSANDRA-13682) Include cassandra-lucene-index plugin description in doc.

2017-07-14 Thread Stefan Podkowinski (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13682?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stefan Podkowinski updated CASSANDRA-13682:
---
Component/s: (was: Secondary Indexes)
 Documentation and Website

> Include cassandra-lucene-index plugin description in doc.
> -
>
> Key: CASSANDRA-13682
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13682
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Documentation and Website
>Reporter: Eduardo Alonso de Blas
>Assignee: Eduardo Alonso de Blas
>Priority: Trivial
>  Labels: easy-fix
> Fix For: 4.0
>
>
> In cassandra user-list, in thread "UDf for sorting", Jeff Jirsa asks if 
> anyone could add this. 
>  






[jira] [Updated] (CASSANDRA-13682) Include cassandra-lucene-index plugin description in doc.

2017-07-14 Thread Stefan Podkowinski (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13682?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stefan Podkowinski updated CASSANDRA-13682:
---
   Resolution: Fixed
 Assignee: Eduardo Alonso de Blas
 Reviewer: Stefan Podkowinski
Fix Version/s: 4.0
   Status: Resolved  (was: Patch Available)

Merged as 965c774

> Include cassandra-lucene-index plugin description in doc.
> -
>
> Key: CASSANDRA-13682
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13682
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Documentation and Website
>Reporter: Eduardo Alonso de Blas
>Assignee: Eduardo Alonso de Blas
>Priority: Trivial
>  Labels: easy-fix
> Fix For: 4.0
>
>
> In cassandra user-list, in thread "UDf for sorting", Jeff Jirsa asks if 
> anyone could add this. 
>  






[jira] [Commented] (CASSANDRA-13682) Include cassandra-lucene-index plugin description in doc.

2017-07-14 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13682?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16087672#comment-16087672
 ] 

ASF GitHub Bot commented on CASSANDRA-13682:


Github user asfgit closed the pull request at:

https://github.com/apache/cassandra/pull/128


> Include cassandra-lucene-index plugin description in doc.
> -
>
> Key: CASSANDRA-13682
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13682
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Secondary Indexes
>Reporter: Eduardo Alonso de Blas
>Priority: Trivial
>  Labels: easy-fix
>
> In cassandra user-list, in thread "UDf for sorting", Jeff Jirsa asks if 
> anyone could add this. 
>  






cassandra git commit: Docs: Add Stratio Lucene Index to plugins

2017-07-14 Thread spod
Repository: cassandra
Updated Branches:
  refs/heads/trunk e0ce6ce77 -> 965c7743d


Docs: Add Stratio Lucene Index to plugins

Closes #128

patch by Eduardo Alonso de Blas; reviewed by Stefan Podkowinski for 
CASSANDRA-13682


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/965c7743
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/965c7743
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/965c7743

Branch: refs/heads/trunk
Commit: 965c7743d3582b909dd133f96a831f8e07271529
Parents: e0ce6ce
Author: edu 
Authored: Thu Jul 6 14:08:36 2017 +0200
Committer: Stefan Podkowinski 
Committed: Fri Jul 14 19:37:03 2017 +0200

--
 doc/source/plugins/index.rst | 7 +++
 1 file changed, 7 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/965c7743/doc/source/plugins/index.rst
--
diff --git a/doc/source/plugins/index.rst b/doc/source/plugins/index.rst
index 257a665..2642df2 100644
--- a/doc/source/plugins/index.rst
+++ b/doc/source/plugins/index.rst
@@ -26,3 +26,10 @@ The Coherent Accelerator Process Interface (CAPI) is a 
general term for the infr
 
 The official page for the `CAPI-Rowcache plugin 
`__ contains further details how to 
build/run/download the plugin.
 
+
+Stratio’s Cassandra Lucene Index
+--------------------------------
+
+Stratio’s Lucene index is a Cassandra secondary index implementation based 
on `Apache Lucene `__. It extends Cassandra’s 
functionality to provide near real-time distributed search engine capabilities 
such as with ElasticSearch or `Apache Solr `__, 
including full text search capabilities, free multivariable, geospatial and 
bitemporal search, relevance queries and sorting based on column value, 
relevance or distance. Each node indexes its own data, so high availability and 
scalability are guaranteed.
+
+The official Github repository `Cassandra Lucene Index 
`__ contains everything 
you need to build/run/configure the plugin.
\ No newline at end of file





cassandra git commit: ninja fix CHANGES.txt

2017-07-14 Thread blerer
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-3.11 7aa89a64e -> 7df942bc5


ninja fix CHANGES.txt


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/7df942bc
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/7df942bc
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/7df942bc

Branch: refs/heads/cassandra-3.11
Commit: 7df942bc5c2b50fccf39b06a96b654ca7840d80f
Parents: 7aa89a6
Author: Benjamin Lerer 
Authored: Fri Jul 14 19:17:59 2017 +0200
Committer: Benjamin Lerer 
Committed: Fri Jul 14 19:17:59 2017 +0200

--
 CHANGES.txt | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/7df942bc/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index edd66e2..87b058e 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -11,7 +11,8 @@ Merged from 3.0:
 Merged from 2.2:
  * Fix queries with LIMIT and filtering on clustering columns (CASSANDRA-11223)
  * Fix potential NPE when resume bootstrap fails (CASSANDRA-13272)
- * Fix toJSONString for the UDT, tuple and collection types (CASSANDRA-13592)  
* Fix nested Tuples/UDTs validation (CASSANDRA-13646)
+ * Fix toJSONString for the UDT, tuple and collection types (CASSANDRA-13592)
+ * Fix nested Tuples/UDTs validation (CASSANDRA-13646)
 
 3.11.0
  * Allow native function calls in CQLSSTableWriter (CASSANDRA-12606)





cassandra git commit: Purge tombstones created by expired cells

2017-07-14 Thread bdeggleston
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-3.0 88d2ac4f2 -> a033f5165


Purge tombstones created by expired cells

Patch by Blake Eggleston; reviewed by Sylvain Lebresne for CASSANDRA-13643


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/a033f516
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/a033f516
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/a033f516

Branch: refs/heads/cassandra-3.0
Commit: a033f51651e1a990adca795f92d683999c474151
Parents: 88d2ac4
Author: Blake Eggleston 
Authored: Tue Jun 27 08:41:17 2017 -0700
Committer: Blake Eggleston 
Committed: Fri Jul 14 10:02:07 2017 -0700

--
 CHANGES.txt |  1 +
 .../apache/cassandra/db/rows/BufferCell.java|  2 +-
 test/unit/org/apache/cassandra/db/CellTest.java | 78 +++-
 3 files changed, 79 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/a033f516/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 4a823c9..9962f4b 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 3.0.15
+ * Purge tombstones created by expired cells (CASSANDRA-13643)
  * Make concat work with iterators that have different subsets of columns 
(CASSANDRA-13482)
  * Set test.runners based on cores and memory size (CASSANDRA-13078)
  * Allow different NUMACTL_ARGS to be passed in (CASSANDRA-13557)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/a033f516/src/java/org/apache/cassandra/db/rows/BufferCell.java
--
diff --git a/src/java/org/apache/cassandra/db/rows/BufferCell.java 
b/src/java/org/apache/cassandra/db/rows/BufferCell.java
index db0ded5..e4ad7e6 100644
--- a/src/java/org/apache/cassandra/db/rows/BufferCell.java
+++ b/src/java/org/apache/cassandra/db/rows/BufferCell.java
@@ -176,7 +176,7 @@ public class BufferCell extends AbstractCell
                 // Note that as long as the expiring column and the tombstone put together live longer than GC grace seconds,
                 // we'll fulfil our responsibility to repair. See discussion at
                 // http://cassandra-user-incubator-apache-org.3065146.n2.nabble.com/repair-compaction-and-tombstone-rows-td7583481.html
-                return BufferCell.tombstone(column, timestamp, localDeletionTime - ttl, path);
+                return BufferCell.tombstone(column, timestamp, localDeletionTime - ttl, path).purge(purger, nowInSec);
             }
         }
         return this;
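The intent of the one-line change above: an expired TTL'd cell is rewritten as a tombstone whose local deletion time is backdated by the cell's TTL, and the fix additionally runs the purger on that tombstone so it can be dropped immediately when it already falls before gcBefore. A simplified model of that decision (hypothetical method, not Cassandra's Cell API):

```java
// Sketch of the CASSANDRA-13643 fix (toy model, not Cassandra's types):
// an expired TTL'd cell becomes a tombstone with deletion time
// (localExpirationTime - ttl); the fix purges that tombstone right away
// if it is already before gcBefore, instead of unconditionally keeping it.
public class ExpiredCellPurge
{
    // Returns the local deletion time of the surviving tombstone,
    // or -1 if the tombstone itself is purgeable (deletion time < gcBefore).
    public static int purgeExpired(int localExpirationTime, int ttl, int gcBefore)
    {
        int tombstoneDeletionTime = localExpirationTime - ttl; // when the cell was written
        return tombstoneDeletionTime < gcBefore ? -1 : tombstoneDeletionTime;
    }

    public static void main(String[] args)
    {
        // Cell written at t=100 with ttl=10, expired at t=110; with gcBefore=105
        // the converted tombstone (deletion time 100) is already purgeable.
        if (purgeExpired(110, 10, 105) != -1)
            throw new AssertionError();
        // With gcBefore=90 the tombstone survives with deletion time 100.
        if (purgeExpired(110, 10, 90) != 100)
            throw new AssertionError();
    }
}
```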

http://git-wip-us.apache.org/repos/asf/cassandra/blob/a033f516/test/unit/org/apache/cassandra/db/CellTest.java
--
diff --git a/test/unit/org/apache/cassandra/db/CellTest.java 
b/test/unit/org/apache/cassandra/db/CellTest.java
index 9072f98..cd6000f 100644
--- a/test/unit/org/apache/cassandra/db/CellTest.java
+++ b/test/unit/org/apache/cassandra/db/CellTest.java
@@ -174,6 +174,77 @@ public class CellTest
         Assert.assertEquals(-1, testExpiring("val", "b", 2, 1, null, "a", null, 2));
     }
 
+    class SimplePurger implements DeletionPurger
+    {
+        private final int gcBefore;
+
+        public SimplePurger(int gcBefore)
+        {
+            this.gcBefore = gcBefore;
+        }
+
+        public boolean shouldPurge(long timestamp, int localDeletionTime)
+        {
+            return localDeletionTime < gcBefore;
+        }
+    }
+
+    /**
+     * tombstones shouldn't be purged if localDeletionTime is greater than or equal to gcBefore
+     */
+    @Test
+    public void testNonPurgableTombstone()
+    {
+        int now = 100;
+        Cell cell = deleted(cfm, "val", now, now);
+        Cell purged = cell.purge(new SimplePurger(now - 1), now + 1);
+        Assert.assertEquals(cell, purged);
+    }
+
+    @Test
+    public void testPurgeableTombstone()
+    {
+        int now = 100;
+        Cell cell = deleted(cfm, "val", now, now);
+        Cell purged = cell.purge(new SimplePurger(now + 1), now + 1);
+        Assert.assertNull(purged);
+    }
+
+    @Test
+    public void testLiveExpiringCell()
+    {
+        int now = 100;
+        Cell cell = expiring(cfm, "val", "a", now, now + 10);
+        Cell purged = cell.purge(new SimplePurger(now), now + 1);
+        Assert.assertEquals(cell, purged);
+    }
+
+    /**
+     * cells that have expired should be converted to tombstones with a local deletion time
+     * of the cell's local expiration time, minus its ttl
+     */
+    @Test
+    public void testExpiredTombstoneConversion()
+    {
+        int now = 100;
+        Cell cell = expiring(cfm, "val", "a", now, 10, 

[jira] [Commented] (CASSANDRA-12971) Add CAS option to WRITE test to stress tool

2017-07-14 Thread Jay Zhuang (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12971?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16087576#comment-16087576
 ] 

Jay Zhuang commented on CASSANDRA-12971:


And a simpler version CASSANDRA-7960

> Add CAS option to WRITE test to stress tool
> ---
>
> Key: CASSANDRA-12971
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12971
> Project: Cassandra
>  Issue Type: New Feature
>  Components: Stress, Tools
>Reporter: Vladimir Yudovin
>Assignee: Vladimir Yudovin
> Attachments: stress-cass.patch
>
>
> If the -cas option is present, each UPDATE is performed with an always-true IF
> condition, thus the data is inserted anyway.
> It's already implemented; if it's needed I'll proceed with the patch.
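The "always-true IF condition" idea can be modeled locally: a CAS write applies only when a condition on the current value holds, and a condition that always holds makes every write land while still exercising the compare-and-set path. A sketch in plain Java (not the stress tool's code):

```java
import java.util.concurrent.atomic.AtomicReference;
import java.util.function.Predicate;

// Toy model of a conditional (CAS) write: the update applies only when
// 'condition' holds for the current value, mirroring UPDATE ... IF <cond>.
public class CasWriteSketch
{
    public static <T> boolean casWrite(AtomicReference<T> cell, Predicate<T> condition, T newValue)
    {
        T current = cell.get();
        if (!condition.test(current))
            return false;                     // [applied] = false
        return cell.compareAndSet(current, newValue);
    }

    // Demonstration: an always-true condition behaves like a plain write,
    // while a real condition can reject the update. Returns the final value.
    public static String demo()
    {
        AtomicReference<String> cell = new AtomicReference<>("a");
        casWrite(cell, v -> true, "b");       // always-true IF: data lands anyway
        casWrite(cell, "a"::equals, "c");     // rejected: current value is now "b"
        return cell.get();
    }

    public static void main(String[] args)
    {
        if (!demo().equals("b"))
            throw new AssertionError();
    }
}
```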






[jira] [Updated] (CASSANDRA-13688) Anticompaction race can leak sstables/txn

2017-07-14 Thread Ariel Weisberg (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13688?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ariel Weisberg updated CASSANDRA-13688:
---
Reviewer: Ariel Weisberg

> Anticompaction race can leak sstables/txn
> -
>
> Key: CASSANDRA-13688
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13688
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Blake Eggleston
>Assignee: Blake Eggleston
> Fix For: 4.0
>
>
> At the top of {{CompactionManager#performAntiCompaction}}, the parent repair 
> session is loaded; if the session can't be found, a RuntimeException is 
> thrown. This can happen if a participant is evicted after the IR prepare 
> message is received, but before the anticompaction starts. This exception is 
> thrown outside of the try/finally block that guards the sstable and lifecycle 
> transaction, causing them to leak, and preventing the sstables from ever 
> being removed from View.compacting.
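The leak pattern described here is the classic ordering bug: a validation that throws before control enters the try/finally owning the resources. A miniature reproduction (hypothetical resource counter, not Cassandra's LifecycleTransaction):

```java
// Simulate performAntiCompaction: acquire resources, then validate either
// before the try/finally (the bug) or inside it (the fix). Returns how many
// resources are still held after the failed run.
public class GuardOrdering
{
    static int openAfterFailedRun(boolean checkInsideTry)
    {
        int open = 1;                              // sstables/txn acquired
        try
        {
            if (!checkInsideTry)
                throw new RuntimeException("parent repair session not found");
            try
            {
                if (checkInsideTry)
                    throw new RuntimeException("parent repair session not found");
                // anticompaction work would happen here
            }
            finally
            {
                open--;                            // release only runs inside the guard
            }
        }
        catch (RuntimeException expected)
        {
            // participant was evicted before anticompaction started
        }
        return open;
    }

    public static void main(String[] args)
    {
        if (openAfterFailedRun(false) != 1)        // buggy ordering leaks the resource
            throw new AssertionError();
        if (openAfterFailedRun(true) != 0)         // guarded ordering releases it
            throw new AssertionError();
    }
}
```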






[jira] [Commented] (CASSANDRA-13679) Add option to customize badness_threshold in dynamic endpoint snitch

2017-07-14 Thread Simon Zhou (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13679?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16087527#comment-16087527
 ] 

Simon Zhou commented on CASSANDRA-13679:


Here are the patches. Not sure if we need one for 3.11.

|3.0.x |[patch | 
https://github.com/szhou1234/cassandra/commit/50cc71418d3fc75b1d8225eb1bded95ac1f1bdd7]|
|4.0 |[patch | 
https://github.com/szhou1234/cassandra/commit/a7144f8d50872dc4e5591db73ff770388d410403]|


> Add option to customize badness_threshold in dynamic endpoint snitch
> 
>
> Key: CASSANDRA-13679
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13679
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Simon Zhou
>Assignee: Simon Zhou
> Attachments: Screen Shot 2017-07-07 at 5.01.48 PM.png
>
>
> I'm working on tuning dynamic endpoint snitch and it looks like the default 
> value (0.1) for Config.dynamic_snitch_badness_threshold is too sensitive and 
> causes traffic imbalance among nodes, especially with my patch for 
> CASSANDRA-13577. So we should:
> 1. Revisit the default value.
> 2. Add an option to allow customizing badness_threshold during bootstrap.
> This ticket is to track #2. I attached a screenshot to show that, after 
> increasing badness_threshold from 0.1 to 1.0 by using patch from 
> CASSANDRA-12179, the traffic imbalance is gone.
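For context, badness_threshold controls when the dynamic snitch abandons the static replica ordering: roughly, only when the statically preferred replica's latency score is worse than another replica's by more than the threshold. A simplified illustration of that comparison (an assumption-laden sketch, not DynamicEndpointSnitch's actual code):

```java
// Toy model of the badness-threshold decision: re-sort replicas by measured
// score only when the statically preferred replica is more than
// (1 + threshold) times worse than some alternative.
public class BadnessThresholdSketch
{
    public static boolean shouldResort(double preferredScore, double[] otherScores, double threshold)
    {
        for (double other : otherScores)
            if (preferredScore > other * (1.0 + threshold))
                return true;                   // dynamic scores override static order
        return false;                          // stick with the static ordering
    }

    public static void main(String[] args)
    {
        // With the default threshold of 0.1, a 15% worse preferred replica triggers a re-sort...
        if (!shouldResort(1.15, new double[]{1.0}, 0.1))
            throw new AssertionError();
        // ...while raising the threshold to 1.0 (as in the ticket) keeps the static order.
        if (shouldResort(1.15, new double[]{1.0}, 1.0))
            throw new AssertionError();
    }
}
```

This is why a higher threshold reduces score-driven traffic shuffling between nodes.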






[8/8] cassandra git commit: Merge branch cassandra-3.11 into trunk

2017-07-14 Thread blerer
Merge branch cassandra-3.11 into trunk


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/e0ce6ce7
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/e0ce6ce7
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/e0ce6ce7

Branch: refs/heads/trunk
Commit: e0ce6ce77d38db1890c5f2bccba1f19fdfe256be
Parents: 465cfd5 7aa89a6
Author: Benjamin Lerer 
Authored: Fri Jul 14 17:21:19 2017 +0200
Committer: Benjamin Lerer 
Committed: Fri Jul 14 17:23:36 2017 +0200

--
 CHANGES.txt |   7 +-
 .../apache/cassandra/db/ColumnFamilyStore.java  |   3 +-
 src/java/org/apache/cassandra/db/DataRange.java |   5 +
 .../cassandra/db/PartitionRangeReadCommand.java |   6 +
 .../org/apache/cassandra/db/ReadCommand.java|   2 +-
 src/java/org/apache/cassandra/db/ReadQuery.java |  12 +
 .../db/SinglePartitionReadCommand.java  |  21 +-
 .../apache/cassandra/db/filter/DataLimits.java  |  73 +++--
 .../apache/cassandra/db/filter/RowFilter.java   |  15 +
 .../apache/cassandra/service/CacheService.java  |   2 +-
 .../apache/cassandra/service/DataResolver.java  |   4 +-
 .../apache/cassandra/service/StorageProxy.java  |   8 +-
 .../service/pager/AbstractQueryPager.java   |   2 +-
 .../service/pager/MultiPartitionPager.java  |   9 +-
 .../org/apache/cassandra/cql3/CQLTester.java|   8 +-
 .../validation/operations/SelectLimitTest.java  | 279 +++
 .../db/rows/UnfilteredRowIteratorsTest.java |  10 +-
 17 files changed, 412 insertions(+), 54 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/e0ce6ce7/CHANGES.txt
--
diff --cc CHANGES.txt
index 70aae21,edd66e2..44b8ce8
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -105,9 -9,9 +105,10 @@@ Merged from 3.0
   * Fix secondary index queries on COMPACT tables (CASSANDRA-13627)
   * Nodetool listsnapshots output is missing a newline, if there are no 
snapshots (CASSANDRA-13568)
  Merged from 2.2:
-   * Fix potential NPE when resume bootstrap fails (CASSANDRA-13272)
-   * Fix toJSONString for the UDT, tuple and collection types (CASSANDRA-13592)
-   * Fix nested Tuples/UDTs validation (CASSANDRA-13646)
+  * Fix queries with LIMIT and filtering on clustering columns 
(CASSANDRA-11223)
+  * Fix potential NPE when resume bootstrap fails (CASSANDRA-13272)
 - * Fix toJSONString for the UDT, tuple and collection types (CASSANDRA-13592) 
 * Fix nested Tuples/UDTs validation (CASSANDRA-13646)
++ * Fix toJSONString for the UDT, tuple and collection types (CASSANDRA-13592)
++ * Fix nested Tuples/UDTs validation (CASSANDRA-13646)
  
  3.11.0
   * Allow native function calls in CQLSSTableWriter (CASSANDRA-12606)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/e0ce6ce7/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/e0ce6ce7/src/java/org/apache/cassandra/db/DataRange.java
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/e0ce6ce7/src/java/org/apache/cassandra/db/PartitionRangeReadCommand.java
--
diff --cc src/java/org/apache/cassandra/db/PartitionRangeReadCommand.java
index bc80907,c3d6e05..1f54f16
--- a/src/java/org/apache/cassandra/db/PartitionRangeReadCommand.java
+++ b/src/java/org/apache/cassandra/db/PartitionRangeReadCommand.java
@@@ -327,10 -331,17 +327,16 @@@ public class PartitionRangeReadCommand 
  }
  
  @Override
+     public boolean selectsFullPartition()
+     {
+         return dataRange.selectsAllPartition() && !rowFilter().hasExpressionOnClusteringOrRegularColumns();
+     }
+ 
+ @Override
  public String toString()
  {
 -        return String.format("Read(%s.%s columns=%s rowfilter=%s limits=%s %s)",
 -                             metadata().ksName,
 -                             metadata().cfName,
 +        return String.format("Read(%s columns=%s rowfilter=%s limits=%s %s)",
 +                             metadata().toString(),
                               columnFilter(),
                               rowFilter(),
                               limits(),

http://git-wip-us.apache.org/repos/asf/cassandra/blob/e0ce6ce7/src/java/org/apache/cassandra/db/ReadCommand.java
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/e0ce6ce7/src/java/org/apache/cassandra/db/ReadQuery.java
--


[6/7] cassandra git commit: Merge branch cassandra-2.2 into cassandra-3.0

2017-07-14 Thread blerer
Merge branch cassandra-2.2 into cassandra-3.0


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/e2445cfb
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/e2445cfb
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/e2445cfb

Branch: refs/heads/cassandra-3.11
Commit: e2445cfb18f8b8c05acc0199df37560abfe936e6
Parents: 7de853b b08843d
Author: Benjamin Lerer 
Authored: Fri Jul 14 17:14:38 2017 +0200
Committer: Benjamin Lerer 
Committed: Fri Jul 14 17:15:48 2017 +0200

--
 CHANGES.txt |   1 +
 .../apache/cassandra/db/ColumnFamilyStore.java  |   3 +-
 src/java/org/apache/cassandra/db/DataRange.java |   5 +
 .../cassandra/db/PartitionRangeReadCommand.java |   7 +-
 .../org/apache/cassandra/db/ReadCommand.java|   2 +-
 src/java/org/apache/cassandra/db/ReadQuery.java |  12 +
 .../db/SinglePartitionReadCommand.java  |  21 +-
 .../apache/cassandra/db/filter/DataLimits.java  |  63 +++--
 .../apache/cassandra/db/filter/RowFilter.java   |  15 ++
 .../apache/cassandra/service/CacheService.java  |   2 +-
 .../apache/cassandra/service/DataResolver.java  |   4 +-
 .../apache/cassandra/service/StorageProxy.java  |   8 +-
 .../service/pager/AbstractQueryPager.java   |   2 +-
 .../service/pager/MultiPartitionPager.java  |   9 +-
 .../cassandra/service/pager/QueryPagers.java|   2 +-
 .../org/apache/cassandra/cql3/CQLTester.java|   8 +-
 .../validation/operations/SelectLimitTest.java  | 256 ++-
 .../db/rows/UnfilteredRowIteratorsTest.java |  10 +-
 18 files changed, 379 insertions(+), 51 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/e2445cfb/CHANGES.txt
--
diff --cc CHANGES.txt
index fffda7f,bda510f..c916452
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -1,65 -1,18 +1,66 @@@
 -2.2.11
 - * Fix queries with LIMIT and filtering on clustering columns 
(CASSANDRA-11223)
 - * Fix potential NPE when resume bootstrap fails (CASSANDRA-13272)
 - * Fix toJSONString for the UDT, tuple and collection types (CASSANDRA-13592)
 - * Fix nested Tuples/UDTs validation (CASSANDRA-13646)
 - * Remove unused max_value_size_in_mb config setting from yaml 
(CASSANDRA-13625
 +3.0.15
 + * Make concat work with iterators that have different subsets of columns 
(CASSANDRA-13482)
 + * Set test.runners based on cores and memory size (CASSANDRA-13078)
 + * Allow different NUMACTL_ARGS to be passed in (CASSANDRA-13557)
 + * Allow native function calls in CQLSSTableWriter (CASSANDRA-12606)
 + * Fix secondary index queries on COMPACT tables (CASSANDRA-13627)
 + * Nodetool listsnapshots output is missing a newline, if there are no 
snapshots (CASSANDRA-13568)
 + Merged from 2.2:
++  * Fix queries with LIMIT and filtering on clustering columns 
(CASSANDRA-11223)
 +  * Fix potential NPE when resume bootstrap fails (CASSANDRA-13272)
 +  * Fix toJSONString for the UDT, tuple and collection types (CASSANDRA-13592)
 +  * Fix nested Tuples/UDTs validation (CASSANDRA-13646)
  
 -
 -2.2.10
 +3.0.14
 + * Ensure int overflow doesn't occur when calculating large partition warning 
size (CASSANDRA-13172)
 + * Ensure consistent view of partition columns between coordinator and 
replica in ColumnFilter (CASSANDRA-13004)
 + * Failed unregistering mbean during drop keyspace (CASSANDRA-13346)
 + * nodetool scrub/cleanup/upgradesstables exit code is wrong (CASSANDRA-13542)
 + * Fix the reported number of sstable data files accessed per read 
(CASSANDRA-13120)
 + * Fix schema digest mismatch during rolling upgrades from versions before 
3.0.12 (CASSANDRA-13559)
 + * Upgrade JNA version to 4.4.0 (CASSANDRA-13072)
 + * Interned ColumnIdentifiers should use minimal ByteBuffers (CASSANDRA-13533)
 + * ReverseIndexedReader may drop rows during 2.1 to 3.0 upgrade 
(CASSANDRA-13525)
 + * Fix repair process violating start/end token limits for small ranges 
(CASSANDRA-13052)
 + * Add storage port options to sstableloader (CASSANDRA-13518)
 + * Properly handle quoted index names in cqlsh DESCRIBE output 
(CASSANDRA-12847)
 + * Avoid reading static row twice from old format sstables (CASSANDRA-13236)
 + * Fix NPE in StorageService.excise() (CASSANDRA-13163)
 + * Expire OutboundTcpConnection messages by a single Thread (CASSANDRA-13265)
 + * Fail repair if insufficient responses received (CASSANDRA-13397)
 + * Fix SSTableLoader fail when the loaded table contains dropped columns 
(CASSANDRA-13276)
 + * Avoid name clashes in CassandraIndexTest (CASSANDRA-13427)
 + * Handling partially written hint files (CASSANDRA-12728)
 + * Interrupt replaying hints on decommission (CASSANDRA-13308)
 + * Fix schema version calculation for rolling upgrades (CASSANDRA-13441)
 

[2/8] cassandra git commit: Merge branch cassandra-2.2 into cassandra-3.0

2017-07-14 Thread blerer
http://git-wip-us.apache.org/repos/asf/cassandra/blob/e2445cfb/test/unit/org/apache/cassandra/cql3/validation/operations/SelectLimitTest.java
--
diff --cc 
test/unit/org/apache/cassandra/cql3/validation/operations/SelectLimitTest.java
index aeb3d56,0ffb799..7e90c0a
--- 
a/test/unit/org/apache/cassandra/cql3/validation/operations/SelectLimitTest.java
+++ 
b/test/unit/org/apache/cassandra/cql3/validation/operations/SelectLimitTest.java
@@@ -26,14 -26,14 +26,15 @@@ import org.junit.Test
  import org.apache.cassandra.config.DatabaseDescriptor;
  import org.apache.cassandra.cql3.CQLTester;
  import org.apache.cassandra.dht.ByteOrderedPartitioner;
--import org.apache.cassandra.exceptions.InvalidRequestException;
++import org.apache.cassandra.service.StorageService;
  
  public class SelectLimitTest extends CQLTester
  {
  @BeforeClass
  public static void setUp()
  {
 -        DatabaseDescriptor.setPartitioner(ByteOrderedPartitioner.instance);
++        StorageService.instance.setPartitionerUnsafe(ByteOrderedPartitioner.instance);
 +        DatabaseDescriptor.setPartitionerUnsafe(ByteOrderedPartitioner.instance);
  }
  
  /**
@@@ -125,43 -125,217 +126,296 @@@
 row(1, 1),
 row(1, 2),
 row(1, 3));
 +        assertRows(execute("SELECT * FROM %s WHERE v > 1 AND v <= 3 LIMIT 6 ALLOW FILTERING"),
 +                   row(0, 2),
 +                   row(0, 3),
 +                   row(1, 2),
 +                   row(1, 3),
 +                   row(2, 2),
 +                   row(2, 3));
 +    }
  
 -        // strict bound (v > 1) over a range of partitions is not supported for compact storage if limit is provided
 -        assertInvalidThrow(InvalidRequestException.class, "SELECT * FROM %s WHERE v > 1 AND v <= 3 LIMIT 6 ALLOW FILTERING");
 +@Test
 +public void testLimitWithDeletedRowsAndStaticColumns() throws Throwable
 +{
 +createTable("CREATE TABLE %s (pk int, c int, v int, s int static, 
PRIMARY KEY (pk, c))");
 +
 +execute("INSERT INTO %s (pk, c, v, s) VALUES (1, -1, 1, 1)");
 +execute("INSERT INTO %s (pk, c, v, s) VALUES (2, -1, 1, 1)");
 +execute("INSERT INTO %s (pk, c, v, s) VALUES (3, -1, 1, 1)");
 +execute("INSERT INTO %s (pk, c, v, s) VALUES (4, -1, 1, 1)");
 +execute("INSERT INTO %s (pk, c, v, s) VALUES (5, -1, 1, 1)");
 +
 +assertRows(execute("SELECT * FROM %s"),
 +   row(1, -1, 1, 1),
 +   row(2, -1, 1, 1),
 +   row(3, -1, 1, 1),
 +   row(4, -1, 1, 1),
 +   row(5, -1, 1, 1));
 +
 +execute("DELETE FROM %s WHERE pk = 2");
 +
 +assertRows(execute("SELECT * FROM %s"),
 +   row(1, -1, 1, 1),
 +   row(3, -1, 1, 1),
 +   row(4, -1, 1, 1),
 +   row(5, -1, 1, 1));
 +
 +assertRows(execute("SELECT * FROM %s LIMIT 2"),
 +   row(1, -1, 1, 1),
 +   row(3, -1, 1, 1));
  }
+ 
+ @Test
+ public void testFilteringOnClusteringColumnsWithLimitAndStaticColumns() 
throws Throwable
+ {
+ // With only one clustering column
+ createTable("CREATE TABLE %s (a int, b int, s int static, c int, 
primary key (a, b))"
+   + " WITH caching = {'keys': 'ALL', 'rows_per_partition' : 'ALL'}");
+ 
+ for (int i = 0; i < 4; i++)
+ {
+ execute("INSERT INTO %s (a, s) VALUES (?, ?)", i, i);
+ for (int j = 0; j < 3; j++)
+ if (!((i == 0 || i == 3) && j == 1))
+ execute("INSERT INTO %s (a, b, c) VALUES (?, ?, ?)", 
i, j, i + j);
+ }
+ 
 -        for (boolean forceFlush : new boolean[]{false, true})
++        beforeAndAfterFlush(() ->
+         {
 -            if (forceFlush)
 -                flush();
 -
+ assertRows(execute("SELECT * FROM %s"),
+row(0, 0, 0, 0),
+row(0, 2, 0, 2),
+row(1, 0, 1, 1),
+row(1, 1, 1, 2),
+row(1, 2, 1, 3),
+row(2, 0, 2, 2),
+row(2, 1, 2, 3),
+row(2, 2, 2, 4),
+row(3, 0, 3, 3),
+row(3, 2, 3, 5));
+ 
+ assertRows(execute("SELECT * FROM %s WHERE b = 1 ALLOW 
FILTERING"),
+row(1, 1, 1, 2),
+row(2, 1, 2, 3));
+ 
+ // The problem was that the static row of the partition 0 used to 
be only filtered in SelectStatement and was
+ // by consequence counted as a row. In which case the query was 
returning one row less.
+ assertRows(execute("SELECT * FROM %s WHERE b = 1 LIMIT 2 ALLOW 
FILTERING"),
+row(1, 1, 1, 2),
+
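The beforeAndAfterFlush(() -> ...) helper introduced in this hunk replaces the manual forceFlush loop; its shape is simply "run the assertions, flush, run them again". A standalone sketch of such a helper (an assumed simplification, not CQLTester's implementation):

```java
// Sketch of a beforeAndAfterFlush-style test helper: the same assertions are
// checked against the pre-flush (memtable) state and again after the flush
// action, so both code paths are covered by one test body.
public class BeforeAndAfterFlush
{
    public static void beforeAndAfterFlush(Runnable assertions, Runnable flush)
    {
        assertions.run();   // against in-memory (memtable) contents
        flush.run();        // e.g. flush memtables to sstables
        assertions.run();   // same expectations must hold for on-disk data
    }

    // Demonstration helper: counts how many times the assertions run.
    public static int timesRun()
    {
        int[] runs = {0};
        beforeAndAfterFlush(() -> runs[0]++, () -> {});
        return runs[0];
    }

    public static void main(String[] args)
    {
        if (timesRun() != 2)
            throw new AssertionError();
    }
}
```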

[1/8] cassandra git commit: Fix queries with LIMIT and filtering on clustering columns

2017-07-14 Thread blerer
Repository: cassandra
Updated Branches:
  refs/heads/trunk 465cfd5be -> e0ce6ce77


Fix queries with LIMIT and filtering on clustering columns

patch by Benjamin Lerer; reviewed by Stefania Alborghetti for CASSANDRA-11223


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/b08843de
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/b08843de
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/b08843de

Branch: refs/heads/trunk
Commit: b08843de67b3c63fa9c0efe10bb9eda07c007f6c
Parents: 5b982d7
Author: Benjamin Lerer 
Authored: Fri Jul 14 17:11:15 2017 +0200
Committer: Benjamin Lerer 
Committed: Fri Jul 14 17:11:15 2017 +0200

--
 CHANGES.txt |   1 +
 .../org/apache/cassandra/db/ColumnFamily.java   |   2 +-
 .../cassandra/db/filter/ColumnCounter.java  |  21 +-
 .../cassandra/db/filter/NamesQueryFilter.java   |   2 +-
 .../cassandra/db/filter/SliceQueryFilter.java   |  17 +-
 .../validation/operations/SelectLimitTest.java  | 209 +++
 6 files changed, 238 insertions(+), 14 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/b08843de/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 122ba54..bda510f 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 2.2.11
+ * Fix queries with LIMIT and filtering on clustering columns (CASSANDRA-11223)
  * Fix potential NPE when resume bootstrap fails (CASSANDRA-13272)
  * Fix toJSONString for the UDT, tuple and collection types (CASSANDRA-13592)
  * Fix nested Tuples/UDTs validation (CASSANDRA-13646)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/b08843de/src/java/org/apache/cassandra/db/ColumnFamily.java
--
diff --git a/src/java/org/apache/cassandra/db/ColumnFamily.java 
b/src/java/org/apache/cassandra/db/ColumnFamily.java
index a7243a2..1532439 100644
--- a/src/java/org/apache/cassandra/db/ColumnFamily.java
+++ b/src/java/org/apache/cassandra/db/ColumnFamily.java
@@ -92,7 +92,7 @@ public abstract class ColumnFamily implements Iterable, 
IRowCacheEntry
 {
 ColumnCounter counter = getComparator().isDense()
   ? new ColumnCounter(now)
-  : new ColumnCounter.GroupByPrefix(now, 
getComparator(), metadata.clusteringColumns().size());
+  : new ColumnCounter.GroupByPrefix(now, 
getComparator(), metadata.clusteringColumns().size(), true);
 return counter.countAll(this).live();
 }
 

http://git-wip-us.apache.org/repos/asf/cassandra/blob/b08843de/src/java/org/apache/cassandra/db/filter/ColumnCounter.java
--
diff --git a/src/java/org/apache/cassandra/db/filter/ColumnCounter.java 
b/src/java/org/apache/cassandra/db/filter/ColumnCounter.java
index 594fde8..a00d588 100644
--- a/src/java/org/apache/cassandra/db/filter/ColumnCounter.java
+++ b/src/java/org/apache/cassandra/db/filter/ColumnCounter.java
@@ -90,6 +90,7 @@ public class ColumnCounter
 {
 protected final CellNameType type;
 protected final int toGroup;
+protected final boolean countPartitionsWithOnlyStaticData;
 protected CellName previous;
 
 /**
@@ -101,12 +102,15 @@ public class ColumnCounter
  * @param toGroup the number of composite components on which to group
  *column. If 0, all columns are grouped, otherwise we 
group
  *those for which the {@code toGroup} first component 
are equals.
+ * @param countPartitionsWithOnlyStaticData if {@code true} the 
partitions with only static data should be
+ * counted as 1 valid row.
  */
-public GroupByPrefix(long timestamp, CellNameType type, int toGroup)
+public GroupByPrefix(long timestamp, CellNameType type, int toGroup, 
boolean countPartitionsWithOnlyStaticData)
 {
 super(timestamp);
 this.type = type;
 this.toGroup = toGroup;
+this.countPartitionsWithOnlyStaticData = 
countPartitionsWithOnlyStaticData;
 
 assert toGroup == 0 || type != null;
 }
@@ -153,14 +157,16 @@ public class ColumnCounter
 // We want to count the static group as 1 (CQL) row only if 
it's the only
 // group in the partition. So, since we have already counted 
it at this point,
 // just don't count the 2nd group if there is one and the 
first one was static
-if (previous.isStatic())
+if (previous.isStatic() && countPartitionsWithOnlyStaticData)
   

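The counting rule this patch threads through ColumnCounter.GroupByPrefix can be illustrated with a standalone sketch (the names here, GroupCounter and countPartition, are invented for illustration and are not Cassandra's API): clustering rows always count, while a partition whose only live data is its static row counts as one CQL row only when countPartitionsWithOnlyStaticData is true.

```java
// Hedged sketch of the counting rule in this patch; not Cassandra's real classes.
final class GroupCounter {
    private final boolean countPartitionsWithOnlyStaticData;
    private int live;

    GroupCounter(boolean countPartitionsWithOnlyStaticData) {
        this.countPartitionsWithOnlyStaticData = countPartitionsWithOnlyStaticData;
    }

    // liveClusteringRows: live CQL rows in the partition; hasStaticRow: live static row present.
    void countPartition(int liveClusteringRows, boolean hasStaticRow) {
        if (liveClusteringRows > 0)
            live += liveClusteringRows;            // regular rows always count
        else if (hasStaticRow && countPartitionsWithOnlyStaticData)
            live += 1;                             // static-only partition counts as one row
    }

    int live() { return live; }
}

public class Demo {
    public static void main(String[] args) {
        GroupCounter full = new GroupCounter(true);      // e.g. an unfiltered SELECT
        full.countPartition(3, true);
        full.countPartition(0, true);                    // static-only partition
        GroupCounter filtered = new GroupCounter(false); // e.g. filtering on clustering columns
        filtered.countPartition(0, true);
        System.out.println(full.live() + " " + filtered.live()); // prints "4 0"
    }
}
```

Under this rule a plain SELECT still sees static-only partitions as rows, while a query filtering on clustering columns (the CASSANDRA-11223 case) passes false so such partitions no longer consume the LIMIT.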
[3/8] cassandra git commit: Merge branch cassandra-2.2 into cassandra-3.0

2017-07-14 Thread blerer
http://git-wip-us.apache.org/repos/asf/cassandra/blob/e2445cfb/src/java/org/apache/cassandra/service/DataResolver.java
--
diff --cc src/java/org/apache/cassandra/service/DataResolver.java
index 60cfbba,000..c96a893
mode 100644,00..100644
--- a/src/java/org/apache/cassandra/service/DataResolver.java
+++ b/src/java/org/apache/cassandra/service/DataResolver.java
@@@ -1,498 -1,0 +1,498 @@@
 +/*
 + * Licensed to the Apache Software Foundation (ASF) under one
 + * or more contributor license agreements.  See the NOTICE file
 + * distributed with this work for additional information
 + * regarding copyright ownership.  The ASF licenses this file
 + * to you under the Apache License, Version 2.0 (the
 + * "License"); you may not use this file except in compliance
 + * with the License.  You may obtain a copy of the License at
 + *
 + * http://www.apache.org/licenses/LICENSE-2.0
 + *
 + * Unless required by applicable law or agreed to in writing, software
 + * distributed under the License is distributed on an "AS IS" BASIS,
 + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 + * See the License for the specific language governing permissions and
 + * limitations under the License.
 + */
 +package org.apache.cassandra.service;
 +
 +import java.net.InetAddress;
 +import java.util.*;
 +import java.util.concurrent.TimeoutException;
 +
 +import com.google.common.annotations.VisibleForTesting;
 +
 +import org.apache.cassandra.concurrent.Stage;
 +import org.apache.cassandra.concurrent.StageManager;
 +import org.apache.cassandra.config.CFMetaData;
 +import org.apache.cassandra.config.ColumnDefinition;
 +import org.apache.cassandra.config.DatabaseDescriptor;
 +import org.apache.cassandra.db.*;
 +import org.apache.cassandra.db.filter.ClusteringIndexFilter;
 +import org.apache.cassandra.db.filter.DataLimits;
 +import org.apache.cassandra.db.partitions.*;
 +import org.apache.cassandra.db.rows.*;
 +import org.apache.cassandra.db.transform.MoreRows;
 +import org.apache.cassandra.db.transform.Transformation;
 +import org.apache.cassandra.exceptions.ReadTimeoutException;
 +import org.apache.cassandra.net.*;
 +import org.apache.cassandra.tracing.Tracing;
 +import org.apache.cassandra.utils.FBUtilities;
 +
 +public class DataResolver extends ResponseResolver
 +{
 +@VisibleForTesting
 +final List repairResults = Collections.synchronizedList(new ArrayList<>());
 +
 +public DataResolver(Keyspace keyspace, ReadCommand command, ConsistencyLevel consistency, int maxResponseCount)
 +{
 +super(keyspace, command, consistency, maxResponseCount);
 +}
 +
 +public PartitionIterator getData()
 +{
 +ReadResponse response = responses.iterator().next().payload;
 +return UnfilteredPartitionIterators.filter(response.makeIterator(command), command.nowInSec());
 +}
 +
 +public PartitionIterator resolve()
 +{
 +// We could get more responses while this method runs, which is ok (we're happy to ignore any response not here
 +// at the beginning of this method), so grab the response count once and use that through the method.
 +int count = responses.size();
 +List<UnfilteredPartitionIterator> iters = new ArrayList<>(count);
 +InetAddress[] sources = new InetAddress[count];
 +for (int i = 0; i < count; i++)
 +{
 +MessageIn<ReadResponse> msg = responses.get(i);
 +iters.add(msg.payload.makeIterator(command));
 +sources[i] = msg.from;
 +}
 +
 +// Even though every responses should honor the limit, we might have more than requested post reconciliation,
 +// so ensure we're respecting the limit.
- DataLimits.Counter counter = command.limits().newCounter(command.nowInSec(), true);
++DataLimits.Counter counter = command.limits().newCounter(command.nowInSec(), true, command.selectsFullPartition());
 +return counter.applyTo(mergeWithShortReadProtection(iters, sources, counter));
 +}
 +
 +public void compareResponses()
 +{
 +// We need to fully consume the results to trigger read repairs if appropriate
 +try (PartitionIterator iterator = resolve())
 +{
 +PartitionIterators.consume(iterator);
 +}
 +}
 +
 +private PartitionIterator mergeWithShortReadProtection(List<UnfilteredPartitionIterator> results, InetAddress[] sources, DataLimits.Counter resultCounter)
 +{
 +// If we have only one results, there is no read repair to do and we can't get short reads
 +if (results.size() == 1)
 +return UnfilteredPartitionIterators.filter(results.get(0), command.nowInSec());
 +
 +UnfilteredPartitionIterators.MergeListener listener = new RepairMergeListener(sources);
 +
 +// So-called "short reads" stems from nodes returning only a subset of the results they have for a partition due to the limit,
 +// but that 

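The truncated comment above is describing why DataResolver re-applies a fresh counter after reconciliation: every replica honors the limit individually, but the merged, de-duplicated union of their responses can still exceed it. A self-contained sketch of that post-merge re-limiting (integers stand in for rows, and the reconciliation is reduced to distinct-and-sort; names are illustrative, not Cassandra's API):

```java
import java.util.*;
import java.util.stream.*;

// Hedged sketch: each "replica response" already respects the row limit,
// yet their reconciled union exceeds it, so the limit is applied again.
public class ReconcileDemo {
    public static void main(String[] args) {
        int limit = 3;
        // Two replica responses, each individually truncated to the limit.
        List<Integer> replicaA = List.of(1, 2, 4);
        List<Integer> replicaB = List.of(1, 3, 5);
        // Reconciliation: merge and de-duplicate (newest-wins resolution elided).
        List<Integer> merged = Stream.concat(replicaA.stream(), replicaB.stream())
                                     .distinct().sorted().collect(Collectors.toList());
        // merged holds 5 rows, more than requested, so re-apply the limit.
        List<Integer> result = merged.stream().limit(limit).collect(Collectors.toList());
        System.out.println(result); // prints "[1, 2, 3]"
    }
}
```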
[5/8] cassandra git commit: Merge branch cassandra-2.2 into cassandra-3.0

2017-07-14 Thread blerer
http://git-wip-us.apache.org/repos/asf/cassandra/blob/e2445cfb/src/java/org/apache/cassandra/db/SinglePartitionReadCommand.java
--
diff --cc src/java/org/apache/cassandra/db/SinglePartitionReadCommand.java
index b4211bb,000..319eeb4
mode 100644,00..100644
--- a/src/java/org/apache/cassandra/db/SinglePartitionReadCommand.java
+++ b/src/java/org/apache/cassandra/db/SinglePartitionReadCommand.java
@@@ -1,1089 -1,0 +1,1106 @@@
 +/*
 + * Licensed to the Apache Software Foundation (ASF) under one
 + * or more contributor license agreements.  See the NOTICE file
 + * distributed with this work for additional information
 + * regarding copyright ownership.  The ASF licenses this file
 + * to you under the Apache License, Version 2.0 (the
 + * "License"); you may not use this file except in compliance
 + * with the License.  You may obtain a copy of the License at
 + *
 + * http://www.apache.org/licenses/LICENSE-2.0
 + *
 + * Unless required by applicable law or agreed to in writing, software
 + * distributed under the License is distributed on an "AS IS" BASIS,
 + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 + * See the License for the specific language governing permissions and
 + * limitations under the License.
 + */
 +package org.apache.cassandra.db;
 +
 +import java.io.IOException;
 +import java.nio.ByteBuffer;
 +import java.util.*;
 +
 +import com.google.common.collect.Iterables;
 +import com.google.common.collect.Sets;
 +
 +import org.apache.cassandra.cache.IRowCacheEntry;
 +import org.apache.cassandra.cache.RowCacheKey;
 +import org.apache.cassandra.cache.RowCacheSentinel;
 +import org.apache.cassandra.concurrent.Stage;
 +import org.apache.cassandra.concurrent.StageManager;
 +import org.apache.cassandra.config.CFMetaData;
 +import org.apache.cassandra.config.ColumnDefinition;
 +import org.apache.cassandra.config.DatabaseDescriptor;
 +import org.apache.cassandra.db.lifecycle.*;
 +import org.apache.cassandra.db.filter.*;
 +import org.apache.cassandra.db.partitions.*;
 +import org.apache.cassandra.db.rows.*;
 +import org.apache.cassandra.exceptions.RequestExecutionException;
 +import org.apache.cassandra.io.sstable.format.SSTableReader;
 +import org.apache.cassandra.io.sstable.format.SSTableReadsListener;
 +import org.apache.cassandra.io.util.DataInputPlus;
 +import org.apache.cassandra.io.util.DataOutputPlus;
 +import org.apache.cassandra.metrics.TableMetrics;
 +import org.apache.cassandra.net.MessageOut;
 +import org.apache.cassandra.net.MessagingService;
 +import org.apache.cassandra.schema.IndexMetadata;
 +import org.apache.cassandra.service.CacheService;
 +import org.apache.cassandra.service.ClientState;
 +import org.apache.cassandra.service.StorageProxy;
 +import org.apache.cassandra.service.pager.*;
 +import org.apache.cassandra.thrift.ThriftResultsMerger;
 +import org.apache.cassandra.tracing.Tracing;
 +import org.apache.cassandra.utils.FBUtilities;
 +import org.apache.cassandra.utils.SearchIterator;
 +import org.apache.cassandra.utils.btree.BTreeSet;
 +import org.apache.cassandra.utils.concurrent.OpOrder;
 +import org.apache.cassandra.utils.memory.HeapAllocator;
 +
 +
 +/**
 + * A read command that selects a (part of a) single partition.
 + */
 +public class SinglePartitionReadCommand extends ReadCommand
 +{
 +protected static final SelectionDeserializer selectionDeserializer = new Deserializer();
 +
 +private final DecoratedKey partitionKey;
 +private final ClusteringIndexFilter clusteringIndexFilter;
 +
 +private int oldestUnrepairedTombstone = Integer.MAX_VALUE;
 +
 +public SinglePartitionReadCommand(boolean isDigest,
 +  int digestVersion,
 +  boolean isForThrift,
 +  CFMetaData metadata,
 +  int nowInSec,
 +  ColumnFilter columnFilter,
 +  RowFilter rowFilter,
 +  DataLimits limits,
 +  DecoratedKey partitionKey,
 +  ClusteringIndexFilter clusteringIndexFilter)
 +{
 +super(Kind.SINGLE_PARTITION, isDigest, digestVersion, isForThrift, metadata, nowInSec, columnFilter, rowFilter, limits);
 +assert partitionKey.getPartitioner() == metadata.partitioner;
 +this.partitionKey = partitionKey;
 +this.clusteringIndexFilter = clusteringIndexFilter;
 +}
 +
 +/**
 + * Creates a new read command on a single partition.
 + *
 + * @param metadata the table to query.
 + * @param nowInSec the time in seconds to use are "now" for this query.
 + * @param columnFilter the column filter to use for the query.
 + * @param rowFilter the row filter to use for the query.
 + * @param limits the limits to 

[2/7] cassandra git commit: Merge branch cassandra-2.2 into cassandra-3.0

2017-07-14 Thread blerer
http://git-wip-us.apache.org/repos/asf/cassandra/blob/e2445cfb/test/unit/org/apache/cassandra/cql3/validation/operations/SelectLimitTest.java
--
diff --cc 
test/unit/org/apache/cassandra/cql3/validation/operations/SelectLimitTest.java
index aeb3d56,0ffb799..7e90c0a
--- 
a/test/unit/org/apache/cassandra/cql3/validation/operations/SelectLimitTest.java
+++ 
b/test/unit/org/apache/cassandra/cql3/validation/operations/SelectLimitTest.java
@@@ -26,14 -26,14 +26,15 @@@ import org.junit.Test
  import org.apache.cassandra.config.DatabaseDescriptor;
  import org.apache.cassandra.cql3.CQLTester;
  import org.apache.cassandra.dht.ByteOrderedPartitioner;
--import org.apache.cassandra.exceptions.InvalidRequestException;
++import org.apache.cassandra.service.StorageService;
  
  public class SelectLimitTest extends CQLTester
  {
  @BeforeClass
  public static void setUp()
  {
 -DatabaseDescriptor.setPartitioner(ByteOrderedPartitioner.instance);
++StorageService.instance.setPartitionerUnsafe(ByteOrderedPartitioner.instance);
 +DatabaseDescriptor.setPartitionerUnsafe(ByteOrderedPartitioner.instance);
  }
  
  /**
@@@ -125,43 -125,217 +126,296 @@@
 row(1, 1),
 row(1, 2),
 row(1, 3));
 +assertRows(execute("SELECT * FROM %s WHERE v > 1 AND v <= 3 LIMIT 6 ALLOW FILTERING"),
 +   row(0, 2),
 +   row(0, 3),
 +   row(1, 2),
 +   row(1, 3),
 +   row(2, 2),
 +   row(2, 3));
 +}
  
 -// strict bound (v > 1) over a range of partitions is not supported for compact storage if limit is provided
 -assertInvalidThrow(InvalidRequestException.class, "SELECT * FROM %s WHERE v > 1 AND v <= 3 LIMIT 6 ALLOW FILTERING");
 +@Test
 +public void testLimitWithDeletedRowsAndStaticColumns() throws Throwable
 +{
 +createTable("CREATE TABLE %s (pk int, c int, v int, s int static, PRIMARY KEY (pk, c))");
 +
 +execute("INSERT INTO %s (pk, c, v, s) VALUES (1, -1, 1, 1)");
 +execute("INSERT INTO %s (pk, c, v, s) VALUES (2, -1, 1, 1)");
 +execute("INSERT INTO %s (pk, c, v, s) VALUES (3, -1, 1, 1)");
 +execute("INSERT INTO %s (pk, c, v, s) VALUES (4, -1, 1, 1)");
 +execute("INSERT INTO %s (pk, c, v, s) VALUES (5, -1, 1, 1)");
 +
 +assertRows(execute("SELECT * FROM %s"),
 +   row(1, -1, 1, 1),
 +   row(2, -1, 1, 1),
 +   row(3, -1, 1, 1),
 +   row(4, -1, 1, 1),
 +   row(5, -1, 1, 1));
 +
 +execute("DELETE FROM %s WHERE pk = 2");
 +
 +assertRows(execute("SELECT * FROM %s"),
 +   row(1, -1, 1, 1),
 +   row(3, -1, 1, 1),
 +   row(4, -1, 1, 1),
 +   row(5, -1, 1, 1));
 +
 +assertRows(execute("SELECT * FROM %s LIMIT 2"),
 +   row(1, -1, 1, 1),
 +   row(3, -1, 1, 1));
  }
+ 
+ @Test
+ public void testFilteringOnClusteringColumnsWithLimitAndStaticColumns() throws Throwable
+ {
+ // With only one clustering column
+ createTable("CREATE TABLE %s (a int, b int, s int static, c int, primary key (a, b))"
+   + " WITH caching = {'keys': 'ALL', 'rows_per_partition' : 'ALL'}");
+ 
+ for (int i = 0; i < 4; i++)
+ {
+ execute("INSERT INTO %s (a, s) VALUES (?, ?)", i, i);
+ for (int j = 0; j < 3; j++)
+ if (!((i == 0 || i == 3) && j == 1))
+ execute("INSERT INTO %s (a, b, c) VALUES (?, ?, ?)", i, j, i + j);
+ }
+ 
 -for (boolean forceFlush : new boolean[]{false, true})
++beforeAndAfterFlush(() ->
+ {
 -if (forceFlush)
 -flush();
 -
+ assertRows(execute("SELECT * FROM %s"),
+row(0, 0, 0, 0),
+row(0, 2, 0, 2),
+row(1, 0, 1, 1),
+row(1, 1, 1, 2),
+row(1, 2, 1, 3),
+row(2, 0, 2, 2),
+row(2, 1, 2, 3),
+row(2, 2, 2, 4),
+row(3, 0, 3, 3),
+row(3, 2, 3, 5));
+ 
+ assertRows(execute("SELECT * FROM %s WHERE b = 1 ALLOW FILTERING"),
+row(1, 1, 1, 2),
+row(2, 1, 2, 3));
+ 
+ // The problem was that the static row of the partition 0 used to be only filtered in SelectStatement and was
+ // by consequence counted as a row. In which case the query was returning one row less.
+ assertRows(execute("SELECT * FROM %s WHERE b = 1 LIMIT 2 ALLOW FILTERING"),
+row(1, 1, 1, 2),
+

[7/8] cassandra git commit: Merge branch cassandra-3.0 into cassandra-3.11

2017-07-14 Thread blerer
Merge branch cassandra-3.0 into cassandra-3.11


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/7aa89a64
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/7aa89a64
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/7aa89a64

Branch: refs/heads/trunk
Commit: 7aa89a64e09c57061418c1d83c03ae7cfd0cd745
Parents: bd89f56 e2445cf
Author: Benjamin Lerer 
Authored: Fri Jul 14 17:18:19 2017 +0200
Committer: Benjamin Lerer 
Committed: Fri Jul 14 17:19:38 2017 +0200

--
 CHANGES.txt |   6 +-
 .../apache/cassandra/db/ColumnFamilyStore.java  |   3 +-
 src/java/org/apache/cassandra/db/DataRange.java |   5 +
 .../cassandra/db/PartitionRangeReadCommand.java |   6 +
 .../org/apache/cassandra/db/ReadCommand.java|   2 +-
 src/java/org/apache/cassandra/db/ReadQuery.java |  12 +
 .../db/SinglePartitionReadCommand.java  |  22 +-
 .../apache/cassandra/db/filter/DataLimits.java  |  82 +++---
 .../apache/cassandra/db/filter/RowFilter.java   |  15 +
 .../apache/cassandra/service/CacheService.java  |   2 +-
 .../apache/cassandra/service/DataResolver.java  |   4 +-
 .../apache/cassandra/service/StorageProxy.java  |   8 +-
 .../service/pager/AbstractQueryPager.java   |   2 +-
 .../service/pager/MultiPartitionPager.java  |   9 +-
 .../cassandra/service/pager/QueryPagers.java|   2 +-
 .../org/apache/cassandra/cql3/CQLTester.java|   8 +-
 .../validation/operations/SelectLimitTest.java  | 279 +++
 .../db/rows/UnfilteredRowIteratorsTest.java |  10 +-
 18 files changed, 418 insertions(+), 59 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/7aa89a64/CHANGES.txt
--
diff --cc CHANGES.txt
index e7ad6fb,c916452..edd66e2
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -6,53 -2,16 +6,53 @@@ Merged from 3.0
  * Make concat work with iterators that have different subsets of columns (CASSANDRA-13482)
  * Set test.runners based on cores and memory size (CASSANDRA-13078)
  * Allow different NUMACTL_ARGS to be passed in (CASSANDRA-13557)
 - * Allow native function calls in CQLSSTableWriter (CASSANDRA-12606)
  * Fix secondary index queries on COMPACT tables (CASSANDRA-13627)
  * Nodetool listsnapshots output is missing a newline, if there are no snapshots (CASSANDRA-13568)
 - Merged from 2.2:
 -  * Fix queries with LIMIT and filtering on clustering columns (CASSANDRA-11223)
 -  * Fix potential NPE when resume bootstrap fails (CASSANDRA-13272)
 -  * Fix toJSONString for the UDT, tuple and collection types (CASSANDRA-13592)
 -  * Fix nested Tuples/UDTs validation (CASSANDRA-13646)
 +Merged from 2.2:
-   * Fix potential NPE when resume bootstrap fails (CASSANDRA-13272)
-   * Fix toJSONString for the UDT, tuple and collection types (CASSANDRA-13592)
-   * Fix nested Tuples/UDTs validation (CASSANDRA-13646)
++ * Fix queries with LIMIT and filtering on clustering columns (CASSANDRA-11223)
++ * Fix potential NPE when resume bootstrap fails (CASSANDRA-13272)
++ * Fix toJSONString for the UDT, tuple and collection types (CASSANDRA-13592)
++ * Fix nested Tuples/UDTs validation (CASSANDRA-13646)
  
 -3.0.14
 +3.11.0
 + * Allow native function calls in CQLSSTableWriter (CASSANDRA-12606)
 + * Replace string comparison with regex/number checks in MessagingService 
test (CASSANDRA-13216)
 + * Fix formatting of duration columns in CQLSH (CASSANDRA-13549)
 + * Fix the problem with duplicated rows when using paging with SASI 
(CASSANDRA-13302)
 + * Allow CONTAINS statements filtering on the partition key and it’s parts 
(CASSANDRA-13275)
 + * Fall back to even ranges calculation in clusters with vnodes when tokens 
are distributed unevenly (CASSANDRA-13229)
 + * Fix duration type validation to prevent overflow (CASSANDRA-13218)
 + * Forbid unsupported creation of SASI indexes over partition key columns 
(CASSANDRA-13228)
 + * Reject multiple values for a key in CQL grammar. (CASSANDRA-13369)
 + * UDA fails without input rows (CASSANDRA-13399)
 + * Fix compaction-stress by using daemonInitialization (CASSANDRA-13188)
 + * V5 protocol flags decoding broken (CASSANDRA-13443)
 + * Use write lock not read lock for removing sstables from compaction 
strategies. (CASSANDRA-13422)
 + * Use corePoolSize equal to maxPoolSize in JMXEnabledThreadPoolExecutors 
(CASSANDRA-13329)
 + * Avoid rebuilding SASI indexes containing no values (CASSANDRA-12962)
 + * Add charset to Analyser input stream (CASSANDRA-13151)
 + * Fix testLimitSSTables flake caused by concurrent flush (CASSANDRA-12820)
 + * cdc column addition strikes again (CASSANDRA-13382)
 + * Fix static column indexes (CASSANDRA-13277)
 + * DataOutputBuffer.asNewBuffer broken 

[6/8] cassandra git commit: Merge branch cassandra-2.2 into cassandra-3.0

2017-07-14 Thread blerer
Merge branch cassandra-2.2 into cassandra-3.0


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/e2445cfb
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/e2445cfb
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/e2445cfb

Branch: refs/heads/trunk
Commit: e2445cfb18f8b8c05acc0199df37560abfe936e6
Parents: 7de853b b08843d
Author: Benjamin Lerer 
Authored: Fri Jul 14 17:14:38 2017 +0200
Committer: Benjamin Lerer 
Committed: Fri Jul 14 17:15:48 2017 +0200

--
 CHANGES.txt |   1 +
 .../apache/cassandra/db/ColumnFamilyStore.java  |   3 +-
 src/java/org/apache/cassandra/db/DataRange.java |   5 +
 .../cassandra/db/PartitionRangeReadCommand.java |   7 +-
 .../org/apache/cassandra/db/ReadCommand.java|   2 +-
 src/java/org/apache/cassandra/db/ReadQuery.java |  12 +
 .../db/SinglePartitionReadCommand.java  |  21 +-
 .../apache/cassandra/db/filter/DataLimits.java  |  63 +++--
 .../apache/cassandra/db/filter/RowFilter.java   |  15 ++
 .../apache/cassandra/service/CacheService.java  |   2 +-
 .../apache/cassandra/service/DataResolver.java  |   4 +-
 .../apache/cassandra/service/StorageProxy.java  |   8 +-
 .../service/pager/AbstractQueryPager.java   |   2 +-
 .../service/pager/MultiPartitionPager.java  |   9 +-
 .../cassandra/service/pager/QueryPagers.java|   2 +-
 .../org/apache/cassandra/cql3/CQLTester.java|   8 +-
 .../validation/operations/SelectLimitTest.java  | 256 ++-
 .../db/rows/UnfilteredRowIteratorsTest.java |  10 +-
 18 files changed, 379 insertions(+), 51 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/e2445cfb/CHANGES.txt
--
diff --cc CHANGES.txt
index fffda7f,bda510f..c916452
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -1,65 -1,18 +1,66 @@@
 -2.2.11
 - * Fix queries with LIMIT and filtering on clustering columns (CASSANDRA-11223)
 - * Fix potential NPE when resume bootstrap fails (CASSANDRA-13272)
 - * Fix toJSONString for the UDT, tuple and collection types (CASSANDRA-13592)
 - * Fix nested Tuples/UDTs validation (CASSANDRA-13646)
 - * Remove unused max_value_size_in_mb config setting from yaml 
(CASSANDRA-13625
 +3.0.15
 + * Make concat work with iterators that have different subsets of columns 
(CASSANDRA-13482)
 + * Set test.runners based on cores and memory size (CASSANDRA-13078)
 + * Allow different NUMACTL_ARGS to be passed in (CASSANDRA-13557)
 + * Allow native function calls in CQLSSTableWriter (CASSANDRA-12606)
 + * Fix secondary index queries on COMPACT tables (CASSANDRA-13627)
 + * Nodetool listsnapshots output is missing a newline, if there are no 
snapshots (CASSANDRA-13568)
 + Merged from 2.2:
++  * Fix queries with LIMIT and filtering on clustering columns (CASSANDRA-11223)
 +  * Fix potential NPE when resume bootstrap fails (CASSANDRA-13272)
 +  * Fix toJSONString for the UDT, tuple and collection types (CASSANDRA-13592)
 +  * Fix nested Tuples/UDTs validation (CASSANDRA-13646)
  
 -
 -2.2.10
 +3.0.14
 + * Ensure int overflow doesn't occur when calculating large partition warning 
size (CASSANDRA-13172)
 + * Ensure consistent view of partition columns between coordinator and 
replica in ColumnFilter (CASSANDRA-13004)
 + * Failed unregistering mbean during drop keyspace (CASSANDRA-13346)
 + * nodetool scrub/cleanup/upgradesstables exit code is wrong (CASSANDRA-13542)
 + * Fix the reported number of sstable data files accessed per read 
(CASSANDRA-13120)
 + * Fix schema digest mismatch during rolling upgrades from versions before 
3.0.12 (CASSANDRA-13559)
 + * Upgrade JNA version to 4.4.0 (CASSANDRA-13072)
 + * Interned ColumnIdentifiers should use minimal ByteBuffers (CASSANDRA-13533)
 + * ReverseIndexedReader may drop rows during 2.1 to 3.0 upgrade 
(CASSANDRA-13525)
 + * Fix repair process violating start/end token limits for small ranges 
(CASSANDRA-13052)
 + * Add storage port options to sstableloader (CASSANDRA-13518)
 + * Properly handle quoted index names in cqlsh DESCRIBE output 
(CASSANDRA-12847)
 + * Avoid reading static row twice from old format sstables (CASSANDRA-13236)
 + * Fix NPE in StorageService.excise() (CASSANDRA-13163)
 + * Expire OutboundTcpConnection messages by a single Thread (CASSANDRA-13265)
 + * Fail repair if insufficient responses received (CASSANDRA-13397)
 + * Fix SSTableLoader fail when the loaded table contains dropped columns 
(CASSANDRA-13276)
 + * Avoid name clashes in CassandraIndexTest (CASSANDRA-13427)
 + * Handling partially written hint files (CASSANDRA-12728)
 + * Interrupt replaying hints on decommission (CASSANDRA-13308)
 + * Fix schema version calculation for rolling upgrades (CASSANDRA-13441)
 +Merged 

[4/8] cassandra git commit: Merge branch cassandra-2.2 into cassandra-3.0

2017-07-14 Thread blerer
http://git-wip-us.apache.org/repos/asf/cassandra/blob/e2445cfb/src/java/org/apache/cassandra/db/filter/DataLimits.java
--
diff --cc src/java/org/apache/cassandra/db/filter/DataLimits.java
index 94f43dc,000..48ec06a
mode 100644,00..100644
--- a/src/java/org/apache/cassandra/db/filter/DataLimits.java
+++ b/src/java/org/apache/cassandra/db/filter/DataLimits.java
@@@ -1,814 -1,0 +1,827 @@@
 +/*
 + * Licensed to the Apache Software Foundation (ASF) under one
 + * or more contributor license agreements.  See the NOTICE file
 + * distributed with this work for additional information
 + * regarding copyright ownership.  The ASF licenses this file
 + * to you under the Apache License, Version 2.0 (the
 + * "License"); you may not use this file except in compliance
 + * with the License.  You may obtain a copy of the License at
 + *
 + * http://www.apache.org/licenses/LICENSE-2.0
 + *
 + * Unless required by applicable law or agreed to in writing, software
 + * distributed under the License is distributed on an "AS IS" BASIS,
 + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 + * See the License for the specific language governing permissions and
 + * limitations under the License.
 + */
 +package org.apache.cassandra.db.filter;
 +
 +import java.io.IOException;
 +import java.nio.ByteBuffer;
 +
 +import org.apache.cassandra.db.*;
 +import org.apache.cassandra.db.rows.*;
 +import org.apache.cassandra.db.partitions.*;
 +import org.apache.cassandra.db.transform.BasePartitions;
 +import org.apache.cassandra.db.transform.BaseRows;
 +import org.apache.cassandra.db.transform.StoppingTransformation;
 +import org.apache.cassandra.db.transform.Transformation;
 +import org.apache.cassandra.io.util.DataInputPlus;
 +import org.apache.cassandra.io.util.DataOutputPlus;
 +import org.apache.cassandra.utils.ByteBufferUtil;
 +
 +/**
 + * Object in charge of tracking if we have fetch enough data for a given query.
 + *
 + * The reason this is not just a simple integer is that Thrift and CQL3 count
 + * stuffs in different ways. This is what abstract those differences.
 + */
 +public abstract class DataLimits
 +{
 +public static final Serializer serializer = new Serializer();
 +
 +public static final int NO_LIMIT = Integer.MAX_VALUE;
 +
 +public static final DataLimits NONE = new CQLLimits(NO_LIMIT)
 +{
 +@Override
- public boolean hasEnoughLiveData(CachedPartition cached, int nowInSec)
++public boolean hasEnoughLiveData(CachedPartition cached, int nowInSec, boolean countPartitionsWithOnlyStaticData)
 +{
 +return false;
 +}
 +
 +@Override
- public UnfilteredPartitionIterator filter(UnfilteredPartitionIterator iter, int nowInSec)
++public UnfilteredPartitionIterator filter(UnfilteredPartitionIterator iter,
++  int nowInSec,
++  boolean countPartitionsWithOnlyStaticData)
 +{
 +return iter;
 +}
 +
 +@Override
- public UnfilteredRowIterator filter(UnfilteredRowIterator iter, int nowInSec)
++public UnfilteredRowIterator filter(UnfilteredRowIterator iter,
++int nowInSec,
++boolean countPartitionsWithOnlyStaticData)
 +{
 +return iter;
 +}
 +};
 +
 +// We currently deal with distinct queries by querying full partitions but limiting the result at 1 row per
 +// partition (see SelectStatement.makeFilter). So an "unbounded" distinct is still actually doing some filtering.
 +public static final DataLimits DISTINCT_NONE = new CQLLimits(NO_LIMIT, 1, 
true);
 +
 +public enum Kind { CQL_LIMIT, CQL_PAGING_LIMIT, THRIFT_LIMIT, 
SUPER_COLUMN_COUNTING_LIMIT }
 +
 +public static DataLimits cqlLimits(int cqlRowLimit)
 +{
 +return new CQLLimits(cqlRowLimit);
 +}
 +
 +public static DataLimits cqlLimits(int cqlRowLimit, int perPartitionLimit)
 +{
 +return new CQLLimits(cqlRowLimit, perPartitionLimit);
 +}
 +
 +public static DataLimits distinctLimits(int cqlRowLimit)
 +{
 +return CQLLimits.distinct(cqlRowLimit);
 +}
 +
 +public static DataLimits thriftLimits(int partitionLimit, int 
cellPerPartitionLimit)
 +{
 +return new ThriftLimits(partitionLimit, cellPerPartitionLimit);
 +}
 +
 +public static DataLimits superColumnCountingLimits(int partitionLimit, 
int cellPerPartitionLimit)
 +{
 +return new SuperColumnCountingLimits(partitionLimit, 
cellPerPartitionLimit);
 +}
 +
 +public abstract Kind kind();
 +
 +public abstract boolean isUnlimited();
 +public abstract boolean isDistinct();
 +
 +public abstract DataLimits forPaging(int pageSize);
 +public abstract 
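The factories above combine a total row limit with a per-partition limit, and DISTINCT is modeled as a per-partition limit of 1. To make those semantics concrete, here is a toy counter in plain Java; `ToyLimits` and `demoData` are hypothetical names for illustration only, not Cassandra's DataLimits API:

```java
import java.util.*;

// Toy model of CQLLimits: a total row limit combined with a per-partition
// limit. Illustrative sketch only; not Cassandra's DataLimits class.
class ToyLimits {
    static final int NO_LIMIT = Integer.MAX_VALUE;
    final int rowLimit;
    final int perPartitionLimit;

    ToyLimits(int rowLimit, int perPartitionLimit) {
        this.rowLimit = rowLimit;
        this.perPartitionLimit = perPartitionLimit;
    }

    // Walk partitions in order, stopping when either limit is reached.
    List<String> apply(Map<String, List<String>> partitions) {
        List<String> out = new ArrayList<>();
        for (List<String> rows : partitions.values()) {
            int inPartition = 0;
            for (String row : rows) {
                if (out.size() >= rowLimit)
                    return out;                // total limit reached
                if (inPartition >= perPartitionLimit)
                    break;                     // move on to the next partition
                out.add(row);
                inPartition++;
            }
        }
        return out;
    }
}

public class ToyLimitsDemo {
    // Two partitions with three and two rows respectively.
    static Map<String, List<String>> demoData() {
        Map<String, List<String>> data = new LinkedHashMap<>();
        data.put("p1", Arrays.asList("p1r1", "p1r2", "p1r3"));
        data.put("p2", Arrays.asList("p2r1", "p2r2"));
        return data;
    }

    public static void main(String[] args) {
        // LIMIT 4: the first four rows across partitions.
        System.out.println(new ToyLimits(4, ToyLimits.NO_LIMIT).apply(demoData())); // [p1r1, p1r2, p1r3, p2r1]
        // DISTINCT modeled as "1 row per partition", as the comment above notes.
        System.out.println(new ToyLimits(10, 1).apply(demoData()));                 // [p1r1, p2r1]
    }
}
```

The per-partition limit of 1 reproduces the "unbounded distinct still filters" behavior described in the comment above.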

[7/7] cassandra git commit: Merge branch cassandra-3.0 into cassandra-3.11

2017-07-14 Thread blerer
Merge branch cassandra-3.0 into cassandra-3.11


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/7aa89a64
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/7aa89a64
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/7aa89a64

Branch: refs/heads/cassandra-3.11
Commit: 7aa89a64e09c57061418c1d83c03ae7cfd0cd745
Parents: bd89f56 e2445cf
Author: Benjamin Lerer 
Authored: Fri Jul 14 17:18:19 2017 +0200
Committer: Benjamin Lerer 
Committed: Fri Jul 14 17:19:38 2017 +0200

--
 CHANGES.txt |   6 +-
 .../apache/cassandra/db/ColumnFamilyStore.java  |   3 +-
 src/java/org/apache/cassandra/db/DataRange.java |   5 +
 .../cassandra/db/PartitionRangeReadCommand.java |   6 +
 .../org/apache/cassandra/db/ReadCommand.java|   2 +-
 src/java/org/apache/cassandra/db/ReadQuery.java |  12 +
 .../db/SinglePartitionReadCommand.java  |  22 +-
 .../apache/cassandra/db/filter/DataLimits.java  |  82 +++---
 .../apache/cassandra/db/filter/RowFilter.java   |  15 +
 .../apache/cassandra/service/CacheService.java  |   2 +-
 .../apache/cassandra/service/DataResolver.java  |   4 +-
 .../apache/cassandra/service/StorageProxy.java  |   8 +-
 .../service/pager/AbstractQueryPager.java   |   2 +-
 .../service/pager/MultiPartitionPager.java  |   9 +-
 .../cassandra/service/pager/QueryPagers.java|   2 +-
 .../org/apache/cassandra/cql3/CQLTester.java|   8 +-
 .../validation/operations/SelectLimitTest.java  | 279 +++
 .../db/rows/UnfilteredRowIteratorsTest.java |  10 +-
 18 files changed, 418 insertions(+), 59 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/7aa89a64/CHANGES.txt
--
diff --cc CHANGES.txt
index e7ad6fb,c916452..edd66e2
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -6,53 -2,16 +6,53 @@@ Merged from 3.0
   * Make concat work with iterators that have different subsets of columns 
(CASSANDRA-13482)
   * Set test.runners based on cores and memory size (CASSANDRA-13078)
   * Allow different NUMACTL_ARGS to be passed in (CASSANDRA-13557)
 - * Allow native function calls in CQLSSTableWriter (CASSANDRA-12606)
   * Fix secondary index queries on COMPACT tables (CASSANDRA-13627)
   * Nodetool listsnapshots output is missing a newline, if there are no 
snapshots (CASSANDRA-13568)
 - Merged from 2.2:
 -  * Fix queries with LIMIT and filtering on clustering columns 
(CASSANDRA-11223)
 -  * Fix potential NPE when resume bootstrap fails (CASSANDRA-13272)
 -  * Fix toJSONString for the UDT, tuple and collection types (CASSANDRA-13592)
 -  * Fix nested Tuples/UDTs validation (CASSANDRA-13646)
 +Merged from 2.2:
-   * Fix potential NPE when resume bootstrap fails (CASSANDRA-13272)
-   * Fix toJSONString for the UDT, tuple and collection types (CASSANDRA-13592)
-   * Fix nested Tuples/UDTs validation (CASSANDRA-13646)
++ * Fix queries with LIMIT and filtering on clustering columns 
(CASSANDRA-11223)
++ * Fix potential NPE when resume bootstrap fails (CASSANDRA-13272)
++ * Fix toJSONString for the UDT, tuple and collection types (CASSANDRA-13592)
++ * Fix nested Tuples/UDTs validation (CASSANDRA-13646)
  
 -3.0.14
 +3.11.0
 + * Allow native function calls in CQLSSTableWriter (CASSANDRA-12606)
 + * Replace string comparison with regex/number checks in MessagingService 
test (CASSANDRA-13216)
 + * Fix formatting of duration columns in CQLSH (CASSANDRA-13549)
 + * Fix the problem with duplicated rows when using paging with SASI 
(CASSANDRA-13302)
 + * Allow CONTAINS statements filtering on the partition key and its parts (CASSANDRA-13275)
 + * Fall back to even ranges calculation in clusters with vnodes when tokens 
are distributed unevenly (CASSANDRA-13229)
 + * Fix duration type validation to prevent overflow (CASSANDRA-13218)
 + * Forbid unsupported creation of SASI indexes over partition key columns 
(CASSANDRA-13228)
 + * Reject multiple values for a key in CQL grammar. (CASSANDRA-13369)
 + * UDA fails without input rows (CASSANDRA-13399)
 + * Fix compaction-stress by using daemonInitialization (CASSANDRA-13188)
 + * V5 protocol flags decoding broken (CASSANDRA-13443)
 + * Use write lock not read lock for removing sstables from compaction 
strategies. (CASSANDRA-13422)
 + * Use corePoolSize equal to maxPoolSize in JMXEnabledThreadPoolExecutors 
(CASSANDRA-13329)
 + * Avoid rebuilding SASI indexes containing no values (CASSANDRA-12962)
 + * Add charset to Analyser input stream (CASSANDRA-13151)
 + * Fix testLimitSSTables flake caused by concurrent flush (CASSANDRA-12820)
 + * cdc column addition strikes again (CASSANDRA-13382)
 + * Fix static column indexes (CASSANDRA-13277)
 + * DataOutputBuffer.asNewBuffer broken 

[3/7] cassandra git commit: Merge branch cassandra-2.2 into cassandra-3.0

2017-07-14 Thread blerer
http://git-wip-us.apache.org/repos/asf/cassandra/blob/e2445cfb/src/java/org/apache/cassandra/service/DataResolver.java
--
diff --cc src/java/org/apache/cassandra/service/DataResolver.java
index 60cfbba,000..c96a893
mode 100644,00..100644
--- a/src/java/org/apache/cassandra/service/DataResolver.java
+++ b/src/java/org/apache/cassandra/service/DataResolver.java
@@@ -1,498 -1,0 +1,498 @@@
 +/*
 + * Licensed to the Apache Software Foundation (ASF) under one
 + * or more contributor license agreements.  See the NOTICE file
 + * distributed with this work for additional information
 + * regarding copyright ownership.  The ASF licenses this file
 + * to you under the Apache License, Version 2.0 (the
 + * "License"); you may not use this file except in compliance
 + * with the License.  You may obtain a copy of the License at
 + *
 + * http://www.apache.org/licenses/LICENSE-2.0
 + *
 + * Unless required by applicable law or agreed to in writing, software
 + * distributed under the License is distributed on an "AS IS" BASIS,
 + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 + * See the License for the specific language governing permissions and
 + * limitations under the License.
 + */
 +package org.apache.cassandra.service;
 +
 +import java.net.InetAddress;
 +import java.util.*;
 +import java.util.concurrent.TimeoutException;
 +
 +import com.google.common.annotations.VisibleForTesting;
 +
 +import org.apache.cassandra.concurrent.Stage;
 +import org.apache.cassandra.concurrent.StageManager;
 +import org.apache.cassandra.config.CFMetaData;
 +import org.apache.cassandra.config.ColumnDefinition;
 +import org.apache.cassandra.config.DatabaseDescriptor;
 +import org.apache.cassandra.db.*;
 +import org.apache.cassandra.db.filter.ClusteringIndexFilter;
 +import org.apache.cassandra.db.filter.DataLimits;
 +import org.apache.cassandra.db.partitions.*;
 +import org.apache.cassandra.db.rows.*;
 +import org.apache.cassandra.db.transform.MoreRows;
 +import org.apache.cassandra.db.transform.Transformation;
 +import org.apache.cassandra.exceptions.ReadTimeoutException;
 +import org.apache.cassandra.net.*;
 +import org.apache.cassandra.tracing.Tracing;
 +import org.apache.cassandra.utils.FBUtilities;
 +
 +public class DataResolver extends ResponseResolver
 +{
 +@VisibleForTesting
 +final List repairResults = 
Collections.synchronizedList(new ArrayList<>());
 +
 +public DataResolver(Keyspace keyspace, ReadCommand command, 
ConsistencyLevel consistency, int maxResponseCount)
 +{
 +super(keyspace, command, consistency, maxResponseCount);
 +}
 +
 +public PartitionIterator getData()
 +{
 +ReadResponse response = responses.iterator().next().payload;
 +return 
UnfilteredPartitionIterators.filter(response.makeIterator(command), 
command.nowInSec());
 +}
 +
 +public PartitionIterator resolve()
 +{
 +// We could get more responses while this method runs, which is ok (we're happy to ignore any response not here
 +// at the beginning of this method), so grab the response count once and use that through the method.
 +int count = responses.size();
 +List<UnfilteredPartitionIterator> iters = new ArrayList<>(count);
 +InetAddress[] sources = new InetAddress[count];
 +for (int i = 0; i < count; i++)
 +{
 +MessageIn<ReadResponse> msg = responses.get(i);
 +iters.add(msg.payload.makeIterator(command));
 +sources[i] = msg.from;
 +}
 +
 +// Even though every response should honor the limit, we might have more than requested post reconciliation,
 +// so ensure we're respecting the limit.
- DataLimits.Counter counter = command.limits().newCounter(command.nowInSec(), true);
++DataLimits.Counter counter = command.limits().newCounter(command.nowInSec(), true, command.selectsFullPartition());
 +return counter.applyTo(mergeWithShortReadProtection(iters, sources, 
counter));
 +}
 +
 +public void compareResponses()
 +{
 +// We need to fully consume the results to trigger read repairs if 
appropriate
 +try (PartitionIterator iterator = resolve())
 +{
 +PartitionIterators.consume(iterator);
 +}
 +}
 +
 +private PartitionIterator mergeWithShortReadProtection(List<UnfilteredPartitionIterator> results, InetAddress[] sources, DataLimits.Counter resultCounter)
 +{
 +// If we have only one result, there is no read repair to do and we can't get short reads
 +if (results.size() == 1)
 +return UnfilteredPartitionIterators.filter(results.get(0), 
command.nowInSec());
 +
 +UnfilteredPartitionIterators.MergeListener listener = new 
RepairMergeListener(sources);
 +
 +// So-called "short reads" stem from nodes returning only a subset of the results they have for a partition due to the limit,
 +// but that 
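The `resolve()` path above is the crux of this change: each replica trims its response to the limit locally, yet the merged (reconciled) stream can still exceed it, so a fresh counter is applied after the merge. A minimal sketch of that idea, assuming simple sorted string rows instead of Cassandra's partition iterators (`resolve` here is a hypothetical stand-in, not the real method):

```java
import java.util.*;
import java.util.stream.*;

public class PostReconciliationLimit {
    // Merge per-replica results (each already trimmed to the limit locally)
    // and re-apply the limit to the reconciled stream. Illustrative only:
    // Cassandra does this lazily with a DataLimits.Counter over iterators.
    static List<String> resolve(int limit, List<List<String>> replicaResults) {
        return replicaResults.stream()
                             .flatMap(List::stream)
                             .distinct()   // reconciliation: keep one copy of each row
                             .sorted()     // replicas return rows in clustering order
                             .limit(limit) // the counter is applied *after* the merge
                             .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        // Each replica honored LIMIT 2, but their views differ, so the merged
        // stream holds three distinct rows and must be trimmed again.
        List<String> replica1 = Arrays.asList("a", "b");
        List<String> replica2 = Arrays.asList("a", "c");
        System.out.println(resolve(2, Arrays.asList(replica1, replica2))); // [a, b]
    }
}
```

Without the post-merge `limit`, the client would receive three rows for a `LIMIT 2` query whenever replicas disagree.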

[4/7] cassandra git commit: Merge branch cassandra-2.2 into cassandra-3.0

2017-07-14 Thread blerer
http://git-wip-us.apache.org/repos/asf/cassandra/blob/e2445cfb/src/java/org/apache/cassandra/db/filter/DataLimits.java
--
diff --cc src/java/org/apache/cassandra/db/filter/DataLimits.java
index 94f43dc,000..48ec06a
mode 100644,00..100644
--- a/src/java/org/apache/cassandra/db/filter/DataLimits.java
+++ b/src/java/org/apache/cassandra/db/filter/DataLimits.java
@@@ -1,814 -1,0 +1,827 @@@
 +/*
 + * Licensed to the Apache Software Foundation (ASF) under one
 + * or more contributor license agreements.  See the NOTICE file
 + * distributed with this work for additional information
 + * regarding copyright ownership.  The ASF licenses this file
 + * to you under the Apache License, Version 2.0 (the
 + * "License"); you may not use this file except in compliance
 + * with the License.  You may obtain a copy of the License at
 + *
 + * http://www.apache.org/licenses/LICENSE-2.0
 + *
 + * Unless required by applicable law or agreed to in writing, software
 + * distributed under the License is distributed on an "AS IS" BASIS,
 + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 + * See the License for the specific language governing permissions and
 + * limitations under the License.
 + */
 +package org.apache.cassandra.db.filter;
 +
 +import java.io.IOException;
 +import java.nio.ByteBuffer;
 +
 +import org.apache.cassandra.db.*;
 +import org.apache.cassandra.db.rows.*;
 +import org.apache.cassandra.db.partitions.*;
 +import org.apache.cassandra.db.transform.BasePartitions;
 +import org.apache.cassandra.db.transform.BaseRows;
 +import org.apache.cassandra.db.transform.StoppingTransformation;
 +import org.apache.cassandra.db.transform.Transformation;
 +import org.apache.cassandra.io.util.DataInputPlus;
 +import org.apache.cassandra.io.util.DataOutputPlus;
 +import org.apache.cassandra.utils.ByteBufferUtil;
 +
 +/**
 + * Object in charge of tracking whether we have fetched enough data for a given query.
 + *
 + * The reason this is not just a simple integer is that Thrift and CQL3 count
 + * things in different ways. This is what abstracts those differences.
 + */
 +public abstract class DataLimits
 +{
 +public static final Serializer serializer = new Serializer();
 +
 +public static final int NO_LIMIT = Integer.MAX_VALUE;
 +
 +public static final DataLimits NONE = new CQLLimits(NO_LIMIT)
 +{
 +@Override
- public boolean hasEnoughLiveData(CachedPartition cached, int nowInSec)
++public boolean hasEnoughLiveData(CachedPartition cached, int 
nowInSec, boolean countPartitionsWithOnlyStaticData)
 +{
 +return false;
 +}
 +
 +@Override
- public UnfilteredPartitionIterator filter(UnfilteredPartitionIterator 
iter, int nowInSec)
++public UnfilteredPartitionIterator filter(UnfilteredPartitionIterator 
iter,
++  int nowInSec,
++  boolean 
countPartitionsWithOnlyStaticData)
 +{
 +return iter;
 +}
 +
 +@Override
- public UnfilteredRowIterator filter(UnfilteredRowIterator iter, int 
nowInSec)
++public UnfilteredRowIterator filter(UnfilteredRowIterator iter,
++int nowInSec,
++boolean 
countPartitionsWithOnlyStaticData)
 +{
 +return iter;
 +}
 +};
 +
 +// We currently deal with distinct queries by querying full partitions but limiting the result at 1 row per
 +// partition (see SelectStatement.makeFilter). So an "unbounded" distinct is still actually doing some filtering.
 +public static final DataLimits DISTINCT_NONE = new CQLLimits(NO_LIMIT, 1, 
true);
 +
 +public enum Kind { CQL_LIMIT, CQL_PAGING_LIMIT, THRIFT_LIMIT, 
SUPER_COLUMN_COUNTING_LIMIT }
 +
 +public static DataLimits cqlLimits(int cqlRowLimit)
 +{
 +return new CQLLimits(cqlRowLimit);
 +}
 +
 +public static DataLimits cqlLimits(int cqlRowLimit, int perPartitionLimit)
 +{
 +return new CQLLimits(cqlRowLimit, perPartitionLimit);
 +}
 +
 +public static DataLimits distinctLimits(int cqlRowLimit)
 +{
 +return CQLLimits.distinct(cqlRowLimit);
 +}
 +
 +public static DataLimits thriftLimits(int partitionLimit, int 
cellPerPartitionLimit)
 +{
 +return new ThriftLimits(partitionLimit, cellPerPartitionLimit);
 +}
 +
 +public static DataLimits superColumnCountingLimits(int partitionLimit, 
int cellPerPartitionLimit)
 +{
 +return new SuperColumnCountingLimits(partitionLimit, 
cellPerPartitionLimit);
 +}
 +
 +public abstract Kind kind();
 +
 +public abstract boolean isUnlimited();
 +public abstract boolean isDistinct();
 +
 +public abstract DataLimits forPaging(int pageSize);
 +public abstract 

[1/7] cassandra git commit: Fix queries with LIMIT and filtering on clustering columns

2017-07-14 Thread blerer
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-3.11 bd89f5623 -> 7aa89a64e


Fix queries with LIMIT and filtering on clustering columns

patch by Benjamin Lerer; reviewed by Stefania Alborghetti for CASSANDRA-11223


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/b08843de
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/b08843de
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/b08843de

Branch: refs/heads/cassandra-3.11
Commit: b08843de67b3c63fa9c0efe10bb9eda07c007f6c
Parents: 5b982d7
Author: Benjamin Lerer 
Authored: Fri Jul 14 17:11:15 2017 +0200
Committer: Benjamin Lerer 
Committed: Fri Jul 14 17:11:15 2017 +0200

--
 CHANGES.txt |   1 +
 .../org/apache/cassandra/db/ColumnFamily.java   |   2 +-
 .../cassandra/db/filter/ColumnCounter.java  |  21 +-
 .../cassandra/db/filter/NamesQueryFilter.java   |   2 +-
 .../cassandra/db/filter/SliceQueryFilter.java   |  17 +-
 .../validation/operations/SelectLimitTest.java  | 209 +++
 6 files changed, 238 insertions(+), 14 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/b08843de/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 122ba54..bda510f 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 2.2.11
+ * Fix queries with LIMIT and filtering on clustering columns (CASSANDRA-11223)
  * Fix potential NPE when resume bootstrap fails (CASSANDRA-13272)
  * Fix toJSONString for the UDT, tuple and collection types (CASSANDRA-13592)
  * Fix nested Tuples/UDTs validation (CASSANDRA-13646)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/b08843de/src/java/org/apache/cassandra/db/ColumnFamily.java
--
diff --git a/src/java/org/apache/cassandra/db/ColumnFamily.java 
b/src/java/org/apache/cassandra/db/ColumnFamily.java
index a7243a2..1532439 100644
--- a/src/java/org/apache/cassandra/db/ColumnFamily.java
+++ b/src/java/org/apache/cassandra/db/ColumnFamily.java
@@ -92,7 +92,7 @@ public abstract class ColumnFamily implements Iterable, 
IRowCacheEntry
 {
 ColumnCounter counter = getComparator().isDense()
   ? new ColumnCounter(now)
-  : new ColumnCounter.GroupByPrefix(now, 
getComparator(), metadata.clusteringColumns().size());
+  : new ColumnCounter.GroupByPrefix(now, 
getComparator(), metadata.clusteringColumns().size(), true);
 return counter.countAll(this).live();
 }
 

http://git-wip-us.apache.org/repos/asf/cassandra/blob/b08843de/src/java/org/apache/cassandra/db/filter/ColumnCounter.java
--
diff --git a/src/java/org/apache/cassandra/db/filter/ColumnCounter.java 
b/src/java/org/apache/cassandra/db/filter/ColumnCounter.java
index 594fde8..a00d588 100644
--- a/src/java/org/apache/cassandra/db/filter/ColumnCounter.java
+++ b/src/java/org/apache/cassandra/db/filter/ColumnCounter.java
@@ -90,6 +90,7 @@ public class ColumnCounter
 {
 protected final CellNameType type;
 protected final int toGroup;
+protected final boolean countPartitionsWithOnlyStaticData;
 protected CellName previous;
 
 /**
@@ -101,12 +102,15 @@ public class ColumnCounter
  * @param toGroup the number of composite components on which to group
  *columns. If 0, all columns are grouped; otherwise we group
  *those for which the first {@code toGroup} components are equal.
 + * @param countPartitionsWithOnlyStaticData if {@code true} the partitions with only static data should be
 + * counted as 1 valid row.
  */
-public GroupByPrefix(long timestamp, CellNameType type, int toGroup)
+public GroupByPrefix(long timestamp, CellNameType type, int toGroup, boolean countPartitionsWithOnlyStaticData)
 {
 super(timestamp);
 this.type = type;
 this.toGroup = toGroup;
+this.countPartitionsWithOnlyStaticData = countPartitionsWithOnlyStaticData;
 
 assert toGroup == 0 || type != null;
 }
@@ -153,14 +157,16 @@ public class ColumnCounter
 // We want to count the static group as 1 (CQL) row only if it's the only
 // group in the partition. So, since we have already counted it at this point,
 // just don't count the 2nd group if there is one and the first one was static
-if (previous.isStatic())
+if (previous.isStatic() && 
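The hunk above threads a `countPartitionsWithOnlyStaticData` flag into `GroupByPrefix` so that a partition whose only live data is static counts as one CQL row only when that is wanted, and not when the query filters on clustering columns. A toy version of the grouping rule, with clustering prefixes reduced to strings and `""` standing in for the static group (names are illustrative, not the real CellName machinery):

```java
import java.util.*;

public class ToyGroupByPrefix {
    // Count "CQL rows" in a partition: cells grouped by clustering prefix,
    // with the static group ("" here) counted as one row only if it is the
    // only group and the flag allows it. Illustrative sketch only.
    static int liveRows(List<String> cellPrefixes, boolean countPartitionsWithOnlyStaticData) {
        Set<String> groups = new LinkedHashSet<>(cellPrefixes);
        boolean hasStatic = groups.remove("");   // "" marks static cells
        if (groups.isEmpty())
            return hasStatic && countPartitionsWithOnlyStaticData ? 1 : 0;
        return groups.size();                    // static group not counted alongside real rows
    }

    public static void main(String[] args) {
        // Partition holding only a static row:
        System.out.println(liveRows(Arrays.asList(""), true));   // full-partition query: 1
        System.out.println(liveRows(Arrays.asList(""), false));  // filtered query: 0
        // Static row plus two clustering groups: still 2 rows.
        System.out.println(liveRows(Arrays.asList("", "c1", "c2"), true));
    }
}
```

The `false` case is the CASSANDRA-11223 fix: before it, the static-only partition was always counted, which made filtered queries with `LIMIT` return too few rows.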

[1/6] cassandra git commit: Fix queries with LIMIT and filtering on clustering columns

2017-07-14 Thread blerer
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-3.0 7de853bff -> 88d2ac4f2


Fix queries with LIMIT and filtering on clustering columns

patch by Benjamin Lerer; reviewed by Stefania Alborghetti for CASSANDRA-11223


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/b08843de
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/b08843de
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/b08843de

Branch: refs/heads/cassandra-3.0
Commit: b08843de67b3c63fa9c0efe10bb9eda07c007f6c
Parents: 5b982d7
Author: Benjamin Lerer 
Authored: Fri Jul 14 17:11:15 2017 +0200
Committer: Benjamin Lerer 
Committed: Fri Jul 14 17:11:15 2017 +0200

--
 CHANGES.txt |   1 +
 .../org/apache/cassandra/db/ColumnFamily.java   |   2 +-
 .../cassandra/db/filter/ColumnCounter.java  |  21 +-
 .../cassandra/db/filter/NamesQueryFilter.java   |   2 +-
 .../cassandra/db/filter/SliceQueryFilter.java   |  17 +-
 .../validation/operations/SelectLimitTest.java  | 209 +++
 6 files changed, 238 insertions(+), 14 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/b08843de/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 122ba54..bda510f 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 2.2.11
+ * Fix queries with LIMIT and filtering on clustering columns (CASSANDRA-11223)
  * Fix potential NPE when resume bootstrap fails (CASSANDRA-13272)
  * Fix toJSONString for the UDT, tuple and collection types (CASSANDRA-13592)
  * Fix nested Tuples/UDTs validation (CASSANDRA-13646)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/b08843de/src/java/org/apache/cassandra/db/ColumnFamily.java
--
diff --git a/src/java/org/apache/cassandra/db/ColumnFamily.java 
b/src/java/org/apache/cassandra/db/ColumnFamily.java
index a7243a2..1532439 100644
--- a/src/java/org/apache/cassandra/db/ColumnFamily.java
+++ b/src/java/org/apache/cassandra/db/ColumnFamily.java
@@ -92,7 +92,7 @@ public abstract class ColumnFamily implements Iterable, 
IRowCacheEntry
 {
 ColumnCounter counter = getComparator().isDense()
   ? new ColumnCounter(now)
-  : new ColumnCounter.GroupByPrefix(now, 
getComparator(), metadata.clusteringColumns().size());
+  : new ColumnCounter.GroupByPrefix(now, 
getComparator(), metadata.clusteringColumns().size(), true);
 return counter.countAll(this).live();
 }
 

http://git-wip-us.apache.org/repos/asf/cassandra/blob/b08843de/src/java/org/apache/cassandra/db/filter/ColumnCounter.java
--
diff --git a/src/java/org/apache/cassandra/db/filter/ColumnCounter.java 
b/src/java/org/apache/cassandra/db/filter/ColumnCounter.java
index 594fde8..a00d588 100644
--- a/src/java/org/apache/cassandra/db/filter/ColumnCounter.java
+++ b/src/java/org/apache/cassandra/db/filter/ColumnCounter.java
@@ -90,6 +90,7 @@ public class ColumnCounter
 {
 protected final CellNameType type;
 protected final int toGroup;
+protected final boolean countPartitionsWithOnlyStaticData;
 protected CellName previous;
 
 /**
@@ -101,12 +102,15 @@ public class ColumnCounter
  * @param toGroup the number of composite components on which to group
  *columns. If 0, all columns are grouped; otherwise we group
  *those for which the first {@code toGroup} components are equal.
 + * @param countPartitionsWithOnlyStaticData if {@code true} the partitions with only static data should be
 + * counted as 1 valid row.
  */
-public GroupByPrefix(long timestamp, CellNameType type, int toGroup)
+public GroupByPrefix(long timestamp, CellNameType type, int toGroup, boolean countPartitionsWithOnlyStaticData)
 {
 super(timestamp);
 this.type = type;
 this.toGroup = toGroup;
+this.countPartitionsWithOnlyStaticData = countPartitionsWithOnlyStaticData;
 
 assert toGroup == 0 || type != null;
 }
@@ -153,14 +157,16 @@ public class ColumnCounter
 // We want to count the static group as 1 (CQL) row only if it's the only
 // group in the partition. So, since we have already counted it at this point,
 // just don't count the 2nd group if there is one and the first one was static
-if (previous.isStatic())
+if (previous.isStatic() && 

[3/6] cassandra git commit: Merge branch cassandra-2.2 into cassandra-3.0

2017-07-14 Thread blerer
http://git-wip-us.apache.org/repos/asf/cassandra/blob/88d2ac4f/src/java/org/apache/cassandra/service/DataResolver.java
--
diff --cc src/java/org/apache/cassandra/service/DataResolver.java
index 60cfbba,000..c96a893
mode 100644,00..100644
--- a/src/java/org/apache/cassandra/service/DataResolver.java
+++ b/src/java/org/apache/cassandra/service/DataResolver.java
@@@ -1,498 -1,0 +1,498 @@@
 +/*
 + * Licensed to the Apache Software Foundation (ASF) under one
 + * or more contributor license agreements.  See the NOTICE file
 + * distributed with this work for additional information
 + * regarding copyright ownership.  The ASF licenses this file
 + * to you under the Apache License, Version 2.0 (the
 + * "License"); you may not use this file except in compliance
 + * with the License.  You may obtain a copy of the License at
 + *
 + * http://www.apache.org/licenses/LICENSE-2.0
 + *
 + * Unless required by applicable law or agreed to in writing, software
 + * distributed under the License is distributed on an "AS IS" BASIS,
 + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 + * See the License for the specific language governing permissions and
 + * limitations under the License.
 + */
 +package org.apache.cassandra.service;
 +
 +import java.net.InetAddress;
 +import java.util.*;
 +import java.util.concurrent.TimeoutException;
 +
 +import com.google.common.annotations.VisibleForTesting;
 +
 +import org.apache.cassandra.concurrent.Stage;
 +import org.apache.cassandra.concurrent.StageManager;
 +import org.apache.cassandra.config.CFMetaData;
 +import org.apache.cassandra.config.ColumnDefinition;
 +import org.apache.cassandra.config.DatabaseDescriptor;
 +import org.apache.cassandra.db.*;
 +import org.apache.cassandra.db.filter.ClusteringIndexFilter;
 +import org.apache.cassandra.db.filter.DataLimits;
 +import org.apache.cassandra.db.partitions.*;
 +import org.apache.cassandra.db.rows.*;
 +import org.apache.cassandra.db.transform.MoreRows;
 +import org.apache.cassandra.db.transform.Transformation;
 +import org.apache.cassandra.exceptions.ReadTimeoutException;
 +import org.apache.cassandra.net.*;
 +import org.apache.cassandra.tracing.Tracing;
 +import org.apache.cassandra.utils.FBUtilities;
 +
 +public class DataResolver extends ResponseResolver
 +{
 +@VisibleForTesting
 +final List repairResults = 
Collections.synchronizedList(new ArrayList<>());
 +
 +public DataResolver(Keyspace keyspace, ReadCommand command, 
ConsistencyLevel consistency, int maxResponseCount)
 +{
 +super(keyspace, command, consistency, maxResponseCount);
 +}
 +
 +public PartitionIterator getData()
 +{
 +ReadResponse response = responses.iterator().next().payload;
 +return 
UnfilteredPartitionIterators.filter(response.makeIterator(command), 
command.nowInSec());
 +}
 +
 +public PartitionIterator resolve()
 +{
 +// We could get more responses while this method runs, which is ok (we're happy to ignore any response not here
 +// at the beginning of this method), so grab the response count once and use that through the method.
 +int count = responses.size();
 +List<UnfilteredPartitionIterator> iters = new ArrayList<>(count);
 +InetAddress[] sources = new InetAddress[count];
 +for (int i = 0; i < count; i++)
 +{
 +MessageIn<ReadResponse> msg = responses.get(i);
 +iters.add(msg.payload.makeIterator(command));
 +sources[i] = msg.from;
 +}
 +
 +// Even though every response should honor the limit, we might have more than requested post reconciliation,
 +// so ensure we're respecting the limit.
- DataLimits.Counter counter = command.limits().newCounter(command.nowInSec(), true);
++DataLimits.Counter counter = command.limits().newCounter(command.nowInSec(), true, command.selectsFullPartition());
 +return counter.applyTo(mergeWithShortReadProtection(iters, sources, 
counter));
 +}
 +
 +public void compareResponses()
 +{
 +// We need to fully consume the results to trigger read repairs if 
appropriate
 +try (PartitionIterator iterator = resolve())
 +{
 +PartitionIterators.consume(iterator);
 +}
 +}
 +
 +private PartitionIterator mergeWithShortReadProtection(List<UnfilteredPartitionIterator> results, InetAddress[] sources, DataLimits.Counter resultCounter)
 +{
 +// If we have only one result, there is no read repair to do and we can't get short reads
 +if (results.size() == 1)
 +return UnfilteredPartitionIterators.filter(results.get(0), 
command.nowInSec());
 +
 +UnfilteredPartitionIterators.MergeListener listener = new 
RepairMergeListener(sources);
 +
 +// So-called "short reads" stem from nodes returning only a subset of the results they have for a partition due to the limit,
 +// but that 

[5/6] cassandra git commit: Merge branch cassandra-2.2 into cassandra-3.0

2017-07-14 Thread blerer
http://git-wip-us.apache.org/repos/asf/cassandra/blob/88d2ac4f/src/java/org/apache/cassandra/db/SinglePartitionReadCommand.java
--
diff --cc src/java/org/apache/cassandra/db/SinglePartitionReadCommand.java
index b4211bb,000..319eeb4
mode 100644,00..100644
--- a/src/java/org/apache/cassandra/db/SinglePartitionReadCommand.java
+++ b/src/java/org/apache/cassandra/db/SinglePartitionReadCommand.java
@@@ -1,1089 -1,0 +1,1106 @@@
 +/*
 + * Licensed to the Apache Software Foundation (ASF) under one
 + * or more contributor license agreements.  See the NOTICE file
 + * distributed with this work for additional information
 + * regarding copyright ownership.  The ASF licenses this file
 + * to you under the Apache License, Version 2.0 (the
 + * "License"); you may not use this file except in compliance
 + * with the License.  You may obtain a copy of the License at
 + *
 + * http://www.apache.org/licenses/LICENSE-2.0
 + *
 + * Unless required by applicable law or agreed to in writing, software
 + * distributed under the License is distributed on an "AS IS" BASIS,
 + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 + * See the License for the specific language governing permissions and
 + * limitations under the License.
 + */
 +package org.apache.cassandra.db;
 +
 +import java.io.IOException;
 +import java.nio.ByteBuffer;
 +import java.util.*;
 +
 +import com.google.common.collect.Iterables;
 +import com.google.common.collect.Sets;
 +
 +import org.apache.cassandra.cache.IRowCacheEntry;
 +import org.apache.cassandra.cache.RowCacheKey;
 +import org.apache.cassandra.cache.RowCacheSentinel;
 +import org.apache.cassandra.concurrent.Stage;
 +import org.apache.cassandra.concurrent.StageManager;
 +import org.apache.cassandra.config.CFMetaData;
 +import org.apache.cassandra.config.ColumnDefinition;
 +import org.apache.cassandra.config.DatabaseDescriptor;
 +import org.apache.cassandra.db.lifecycle.*;
 +import org.apache.cassandra.db.filter.*;
 +import org.apache.cassandra.db.partitions.*;
 +import org.apache.cassandra.db.rows.*;
 +import org.apache.cassandra.exceptions.RequestExecutionException;
 +import org.apache.cassandra.io.sstable.format.SSTableReader;
 +import org.apache.cassandra.io.sstable.format.SSTableReadsListener;
 +import org.apache.cassandra.io.util.DataInputPlus;
 +import org.apache.cassandra.io.util.DataOutputPlus;
 +import org.apache.cassandra.metrics.TableMetrics;
 +import org.apache.cassandra.net.MessageOut;
 +import org.apache.cassandra.net.MessagingService;
 +import org.apache.cassandra.schema.IndexMetadata;
 +import org.apache.cassandra.service.CacheService;
 +import org.apache.cassandra.service.ClientState;
 +import org.apache.cassandra.service.StorageProxy;
 +import org.apache.cassandra.service.pager.*;
 +import org.apache.cassandra.thrift.ThriftResultsMerger;
 +import org.apache.cassandra.tracing.Tracing;
 +import org.apache.cassandra.utils.FBUtilities;
 +import org.apache.cassandra.utils.SearchIterator;
 +import org.apache.cassandra.utils.btree.BTreeSet;
 +import org.apache.cassandra.utils.concurrent.OpOrder;
 +import org.apache.cassandra.utils.memory.HeapAllocator;
 +
 +
 +/**
 + * A read command that selects a (part of a) single partition.
 + */
 +public class SinglePartitionReadCommand extends ReadCommand
 +{
 +protected static final SelectionDeserializer selectionDeserializer = new 
Deserializer();
 +
 +private final DecoratedKey partitionKey;
 +private final ClusteringIndexFilter clusteringIndexFilter;
 +
 +private int oldestUnrepairedTombstone = Integer.MAX_VALUE;
 +
 +public SinglePartitionReadCommand(boolean isDigest,
 +  int digestVersion,
 +  boolean isForThrift,
 +  CFMetaData metadata,
 +  int nowInSec,
 +  ColumnFilter columnFilter,
 +  RowFilter rowFilter,
 +  DataLimits limits,
 +  DecoratedKey partitionKey,
 +  ClusteringIndexFilter 
clusteringIndexFilter)
 +{
 +super(Kind.SINGLE_PARTITION, isDigest, digestVersion, isForThrift, 
metadata, nowInSec, columnFilter, rowFilter, limits);
 +assert partitionKey.getPartitioner() == metadata.partitioner;
 +this.partitionKey = partitionKey;
 +this.clusteringIndexFilter = clusteringIndexFilter;
 +}
 +
 +/**
 + * Creates a new read command on a single partition.
 + *
 + * @param metadata the table to query.
 + * @param nowInSec the time in seconds to use as "now" for this query.
 + * @param columnFilter the column filter to use for the query.
 + * @param rowFilter the row filter to use for the query.
 + * @param limits the limits to 

[4/6] cassandra git commit: Merge branch cassandra-2.2 into cassandra-3.0

2017-07-14 Thread blerer
http://git-wip-us.apache.org/repos/asf/cassandra/blob/88d2ac4f/src/java/org/apache/cassandra/db/filter/DataLimits.java
--
diff --cc src/java/org/apache/cassandra/db/filter/DataLimits.java
index 94f43dc,000..48ec06a
mode 100644,00..100644
--- a/src/java/org/apache/cassandra/db/filter/DataLimits.java
+++ b/src/java/org/apache/cassandra/db/filter/DataLimits.java
@@@ -1,814 -1,0 +1,827 @@@
 +/*
 + * Licensed to the Apache Software Foundation (ASF) under one
 + * or more contributor license agreements.  See the NOTICE file
 + * distributed with this work for additional information
 + * regarding copyright ownership.  The ASF licenses this file
 + * to you under the Apache License, Version 2.0 (the
 + * "License"); you may not use this file except in compliance
 + * with the License.  You may obtain a copy of the License at
 + *
 + * http://www.apache.org/licenses/LICENSE-2.0
 + *
 + * Unless required by applicable law or agreed to in writing, software
 + * distributed under the License is distributed on an "AS IS" BASIS,
 + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 + * See the License for the specific language governing permissions and
 + * limitations under the License.
 + */
 +package org.apache.cassandra.db.filter;
 +
 +import java.io.IOException;
 +import java.nio.ByteBuffer;
 +
 +import org.apache.cassandra.db.*;
 +import org.apache.cassandra.db.rows.*;
 +import org.apache.cassandra.db.partitions.*;
 +import org.apache.cassandra.db.transform.BasePartitions;
 +import org.apache.cassandra.db.transform.BaseRows;
 +import org.apache.cassandra.db.transform.StoppingTransformation;
 +import org.apache.cassandra.db.transform.Transformation;
 +import org.apache.cassandra.io.util.DataInputPlus;
 +import org.apache.cassandra.io.util.DataOutputPlus;
 +import org.apache.cassandra.utils.ByteBufferUtil;
 +
 +/**
 + * Object in charge of tracking whether we have fetched enough data for a given query.
 + *
 + * The reason this is not just a simple integer is that Thrift and CQL3 count
 + * things in different ways. This is what abstracts those differences.
 + */
 +public abstract class DataLimits
 +{
 +public static final Serializer serializer = new Serializer();
 +
 +public static final int NO_LIMIT = Integer.MAX_VALUE;
 +
 +public static final DataLimits NONE = new CQLLimits(NO_LIMIT)
 +{
 +@Override
- public boolean hasEnoughLiveData(CachedPartition cached, int nowInSec)
++public boolean hasEnoughLiveData(CachedPartition cached, int 
nowInSec, boolean countPartitionsWithOnlyStaticData)
 +{
 +return false;
 +}
 +
 +@Override
- public UnfilteredPartitionIterator filter(UnfilteredPartitionIterator 
iter, int nowInSec)
++public UnfilteredPartitionIterator filter(UnfilteredPartitionIterator 
iter,
++  int nowInSec,
++  boolean 
countPartitionsWithOnlyStaticData)
 +{
 +return iter;
 +}
 +
 +@Override
- public UnfilteredRowIterator filter(UnfilteredRowIterator iter, int 
nowInSec)
++public UnfilteredRowIterator filter(UnfilteredRowIterator iter,
++int nowInSec,
++boolean 
countPartitionsWithOnlyStaticData)
 +{
 +return iter;
 +}
 +};
 +
 +// We currently deal with distinct queries by querying full partitions 
but limiting the result at 1 row per
 +// partition (see SelectStatement.makeFilter). So an "unbounded" distinct 
is still actually doing some filtering.
 +public static final DataLimits DISTINCT_NONE = new CQLLimits(NO_LIMIT, 1, 
true);
 +
 +public enum Kind { CQL_LIMIT, CQL_PAGING_LIMIT, THRIFT_LIMIT, 
SUPER_COLUMN_COUNTING_LIMIT }
 +
 +public static DataLimits cqlLimits(int cqlRowLimit)
 +{
 +return new CQLLimits(cqlRowLimit);
 +}
 +
 +public static DataLimits cqlLimits(int cqlRowLimit, int perPartitionLimit)
 +{
 +return new CQLLimits(cqlRowLimit, perPartitionLimit);
 +}
 +
 +public static DataLimits distinctLimits(int cqlRowLimit)
 +{
 +return CQLLimits.distinct(cqlRowLimit);
 +}
 +
 +public static DataLimits thriftLimits(int partitionLimit, int 
cellPerPartitionLimit)
 +{
 +return new ThriftLimits(partitionLimit, cellPerPartitionLimit);
 +}
 +
 +public static DataLimits superColumnCountingLimits(int partitionLimit, 
int cellPerPartitionLimit)
 +{
 +return new SuperColumnCountingLimits(partitionLimit, 
cellPerPartitionLimit);
 +}
 +
 +public abstract Kind kind();
 +
 +public abstract boolean isUnlimited();
 +public abstract boolean isDistinct();
 +
 +public abstract DataLimits forPaging(int pageSize);
 +public abstract 
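The comment above explains that DataLimits abstracts the different ways Thrift and CQL3 count results. A minimal, standalone sketch of the CQL-style limiter (hypothetical names and structure; not Cassandra's actual DataLimits API) could look like this:

```python
# Illustrative sketch of CQL-style result limiting, loosely modeled on
# DataLimits.CQLLimits. Names and structure are simplified, not Cassandra's.
NO_LIMIT = 2**31 - 1  # mirrors Integer.MAX_VALUE in the Java code

class CQLLimits:
    def __init__(self, row_limit=NO_LIMIT, per_partition_limit=NO_LIMIT):
        self.row_limit = row_limit
        self.per_partition_limit = per_partition_limit

    def is_unlimited(self):
        return self.row_limit == NO_LIMIT and self.per_partition_limit == NO_LIMIT

    def filter(self, partitions):
        """Yield (partition_key, row) pairs, stopping once the limits are hit.

        partitions is an iterable of (partition_key, rows) pairs.
        """
        counted = 0
        for key, rows in partitions:
            in_partition = 0
            for row in rows:
                if counted == self.row_limit:
                    return          # global LIMIT reached
                if in_partition == self.per_partition_limit:
                    break           # move on to the next partition
                counted += 1
                in_partition += 1
                yield (key, row)

limits = CQLLimits(row_limit=3)
data = [("p1", [1, 2]), ("p2", [3, 4, 5])]
print(list(limits.filter(data)))  # [('p1', 1), ('p1', 2), ('p2', 3)]
```

The real class also has to decide how cached partitions and static rows interact with these counts, which is exactly what the surrounding patch adjusts.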

[6/6] cassandra git commit: Merge branch cassandra-2.2 into cassandra-3.0

2017-07-14 Thread blerer
Merge branch cassandra-2.2 into cassandra-3.0


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/88d2ac4f
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/88d2ac4f
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/88d2ac4f

Branch: refs/heads/cassandra-3.0
Commit: 88d2ac4f2fadba44a9b72286ef924441014a97ba
Parents: 7de853b b08843d
Author: Benjamin Lerer 
Authored: Fri Jul 14 17:14:38 2017 +0200
Committer: Benjamin Lerer 
Committed: Fri Jul 14 17:26:34 2017 +0200

--
 CHANGES.txt |   7 +-
 .../apache/cassandra/db/ColumnFamilyStore.java  |   3 +-
 src/java/org/apache/cassandra/db/DataRange.java |   5 +
 .../cassandra/db/PartitionRangeReadCommand.java |   7 +-
 .../org/apache/cassandra/db/ReadCommand.java|   2 +-
 src/java/org/apache/cassandra/db/ReadQuery.java |  12 +
 .../db/SinglePartitionReadCommand.java  |  21 +-
 .../apache/cassandra/db/filter/DataLimits.java  |  63 +++--
 .../apache/cassandra/db/filter/RowFilter.java   |  15 ++
 .../apache/cassandra/service/CacheService.java  |   2 +-
 .../apache/cassandra/service/DataResolver.java  |   4 +-
 .../apache/cassandra/service/StorageProxy.java  |   8 +-
 .../service/pager/AbstractQueryPager.java   |   2 +-
 .../service/pager/MultiPartitionPager.java  |   9 +-
 .../cassandra/service/pager/QueryPagers.java|   2 +-
 .../org/apache/cassandra/cql3/CQLTester.java|   8 +-
 .../validation/operations/SelectLimitTest.java  | 256 ++-
 .../db/rows/UnfilteredRowIteratorsTest.java |  10 +-
 18 files changed, 382 insertions(+), 54 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/88d2ac4f/CHANGES.txt
--
diff --cc CHANGES.txt
index fffda7f,bda510f..4a823c9
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -1,65 -1,18 +1,66 @@@
 -2.2.11
 +3.0.15
 + * Make concat work with iterators that have different subsets of columns 
(CASSANDRA-13482)
 + * Set test.runners based on cores and memory size (CASSANDRA-13078)
 + * Allow different NUMACTL_ARGS to be passed in (CASSANDRA-13557)
 + * Allow native function calls in CQLSSTableWriter (CASSANDRA-12606)
 + * Fix secondary index queries on COMPACT tables (CASSANDRA-13627)
 + * Nodetool listsnapshots output is missing a newline, if there are no 
snapshots (CASSANDRA-13568)
 + Merged from 2.2:
-   * Fix potential NPE when resume bootstrap fails (CASSANDRA-13272)
-   * Fix toJSONString for the UDT, tuple and collection types (CASSANDRA-13592)
-   * Fix nested Tuples/UDTs validation (CASSANDRA-13646)
+  * Fix queries with LIMIT and filtering on clustering columns 
(CASSANDRA-11223)
+  * Fix potential NPE when resume bootstrap fails (CASSANDRA-13272)
+  * Fix toJSONString for the UDT, tuple and collection types (CASSANDRA-13592)
+  * Fix nested Tuples/UDTs validation (CASSANDRA-13646)
 - * Remove unused max_value_size_in_mb config setting from yaml 
(CASSANDRA-13625)
  
 -
 -2.2.10
 +3.0.14
 + * Ensure int overflow doesn't occur when calculating large partition warning 
size (CASSANDRA-13172)
 + * Ensure consistent view of partition columns between coordinator and 
replica in ColumnFilter (CASSANDRA-13004)
 + * Failed unregistering mbean during drop keyspace (CASSANDRA-13346)
 + * nodetool scrub/cleanup/upgradesstables exit code is wrong (CASSANDRA-13542)
 + * Fix the reported number of sstable data files accessed per read 
(CASSANDRA-13120)
 + * Fix schema digest mismatch during rolling upgrades from versions before 
3.0.12 (CASSANDRA-13559)
 + * Upgrade JNA version to 4.4.0 (CASSANDRA-13072)
 + * Interned ColumnIdentifiers should use minimal ByteBuffers (CASSANDRA-13533)
 + * ReverseIndexedReader may drop rows during 2.1 to 3.0 upgrade 
(CASSANDRA-13525)
 + * Fix repair process violating start/end token limits for small ranges 
(CASSANDRA-13052)
 + * Add storage port options to sstableloader (CASSANDRA-13518)
 + * Properly handle quoted index names in cqlsh DESCRIBE output 
(CASSANDRA-12847)
 + * Avoid reading static row twice from old format sstables (CASSANDRA-13236)
 + * Fix NPE in StorageService.excise() (CASSANDRA-13163)
 + * Expire OutboundTcpConnection messages by a single Thread (CASSANDRA-13265)
 + * Fail repair if insufficient responses received (CASSANDRA-13397)
 + * Fix SSTableLoader fail when the loaded table contains dropped columns 
(CASSANDRA-13276)
 + * Avoid name clashes in CassandraIndexTest (CASSANDRA-13427)
 + * Handling partially written hint files (CASSANDRA-12728)
 + * Interrupt replaying hints on decommission (CASSANDRA-13308)
 + * Fix schema version calculation for rolling upgrades (CASSANDRA-13441)
 +Merged from 2.2:
   * Nodes started with join_ring=False should be able to serve 

[2/6] cassandra git commit: Merge branch cassandra-2.2 into cassandra-3.0

2017-07-14 Thread blerer
http://git-wip-us.apache.org/repos/asf/cassandra/blob/88d2ac4f/test/unit/org/apache/cassandra/cql3/validation/operations/SelectLimitTest.java
--
diff --cc 
test/unit/org/apache/cassandra/cql3/validation/operations/SelectLimitTest.java
index aeb3d56,0ffb799..7e90c0a
--- 
a/test/unit/org/apache/cassandra/cql3/validation/operations/SelectLimitTest.java
+++ 
b/test/unit/org/apache/cassandra/cql3/validation/operations/SelectLimitTest.java
@@@ -26,14 -26,14 +26,15 @@@ import org.junit.Test
  import org.apache.cassandra.config.DatabaseDescriptor;
  import org.apache.cassandra.cql3.CQLTester;
  import org.apache.cassandra.dht.ByteOrderedPartitioner;
--import org.apache.cassandra.exceptions.InvalidRequestException;
++import org.apache.cassandra.service.StorageService;
  
  public class SelectLimitTest extends CQLTester
  {
  @BeforeClass
  public static void setUp()
  {
 -DatabaseDescriptor.setPartitioner(ByteOrderedPartitioner.instance);
++
StorageService.instance.setPartitionerUnsafe(ByteOrderedPartitioner.instance);
 +
DatabaseDescriptor.setPartitionerUnsafe(ByteOrderedPartitioner.instance);
  }
  
  /**
@@@ -125,43 -125,217 +126,296 @@@
 row(1, 1),
 row(1, 2),
 row(1, 3));
 +assertRows(execute("SELECT * FROM %s WHERE v > 1 AND v <= 3 LIMIT 6 
ALLOW FILTERING"),
 +   row(0, 2),
 +   row(0, 3),
 +   row(1, 2),
 +   row(1, 3),
 +   row(2, 2),
 +   row(2, 3));
 +}
  
 -// strict bound (v > 1) over a range of partitions is not supported 
for compact storage if limit is provided
 -assertInvalidThrow(InvalidRequestException.class, "SELECT * FROM %s 
WHERE v > 1 AND v <= 3 LIMIT 6 ALLOW FILTERING");
 +@Test
 +public void testLimitWithDeletedRowsAndStaticColumns() throws Throwable
 +{
 +createTable("CREATE TABLE %s (pk int, c int, v int, s int static, 
PRIMARY KEY (pk, c))");
 +
 +execute("INSERT INTO %s (pk, c, v, s) VALUES (1, -1, 1, 1)");
 +execute("INSERT INTO %s (pk, c, v, s) VALUES (2, -1, 1, 1)");
 +execute("INSERT INTO %s (pk, c, v, s) VALUES (3, -1, 1, 1)");
 +execute("INSERT INTO %s (pk, c, v, s) VALUES (4, -1, 1, 1)");
 +execute("INSERT INTO %s (pk, c, v, s) VALUES (5, -1, 1, 1)");
 +
 +assertRows(execute("SELECT * FROM %s"),
 +   row(1, -1, 1, 1),
 +   row(2, -1, 1, 1),
 +   row(3, -1, 1, 1),
 +   row(4, -1, 1, 1),
 +   row(5, -1, 1, 1));
 +
 +execute("DELETE FROM %s WHERE pk = 2");
 +
 +assertRows(execute("SELECT * FROM %s"),
 +   row(1, -1, 1, 1),
 +   row(3, -1, 1, 1),
 +   row(4, -1, 1, 1),
 +   row(5, -1, 1, 1));
 +
 +assertRows(execute("SELECT * FROM %s LIMIT 2"),
 +   row(1, -1, 1, 1),
 +   row(3, -1, 1, 1));
  }
+ 
+ @Test
+ public void testFilteringOnClusteringColumnsWithLimitAndStaticColumns() 
throws Throwable
+ {
+ // With only one clustering column
+ createTable("CREATE TABLE %s (a int, b int, s int static, c int, 
primary key (a, b))"
+   + " WITH caching = {'keys': 'ALL', 'rows_per_partition' : 'ALL'}");
+ 
+ for (int i = 0; i < 4; i++)
+ {
+ execute("INSERT INTO %s (a, s) VALUES (?, ?)", i, i);
+ for (int j = 0; j < 3; j++)
+ if (!((i == 0 || i == 3) && j == 1))
+ execute("INSERT INTO %s (a, b, c) VALUES (?, ?, ?)", 
i, j, i + j);
+ }
+ 
 -for (boolean forceFlush : new boolean[]{false, true})
++beforeAndAfterFlush(() ->
+ {
 -if (forceFlush)
 -flush();
 -
+ assertRows(execute("SELECT * FROM %s"),
+row(0, 0, 0, 0),
+row(0, 2, 0, 2),
+row(1, 0, 1, 1),
+row(1, 1, 1, 2),
+row(1, 2, 1, 3),
+row(2, 0, 2, 2),
+row(2, 1, 2, 3),
+row(2, 2, 2, 4),
+row(3, 0, 3, 3),
+row(3, 2, 3, 5));
+ 
+ assertRows(execute("SELECT * FROM %s WHERE b = 1 ALLOW 
FILTERING"),
+row(1, 1, 1, 2),
+row(2, 1, 2, 3));
+ 
+ // The problem was that the static row of partition 0 used to be filtered only in
+ // SelectStatement and was consequently counted as a row, so the query returned one row less.
+ assertRows(execute("SELECT * FROM %s WHERE b = 1 LIMIT 2 ALLOW 
FILTERING"),
+row(1, 1, 1, 2),
+

cassandra git commit: Fix queries with LIMIT and filtering on clustering columns

2017-07-14 Thread blerer
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-2.2 5b982d790 -> b08843de6


Fix queries with LIMIT and filtering on clustering columns

patch by Benjamin Lerer; reviewed by Stefania Alborghetti for CASSANDRA-11223


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/b08843de
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/b08843de
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/b08843de

Branch: refs/heads/cassandra-2.2
Commit: b08843de67b3c63fa9c0efe10bb9eda07c007f6c
Parents: 5b982d7
Author: Benjamin Lerer 
Authored: Fri Jul 14 17:11:15 2017 +0200
Committer: Benjamin Lerer 
Committed: Fri Jul 14 17:11:15 2017 +0200

--
 CHANGES.txt |   1 +
 .../org/apache/cassandra/db/ColumnFamily.java   |   2 +-
 .../cassandra/db/filter/ColumnCounter.java  |  21 +-
 .../cassandra/db/filter/NamesQueryFilter.java   |   2 +-
 .../cassandra/db/filter/SliceQueryFilter.java   |  17 +-
 .../validation/operations/SelectLimitTest.java  | 209 +++
 6 files changed, 238 insertions(+), 14 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/b08843de/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 122ba54..bda510f 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 2.2.11
+ * Fix queries with LIMIT and filtering on clustering columns (CASSANDRA-11223)
  * Fix potential NPE when resume bootstrap fails (CASSANDRA-13272)
  * Fix toJSONString for the UDT, tuple and collection types (CASSANDRA-13592)
  * Fix nested Tuples/UDTs validation (CASSANDRA-13646)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/b08843de/src/java/org/apache/cassandra/db/ColumnFamily.java
--
diff --git a/src/java/org/apache/cassandra/db/ColumnFamily.java 
b/src/java/org/apache/cassandra/db/ColumnFamily.java
index a7243a2..1532439 100644
--- a/src/java/org/apache/cassandra/db/ColumnFamily.java
+++ b/src/java/org/apache/cassandra/db/ColumnFamily.java
@@ -92,7 +92,7 @@ public abstract class ColumnFamily implements Iterable, 
IRowCacheEntry
 {
 ColumnCounter counter = getComparator().isDense()
   ? new ColumnCounter(now)
-  : new ColumnCounter.GroupByPrefix(now, 
getComparator(), metadata.clusteringColumns().size());
+  : new ColumnCounter.GroupByPrefix(now, 
getComparator(), metadata.clusteringColumns().size(), true);
 return counter.countAll(this).live();
 }
 

http://git-wip-us.apache.org/repos/asf/cassandra/blob/b08843de/src/java/org/apache/cassandra/db/filter/ColumnCounter.java
--
diff --git a/src/java/org/apache/cassandra/db/filter/ColumnCounter.java 
b/src/java/org/apache/cassandra/db/filter/ColumnCounter.java
index 594fde8..a00d588 100644
--- a/src/java/org/apache/cassandra/db/filter/ColumnCounter.java
+++ b/src/java/org/apache/cassandra/db/filter/ColumnCounter.java
@@ -90,6 +90,7 @@ public class ColumnCounter
 {
 protected final CellNameType type;
 protected final int toGroup;
+protected final boolean countPartitionsWithOnlyStaticData;
 protected CellName previous;
 
 /**
@@ -101,12 +102,15 @@ public class ColumnCounter
  * @param toGroup the number of composite components on which to group
  *column. If 0, all columns are grouped, otherwise we 
group
  *those for which the {@code toGroup} first component 
are equals.
+ * @param countPartitionsWithOnlyStaticData if {@code true} the 
partitions with only static data should be
+ * counted as 1 valid row.
  */
-public GroupByPrefix(long timestamp, CellNameType type, int toGroup)
+public GroupByPrefix(long timestamp, CellNameType type, int toGroup, 
boolean countPartitionsWithOnlyStaticData)
 {
 super(timestamp);
 this.type = type;
 this.toGroup = toGroup;
+this.countPartitionsWithOnlyStaticData = 
countPartitionsWithOnlyStaticData;
 
 assert toGroup == 0 || type != null;
 }
@@ -153,14 +157,16 @@ public class ColumnCounter
 // We want to count the static group as 1 (CQL) row only if 
it's the only
 // group in the partition. So, since we have already counted 
it at this point,
 // just don't count the 2nd group if there is one and the 
first one was static
-if (previous.isStatic())
+if (previous.isStatic() && 
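The flag added here, countPartitionsWithOnlyStaticData, controls whether a partition whose only live data is its static row still counts as one CQL row against the limit. A standalone sketch of that counting rule (illustrative only; not Cassandra's ColumnCounter):

```python
def count_cql_rows(partition, count_static_only=True):
    """Count the CQL rows a partition contributes toward a limit.

    partition is a dict with:
      'static': True if the partition has live static data,
      'rows':   the number of live clustering rows.
    Mirrors the GroupByPrefix rule: a partition holding only static data
    counts as one row only when the flag allows it.
    """
    if partition["rows"] > 0:
        return partition["rows"]
    if partition["static"] and count_static_only:
        return 1
    return 0

# A partition with only a static row is counted or not depending on the flag.
print(count_cql_rows({"static": True, "rows": 0}))                           # 1
print(count_cql_rows({"static": True, "rows": 0}, count_static_only=False))  # 0
```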

[jira] [Created] (CASSANDRA-13694) sstabledump does not show full precision of timestamp columns

2017-07-14 Thread Tim Reeves (JIRA)
Tim Reeves created CASSANDRA-13694:
--

 Summary: sstabledump does not show full precision of timestamp 
columns
 Key: CASSANDRA-13694
 URL: https://issues.apache.org/jira/browse/CASSANDRA-13694
 Project: Cassandra
  Issue Type: Bug
  Components: Tools
 Environment: Ubuntu 16.04 LTS
Reporter: Tim Reeves
 Fix For: 3.7


Create a table:

CREATE TABLE test_table (
unit_no bigint,
event_code text,
active_time timestamp,
ack_time timestamp,
PRIMARY KEY ((unit_no, event_code), active_time)
) WITH CLUSTERING ORDER BY (active_time DESC)

Insert a row:

INSERT INTO test_table (unit_no, event_code, active_time, ack_time)
  VALUES (1234, 'TEST EVENT', toTimestamp(now()), 
toTimestamp(now()));

Verify that it is in the database with a full timestamp:

cqlsh:pentaho> select * from test_table;

 unit_no | event_code | active_time | ack_time
-++-+-
1234 | TEST EVENT | 2017-07-14 14:52:39.919000+ | 2017-07-14 
14:52:39.919000+

(1 rows)


Write file:

nodetool flush
nodetool compact pentaho

Use sstabledump:

treeves@ubuntu:~$ sstabledump 
/var/lib/cassandra/data/pentaho/test_table-99ba228068a311e7ac30953b79ac2c3e/mb-2-big-Data.db
[
  {
"partition" : {
  "key" : [ "1234", "TEST EVENT" ],
  "position" : 0
},
"rows" : [
  {
"type" : "row",
"position" : 38,
"clustering" : [ "2017-07-14 15:52+0100" ],
"liveness_info" : { "tstamp" : "2017-07-14T14:52:39.888701Z" },
"cells" : [
  { "name" : "ack_time", "value" : "2017-07-14 15:52+0100" }
]
  }
]
  }
]

treeves@ubuntu:~$ 

The timestamp in the cluster key, and the regular column, are both truncated to 
the minute.
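The loss of precision is easy to see by formatting the reported instant both ways: cqlsh prints milliseconds, while the sstabledump output above carries only minute precision. A small illustration (the timestamp value is taken from the report; the format strings are my assumptions about the two renderings):

```python
from datetime import datetime, timezone

# The instant cqlsh displayed, at millisecond precision (shown here in UTC).
ts = datetime(2017, 7, 14, 14, 52, 39, 919000, tzinfo=timezone.utc)

full = ts.strftime("%Y-%m-%d %H:%M:%S.%f%z")  # millisecond-precision form
truncated = ts.strftime("%Y-%m-%d %H:%M%z")   # minute-precision form, as dumped

print(full)       # 2017-07-14 14:52:39.919000+0000
print(truncated)  # 2017-07-14 14:52+0000
```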




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-11223) Queries with LIMIT filtering on clustering columns can return less rows than expected

2017-07-14 Thread Benjamin Lerer (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11223?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16087314#comment-16087314
 ] 

Benjamin Lerer commented on CASSANDRA-11223:


I rebased the branches and ran CI. Everything looks good.

> Queries with LIMIT filtering on clustering columns can return less rows than 
> expected
> -
>
> Key: CASSANDRA-11223
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11223
> Project: Cassandra
>  Issue Type: Bug
>  Components: Local Write-Read Paths
>Reporter: Benjamin Lerer
>Assignee: Benjamin Lerer
>
> A query like {{SELECT * FROM %s WHERE b = 1 LIMIT 2 ALLOW FILTERING}} can 
> return fewer rows than expected if the table has some static columns and some 
> of the partitions have no rows matching b = 1.
> The problem can be reproduced with the following unit test:
> {code}
> public void testFilteringOnClusteringColumnsWithLimitAndStaticColumns() 
> throws Throwable
> {
> createTable("CREATE TABLE %s (a int, b int, s int static, c int, 
> primary key (a, b))");
> for (int i = 0; i < 3; i++)
> {
> execute("INSERT INTO %s (a, s) VALUES (?, ?)", i, i);
> for (int j = 0; j < 3; j++)
> if (!(i == 0 && j == 1))
> execute("INSERT INTO %s (a, b, c) VALUES (?, ?, ?)", 
> i, j, i + j);
> }
> assertRows(execute("SELECT * FROM %s"),
>    row(1, 0, 1, 1),
>    row(1, 1, 1, 2),
>    row(1, 2, 1, 3),
>    row(0, 0, 0, 0),
>    row(0, 2, 0, 2),
>    row(2, 0, 2, 2),
>    row(2, 1, 2, 3),
>    row(2, 2, 2, 4));
> assertRows(execute("SELECT * FROM %s WHERE b = 1 ALLOW FILTERING"),
>    row(1, 1, 1, 2),
>    row(2, 1, 2, 3));
> assertRows(execute("SELECT * FROM %s WHERE b = 1 LIMIT 2 ALLOW 
> FILTERING"),
>    row(1, 1, 1, 2),
>    row(2, 1, 2, 3)); // < FAIL It returns only one 
> row because the static row of partition 0 is counted and filtered out in 
> SELECT statement
> }
> {code}
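The failure mode in the test above comes from ordering: the static row of partition 0 is counted toward the LIMIT before the b = 1 filter removes it. A tiny simulation of that ordering bug (hypothetical data, not Cassandra code):

```python
# Rows as (a, b, c); b is None for the static-only row of partition 0.
rows = [(1, 1, 2), (0, None, None), (2, 1, 3)]

def matches(row):
    return row[1] == 1  # the "b = 1" filter

# Buggy order: apply LIMIT 2 first, filter afterwards -> a matching row is lost.
buggy = [r for r in rows[:2] if matches(r)]
# Fixed order: filter first, then apply LIMIT 2.
fixed = [r for r in rows if matches(r)][:2]

print(buggy)  # [(1, 1, 2)]
print(fixed)  # [(1, 1, 2), (2, 1, 3)]
```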






[jira] [Commented] (CASSANDRA-12971) Add CAS option to WRITE test to stress tool

2017-07-14 Thread Stefan Podkowinski (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12971?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16087296#comment-16087296
 ] 

Stefan Podkowinski commented on CASSANDRA-12971:


This also seems to be addressed in CASSANDRA-13529

> Add CAS option to WRITE test to stress tool
> ---
>
> Key: CASSANDRA-12971
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12971
> Project: Cassandra
>  Issue Type: New Feature
>  Components: Stress, Tools
>Reporter: Vladimir Yudovin
>Assignee: Vladimir Yudovin
> Attachments: stress-cass.patch
>
>
> If the -cas option is present, each UPDATE is performed with a true IF 
> condition, so the data is inserted anyway.
> It's implemented; if it's needed, I'll proceed with the patch.






[jira] [Comment Edited] (CASSANDRA-13072) Cassandra failed to run on Linux-aarch64

2017-07-14 Thread Benjamin Lerer (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13072?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16084633#comment-16084633
 ] 

Benjamin Lerer edited comment on CASSANDRA-13072 at 7/14/17 1:08 PM:
-

I pushed some patches to change the JNA version to {{4.2.2}} in 
[3.0|https://github.com/apache/cassandra/compare/cassandra-3.0...blerer:13072-3.0],
  
[3.11|https://github.com/apache/cassandra/compare/cassandra-3.11...blerer:13072-3.11]
 and 
[trunk|https://github.com/apache/cassandra/compare/trunk...blerer:13072-trunk] 
. I ran the patch on CI and the failing DTests are known flaky tests. 


was (Author: blerer):
I pushed some patches to change the JNA version to {{4.2.2}} in 
[3.0|https://github.com/apache/cassandra/compare/cassandra-3.0...blerer:13072-3.0],
  
[3.11|https://github.com/apache/cassandra/compare/cassandra-3.11...blerer:13072-3.11]
 and [trunk|https://github.com/apache/cassandra/compare/trunk...blerer:trunk] . 
I ran the patch on CI and the failing DTests are known flaky tests. 

> Cassandra failed to run on Linux-aarch64
> 
>
> Key: CASSANDRA-13072
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13072
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
> Environment: Hardware: ARM aarch64
> OS: Ubuntu 16.04.1 LTS
>Reporter: Jun He
>Assignee: Benjamin Lerer
>  Labels: incompatible
> Fix For: 3.0.14, 3.11.0, 4.0
>
> Attachments: compat_report.html
>
>
> Steps to reproduce:
> 1. Download cassandra latest source
> 2. Build it with "ant"
> 3. Run with "./bin/cassandra". Daemon is crashed with following error message:
> {quote}
> INFO  05:30:21 Initializing system.schema_functions
> INFO  05:30:21 Initializing system.schema_aggregates
> ERROR 05:30:22 Exception in thread Thread[MemtableFlushWriter:1,5,main]
> java.lang.NoClassDefFoundError: Could not initialize class com.sun.jna.Native
> at 
> org.apache.cassandra.utils.memory.MemoryUtil.allocate(MemoryUtil.java:97) 
> ~[main/:na]
> at org.apache.cassandra.io.util.Memory.(Memory.java:74) 
> ~[main/:na]
> at org.apache.cassandra.io.util.SafeMemory.(SafeMemory.java:32) 
> ~[main/:na]
> at 
> org.apache.cassandra.io.compress.CompressionMetadata$Writer.(CompressionMetadata.java:316)
>  ~[main/:na]
> at 
> org.apache.cassandra.io.compress.CompressionMetadata$Writer.open(CompressionMetadata.java:330)
>  ~[main/:na]
> at 
> org.apache.cassandra.io.compress.CompressedSequentialWriter.(CompressedSequentialWriter.java:76)
>  ~[main/:na]
> at 
> org.apache.cassandra.io.util.SequentialWriter.open(SequentialWriter.java:163) 
> ~[main/:na]
> at 
> org.apache.cassandra.io.sstable.format.big.BigTableWriter.(BigTableWriter.java:73)
>  ~[main/:na]
> at 
> org.apache.cassandra.io.sstable.format.big.BigFormat$WriterFactory.open(BigFormat.java:93)
>  ~[main/:na]
> at 
> org.apache.cassandra.io.sstable.format.SSTableWriter.create(SSTableWriter.java:96)
>  ~[main/:na]
> at 
> org.apache.cassandra.io.sstable.SimpleSSTableMultiWriter.create(SimpleSSTableMultiWriter.java:114)
>  ~[main/:na]
> at 
> org.apache.cassandra.db.compaction.AbstractCompactionStrategy.createSSTableMultiWriter(AbstractCompactionStrategy.java:519)
>  ~[main/:na]
> at 
> org.apache.cassandra.db.compaction.CompactionStrategyManager.createSSTableMultiWriter(CompactionStrategyManager.java:497)
>  ~[main/:na]
> at 
> org.apache.cassandra.db.ColumnFamilyStore.createSSTableMultiWriter(ColumnFamilyStore.java:480)
>  ~[main/:na]
> at 
> org.apache.cassandra.db.Memtable.createFlushWriter(Memtable.java:439) 
> ~[main/:na]
> at 
> org.apache.cassandra.db.Memtable.writeSortedContents(Memtable.java:371) 
> ~[main/:na]
> at org.apache.cassandra.db.Memtable.flush(Memtable.java:332) 
> ~[main/:na]
> at 
> org.apache.cassandra.db.ColumnFamilyStore$Flush.run(ColumnFamilyStore.java:1054)
>  ~[main/:na]
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>  ~[na:1.8.0_111]
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>  ~[na:1.8.0_111]
> at java.lang.Thread.run(Thread.java:745) ~[na:1.8.0_111]
> {quote}
> Analysis:
> This issue is caused by the bundled jna-4.0.0.jar, which doesn't come with 
> aarch64 native support. Replacing lib/jna-4.0.0.jar with jna-4.2.0.jar from 
> http://central.maven.org/maven2/net/java/dev/jna/jna/4.2.0/ fixes the 
> problem.
> Attached is the binary compatibility report of jna.jar between 4.0 and 4.2. 
> The result is good (97.4%). So is there a possibility to upgrade jna to 4.2.0 
> upstream? If there are any tests to execute, please kindly point me to them. 
> Thanks a lot.




[jira] [Updated] (CASSANDRA-10271) ORDER BY should allow skipping equality-restricted clustering columns

2017-07-14 Thread Benjamin Lerer (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10271?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Benjamin Lerer updated CASSANDRA-10271:
---
   Resolution: Fixed
Fix Version/s: (was: 3.11.x)
   (was: 4.x)
   4.0
   Status: Resolved  (was: Ready to Commit)

Committed into trunk at 465cfd5be2e92bd9553e1ac4987bfa579d8efca3

> ORDER BY should allow skipping equality-restricted clustering columns
> -
>
> Key: CASSANDRA-10271
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10271
> Project: Cassandra
>  Issue Type: Improvement
>  Components: CQL
>Reporter: Tyler Hobbs
>Assignee: Andrés de la Peña
>Priority: Minor
> Fix For: 4.0
>
> Attachments: 10271-3.x.txt, cassandra-2.2-10271.txt
>
>
> Given a table like the following:
> {noformat}
> CREATE TABLE foo (a int, b int, c int, d int, PRIMARY KEY (a, b, c));
> {noformat}
> We should support a query like this:
> {noformat}
> SELECT * FROM foo WHERE a = 0 AND b = 0 ORDER BY c ASC;
> {noformat}
> Currently, this results in the following error:
> {noformat}
> [Invalid query] message="Order by currently only support the ordering of 
> columns following their declared order in the PRIMARY KEY"
> {noformat}
> However, since {{b}} is restricted by an equality restriction, we shouldn't 
> require it to be present in the {{ORDER BY}} clause.
> As a workaround, you can use this query instead:
> {noformat}
> SELECT * FROM foo WHERE a = 0 AND b = 0 ORDER BY b ASC, c ASC;
> {noformat}
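The workaround is equivalent because once b is fixed by an equality restriction, ordering by (b, c) and ordering by c alone yield the same sequence. A quick illustration (hypothetical rows):

```python
# Rows as (a, b, c, d) with b equality-restricted to 0.
rows = [(0, 0, 3, 1), (0, 0, 1, 2), (0, 0, 2, 3)]

by_b_then_c = sorted(rows, key=lambda r: (r[1], r[2]))  # ORDER BY b ASC, c ASC
by_c_only = sorted(rows, key=lambda r: r[2])            # ORDER BY c ASC

print(by_b_then_c == by_c_only)  # True
```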






[jira] [Commented] (CASSANDRA-10271) ORDER BY should allow skipping equality-restricted clustering columns

2017-07-14 Thread Benjamin Lerer (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10271?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16087285#comment-16087285
 ] 

Benjamin Lerer commented on CASSANDRA-10271:


I committed the patch only in trunk as it is an improvement.

> ORDER BY should allow skipping equality-restricted clustering columns
> -
>
> Key: CASSANDRA-10271
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10271
> Project: Cassandra
>  Issue Type: Improvement
>  Components: CQL
>Reporter: Tyler Hobbs
>Assignee: Andrés de la Peña
>Priority: Minor
> Fix For: 3.11.x, 4.x
>
> Attachments: 10271-3.x.txt, cassandra-2.2-10271.txt
>
>
> Given a table like the following:
> {noformat}
> CREATE TABLE foo (a int, b int, c int, d int, PRIMARY KEY (a, b, c));
> {noformat}
> We should support a query like this:
> {noformat}
> SELECT * FROM foo WHERE a = 0 AND b = 0 ORDER BY c ASC;
> {noformat}
> Currently, this results in the following error:
> {noformat}
> [Invalid query] message="Order by currently only support the ordering of 
> columns following their declared order in the PRIMARY KEY"
> {noformat}
> However, since {{b}} is restricted by an equality restriction, we shouldn't 
> require it to be present in the {{ORDER BY}} clause.
> As a workaround, you can use this query instead:
> {noformat}
> SELECT * FROM foo WHERE a = 0 AND b = 0 ORDER BY b ASC, c ASC;
> {noformat}






cassandra git commit: Allow skipping equality-restricted clustering columns in ORDER BY clause

2017-07-14 Thread blerer
Repository: cassandra
Updated Branches:
  refs/heads/trunk 4ef86457b -> 465cfd5be


Allow skipping equality-restricted clustering columns in ORDER BY clause

patch by Andrés de la Peña; reviewed by Benjamin Lerer for CASSANDRA-10271


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/465cfd5b
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/465cfd5b
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/465cfd5b

Branch: refs/heads/trunk
Commit: 465cfd5be2e92bd9553e1ac4987bfa579d8efca3
Parents: 4ef8645
Author: Andrés de la Peña 
Authored: Fri Jul 14 15:02:47 2017 +0200
Committer: Benjamin Lerer 
Committed: Fri Jul 14 15:02:47 2017 +0200

--
 CHANGES.txt |   1 +
 .../restrictions/MultiColumnRestriction.java|   6 +
 .../cql3/statements/SelectStatement.java|  13 +-
 .../operations/SelectOrderByTest.java   | 126 +++
 4 files changed, 141 insertions(+), 5 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/465cfd5b/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 583647a..70aae21 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 4.0
+ * Allow skipping equality-restricted clustering columns in ORDER BY clause 
(CASSANDRA-10271)
  * Use common nowInSec for validation compactions (CASSANDRA-13671)
  * Improve handling of IR prepare failures (CASSANDRA-13672)
  * Send IR coordinator messages synchronously (CASSANDRA-13673)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/465cfd5b/src/java/org/apache/cassandra/cql3/restrictions/MultiColumnRestriction.java
--
diff --git 
a/src/java/org/apache/cassandra/cql3/restrictions/MultiColumnRestriction.java 
b/src/java/org/apache/cassandra/cql3/restrictions/MultiColumnRestriction.java
index bf10024..07ebb74 100644
--- 
a/src/java/org/apache/cassandra/cql3/restrictions/MultiColumnRestriction.java
+++ 
b/src/java/org/apache/cassandra/cql3/restrictions/MultiColumnRestriction.java
@@ -137,6 +137,12 @@ public abstract class MultiColumnRestriction implements 
SingleRestriction
 }
 
 @Override
+public boolean isEQ()
+{
+return true;
+}
+
+@Override
public void addFunctionsTo(List<Function> functions)
 {
 value.addFunctionsTo(functions);

http://git-wip-us.apache.org/repos/asf/cassandra/blob/465cfd5b/src/java/org/apache/cassandra/cql3/statements/SelectStatement.java
--
diff --git a/src/java/org/apache/cassandra/cql3/statements/SelectStatement.java 
b/src/java/org/apache/cassandra/cql3/statements/SelectStatement.java
index 2cec190..77eebcf 100644
--- a/src/java/org/apache/cassandra/cql3/statements/SelectStatement.java
+++ b/src/java/org/apache/cassandra/cql3/statements/SelectStatement.java
@@ -989,7 +989,7 @@ public class SelectStatement implements CQLStatement
 assert !forView;
 verifyOrderingIsAllowed(restrictions);
 orderingComparator = getOrderingComparator(table, selection, 
restrictions, orderingColumns);
-isReversed = isReversed(table, orderingColumns);
+isReversed = isReversed(table, orderingColumns, restrictions);
 if (isReversed)
 orderingComparator = 
Collections.reverseOrder(orderingComparator);
 }
@@ -1203,7 +1203,7 @@ public class SelectStatement implements CQLStatement
 return orderingIndexes;
 }
 
-private boolean isReversed(TableMetadata table, Map<ColumnMetadata, Boolean> orderingColumns) throws InvalidRequestException
+private boolean isReversed(TableMetadata table, Map<ColumnMetadata, Boolean> orderingColumns, StatementRestrictions restrictions) throws InvalidRequestException
 {
 Boolean[] reversedMap = new 
Boolean[table.clusteringColumns().size()];
 int i = 0;
@@ -1215,9 +1215,12 @@ public class SelectStatement implements CQLStatement
 checkTrue(def.isClusteringColumn(),
   "Order by is currently only supported on the 
clustered columns of the PRIMARY KEY, got %s", def.name);
 
-checkTrue(i++ == def.position(),
-  "Order by currently only support the ordering of 
columns following their declared order in the PRIMARY KEY");
-
+while (i != def.position())
+{
+
checkTrue(restrictions.isColumnRestrictedByEq(table.clusteringColumns().get(i++)),
+  

[jira] [Commented] (CASSANDRA-13072) Cassandra failed to run on Linux-aarch64

2017-07-14 Thread Alex Petrov (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13072?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16087268#comment-16087268
 ] 

Alex Petrov commented on CASSANDRA-13072:
-

Looks like the trunk link you have posted is broken (it points to your fork's 
trunk); I've checked [this 
one|https://github.com/apache/cassandra/compare/trunk...blerer:13072-trunk].
 CI looks good, too.

Checked the branches locally just to make sure the daemon starts; everything 
works as expected.

LGTM, +1.

> Cassandra failed to run on Linux-aarch64
> 
>
> Key: CASSANDRA-13072
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13072
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
> Environment: Hardware: ARM aarch64
> OS: Ubuntu 16.04.1 LTS
>Reporter: Jun He
>Assignee: Benjamin Lerer
>  Labels: incompatible
> Fix For: 3.0.14, 3.11.0, 4.0
>
> Attachments: compat_report.html
>
>
> Steps to reproduce:
> 1. Download cassandra latest source
> 2. Build it with "ant"
> 3. Run with "./bin/cassandra". Daemon is crashed with following error message:
> {quote}
> INFO  05:30:21 Initializing system.schema_functions
> INFO  05:30:21 Initializing system.schema_aggregates
> ERROR 05:30:22 Exception in thread Thread[MemtableFlushWriter:1,5,main]
> java.lang.NoClassDefFoundError: Could not initialize class com.sun.jna.Native
> at 
> org.apache.cassandra.utils.memory.MemoryUtil.allocate(MemoryUtil.java:97) 
> ~[main/:na]
> at org.apache.cassandra.io.util.Memory.<init>(Memory.java:74) 
> ~[main/:na]
> at org.apache.cassandra.io.util.SafeMemory.<init>(SafeMemory.java:32) 
> ~[main/:na]
> at 
> org.apache.cassandra.io.compress.CompressionMetadata$Writer.<init>(CompressionMetadata.java:316)
>  ~[main/:na]
> at 
> org.apache.cassandra.io.compress.CompressionMetadata$Writer.open(CompressionMetadata.java:330)
>  ~[main/:na]
> at 
> org.apache.cassandra.io.compress.CompressedSequentialWriter.<init>(CompressedSequentialWriter.java:76)
>  ~[main/:na]
> at 
> org.apache.cassandra.io.util.SequentialWriter.open(SequentialWriter.java:163) 
> ~[main/:na]
> at 
> org.apache.cassandra.io.sstable.format.big.BigTableWriter.<init>(BigTableWriter.java:73)
>  ~[main/:na]
> at 
> org.apache.cassandra.io.sstable.format.big.BigFormat$WriterFactory.open(BigFormat.java:93)
>  ~[main/:na]
> at 
> org.apache.cassandra.io.sstable.format.SSTableWriter.create(SSTableWriter.java:96)
>  ~[main/:na]
> at 
> org.apache.cassandra.io.sstable.SimpleSSTableMultiWriter.create(SimpleSSTableMultiWriter.java:114)
>  ~[main/:na]
> at 
> org.apache.cassandra.db.compaction.AbstractCompactionStrategy.createSSTableMultiWriter(AbstractCompactionStrategy.java:519)
>  ~[main/:na]
> at 
> org.apache.cassandra.db.compaction.CompactionStrategyManager.createSSTableMultiWriter(CompactionStrategyManager.java:497)
>  ~[main/:na]
> at 
> org.apache.cassandra.db.ColumnFamilyStore.createSSTableMultiWriter(ColumnFamilyStore.java:480)
>  ~[main/:na]
> at 
> org.apache.cassandra.db.Memtable.createFlushWriter(Memtable.java:439) 
> ~[main/:na]
> at 
> org.apache.cassandra.db.Memtable.writeSortedContents(Memtable.java:371) 
> ~[main/:na]
> at org.apache.cassandra.db.Memtable.flush(Memtable.java:332) 
> ~[main/:na]
> at 
> org.apache.cassandra.db.ColumnFamilyStore$Flush.run(ColumnFamilyStore.java:1054)
>  ~[main/:na]
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>  ~[na:1.8.0_111]
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>  ~[na:1.8.0_111]
> at java.lang.Thread.run(Thread.java:745) ~[na:1.8.0_111]
> {quote}
> Analyze:
> This issue is caused by bundled jna-4.0.0.jar which doesn't come with aarch64 
> native support. Replace lib/jna-4.0.0.jar with jna-4.2.0.jar from 
> http://central.maven.org/maven2/net/java/dev/jna/jna/4.2.0/ can fix this 
> problem.
> Attached is the binary compatibility report of jna.jar between 4.0 and 4.2. 
> The result is good (97.4%). So is there possibility to upgrade jna to 4.2.0 
> in upstream? Should there be any kind of tests to execute, please kindly 
> point me. Thanks a lot.
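A fail-fast guard in this direction could look roughly as follows (a hedged sketch, not Cassandra's actual startup code; the {{JnaProbe}} class and {{probe}} method are hypothetical): check once at startup whether JNA can be initialized on this platform, instead of crashing much later in the memtable flush path with {{NoClassDefFoundError}} as in the trace above.

```java
// Hedged sketch, not Cassandra's actual startup code: probe JNA availability
// once, up front, so a bundled jar without natives for this architecture
// (e.g. aarch64) produces one clear message instead of a late crash.
class JnaProbe
{
    /** Returns null when the class loads, otherwise a human-readable reason. */
    static String probe(String className)
    {
        try
        {
            Class.forName(className);
            return null;
        }
        catch (ClassNotFoundException | LinkageError e)
        {
            // LinkageError covers both NoClassDefFoundError and
            // UnsatisfiedLinkError, the failure modes seen in the trace.
            return "JNA unavailable on " + System.getProperty("os.arch") + ": " + e;
        }
    }
}
```

In Cassandra the probed class would be {{com.sun.jna.Native}}; a non-null result would be logged and startup aborted.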






[jira] [Updated] (CASSANDRA-13072) Cassandra failed to run on Linux-aarch64

2017-07-14 Thread Alex Petrov (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13072?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alex Petrov updated CASSANDRA-13072:

Status: Ready to Commit  (was: Patch Available)

> Cassandra failed to run on Linux-aarch64
> 
>
> Key: CASSANDRA-13072
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13072
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
> Environment: Hardware: ARM aarch64
> OS: Ubuntu 16.04.1 LTS
>Reporter: Jun He
>Assignee: Benjamin Lerer
>  Labels: incompatible
> Fix For: 3.0.14, 3.11.0, 4.0
>
> Attachments: compat_report.html
>
>






[jira] [Updated] (CASSANDRA-13072) Cassandra failed to run on Linux-aarch64

2017-07-14 Thread Alex Petrov (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13072?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alex Petrov updated CASSANDRA-13072:

Reviewer: Alex Petrov  (was: Aleksey Yeschenko)

> Cassandra failed to run on Linux-aarch64
> 
>
> Key: CASSANDRA-13072
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13072
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
> Environment: Hardware: ARM aarch64
> OS: Ubuntu 16.04.1 LTS
>Reporter: Jun He
>Assignee: Benjamin Lerer
>  Labels: incompatible
> Fix For: 3.0.14, 3.11.0, 4.0
>
> Attachments: compat_report.html
>
>






[jira] [Commented] (CASSANDRA-10271) ORDER BY should allow skipping equality-restricted clustering columns

2017-07-14 Thread Benjamin Lerer (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10271?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16087235#comment-16087235
 ] 

Benjamin Lerer commented on CASSANDRA-10271:


Thanks, the patch looks good. +1

> ORDER BY should allow skipping equality-restricted clustering columns
> -
>
> Key: CASSANDRA-10271
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10271
> Project: Cassandra
>  Issue Type: Improvement
>  Components: CQL
>Reporter: Tyler Hobbs
>Assignee: Andrés de la Peña
>Priority: Minor
> Fix For: 3.11.x, 4.x
>
> Attachments: 10271-3.x.txt, cassandra-2.2-10271.txt
>
>






[jira] [Commented] (CASSANDRA-13043) Unable to achieve CL while applying counters from commitlog

2017-07-14 Thread Stefano Ortolani (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13043?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16087188#comment-16087188
 ] 

Stefano Ortolani commented on CASSANDRA-13043:
--

Seems like the more connections, the worse it is (which would make sense). The 
last node restart produced > 30 exceptions, which in turn caused 20 independent 
queries at QUORUM level to fail :/

> Unable to achieve CL while applying counters from commitlog
> ---
>
> Key: CASSANDRA-13043
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13043
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
> Environment: Debian
>Reporter: Catalin Alexandru Zamfir
>
> In version 3.9 of Cassandra, we get the following exceptions on the 
> system.log whenever booting an agent. They seem to grow in number with each 
> reboot. Any idea where they come from or what can we do about them? Note that 
> the cluster is healthy (has sufficient live nodes).
> {noformat}
> 12/14/2016 12:39:47 PMINFO  10:39:47 Updating topology for /10.136.64.120
> 12/14/2016 12:39:47 PMINFO  10:39:47 Updating topology for /10.136.64.120
> 12/14/2016 12:39:47 PMWARN  10:39:47 Uncaught exception on thread 
> Thread[CounterMutationStage-111,5,main]: {}
> 12/14/2016 12:39:47 PMorg.apache.cassandra.exceptions.UnavailableException: 
> Cannot achieve consistency level LOCAL_QUORUM
> 12/14/2016 12:39:47 PMat 
> org.apache.cassandra.db.ConsistencyLevel.assureSufficientLiveNodes(ConsistencyLevel.java:313)
>  ~[apache-cassandra-3.9.jar:3.9]
> 12/14/2016 12:39:47 PMat 
> org.apache.cassandra.service.AbstractWriteResponseHandler.assureSufficientLiveNodes(AbstractWriteResponseHandler.java:146)
>  ~[apache-cassandra-3.9.jar:3.9]
> 12/14/2016 12:39:47 PMat 
> org.apache.cassandra.service.StorageProxy.performWrite(StorageProxy.java:1054)
>  ~[apache-cassandra-3.9.jar:3.9]
> 12/14/2016 12:39:47 PMat 
> org.apache.cassandra.service.StorageProxy.applyCounterMutationOnLeader(StorageProxy.java:1450)
>  ~[apache-cassandra-3.9.jar:3.9]
> 12/14/2016 12:39:47 PMat 
> org.apache.cassandra.db.CounterMutationVerbHandler.doVerb(CounterMutationVerbHandler.java:48)
>  ~[apache-cassandra-3.9.jar:3.9]
> 12/14/2016 12:39:47 PMat 
> org.apache.cassandra.net.MessageDeliveryTask.run(MessageDeliveryTask.java:64) 
> ~[apache-cassandra-3.9.jar:3.9]
> 12/14/2016 12:39:47 PMat 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
> ~[na:1.8.0_111]
> 12/14/2016 12:39:47 PMat 
> org.apache.cassandra.concurrent.AbstractLocalAwareExecutorService$FutureTask.run(AbstractLocalAwareExecutorService.java:164)
>  ~[apache-cassandra-3.9.jar:3.9]
> 12/14/2016 12:39:47 PMat 
> org.apache.cassandra.concurrent.AbstractLocalAwareExecutorService$LocalSessionFutureTask.run(AbstractLocalAwareExecutorService.java:136)
>  [apache-cassandra-3.9.jar:3.9]
> 12/14/2016 12:39:47 PMat 
> org.apache.cassandra.concurrent.SEPWorker.run(SEPWorker.java:109) 
> [apache-cassandra-3.9.jar:3.9]
> 12/14/2016 12:39:47 PMat java.lang.Thread.run(Thread.java:745) 
> [na:1.8.0_111]
> 12/14/2016 12:39:47 PMWARN  10:39:47 Uncaught exception on thread 
> Thread[CounterMutationStage-118,5,main]: {}
> 12/14/2016 12:39:47 PMorg.apache.cassandra.exceptions.UnavailableException: 
> Cannot achieve consistency level LOCAL_QUORUM
> 12/14/2016 12:39:47 PMat 
> org.apache.cassandra.db.ConsistencyLevel.assureSufficientLiveNodes(ConsistencyLevel.java:313)
>  ~[apache-cassandra-3.9.jar:3.9]
> 12/14/2016 12:39:47 PMat 
> org.apache.cassandra.service.AbstractWriteResponseHandler.assureSufficientLiveNodes(AbstractWriteResponseHandler.java:146)
>  ~[apache-cassandra-3.9.jar:3.9]
> 12/14/2016 12:39:47 PMat 
> org.apache.cassandra.service.StorageProxy.performWrite(StorageProxy.java:1054)
>  ~[apache-cassandra-3.9.jar:3.9]
> 12/14/2016 12:39:47 PMat 
> org.apache.cassandra.service.StorageProxy.applyCounterMutationOnLeader(StorageProxy.java:1450)
>  ~[apache-cassandra-3.9.jar:3.9]
> 12/14/2016 12:39:47 PMat 
> org.apache.cassandra.db.CounterMutationVerbHandler.doVerb(CounterMutationVerbHandler.java:48)
>  ~[apache-cassandra-3.9.jar:3.9]
> 12/14/2016 12:39:47 PMat 
> org.apache.cassandra.net.MessageDeliveryTask.run(MessageDeliveryTask.java:64) 
> ~[apache-cassandra-3.9.jar:3.9]
> 12/14/2016 12:39:47 PMat 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
> ~[na:1.8.0_111]
> 12/14/2016 12:39:47 PMat 
> org.apache.cassandra.concurrent.AbstractLocalAwareExecutorService$FutureTask.run(AbstractLocalAwareExecutorService.java:164)
>  ~[apache-cassandra-3.9.jar:3.9]
> 12/14/2016 12:39:47 PMat 
> 

[jira] [Updated] (CASSANDRA-8596) Display datacenter/rack info for offline nodes - PropertyFileSnitch

2017-07-14 Thread Vladimir Yudovin (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8596?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vladimir Yudovin updated CASSANDRA-8596:

Status: Patch Available  (was: Open)

> Display datacenter/rack info for offline nodes - PropertyFileSnitch
> ---
>
> Key: CASSANDRA-8596
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8596
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Vladimir Yudovin
>Priority: Minor
> Attachments: ByteBufferUtils.diff, file_snitch.patch
>
>
> When using GossipPropertyFileSnitch "nodetool status" shows default (from 
> cassandra-topology.properties ) datacenter/rack for offline nodes.
> It happens because offline nodes are not in endpointMap, and thus 
> getRawEndpointInfo returns the default DC/rack (PropertyFileSnitch.java).
> I suggest taking the info for those nodes from the system.peers table - just 
> like SELECT data_center,rack FROM system.peers WHERE peer='10.0.0.1'
> Patch attached.






[jira] [Updated] (CASSANDRA-8596) Display datacenter/rack info for offline nodes - PropertyFileSnitch

2017-07-14 Thread Vladimir Yudovin (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8596?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vladimir Yudovin updated CASSANDRA-8596:

 Flags: Patch
Attachment: file_snitch.patch

-AbstractEndpointSnitch might be a better choice

This class has no getDC/Rack methods. AbstractNetworkTopologySnitch has them, 
but different snitches have their own implementations, like EC2Snitch.

I uploaded a new patch, *file_snitch.patch*, with a simpler implementation, 
similar to EC2Snitch.
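The lookup order the ticket argues for can be sketched like this (class and method names are hypothetical, not PropertyFileSnitch's API): consult gossip state first, then the persisted peer info (the system.peers table) for offline nodes, and fall back to the configured default DC/rack only as a last resort.

```java
import java.util.Map;
import java.util.Optional;
import java.util.function.Function;

// Illustrative sketch of the proposed DC/rack resolution order; the names are
// hypothetical, not Cassandra's snitch API.
class DcRackResolver
{
    static final class DcRack
    {
        final String dc;
        final String rack;
        DcRack(String dc, String rack) { this.dc = dc; this.rack = rack; }
    }

    private final Map<String, DcRack> gossipState;               // live endpoints
    private final Function<String, Optional<DcRack>> peersTable; // e.g. SELECT data_center,rack FROM system.peers
    private final DcRack defaultDcRack;                          // cassandra-topology.properties default

    DcRackResolver(Map<String, DcRack> gossipState,
                   Function<String, Optional<DcRack>> peersTable,
                   DcRack defaultDcRack)
    {
        this.gossipState = gossipState;
        this.peersTable = peersTable;
        this.defaultDcRack = defaultDcRack;
    }

    DcRack resolve(String endpoint)
    {
        DcRack live = gossipState.get(endpoint);
        if (live != null)
            return live;                      // node known via gossip
        return peersTable.apply(endpoint)     // offline: ask system.peers
                         .orElse(defaultDcRack); // last resort: default
    }
}
```

The point of the sketch is only the ordering: the default DC/rack is reached solely when both gossip and system.peers know nothing about the endpoint.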

> Display datacenter/rack info for offline nodes - PropertyFileSnitch
> ---
>
> Key: CASSANDRA-8596
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8596
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Vladimir Yudovin
>Priority: Minor
> Attachments: ByteBufferUtils.diff, file_snitch.patch
>
>






[jira] [Updated] (CASSANDRA-10271) ORDER BY should allow skipping equality-restricted clustering columns

2017-07-14 Thread Benjamin Lerer (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10271?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Benjamin Lerer updated CASSANDRA-10271:
---
Status: Ready to Commit  (was: Patch Available)

> ORDER BY should allow skipping equality-restricted clustering columns
> -
>
> Key: CASSANDRA-10271
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10271
> Project: Cassandra
>  Issue Type: Improvement
>  Components: CQL
>Reporter: Tyler Hobbs
>Assignee: Andrés de la Peña
>Priority: Minor
> Fix For: 3.11.x, 4.x
>
> Attachments: 10271-3.x.txt, cassandra-2.2-10271.txt
>
>






[jira] [Updated] (CASSANDRA-13272) "nodetool bootstrap resume" does not exit

2017-07-14 Thread Benjamin Lerer (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13272?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Benjamin Lerer updated CASSANDRA-13272:
---
   Resolution: Fixed
Fix Version/s: 4.0
   3.11.1
   3.0.15
   2.2.11
   Status: Resolved  (was: Ready to Commit)

Committed into 2.2 at 5b982d790bffbf1beb92fd605f6f213914ba4b63 and merged into 
3.0, 3.11 and trunk.

> "nodetool bootstrap resume" does not exit
> -
>
> Key: CASSANDRA-13272
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13272
> Project: Cassandra
>  Issue Type: Bug
>  Components: Lifecycle, Streaming and Messaging
>Reporter: Tom van der Woerdt
>Assignee: Tim Lamballais
>  Labels: lhf
> Fix For: 2.2.11, 3.0.15, 3.11.1, 4.0
>
>
> I have a script that calls "nodetool bootstrap resume" after a failed join 
> (in my environment some streams sometimes fail due to mis-tuning of stream 
> bandwidth settings). However, if the streams fail again, nodetool won't exit.
> Last lines before it just hangs forever :
> {noformat}
> [2017-02-26 07:02:42,287] received file 
> /var/lib/cassandra/data/keyspace/table-63d5d42009fa11e5879ebd9463bffdac/mc-12670-big-Data.db
>  (progress: 1112%)
> [2017-02-26 07:02:42,287] received file 
> /var/lib/cassandra/data/keyspace/table-63d5d42009fa11e5879ebd9463bffdac/mc-12670-big-Data.db
>  (progress: 1112%)
> [2017-02-26 07:02:59,843] received file 
> /var/lib/cassandra/data/keyspace/table-63d5d42009fa11e5879ebd9463bffdac/mc-12671-big-Data.db
>  (progress: 1112%)
> [2017-02-26 09:25:51,000] session with /10.x.y.z complete (progress: 1112%)
> [2017-02-26 09:33:45,017] session with /10.x.y.z complete (progress: 1112%)
> [2017-02-26 09:39:27,216] session with /10.x.y.z complete (progress: 1112%)
> [2017-02-26 09:53:33,084] session with /10.x.y.z complete (progress: 1112%)
> [2017-02-26 09:55:07,115] session with /10.x.y.z complete (progress: 1112%)
> [2017-02-26 10:06:49,557] session with /10.x.y.z complete (progress: 1112%)
> [2017-02-26 10:40:55,880] session with /10.x.y.z complete (progress: 1112%)
> [2017-02-26 11:09:21,025] session with /10.x.y.z complete (progress: 1112%)
> [2017-02-26 12:44:35,755] session with /10.x.y.z complete (progress: 1112%)
> [2017-02-26 12:49:18,867] session with /10.x.y.z complete (progress: 1112%)
> [2017-02-26 13:23:50,611] session with /10.x.y.z complete (progress: 1112%)
> [2017-02-26 13:23:50,612] Stream failed
> {noformat}
> At that point ("Stream failed") I would expect nodetool to exit with a 
> non-zero exit code. Instead, it just wants me to ^C it.
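The fix direction can be sketched as follows (hypothetical names, not nodetool's actual classes): track whether any streaming event reported failure, treat a failed stream as terminating the command, and turn the recorded state into a non-zero process exit code instead of waiting forever.

```java
// Hypothetical sketch, not nodetool's actual implementation: a watcher for
// "bootstrap resume" progress events that remembers stream failure and maps
// the outcome to a process exit code.
class BootstrapResumeWatcher
{
    private volatile boolean failed;
    private volatile boolean finished;

    void onEvent(String message)
    {
        if (message.contains("Stream failed"))
        {
            failed = true;
            finished = true; // a failed stream still terminates the command
        }
        else if (message.contains("Resume bootstrap complete"))
        {
            finished = true;
        }
    }

    boolean isFinished() { return finished; }

    int exitCode() { return failed ? 1 : 0; }
}
```

With this shape, the "Stream failed" line in the log above would end the command with exit code 1 rather than leaving nodetool hanging.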






[jira] [Updated] (CASSANDRA-13272) "nodetool bootstrap resume" does not exit

2017-07-14 Thread Benjamin Lerer (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13272?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Benjamin Lerer updated CASSANDRA-13272:
---
Status: Ready to Commit  (was: Patch Available)

> "nodetool bootstrap resume" does not exit
> -
>
> Key: CASSANDRA-13272
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13272
> Project: Cassandra
>  Issue Type: Bug
>  Components: Lifecycle, Streaming and Messaging
>Reporter: Tom van der Woerdt
>Assignee: Tim Lamballais
>  Labels: lhf
>



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[09/10] cassandra git commit: Merge branch cassandra-3.0 into cassandra-3.11

2017-07-14 Thread blerer
Merge branch cassandra-3.0 into cassandra-3.11


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/bd89f562
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/bd89f562
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/bd89f562

Branch: refs/heads/trunk
Commit: bd89f56232859d8076c8da147e983881ce09e5b7
Parents: 29db251 7de853b
Author: Benjamin Lerer 
Authored: Fri Jul 14 11:41:51 2017 +0200
Committer: Benjamin Lerer 
Committed: Fri Jul 14 11:41:51 2017 +0200

--
 CHANGES.txt |  1 +
 .../org/apache/cassandra/service/StorageService.java| 12 ++--
 2 files changed, 11 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/bd89f562/CHANGES.txt
--
diff --cc CHANGES.txt
index 30fa350,fffda7f..e7ad6fb
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -6,9 -2,11 +6,10 @@@ Merged from 3.0
   * Make concat work with iterators that have different subsets of columns (CASSANDRA-13482)
   * Set test.runners based on cores and memory size (CASSANDRA-13078)
   * Allow different NUMACTL_ARGS to be passed in (CASSANDRA-13557)
 - * Allow native function calls in CQLSSTableWriter (CASSANDRA-12606)
   * Fix secondary index queries on COMPACT tables (CASSANDRA-13627)
   * Nodetool listsnapshots output is missing a newline, if there are no snapshots (CASSANDRA-13568)
 - Merged from 2.2:
 +Merged from 2.2:
+   * Fix potential NPE when resume bootstrap fails (CASSANDRA-13272)
    * Fix toJSONString for the UDT, tuple and collection types (CASSANDRA-13592)
    * Fix nested Tuples/UDTs validation (CASSANDRA-13646)
  

http://git-wip-us.apache.org/repos/asf/cassandra/blob/bd89f562/src/java/org/apache/cassandra/service/StorageService.java
--





[05/10] cassandra git commit: Merge branch cassandra-2.2 into cassandra-3.0

2017-07-14 Thread blerer
Merge branch cassandra-2.2 into cassandra-3.0


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/7de853bf
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/7de853bf
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/7de853bf

Branch: refs/heads/cassandra-3.11
Commit: 7de853bff7375f18328faa2beeed1c0e35ea5e68
Parents: 7251c95 5b982d7
Author: Benjamin Lerer 
Authored: Fri Jul 14 11:31:49 2017 +0200
Committer: Benjamin Lerer 
Committed: Fri Jul 14 11:31:49 2017 +0200

--
 CHANGES.txt |  1 +
 .../org/apache/cassandra/service/StorageService.java| 12 ++--
 2 files changed, 11 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/7de853bf/CHANGES.txt
--
diff --cc CHANGES.txt
index bf36769,122ba54..fffda7f
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -1,64 -1,17 +1,65 @@@
 -2.2.11
 - * Fix potential NPE when resume bootstrap fails (CASSANDRA-13272)
 - * Fix toJSONString for the UDT, tuple and collection types (CASSANDRA-13592)
 - * Fix nested Tuples/UDTs validation (CASSANDRA-13646)
 - * Remove unused max_value_size_in_mb config setting from yaml (CASSANDRA-13625
 +3.0.15
 + * Make concat work with iterators that have different subsets of columns (CASSANDRA-13482)
 + * Set test.runners based on cores and memory size (CASSANDRA-13078)
 + * Allow different NUMACTL_ARGS to be passed in (CASSANDRA-13557)
 + * Allow native function calls in CQLSSTableWriter (CASSANDRA-12606)
 + * Fix secondary index queries on COMPACT tables (CASSANDRA-13627)
 + * Nodetool listsnapshots output is missing a newline, if there are no snapshots (CASSANDRA-13568)
 + Merged from 2.2:
++  * Fix potential NPE when resume bootstrap fails (CASSANDRA-13272)
 +  * Fix toJSONString for the UDT, tuple and collection types (CASSANDRA-13592)
 +  * Fix nested Tuples/UDTs validation (CASSANDRA-13646)
  
 -
 -2.2.10
 +3.0.14
 + * Ensure int overflow doesn't occur when calculating large partition warning size (CASSANDRA-13172)
 + * Ensure consistent view of partition columns between coordinator and replica in ColumnFilter (CASSANDRA-13004)
 + * Failed unregistering mbean during drop keyspace (CASSANDRA-13346)
 + * nodetool scrub/cleanup/upgradesstables exit code is wrong (CASSANDRA-13542)
 + * Fix the reported number of sstable data files accessed per read (CASSANDRA-13120)
 + * Fix schema digest mismatch during rolling upgrades from versions before 3.0.12 (CASSANDRA-13559)
 + * Upgrade JNA version to 4.4.0 (CASSANDRA-13072)
 + * Interned ColumnIdentifiers should use minimal ByteBuffers (CASSANDRA-13533)
 + * ReverseIndexedReader may drop rows during 2.1 to 3.0 upgrade (CASSANDRA-13525)
 + * Fix repair process violating start/end token limits for small ranges (CASSANDRA-13052)
 + * Add storage port options to sstableloader (CASSANDRA-13518)
 + * Properly handle quoted index names in cqlsh DESCRIBE output (CASSANDRA-12847)
 + * Avoid reading static row twice from old format sstables (CASSANDRA-13236)
 + * Fix NPE in StorageService.excise() (CASSANDRA-13163)
 + * Expire OutboundTcpConnection messages by a single Thread (CASSANDRA-13265)
 + * Fail repair if insufficient responses received (CASSANDRA-13397)
 + * Fix SSTableLoader fail when the loaded table contains dropped columns (CASSANDRA-13276)
 + * Avoid name clashes in CassandraIndexTest (CASSANDRA-13427)
 + * Handling partially written hint files (CASSANDRA-12728)
 + * Interrupt replaying hints on decommission (CASSANDRA-13308)
 + * Fix schema version calculation for rolling upgrades (CASSANDRA-13441)
 +Merged from 2.2:
   * Nodes started with join_ring=False should be able to serve requests when authentication is enabled (CASSANDRA-11381)
   * cqlsh COPY FROM: increment error count only for failures, not for attempts (CASSANDRA-13209)
 - * nodetool upgradesstables should upgrade system tables (CASSANDRA-13119)
 +
 +3.0.13
 + * Make reading of range tombstones more reliable (CASSANDRA-12811)
 + * Fix startup problems due to schema tables not completely flushed (CASSANDRA-12213)
 + * Fix view builder bug that can filter out data on restart (CASSANDRA-13405)
 + * Fix 2i page size calculation when there are no regular columns (CASSANDRA-13400)
 + * Fix the conversion of 2.X expired rows without regular column data (CASSANDRA-13395)
 + * Fix hint delivery when using ext+internal IPs with prefer_local enabled (CASSANDRA-13020)
 + * Fix possible NPE on upgrade to 3.0/3.X in case of IO errors (CASSANDRA-13389)
 + * Legacy deserializer can create empty range tombstones (CASSANDRA-13341)
 + * Use the Kernel32 library to retrieve the PID on Windows and fix startup checks 

[08/10] cassandra git commit: Merge branch cassandra-3.0 into cassandra-3.11

2017-07-14 Thread blerer
Merge branch cassandra-3.0 into cassandra-3.11


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/bd89f562
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/bd89f562
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/bd89f562

Branch: refs/heads/cassandra-3.11
Commit: bd89f56232859d8076c8da147e983881ce09e5b7
Parents: 29db251 7de853b
Author: Benjamin Lerer 
Authored: Fri Jul 14 11:41:51 2017 +0200
Committer: Benjamin Lerer 
Committed: Fri Jul 14 11:41:51 2017 +0200

--
 CHANGES.txt |  1 +
 .../org/apache/cassandra/service/StorageService.java| 12 ++--
 2 files changed, 11 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/bd89f562/CHANGES.txt
--
diff --cc CHANGES.txt
index 30fa350,fffda7f..e7ad6fb
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -6,9 -2,11 +6,10 @@@ Merged from 3.0
   * Make concat work with iterators that have different subsets of columns (CASSANDRA-13482)
   * Set test.runners based on cores and memory size (CASSANDRA-13078)
   * Allow different NUMACTL_ARGS to be passed in (CASSANDRA-13557)
 - * Allow native function calls in CQLSSTableWriter (CASSANDRA-12606)
   * Fix secondary index queries on COMPACT tables (CASSANDRA-13627)
   * Nodetool listsnapshots output is missing a newline, if there are no snapshots (CASSANDRA-13568)
 - Merged from 2.2:
 +Merged from 2.2:
+   * Fix potential NPE when resume bootstrap fails (CASSANDRA-13272)
    * Fix toJSONString for the UDT, tuple and collection types (CASSANDRA-13592)
    * Fix nested Tuples/UDTs validation (CASSANDRA-13646)
  

http://git-wip-us.apache.org/repos/asf/cassandra/blob/bd89f562/src/java/org/apache/cassandra/service/StorageService.java
--





[06/10] cassandra git commit: Merge branch cassandra-2.2 into cassandra-3.0

2017-07-14 Thread blerer
Merge branch cassandra-2.2 into cassandra-3.0


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/7de853bf
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/7de853bf
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/7de853bf

Branch: refs/heads/cassandra-3.0
Commit: 7de853bff7375f18328faa2beeed1c0e35ea5e68
Parents: 7251c95 5b982d7
Author: Benjamin Lerer 
Authored: Fri Jul 14 11:31:49 2017 +0200
Committer: Benjamin Lerer 
Committed: Fri Jul 14 11:31:49 2017 +0200

--
 CHANGES.txt |  1 +
 .../org/apache/cassandra/service/StorageService.java| 12 ++--
 2 files changed, 11 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/7de853bf/CHANGES.txt
--
diff --cc CHANGES.txt
index bf36769,122ba54..fffda7f
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -1,64 -1,17 +1,65 @@@
 -2.2.11
 - * Fix potential NPE when resume bootstrap fails (CASSANDRA-13272)
 - * Fix toJSONString for the UDT, tuple and collection types (CASSANDRA-13592)
 - * Fix nested Tuples/UDTs validation (CASSANDRA-13646)
 - * Remove unused max_value_size_in_mb config setting from yaml (CASSANDRA-13625
 +3.0.15
 + * Make concat work with iterators that have different subsets of columns (CASSANDRA-13482)
 + * Set test.runners based on cores and memory size (CASSANDRA-13078)
 + * Allow different NUMACTL_ARGS to be passed in (CASSANDRA-13557)
 + * Allow native function calls in CQLSSTableWriter (CASSANDRA-12606)
 + * Fix secondary index queries on COMPACT tables (CASSANDRA-13627)
 + * Nodetool listsnapshots output is missing a newline, if there are no snapshots (CASSANDRA-13568)
 + Merged from 2.2:
++  * Fix potential NPE when resume bootstrap fails (CASSANDRA-13272)
 +  * Fix toJSONString for the UDT, tuple and collection types (CASSANDRA-13592)
 +  * Fix nested Tuples/UDTs validation (CASSANDRA-13646)
  
 -
 -2.2.10
 +3.0.14
 + * Ensure int overflow doesn't occur when calculating large partition warning size (CASSANDRA-13172)
 + * Ensure consistent view of partition columns between coordinator and replica in ColumnFilter (CASSANDRA-13004)
 + * Failed unregistering mbean during drop keyspace (CASSANDRA-13346)
 + * nodetool scrub/cleanup/upgradesstables exit code is wrong (CASSANDRA-13542)
 + * Fix the reported number of sstable data files accessed per read (CASSANDRA-13120)
 + * Fix schema digest mismatch during rolling upgrades from versions before 3.0.12 (CASSANDRA-13559)
 + * Upgrade JNA version to 4.4.0 (CASSANDRA-13072)
 + * Interned ColumnIdentifiers should use minimal ByteBuffers (CASSANDRA-13533)
 + * ReverseIndexedReader may drop rows during 2.1 to 3.0 upgrade (CASSANDRA-13525)
 + * Fix repair process violating start/end token limits for small ranges (CASSANDRA-13052)
 + * Add storage port options to sstableloader (CASSANDRA-13518)
 + * Properly handle quoted index names in cqlsh DESCRIBE output (CASSANDRA-12847)
 + * Avoid reading static row twice from old format sstables (CASSANDRA-13236)
 + * Fix NPE in StorageService.excise() (CASSANDRA-13163)
 + * Expire OutboundTcpConnection messages by a single Thread (CASSANDRA-13265)
 + * Fail repair if insufficient responses received (CASSANDRA-13397)
 + * Fix SSTableLoader fail when the loaded table contains dropped columns (CASSANDRA-13276)
 + * Avoid name clashes in CassandraIndexTest (CASSANDRA-13427)
 + * Handling partially written hint files (CASSANDRA-12728)
 + * Interrupt replaying hints on decommission (CASSANDRA-13308)
 + * Fix schema version calculation for rolling upgrades (CASSANDRA-13441)
 +Merged from 2.2:
   * Nodes started with join_ring=False should be able to serve requests when authentication is enabled (CASSANDRA-11381)
   * cqlsh COPY FROM: increment error count only for failures, not for attempts (CASSANDRA-13209)
 - * nodetool upgradesstables should upgrade system tables (CASSANDRA-13119)
 +
 +3.0.13
 + * Make reading of range tombstones more reliable (CASSANDRA-12811)
 + * Fix startup problems due to schema tables not completely flushed (CASSANDRA-12213)
 + * Fix view builder bug that can filter out data on restart (CASSANDRA-13405)
 + * Fix 2i page size calculation when there are no regular columns (CASSANDRA-13400)
 + * Fix the conversion of 2.X expired rows without regular column data (CASSANDRA-13395)
 + * Fix hint delivery when using ext+internal IPs with prefer_local enabled (CASSANDRA-13020)
 + * Fix possible NPE on upgrade to 3.0/3.X in case of IO errors (CASSANDRA-13389)
 + * Legacy deserializer can create empty range tombstones (CASSANDRA-13341)
 + * Use the Kernel32 library to retrieve the PID on Windows and fix startup checks 

[10/10] cassandra git commit: Merge branch cassandra-3.11 into trunk

2017-07-14 Thread blerer
Merge branch cassandra-3.11 into trunk


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/4ef86457
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/4ef86457
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/4ef86457

Branch: refs/heads/trunk
Commit: 4ef86457b5ed3fd82c48011997d6c8b25af6fdc6
Parents: f48a319 bd89f56
Author: Benjamin Lerer 
Authored: Fri Jul 14 11:43:42 2017 +0200
Committer: Benjamin Lerer 
Committed: Fri Jul 14 11:43:53 2017 +0200

--
 CHANGES.txt |  1 +
 .../org/apache/cassandra/service/StorageService.java| 12 ++--
 2 files changed, 11 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/4ef86457/CHANGES.txt
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/4ef86457/src/java/org/apache/cassandra/service/StorageService.java
--





[07/10] cassandra git commit: Merge branch cassandra-2.2 into cassandra-3.0

2017-07-14 Thread blerer
Merge branch cassandra-2.2 into cassandra-3.0


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/7de853bf
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/7de853bf
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/7de853bf

Branch: refs/heads/trunk
Commit: 7de853bff7375f18328faa2beeed1c0e35ea5e68
Parents: 7251c95 5b982d7
Author: Benjamin Lerer 
Authored: Fri Jul 14 11:31:49 2017 +0200
Committer: Benjamin Lerer 
Committed: Fri Jul 14 11:31:49 2017 +0200

--
 CHANGES.txt |  1 +
 .../org/apache/cassandra/service/StorageService.java| 12 ++--
 2 files changed, 11 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/7de853bf/CHANGES.txt
--
diff --cc CHANGES.txt
index bf36769,122ba54..fffda7f
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -1,64 -1,17 +1,65 @@@
 -2.2.11
 - * Fix potential NPE when resume bootstrap fails (CASSANDRA-13272)
 - * Fix toJSONString for the UDT, tuple and collection types (CASSANDRA-13592)
 - * Fix nested Tuples/UDTs validation (CASSANDRA-13646)
 - * Remove unused max_value_size_in_mb config setting from yaml (CASSANDRA-13625
 +3.0.15
 + * Make concat work with iterators that have different subsets of columns (CASSANDRA-13482)
 + * Set test.runners based on cores and memory size (CASSANDRA-13078)
 + * Allow different NUMACTL_ARGS to be passed in (CASSANDRA-13557)
 + * Allow native function calls in CQLSSTableWriter (CASSANDRA-12606)
 + * Fix secondary index queries on COMPACT tables (CASSANDRA-13627)
 + * Nodetool listsnapshots output is missing a newline, if there are no snapshots (CASSANDRA-13568)
 + Merged from 2.2:
++  * Fix potential NPE when resume bootstrap fails (CASSANDRA-13272)
 +  * Fix toJSONString for the UDT, tuple and collection types (CASSANDRA-13592)
 +  * Fix nested Tuples/UDTs validation (CASSANDRA-13646)
  
 -
 -2.2.10
 +3.0.14
 + * Ensure int overflow doesn't occur when calculating large partition warning size (CASSANDRA-13172)
 + * Ensure consistent view of partition columns between coordinator and replica in ColumnFilter (CASSANDRA-13004)
 + * Failed unregistering mbean during drop keyspace (CASSANDRA-13346)
 + * nodetool scrub/cleanup/upgradesstables exit code is wrong (CASSANDRA-13542)
 + * Fix the reported number of sstable data files accessed per read (CASSANDRA-13120)
 + * Fix schema digest mismatch during rolling upgrades from versions before 3.0.12 (CASSANDRA-13559)
 + * Upgrade JNA version to 4.4.0 (CASSANDRA-13072)
 + * Interned ColumnIdentifiers should use minimal ByteBuffers (CASSANDRA-13533)
 + * ReverseIndexedReader may drop rows during 2.1 to 3.0 upgrade (CASSANDRA-13525)
 + * Fix repair process violating start/end token limits for small ranges (CASSANDRA-13052)
 + * Add storage port options to sstableloader (CASSANDRA-13518)
 + * Properly handle quoted index names in cqlsh DESCRIBE output (CASSANDRA-12847)
 + * Avoid reading static row twice from old format sstables (CASSANDRA-13236)
 + * Fix NPE in StorageService.excise() (CASSANDRA-13163)
 + * Expire OutboundTcpConnection messages by a single Thread (CASSANDRA-13265)
 + * Fail repair if insufficient responses received (CASSANDRA-13397)
 + * Fix SSTableLoader fail when the loaded table contains dropped columns (CASSANDRA-13276)
 + * Avoid name clashes in CassandraIndexTest (CASSANDRA-13427)
 + * Handling partially written hint files (CASSANDRA-12728)
 + * Interrupt replaying hints on decommission (CASSANDRA-13308)
 + * Fix schema version calculation for rolling upgrades (CASSANDRA-13441)
 +Merged from 2.2:
   * Nodes started with join_ring=False should be able to serve requests when authentication is enabled (CASSANDRA-11381)
   * cqlsh COPY FROM: increment error count only for failures, not for attempts (CASSANDRA-13209)
 - * nodetool upgradesstables should upgrade system tables (CASSANDRA-13119)
 +
 +3.0.13
 + * Make reading of range tombstones more reliable (CASSANDRA-12811)
 + * Fix startup problems due to schema tables not completely flushed (CASSANDRA-12213)
 + * Fix view builder bug that can filter out data on restart (CASSANDRA-13405)
 + * Fix 2i page size calculation when there are no regular columns (CASSANDRA-13400)
 + * Fix the conversion of 2.X expired rows without regular column data (CASSANDRA-13395)
 + * Fix hint delivery when using ext+internal IPs with prefer_local enabled (CASSANDRA-13020)
 + * Fix possible NPE on upgrade to 3.0/3.X in case of IO errors (CASSANDRA-13389)
 + * Legacy deserializer can create empty range tombstones (CASSANDRA-13341)
 + * Use the Kernel32 library to retrieve the PID on Windows and fix startup checks 

[02/10] cassandra git commit: Fix potential NPE when resume bootstrap fails

2017-07-14 Thread blerer
Fix potential NPE when resume bootstrap fails

patch by Tim Lamballais; reviewed by Benjamin Lerer for CASSANDRA-13272


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/5b982d79
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/5b982d79
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/5b982d79

Branch: refs/heads/cassandra-3.0
Commit: 5b982d790bffbf1beb92fd605f6f213914ba4b63
Parents: cb6fad3
Author: Tim Lamballais 
Authored: Fri Jul 14 11:28:12 2017 +0200
Committer: Benjamin Lerer 
Committed: Fri Jul 14 11:28:12 2017 +0200

--
 CHANGES.txt |  1 +
 .../org/apache/cassandra/service/StorageService.java| 12 ++--
 2 files changed, 11 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/5b982d79/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 6740c9e..122ba54 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 2.2.11
+ * Fix potential NPE when resume bootstrap fails (CASSANDRA-13272)
  * Fix toJSONString for the UDT, tuple and collection types (CASSANDRA-13592)
  * Fix nested Tuples/UDTs validation (CASSANDRA-13646)
  * Remove unused max_value_size_in_mb config setting from yaml (CASSANDRA-13625

http://git-wip-us.apache.org/repos/asf/cassandra/blob/5b982d79/src/java/org/apache/cassandra/service/StorageService.java
--
diff --git a/src/java/org/apache/cassandra/service/StorageService.java b/src/java/org/apache/cassandra/service/StorageService.java
index 9d2d7bb..1ecedac 100644
--- a/src/java/org/apache/cassandra/service/StorageService.java
+++ b/src/java/org/apache/cassandra/service/StorageService.java
@@ -1254,8 +1254,16 @@ public class StorageService extends NotificationBroadcasterSupport implements IE
             @Override
             public void onFailure(Throwable e)
             {
-                String message = "Error during bootstrap: " + e.getCause().getMessage();
-                logger.error(message, e.getCause());
+                String message = "Error during bootstrap: ";
+                if (e instanceof ExecutionException && e.getCause() != null)
+                {
+                    message += e.getCause().getMessage();
+                }
+                else
+                {
+                    message += e.getMessage();
+                }
+                logger.error(message, e);
                 progressSupport.progress("bootstrap", new ProgressEvent(ProgressEventType.ERROR, 1, 1, message));
                 progressSupport.progress("bootstrap", new ProgressEvent(ProgressEventType.COMPLETE, 1, 1, "Resume bootstrap complete"));
             }





[04/10] cassandra git commit: Fix potential NPE when resume bootstrap fails

2017-07-14 Thread blerer
Fix potential NPE when resume bootstrap fails

patch by Tim Lamballais; reviewed by Benjamin Lerer for CASSANDRA-13272


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/5b982d79
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/5b982d79
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/5b982d79

Branch: refs/heads/trunk
Commit: 5b982d790bffbf1beb92fd605f6f213914ba4b63
Parents: cb6fad3
Author: Tim Lamballais 
Authored: Fri Jul 14 11:28:12 2017 +0200
Committer: Benjamin Lerer 
Committed: Fri Jul 14 11:28:12 2017 +0200

--
 CHANGES.txt |  1 +
 .../org/apache/cassandra/service/StorageService.java| 12 ++--
 2 files changed, 11 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/5b982d79/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 6740c9e..122ba54 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 2.2.11
+ * Fix potential NPE when resume bootstrap fails (CASSANDRA-13272)
  * Fix toJSONString for the UDT, tuple and collection types (CASSANDRA-13592)
  * Fix nested Tuples/UDTs validation (CASSANDRA-13646)
  * Remove unused max_value_size_in_mb config setting from yaml (CASSANDRA-13625

http://git-wip-us.apache.org/repos/asf/cassandra/blob/5b982d79/src/java/org/apache/cassandra/service/StorageService.java
--
diff --git a/src/java/org/apache/cassandra/service/StorageService.java b/src/java/org/apache/cassandra/service/StorageService.java
index 9d2d7bb..1ecedac 100644
--- a/src/java/org/apache/cassandra/service/StorageService.java
+++ b/src/java/org/apache/cassandra/service/StorageService.java
@@ -1254,8 +1254,16 @@ public class StorageService extends NotificationBroadcasterSupport implements IE
             @Override
             public void onFailure(Throwable e)
             {
-                String message = "Error during bootstrap: " + e.getCause().getMessage();
-                logger.error(message, e.getCause());
+                String message = "Error during bootstrap: ";
+                if (e instanceof ExecutionException && e.getCause() != null)
+                {
+                    message += e.getCause().getMessage();
+                }
+                else
+                {
+                    message += e.getMessage();
+                }
+                logger.error(message, e);
                 progressSupport.progress("bootstrap", new ProgressEvent(ProgressEventType.ERROR, 1, 1, message));
                 progressSupport.progress("bootstrap", new ProgressEvent(ProgressEventType.COMPLETE, 1, 1, "Resume bootstrap complete"));
             }





[03/10] cassandra git commit: Fix potential NPE when resume bootstrap fails

2017-07-14 Thread blerer
Fix potential NPE when resume bootstrap fails

patch by Tim Lamballais; reviewed by Benjamin Lerer for CASSANDRA-13272


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/5b982d79
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/5b982d79
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/5b982d79

Branch: refs/heads/cassandra-3.11
Commit: 5b982d790bffbf1beb92fd605f6f213914ba4b63
Parents: cb6fad3
Author: Tim Lamballais 
Authored: Fri Jul 14 11:28:12 2017 +0200
Committer: Benjamin Lerer 
Committed: Fri Jul 14 11:28:12 2017 +0200

--
 CHANGES.txt |  1 +
 .../org/apache/cassandra/service/StorageService.java| 12 ++--
 2 files changed, 11 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/5b982d79/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 6740c9e..122ba54 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 2.2.11
+ * Fix potential NPE when resume bootstrap fails (CASSANDRA-13272)
  * Fix toJSONString for the UDT, tuple and collection types (CASSANDRA-13592)
  * Fix nested Tuples/UDTs validation (CASSANDRA-13646)
  * Remove unused max_value_size_in_mb config setting from yaml (CASSANDRA-13625

http://git-wip-us.apache.org/repos/asf/cassandra/blob/5b982d79/src/java/org/apache/cassandra/service/StorageService.java
--
diff --git a/src/java/org/apache/cassandra/service/StorageService.java b/src/java/org/apache/cassandra/service/StorageService.java
index 9d2d7bb..1ecedac 100644
--- a/src/java/org/apache/cassandra/service/StorageService.java
+++ b/src/java/org/apache/cassandra/service/StorageService.java
@@ -1254,8 +1254,16 @@ public class StorageService extends NotificationBroadcasterSupport implements IE
             @Override
             public void onFailure(Throwable e)
             {
-                String message = "Error during bootstrap: " + e.getCause().getMessage();
-                logger.error(message, e.getCause());
+                String message = "Error during bootstrap: ";
+                if (e instanceof ExecutionException && e.getCause() != null)
+                {
+                    message += e.getCause().getMessage();
+                }
+                else
+                {
+                    message += e.getMessage();
+                }
+                logger.error(message, e);
                 progressSupport.progress("bootstrap", new ProgressEvent(ProgressEventType.ERROR, 1, 1, message));
                 progressSupport.progress("bootstrap", new ProgressEvent(ProgressEventType.COMPLETE, 1, 1, "Resume bootstrap complete"));
             }





[01/10] cassandra git commit: Fix potential NPE when resume bootstrap fails

2017-07-14 Thread blerer
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-2.2 cb6fad3ef -> 5b982d790
  refs/heads/cassandra-3.0 7251c9559 -> 7de853bff
  refs/heads/cassandra-3.11 29db25116 -> bd89f5623
  refs/heads/trunk f48a319ac -> 4ef86457b


Fix potential NPE when resume bootstrap fails

patch by Tim Lamballais; reviewed by Benjamin Lerer for CASSANDRA-13272


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/5b982d79
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/5b982d79
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/5b982d79

Branch: refs/heads/cassandra-2.2
Commit: 5b982d790bffbf1beb92fd605f6f213914ba4b63
Parents: cb6fad3
Author: Tim Lamballais 
Authored: Fri Jul 14 11:28:12 2017 +0200
Committer: Benjamin Lerer 
Committed: Fri Jul 14 11:28:12 2017 +0200

--
 CHANGES.txt |  1 +
 .../org/apache/cassandra/service/StorageService.java| 12 ++--
 2 files changed, 11 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/5b982d79/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 6740c9e..122ba54 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 2.2.11
+ * Fix potential NPE when resume bootstrap fails (CASSANDRA-13272)
  * Fix toJSONString for the UDT, tuple and collection types (CASSANDRA-13592)
  * Fix nested Tuples/UDTs validation (CASSANDRA-13646)
  * Remove unused max_value_size_in_mb config setting from yaml (CASSANDRA-13625

http://git-wip-us.apache.org/repos/asf/cassandra/blob/5b982d79/src/java/org/apache/cassandra/service/StorageService.java
--
diff --git a/src/java/org/apache/cassandra/service/StorageService.java b/src/java/org/apache/cassandra/service/StorageService.java
index 9d2d7bb..1ecedac 100644
--- a/src/java/org/apache/cassandra/service/StorageService.java
+++ b/src/java/org/apache/cassandra/service/StorageService.java
@@ -1254,8 +1254,16 @@ public class StorageService extends NotificationBroadcasterSupport implements IE
             @Override
             public void onFailure(Throwable e)
             {
-                String message = "Error during bootstrap: " + e.getCause().getMessage();
-                logger.error(message, e.getCause());
+                String message = "Error during bootstrap: ";
+                if (e instanceof ExecutionException && e.getCause() != null)
+                {
+                    message += e.getCause().getMessage();
+                }
+                else
+                {
+                    message += e.getMessage();
+                }
+                logger.error(message, e);
                 progressSupport.progress("bootstrap", new ProgressEvent(ProgressEventType.ERROR, 1, 1, message));
                 progressSupport.progress("bootstrap", new ProgressEvent(ProgressEventType.COMPLETE, 1, 1, "Resume bootstrap complete"));
             }





[jira] [Commented] (CASSANDRA-13272) "nodetool bootstrap resume" does not exit

2017-07-14 Thread Benjamin Lerer (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13272?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16087112#comment-16087112
 ] 

Benjamin Lerer commented on CASSANDRA-13272:


Thanks for the patch. I ran CI on the 2.2 branch for extra safety and the 
failing tests are unrelated.
 
+1

> "nodetool bootstrap resume" does not exit
> -
>
> Key: CASSANDRA-13272
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13272
> Project: Cassandra
>  Issue Type: Bug
>  Components: Lifecycle, Streaming and Messaging
>Reporter: Tom van der Woerdt
>Assignee: Tim Lamballais
>  Labels: lhf
>
> I have a script that calls "nodetool bootstrap resume" after a failed join 
> (in my environment some streams sometimes fail due to mis-tuning of stream 
> bandwidth settings). However, if the streams fail again, nodetool won't exit.
> Last lines before it just hangs forever:
> {noformat}
> [2017-02-26 07:02:42,287] received file 
> /var/lib/cassandra/data/keyspace/table-63d5d42009fa11e5879ebd9463bffdac/mc-12670-big-Data.db
>  (progress: 1112%)
> [2017-02-26 07:02:42,287] received file 
> /var/lib/cassandra/data/keyspace/table-63d5d42009fa11e5879ebd9463bffdac/mc-12670-big-Data.db
>  (progress: 1112%)
> [2017-02-26 07:02:59,843] received file 
> /var/lib/cassandra/data/keyspace/table-63d5d42009fa11e5879ebd9463bffdac/mc-12671-big-Data.db
>  (progress: 1112%)
> [2017-02-26 09:25:51,000] session with /10.x.y.z complete (progress: 1112%)
> [2017-02-26 09:33:45,017] session with /10.x.y.z complete (progress: 1112%)
> [2017-02-26 09:39:27,216] session with /10.x.y.z complete (progress: 1112%)
> [2017-02-26 09:53:33,084] session with /10.x.y.z complete (progress: 1112%)
> [2017-02-26 09:55:07,115] session with /10.x.y.z complete (progress: 1112%)
> [2017-02-26 10:06:49,557] session with /10.x.y.z complete (progress: 1112%)
> [2017-02-26 10:40:55,880] session with /10.x.y.z complete (progress: 1112%)
> [2017-02-26 11:09:21,025] session with /10.x.y.z complete (progress: 1112%)
> [2017-02-26 12:44:35,755] session with /10.x.y.z complete (progress: 1112%)
> [2017-02-26 12:49:18,867] session with /10.x.y.z complete (progress: 1112%)
> [2017-02-26 13:23:50,611] session with /10.x.y.z complete (progress: 1112%)
> [2017-02-26 13:23:50,612] Stream failed
> {noformat}
> At that point ("Stream failed") I would expect nodetool to exit with a 
> non-zero exit code. Instead, it just wants me to ^C it.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)




[jira] [Assigned] (CASSANDRA-13622) Better config validation/documentation

2017-07-14 Thread ZhaoYang (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13622?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ZhaoYang reassigned CASSANDRA-13622:


Assignee: ZhaoYang

> Better config validation/documentation
> --
>
> Key: CASSANDRA-13622
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13622
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Kurt Greaves
>Assignee: ZhaoYang
>Priority: Minor
>  Labels: lhf
>
> There are a number of properties in the yaml that are "in_mb"; however, they
> resolve to bytes when calculated in {{DatabaseDescriptor.java}} and are
> stored as ints. This means their maximum value is 2047, as anything higher
> overflows the int when converted to bytes.
> Where possible/reasonable we should convert these to longs and store them as
> longs. If there is no reason for the value to ever be >2047 we should at
> least document that as the max value, or better yet make it an error if set
> higher than that. Note that although it's bad practice to increase a lot of
> them to such high values, there may be cases where it is necessary, and in
> that case we should handle it appropriately rather than overflowing and
> surprising the user. That is, causing it to break, but not in the way the user
> expected it to :)
> Following are functions that currently could be at risk of the above:
> {code:java|title=DatabaseDescriptor.java}
> getThriftFramedTransportSize()
> getMaxValueSize()
> getCompactionLargePartitionWarningThreshold()
> getCommitLogSegmentSize()
> getNativeTransportMaxFrameSize()
> # These are in KB so max value of 2096128
> getBatchSizeWarnThreshold()
> getColumnIndexSize()
> getColumnIndexCacheSize()
> getMaxMutationSize()
> {code}
> Note we may not actually need to fix all of these, and there may be more. 
> This was just from a rough scan over the code.
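To make the overflow concrete, here is a minimal sketch (hypothetical helper names, not DatabaseDescriptor's actual code) of an int-based MB-to-bytes conversion wrapping at 2048 MB, and a long-based version that does not:

```java
// Demonstrates the overflow described above: converting an "_in_mb"
// setting to bytes in int arithmetic wraps past 2047 MB.
// Hypothetical helper names, not Cassandra's actual API.
public class MbToBytes {
    static int toBytesInt(int mb) {
        return mb * 1024 * 1024;   // int arithmetic: overflows for mb >= 2048
    }

    static long toBytesLong(int mb) {
        return mb * 1024L * 1024L; // promote to long before multiplying
    }

    public static void main(String[] args) {
        System.out.println(toBytesInt(2047));  // 2146435072 (still fits in an int)
        System.out.println(toBytesInt(2048));  // -2147483648 (wrapped)
        System.out.println(toBytesLong(2048)); // 2147483648
    }
}
```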






[jira] [Updated] (CASSANDRA-13622) Better config validation/documentation

2017-07-14 Thread ZhaoYang (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13622?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ZhaoYang updated CASSANDRA-13622:
-
Status: Patch Available  (was: Open)

> Better config validation/documentation
> --
>
> Key: CASSANDRA-13622
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13622
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Kurt Greaves
>Assignee: ZhaoYang
>Priority: Minor
>  Labels: lhf
>
> There are a number of properties in the yaml that are "in_mb"; however, they
> resolve to bytes when calculated in {{DatabaseDescriptor.java}} and are
> stored as ints. This means their maximum value is 2047, as anything higher
> overflows the int when converted to bytes.
> Where possible/reasonable we should convert these to longs and store them as
> longs. If there is no reason for the value to ever be >2047 we should at
> least document that as the max value, or better yet make it an error if set
> higher than that. Note that although it's bad practice to increase a lot of
> them to such high values, there may be cases where it is necessary, and in
> that case we should handle it appropriately rather than overflowing and
> surprising the user. That is, causing it to break, but not in the way the user
> expected it to :)
> Following are functions that currently could be at risk of the above:
> {code:java|title=DatabaseDescriptor.java}
> getThriftFramedTransportSize()
> getMaxValueSize()
> getCompactionLargePartitionWarningThreshold()
> getCommitLogSegmentSize()
> getNativeTransportMaxFrameSize()
> # These are in KB so max value of 2096128
> getBatchSizeWarnThreshold()
> getColumnIndexSize()
> getColumnIndexCacheSize()
> getMaxMutationSize()
> {code}
> Note we may not actually need to fix all of these, and there may be more. 
> This was just from a rough scan over the code.






[jira] [Commented] (CASSANDRA-13622) Better config validation/documentation

2017-07-14 Thread ZhaoYang (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13622?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16087060#comment-16087060
 ] 

ZhaoYang commented on CASSANDRA-13622:
--

| [trunk| https://github.com/jasonstack/cassandra/commits/CASSANDRA-13622] | 
[unit|https://circleci.com/gh/jasonstack/cassandra/153] | dtest: except for 1 
known error in bootstrap_test |

> Better config validation/documentation
> --
>
> Key: CASSANDRA-13622
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13622
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Kurt Greaves
>Priority: Minor
>  Labels: lhf
>
> There are a number of properties in the yaml that are "in_mb"; however, they
> resolve to bytes when calculated in {{DatabaseDescriptor.java}} and are
> stored as ints. This means their maximum value is 2047, as anything higher
> overflows the int when converted to bytes.
> Where possible/reasonable we should convert these to longs and store them as
> longs. If there is no reason for the value to ever be >2047 we should at
> least document that as the max value, or better yet make it an error if set
> higher than that. Note that although it's bad practice to increase a lot of
> them to such high values, there may be cases where it is necessary, and in
> that case we should handle it appropriately rather than overflowing and
> surprising the user. That is, causing it to break, but not in the way the user
> expected it to :)
> Following are functions that currently could be at risk of the above:
> {code:java|title=DatabaseDescriptor.java}
> getThriftFramedTransportSize()
> getMaxValueSize()
> getCompactionLargePartitionWarningThreshold()
> getCommitLogSegmentSize()
> getNativeTransportMaxFrameSize()
> # These are in KB so max value of 2096128
> getBatchSizeWarnThreshold()
> getColumnIndexSize()
> getColumnIndexCacheSize()
> getMaxMutationSize()
> {code}
> Note we may not actually need to fix all of these, and there may be more. 
> This was just from a rough scan over the code.






[jira] [Commented] (CASSANDRA-13652) Deadlock in AbstractCommitLogSegmentManager

2017-07-14 Thread Fuud (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13652?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16087047#comment-16087047
 ] 

Fuud commented on CASSANDRA-13652:
--

Yes. Seems good.

> Deadlock in AbstractCommitLogSegmentManager
> ---
>
> Key: CASSANDRA-13652
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13652
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
>Reporter: Fuud
>
> AbstractCommitLogManager uses LockSupport.(un)park incorrectly. It invokes
> unpark without checking whether the manager thread was parked at the appropriate place.
> For example, logging frameworks use queues, and queues use ReadWriteLocks
> that use LockSupport. Therefore AbstractCommitLogManager.wakeManager can
> wake the thread while it is inside a Lock, and the manager thread will then sleep
> forever at the park() call (because the unpark permit was already consumed inside the lock).
> For example, stack traces:
> {code}
> "MigrationStage:1" id=412 state=WAITING
> at sun.misc.Unsafe.park(Native Method)
> at java.util.concurrent.locks.LockSupport.park(LockSupport.java:304)
> at 
> org.apache.cassandra.utils.concurrent.WaitQueue$AbstractSignal.awaitUninterruptibly(WaitQueue.java:279)
> at 
> org.apache.cassandra.db.commitlog.AbstractCommitLogSegmentManager.awaitAvailableSegment(AbstractCommitLogSegmentManager.java:263)
> at 
> org.apache.cassandra.db.commitlog.AbstractCommitLogSegmentManager.advanceAllocatingFrom(AbstractCommitLogSegmentManager.java:237)
> at 
> org.apache.cassandra.db.commitlog.AbstractCommitLogSegmentManager.forceRecycleAll(AbstractCommitLogSegmentManager.java:279)
> at 
> org.apache.cassandra.db.commitlog.CommitLog.forceRecycleAllSegments(CommitLog.java:210)
> at org.apache.cassandra.config.Schema.dropView(Schema.java:708)
> at 
> org.apache.cassandra.schema.SchemaKeyspace.lambda$updateKeyspace$23(SchemaKeyspace.java:1361)
> at 
> org.apache.cassandra.schema.SchemaKeyspace$$Lambda$382/1123232162.accept(Unknown
>  Source)
> at java.util.LinkedHashMap$LinkedValues.forEach(LinkedHashMap.java:608)
> at 
> java.util.Collections$UnmodifiableCollection.forEach(Collections.java:1080)
> at 
> org.apache.cassandra.schema.SchemaKeyspace.updateKeyspace(SchemaKeyspace.java:1361)
> at 
> org.apache.cassandra.schema.SchemaKeyspace.mergeSchema(SchemaKeyspace.java:1332)
> at 
> org.apache.cassandra.schema.SchemaKeyspace.mergeSchemaAndAnnounceVersion(SchemaKeyspace.java:1282)
>   - locked java.lang.Class@cc38904
> at 
> org.apache.cassandra.db.DefinitionsUpdateVerbHandler$1.runMayThrow(DefinitionsUpdateVerbHandler.java:51)
> at org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28)
> at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at 
> org.apache.cassandra.concurrent.DebuggableThreadPoolExecutor$LocalSessionWrapper.run(DebuggableThreadPoolExecutor.java:322)
> at 
> com.ringcentral.concurrent.executors.MonitoredRunnable.run(MonitoredRunnable.java:36)
> at MON_R_MigrationStage.run(NamedRunnableFactory.java:67)
> at 
> com.ringcentral.concurrent.executors.MonitoredThreadPoolExecutor$MdcAwareRunnable.run(MonitoredThreadPoolExecutor.java:114)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at 
> org.apache.cassandra.concurrent.NamedThreadFactory.lambda$threadLocalDeallocator$0(NamedThreadFactory.java:79)
> at 
> org.apache.cassandra.concurrent.NamedThreadFactory$$Lambda$61/179045.run(Unknown
>  Source)
> at java.lang.Thread.run(Thread.java:745)
> "COMMIT-LOG-ALLOCATOR:1" id=80 state=WAITING
> at sun.misc.Unsafe.park(Native Method)
> at java.util.concurrent.locks.LockSupport.park(LockSupport.java:304)
> at 
> org.apache.cassandra.db.commitlog.AbstractCommitLogSegmentManager$1.runMayThrow(AbstractCommitLogSegmentManager.java:128)
> at org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28)
> at 
> org.apache.cassandra.concurrent.NamedThreadFactory.lambda$threadLocalDeallocator$0(NamedThreadFactory.java:79)
> at 
> org.apache.cassandra.concurrent.NamedThreadFactory$$Lambda$61/179045.run(Unknown
>  Source)
> at java.lang.Thread.run(Thread.java:745)
> {code}
> Solution is to use Semaphore instead of low-level LockSupport.
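A minimal sketch of the proposed fix (illustrative names, not the actual AbstractCommitLogSegmentManager patch): a Semaphore private to the manager cannot have its permit stolen by unrelated LockSupport users, so a wake signal is never lost.

```java
import java.util.concurrent.Semaphore;

// Sketch of the Semaphore-based wakeup proposed for CASSANDRA-13652.
// Unlike LockSupport.park/unpark, a private Semaphore's permits cannot
// be consumed by unrelated code (e.g. locks inside a logging call),
// so a wakeManager() signal is never lost. Names are illustrative.
public class ManagerWakeup {
    // 0 permits initially; release() from any thread wakes the manager
    private final Semaphore wakeup = new Semaphore(0);

    void awaitWork() {
        wakeup.acquireUninterruptibly(); // blocks until someone signals
        wakeup.drainPermits();           // coalesce extra signals into one wake
    }

    void wakeManager() {
        wakeup.release();                // safe even if the manager is not parked yet
    }

    int pendingSignals() {
        return wakeup.availablePermits();
    }
}
```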






[jira] [Updated] (CASSANDRA-12971) Add CAS option to WRITE test to stress tool

2017-07-14 Thread Vladimir Yudovin (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12971?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vladimir Yudovin updated CASSANDRA-12971:
-
Status: Patch Available  (was: Open)

> Add CAS option to WRITE test to stress tool
> ---
>
> Key: CASSANDRA-12971
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12971
> Project: Cassandra
>  Issue Type: New Feature
>  Components: Stress, Tools
>Reporter: Vladimir Yudovin
>Assignee: Vladimir Yudovin
> Attachments: stress-cass.patch
>
>
> If the -cas option is present, each UPDATE is performed with an always-true IF
> condition, so the data is inserted anyway.
> It's implemented; if it's needed I'll proceed with the patch.






[jira] [Updated] (CASSANDRA-12971) Add CAS option to WRITE test to stress tool

2017-07-14 Thread Vladimir Yudovin (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12971?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vladimir Yudovin updated CASSANDRA-12971:
-
 Flags: Patch
Attachment: stress-cass.patch

Merges cleanly into 3.11 and trunk.

> Add CAS option to WRITE test to stress tool
> ---
>
> Key: CASSANDRA-12971
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12971
> Project: Cassandra
>  Issue Type: New Feature
>  Components: Stress, Tools
>Reporter: Vladimir Yudovin
>Assignee: Vladimir Yudovin
> Attachments: stress-cass.patch
>
>
> If the -cas option is present, each UPDATE is performed with an always-true IF
> condition, so the data is inserted anyway.
> It's implemented; if it's needed I'll proceed with the patch.






[jira] [Commented] (CASSANDRA-11223) Queries with LIMIT filtering on clustering columns can return less rows than expected

2017-07-14 Thread Stefania (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11223?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16086999#comment-16086999
 ] 

Stefania commented on CASSANDRA-11223:
--

Shame on me for only picking up a grammar mistake and not the actual content
yesterday... ¯\_(ツ)_/¯

> Queries with LIMIT filtering on clustering columns can return less rows than 
> expected
> -
>
> Key: CASSANDRA-11223
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11223
> Project: Cassandra
>  Issue Type: Bug
>  Components: Local Write-Read Paths
>Reporter: Benjamin Lerer
>Assignee: Benjamin Lerer
>
> A query like {{SELECT * FROM %s WHERE b = 1 LIMIT 2 ALLOW FILTERING}} can
> return fewer rows than expected if the table has some static columns and some
> of the partitions have no rows matching b = 1.
> The problem can be reproduced with the following unit test:
> {code}
> public void testFilteringOnClusteringColumnsWithLimitAndStaticColumns() throws Throwable
> {
>     createTable("CREATE TABLE %s (a int, b int, s int static, c int, primary key (a, b))");
>     for (int i = 0; i < 3; i++)
>     {
>         execute("INSERT INTO %s (a, s) VALUES (?, ?)", i, i);
>         for (int j = 0; j < 3; j++)
>             if (!(i == 0 && j == 1))
>                 execute("INSERT INTO %s (a, b, c) VALUES (?, ?, ?)", i, j, i + j);
>     }
>     assertRows(execute("SELECT * FROM %s"),
>                row(1, 0, 1, 1),
>                row(1, 1, 1, 2),
>                row(1, 2, 1, 3),
>                row(0, 0, 0, 0),
>                row(0, 2, 0, 2),
>                row(2, 0, 2, 2),
>                row(2, 1, 2, 3),
>                row(2, 2, 2, 4));
>     assertRows(execute("SELECT * FROM %s WHERE b = 1 ALLOW FILTERING"),
>                row(1, 1, 1, 2),
>                row(2, 1, 2, 3));
>     assertRows(execute("SELECT * FROM %s WHERE b = 1 LIMIT 2 ALLOW FILTERING"),
>                row(1, 1, 1, 2),
>                row(2, 1, 2, 3)); // < FAIL It returns only one row because the static row of partition 0 is counted and filtered out in the SELECT statement
> }
> {code}






[jira] [Created] (CASSANDRA-13693) A potential problem in the Ec2MultiRegionSnitch_gossiperStarting method

2017-07-14 Thread Hao Zhong (JIRA)
Hao Zhong created CASSANDRA-13693:
-

 Summary: A potential problem in the 
Ec2MultiRegionSnitch_gossiperStarting method
 Key: CASSANDRA-13693
 URL: https://issues.apache.org/jira/browse/CASSANDRA-13693
 Project: Cassandra
  Issue Type: Bug
Reporter: Hao Zhong


The code of Ec2MultiRegionSnitch_gossiperStarting is as follows:
{code}
public void gossiperStarting()
{
    super.gossiperStarting();
    Gossiper.instance.addLocalApplicationState(ApplicationState.INTERNAL_IP,
        StorageService.instance.valueFactory.internalIP(localPrivateAddress));
    Gossiper.instance.register(new ReconnectableSnitchHelper(this, ec2region, true));
}
{code}
I notice that CASSANDRA-5897 fixed a bug whose buggy code is identical. The
fixed code is:
{code}
public void gossiperStarting()
{
    super.gossiperStarting();
    Gossiper.instance.addLocalApplicationState(ApplicationState.INTERNAL_IP,
        StorageService.instance.valueFactory.internalIP(FBUtilities.getLocalAddress().getHostAddress()));

    reloadGossiperState();

    gossipStarted = true;
}

private void reloadGossiperState()
{
    if (Gossiper.instance != null)
    {
        ReconnectableSnitchHelper pendingHelper = new ReconnectableSnitchHelper(this, myDC, preferLocal);
        Gossiper.instance.register(pendingHelper);

        pendingHelper = snitchHelperReference.getAndSet(pendingHelper);
        if (pendingHelper != null)
            Gossiper.instance.unregister(pendingHelper);
    }
    // else this will eventually rerun at gossiperStarting()
}
{code}

If Ec2MultiRegionSnitch is supposed to auto-reload, the above fix should be
applied to its code.
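The essence of the CASSANDRA-5897 fix is the atomic swap of the registered helper, so that at most one helper is ever registered. A standalone sketch (illustrative names, not the snitch's actual API):

```java
import java.util.concurrent.atomic.AtomicReference;

// Sketch of the register-and-swap pattern used by reloadGossiperState():
// atomically publish a new listener and hand back the one it replaces,
// which the caller then unregisters. Names are illustrative, not
// Cassandra's actual API.
public class HelperSwap {
    interface Helper {}

    private final AtomicReference<Helper> current = new AtomicReference<>();

    /** Returns the helper that was displaced (null on first call). */
    Helper swapIn(Helper fresh) {
        // Gossiper.instance.register(fresh) would happen here in the real snitch
        Helper old = current.getAndSet(fresh);
        // if old != null, the caller unregisters it from the Gossiper
        return old;
    }
}
```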






[jira] [Comment Edited] (CASSANDRA-11223) Queries with LIMIT filtering on clustering columns can return less rows than expected

2017-07-14 Thread Benjamin Lerer (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11223?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16086967#comment-16086967
 ] 

Benjamin Lerer edited comment on CASSANDRA-11223 at 7/14/17 7:29 AM:
-

Shame on me. I need 3 tries out to properly correct some javadoc (not even able 
to copy paste the proper stuff) :-(


was (Author: blerer):
Shame on me. I need 3 try out to properly correct some javadoc (not even able 
to copy paste the proper stuff) :-(

> Queries with LIMIT filtering on clustering columns can return less rows than 
> expected
> -
>
> Key: CASSANDRA-11223
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11223
> Project: Cassandra
>  Issue Type: Bug
>  Components: Local Write-Read Paths
>Reporter: Benjamin Lerer
>Assignee: Benjamin Lerer
>
> A query like {{SELECT * FROM %s WHERE b = 1 LIMIT 2 ALLOW FILTERING}} can
> return fewer rows than expected if the table has some static columns and some
> of the partitions have no rows matching b = 1.
> The problem can be reproduced with the following unit test:
> {code}
> public void testFilteringOnClusteringColumnsWithLimitAndStaticColumns() throws Throwable
> {
>     createTable("CREATE TABLE %s (a int, b int, s int static, c int, primary key (a, b))");
>     for (int i = 0; i < 3; i++)
>     {
>         execute("INSERT INTO %s (a, s) VALUES (?, ?)", i, i);
>         for (int j = 0; j < 3; j++)
>             if (!(i == 0 && j == 1))
>                 execute("INSERT INTO %s (a, b, c) VALUES (?, ?, ?)", i, j, i + j);
>     }
>     assertRows(execute("SELECT * FROM %s"),
>                row(1, 0, 1, 1),
>                row(1, 1, 1, 2),
>                row(1, 2, 1, 3),
>                row(0, 0, 0, 0),
>                row(0, 2, 0, 2),
>                row(2, 0, 2, 2),
>                row(2, 1, 2, 3),
>                row(2, 2, 2, 4));
>     assertRows(execute("SELECT * FROM %s WHERE b = 1 ALLOW FILTERING"),
>                row(1, 1, 1, 2),
>                row(2, 1, 2, 3));
>     assertRows(execute("SELECT * FROM %s WHERE b = 1 LIMIT 2 ALLOW FILTERING"),
>                row(1, 1, 1, 2),
>                row(2, 1, 2, 3)); // < FAIL It returns only one row because the static row of partition 0 is counted and filtered out in the SELECT statement
> }
> {code}






[jira] [Created] (CASSANDRA-13692) CompactionAwareWriter_getWriteDirectory throws incompatible exceptions

2017-07-14 Thread Hao Zhong (JIRA)
Hao Zhong created CASSANDRA-13692:
-

 Summary: CompactionAwareWriter_getWriteDirectory throws 
incompatible exceptions
 Key: CASSANDRA-13692
 URL: https://issues.apache.org/jira/browse/CASSANDRA-13692
 Project: Cassandra
  Issue Type: Bug
  Components: Compaction
Reporter: Hao Zhong


The CompactionAwareWriter_getWriteDirectory method throws a RuntimeException:
{code}
public Directories.DataDirectory getWriteDirectory(Iterable<SSTableReader> sstables, long estimatedWriteSize)
{
    File directory = null;
    for (SSTableReader sstable : sstables)
    {
        if (directory == null)
            directory = sstable.descriptor.directory;
        if (!directory.equals(sstable.descriptor.directory))
        {
            logger.trace("All sstables not from the same disk - putting results in {}", directory);
            break;
        }
    }
    Directories.DataDirectory d = getDirectories().getDataDirectoryForFile(directory);
    if (d != null)
    {
        long availableSpace = d.getAvailableSpace();
        if (availableSpace < estimatedWriteSize)
            throw new RuntimeException(String.format("Not enough space to write %s to %s (%s available)",
                                                     FBUtilities.prettyPrintMemory(estimatedWriteSize),
                                                     d.location,
                                                     FBUtilities.prettyPrintMemory(availableSpace)));
        logger.trace("putting compaction results in {}", directory);
        return d;
    }
    d = getDirectories().getWriteableLocation(estimatedWriteSize);
    if (d == null)
        throw new RuntimeException(String.format("Not enough disk space to store %s",
                                                 FBUtilities.prettyPrintMemory(estimatedWriteSize)));
    return d;
}
{code}

However, the thrown exception does not trigger the failure policy.
CASSANDRA-11448 fixed a similar problem. The buggy code was:
{code}
protected Directories.DataDirectory getWriteDirectory(long writeSize)
{
    Directories.DataDirectory directory = getDirectories().getWriteableLocation(writeSize);
    if (directory == null)
        throw new RuntimeException("Insufficient disk space to write " + writeSize + " bytes");

    return directory;
}
{code}
The fixed code is:
{code}
protected Directories.DataDirectory getWriteDirectory(long writeSize)
{
    Directories.DataDirectory directory = getDirectories().getWriteableLocation(writeSize);
    if (directory == null)
        throw new FSWriteError(new IOException("Insufficient disk space to write " + writeSize + " bytes"), "");

    return directory;
}
{code}
The fixed code throws an FSWriteError and thus triggers the failure policy.
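To illustrate why the exception type matters, here is a minimal sketch (hypothetical names, not Cassandra's actual FSWriteError/failure-policy code): a policy that dispatches on a filesystem-error type ignores a plain RuntimeException carrying the same message.

```java
// Why the exception type matters: a disk-failure policy typically
// dispatches on an FSError-style type, so a plain RuntimeException
// sails past it even with an identical message. Hypothetical names,
// not Cassandra's actual API.
public class FailurePolicyDemo {
    static class FSError extends RuntimeException {
        FSError(String message) { super(message); }
    }

    /** Returns true when the failure policy would engage (e.g. stop the node). */
    static boolean policyHandles(Throwable t) {
        return t instanceof FSError; // a generic RuntimeException is ignored
    }

    public static void main(String[] args) {
        System.out.println(policyHandles(new RuntimeException("Not enough space"))); // false
        System.out.println(policyHandles(new FSError("Not enough space")));          // true
    }
}
```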






[jira] [Commented] (CASSANDRA-11223) Queries with LIMIT filtering on clustering columns can return less rows than expected

2017-07-14 Thread Benjamin Lerer (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11223?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16086967#comment-16086967
 ] 

Benjamin Lerer commented on CASSANDRA-11223:


Shame on me. I need 3 try out to properly correct some javadoc (not even able 
to copy paste the proper stuff) :-(

> Queries with LIMIT filtering on clustering columns can return less rows than 
> expected
> -
>
> Key: CASSANDRA-11223
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11223
> Project: Cassandra
>  Issue Type: Bug
>  Components: Local Write-Read Paths
>Reporter: Benjamin Lerer
>Assignee: Benjamin Lerer
>
> A query like {{SELECT * FROM %s WHERE b = 1 LIMIT 2 ALLOW FILTERING}} can
> return fewer rows than expected if the table has some static columns and some
> of the partitions have no rows matching b = 1.
> The problem can be reproduced with the following unit test:
> {code}
> public void testFilteringOnClusteringColumnsWithLimitAndStaticColumns() throws Throwable
> {
>     createTable("CREATE TABLE %s (a int, b int, s int static, c int, primary key (a, b))");
>     for (int i = 0; i < 3; i++)
>     {
>         execute("INSERT INTO %s (a, s) VALUES (?, ?)", i, i);
>         for (int j = 0; j < 3; j++)
>             if (!(i == 0 && j == 1))
>                 execute("INSERT INTO %s (a, b, c) VALUES (?, ?, ?)", i, j, i + j);
>     }
>     assertRows(execute("SELECT * FROM %s"),
>                row(1, 0, 1, 1),
>                row(1, 1, 1, 2),
>                row(1, 2, 1, 3),
>                row(0, 0, 0, 0),
>                row(0, 2, 0, 2),
>                row(2, 0, 2, 2),
>                row(2, 1, 2, 3),
>                row(2, 2, 2, 4));
>     assertRows(execute("SELECT * FROM %s WHERE b = 1 ALLOW FILTERING"),
>                row(1, 1, 1, 2),
>                row(2, 1, 2, 3));
>     assertRows(execute("SELECT * FROM %s WHERE b = 1 LIMIT 2 ALLOW FILTERING"),
>                row(1, 1, 1, 2),
>                row(2, 1, 2, 3)); // < FAIL It returns only one row because the static row of partition 0 is counted and filtered out in the SELECT statement
> }
> {code}





