[jira] [Commented] (CASSANDRA-13478) SASIndex has a time to live issue in Cassandra

2017-05-11 Thread jack chen (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13478?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16007666#comment-16007666
 ] 

jack chen commented on CASSANDRA-13478:
---

When I insert a large amount of data, I use lastposttime as my query 
condition and get the expected result, but the next day the same query with 
the same condition returns no rows.
I do not know whether the problem is related to TTL, but I can provide the 
host and settings to show that this problem exists.
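
For reference, a minimal repro sketch of the reported scenario using the 
DataStax Java driver; the contact point, keyspace name, and SASI index 
definition below are assumptions (the reporter's actual schema is in the 
attached file):

{code}
import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.Session;

public class SasiTtlRepro
{
    public static void main(String[] args)
    {
        try (Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
             Session session = cluster.connect())
        {
            session.execute("CREATE KEYSPACE IF NOT EXISTS xxx WITH replication = "
                          + "{'class': 'SimpleStrategy', 'replication_factor': 1}");
            session.execute("CREATE TABLE IF NOT EXISTS xxx.userlist "
                          + "(userid text PRIMARY KEY, lastposttime timestamp)");
            // Assumed index definition; the exact options are in the attachment.
            session.execute("CREATE CUSTOM INDEX IF NOT EXISTS userlist_lastposttime_idx "
                          + "ON xxx.userlist (lastposttime) "
                          + "USING 'org.apache.cassandra.index.sasi.SASIIndex'");
            session.execute("INSERT INTO xxx.userlist (userid, lastposttime) "
                          + "VALUES ('u1', '2017-04-02 00:00:00+0000')");
            // Returns the row right after the insert; per the report, the same
            // query returns nothing after a node restart on the next day.
            System.out.println(session.execute(
                "SELECT * FROM xxx.userlist WHERE lastposttime > '2017-04-01 16:00:00+0000'")
                .all());
        }
    }
}
{code}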

> SASIndex has a time to live issue in Cassandra
> --
>
> Key: CASSANDRA-13478
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13478
> Project: Cassandra
>  Issue Type: Bug
>  Components: sasi
> Environment: cqlsh 5.0.1 | Cassandra 3.10 | CQL spec 3.4.4 | Native 
> protocol v4 | ubuntu 14.04
>Reporter: jack chen
>Priority: Minor
> Attachments: schema
>
>
> I have a table; the schema can be seen in the attached file.
> I would like to query the data on the timestamp column using <, >, and = as 
> query conditions,
> Ex:
> {code}
> CREATE TABLE XXX.userlist (
> userid text PRIMARY KEY,
> lastposttime timestamp
> )
> SELECT * FROM userlist WHERE lastposttime > '2017-04-01 16:00:00+0000';
> {code}
> There are two scenarios:
> If I insert the data and then select it, the result is correct.
> But if I insert data, restart Cassandra the next day, and then select the 
> data, no rows are returned.
> The difference is that there is no service restart on the next day in the 
> first scenario. The data are actually still live in Cassandra, but the 
> timestamp can no longer be used as the query condition.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-10130) Node failure during 2i update after streaming can have incomplete 2i when restarted

2017-05-11 Thread Paulo Motta (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10130?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16007507#comment-16007507
 ] 

Paulo Motta commented on CASSANDRA-10130:
-

Good call [~sbtourist]! I thought setting the index as removed would make the 
queries unavailable, so I hadn't considered this route, my bad!

There is still a slight possibility of a race, though: if an index is being 
built by some other thread, we could mark the index as built after the 
streaming operation but before that other thread's build completes.

Perhaps we could have a counter incremented each time an index starts 
rebuilding, and only call {{markIndexBuilt}} if this counter is zero when the 
index finishes rebuilding?

Dtest looks good!
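
A minimal sketch of that counter idea, assuming plain {{java.util.concurrent}} 
primitives (the names here are illustrative, not Cassandra's actual API):

{code}
import java.util.concurrent.atomic.AtomicInteger;

// Illustrative only: count in-flight rebuilds so the index is marked built
// only when no concurrent rebuild is still running.
class IndexBuildTracker
{
    private final AtomicInteger inFlightRebuilds = new AtomicInteger();

    void rebuildStarted()
    {
        inFlightRebuilds.incrementAndGet();
    }

    void rebuildFinished(Runnable markIndexBuilt)
    {
        // Only the last rebuild to finish flips the "built" marker; earlier
        // finishers see a non-zero count and skip it.
        if (inFlightRebuilds.decrementAndGet() == 0)
            markIndexBuilt.run();
    }
}
{code}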

> Node failure during 2i update after streaming can have incomplete 2i when 
> restarted
> ---
>
> Key: CASSANDRA-10130
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10130
> Project: Cassandra
>  Issue Type: Bug
>  Components: Coordination
>Reporter: Yuki Morishita
>Assignee: Andrés de la Peña
>Priority: Minor
>
> Since MV/2i update happens after SSTables are received, node failure during 
> MV/2i update can leave received SSTables live when restarted while MV/2i are 
> partially up to date.
> We can add some kind of tracking mechanism to automatically rebuild at the 
> startup, or at least warn user when the node restarts.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[1/3] cassandra git commit: Remove old driver version from merge of CASSANDRA-12847

2017-05-11 Thread mshuler
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-3.11 c0941cf78 -> 28f8fcb04
  refs/heads/trunk 36b63ef51 -> 66e5cc0d1


Remove old driver version from merge of CASSANDRA-12847


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/28f8fcb0
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/28f8fcb0
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/28f8fcb0

Branch: refs/heads/cassandra-3.11
Commit: 28f8fcb04e2ad0600e3f782a1474602f4f40faa3
Parents: c0941cf
Author: Michael Shuler 
Authored: Thu May 11 20:03:35 2017 -0500
Committer: Michael Shuler 
Committed: Thu May 11 20:03:35 2017 -0500

--
 ...a-driver-internal-only-3.7.0.post0-2481531.zip | Bin 252057 -> 0 bytes
 1 file changed, 0 insertions(+), 0 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/28f8fcb0/lib/cassandra-driver-internal-only-3.7.0.post0-2481531.zip
--
diff --git a/lib/cassandra-driver-internal-only-3.7.0.post0-2481531.zip 
b/lib/cassandra-driver-internal-only-3.7.0.post0-2481531.zip
deleted file mode 100644
index 11d5944..000
Binary files a/lib/cassandra-driver-internal-only-3.7.0.post0-2481531.zip and 
/dev/null differ


-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[3/3] cassandra git commit: Merge branch 'cassandra-3.11' into trunk

2017-05-11 Thread mshuler
Merge branch 'cassandra-3.11' into trunk


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/66e5cc0d
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/66e5cc0d
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/66e5cc0d

Branch: refs/heads/trunk
Commit: 66e5cc0d1d3524dd7da5608000be01669e49902c
Parents: 36b63ef 28f8fcb
Author: Michael Shuler 
Authored: Thu May 11 20:04:21 2017 -0500
Committer: Michael Shuler 
Committed: Thu May 11 20:04:21 2017 -0500

--
 ...a-driver-internal-only-3.7.0.post0-2481531.zip | Bin 252057 -> 0 bytes
 1 file changed, 0 insertions(+), 0 deletions(-)
--



-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[2/3] cassandra git commit: Remove old driver version from merge of CASSANDRA-12847

2017-05-11 Thread mshuler
Remove old driver version from merge of CASSANDRA-12847


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/28f8fcb0
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/28f8fcb0
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/28f8fcb0

Branch: refs/heads/trunk
Commit: 28f8fcb04e2ad0600e3f782a1474602f4f40faa3
Parents: c0941cf
Author: Michael Shuler 
Authored: Thu May 11 20:03:35 2017 -0500
Committer: Michael Shuler 
Committed: Thu May 11 20:03:35 2017 -0500

--
 ...a-driver-internal-only-3.7.0.post0-2481531.zip | Bin 252057 -> 0 bytes
 1 file changed, 0 insertions(+), 0 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/28f8fcb0/lib/cassandra-driver-internal-only-3.7.0.post0-2481531.zip
--
diff --git a/lib/cassandra-driver-internal-only-3.7.0.post0-2481531.zip 
b/lib/cassandra-driver-internal-only-3.7.0.post0-2481531.zip
deleted file mode 100644
index 11d5944..000
Binary files a/lib/cassandra-driver-internal-only-3.7.0.post0-2481531.zip and 
/dev/null differ


-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-13518) sstableloader doesn't support non default storage_port and ssl_storage_port.

2017-05-11 Thread Yuki Morishita (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13518?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16007356#comment-16007356
 ] 

Yuki Morishita commented on CASSANDRA-13518:


Thanks for the patch!
Patch looks good to me. I can merge it to trunk when committing.

> sstableloader doesn't support non default storage_port and ssl_storage_port. 
> -
>
> Key: CASSANDRA-13518
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13518
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Tools
>Reporter: Zhiyan Shao
>Assignee: Zhiyan Shao
>Priority: Minor
> Fix For: 3.0.x
>
> Attachments: 13518-3.0-1.txt, 13518-3.0-2.txt
>
>   Original Estimate: 1h
>  Remaining Estimate: 1h
>
> Currently these two ports use hardcoded defaults: 
> https://github.com/apache/cassandra/blob/8b3a60b9a7dbefeecc06bace617279612ec7092d/src/java/org/apache/cassandra/config/Config.java#L128-L129
> The proposed fix is to add command-line options for these two ports, like 
> what NATIVE_PORT_OPTION currently does.
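
The shape of the proposed change, sketched with Apache Commons CLI (which 
sstableloader's option parsing uses); the option names below are hypothetical, 
and the actual change is in the attached patches:

{code}
import org.apache.commons.cli.CommandLine;
import org.apache.commons.cli.Options;

// Hypothetical sketch: expose the two storage ports as command-line options
// instead of relying on the hardcoded Config defaults (7000 / 7001).
class StoragePortOptions
{
    static final String STORAGE_PORT_OPTION = "storage-port";         // illustrative name
    static final String SSL_STORAGE_PORT_OPTION = "ssl-storage-port"; // illustrative name

    static void addTo(Options options)
    {
        options.addOption(null, STORAGE_PORT_OPTION, true, "port used for internode communication");
        options.addOption(null, SSL_STORAGE_PORT_OPTION, true, "port used for TLS internode communication");
    }

    static int storagePort(CommandLine cmd)
    {
        // Fall back to Cassandra's default when the flag is absent.
        return Integer.parseInt(cmd.getOptionValue(STORAGE_PORT_OPTION, "7000"));
    }
}
{code}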



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-13518) sstableloader doesn't support non default storage_port and ssl_storage_port.

2017-05-11 Thread Yuki Morishita (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13518?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yuki Morishita updated CASSANDRA-13518:
---
Status: Ready to Commit  (was: Patch Available)

> sstableloader doesn't support non default storage_port and ssl_storage_port. 
> -
>
> Key: CASSANDRA-13518
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13518
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Tools
>Reporter: Zhiyan Shao
>Assignee: Zhiyan Shao
>Priority: Minor
> Fix For: 3.0.x
>
> Attachments: 13518-3.0-1.txt, 13518-3.0-2.txt
>
>   Original Estimate: 1h
>  Remaining Estimate: 1h
>
> Currently these 2 ports are using hardcoded default ports: 
> https://github.com/apache/cassandra/blob/8b3a60b9a7dbefeecc06bace617279612ec7092d/src/java/org/apache/cassandra/config/Config.java#L128-L129
> The proposed fix is to add command line option for these two ports like what 
> NATIVE_PORT_OPTION currently does



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-13526) nodetool cleanup on KS with no replicas should remove old data, not silently complete

2017-05-11 Thread Jai Bheemsen Rao Dhanwada (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13526?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16007319#comment-16007319
 ] 

Jai Bheemsen Rao Dhanwada commented on CASSANDRA-13526:
---

I am seeing the issue on a C* cluster with the below setup:

Cassandra version: 2.1.16
Datacenters: 4 DCs
RF: NetworkTopologyStrategy with RF 3 in each DC
Keyspaces: 50 keyspaces, a few replicating to one DC and a few replicating to 
multiple DCs



> nodetool cleanup on KS with no replicas should remove old data, not silently 
> complete
> -
>
> Key: CASSANDRA-13526
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13526
> Project: Cassandra
>  Issue Type: Bug
>  Components: Compaction
>Reporter: Jeff Jirsa
>
> From the user list:
> https://lists.apache.org/thread.html/5d49cc6bbc6fd2e5f8b12f2308a3e24212a55afbb441af5cb8cd4167@%3Cuser.cassandra.apache.org%3E
> If you have a multi-dc cluster, but some keyspaces not replicated to a given 
> DC, you'll be unable to run cleanup on those keyspaces in that DC, because 
> [the cleanup code will see no ranges and exit 
> early|https://github.com/apache/cassandra/blob/4cfaf85/src/java/org/apache/cassandra/db/compaction/CompactionManager.java#L427-L441]
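
A simplified sketch of that early exit, with generic Java standing in for the 
linked {{CompactionManager}} code (names and types below are illustrative):

{code}
import java.util.Collection;

// Illustrative only: when the node owns no ranges for the keyspace, cleanup
// returns successfully without touching any sstables, so stale data in a DC
// the keyspace no longer replicates to survives.
class CleanupSketch
{
    static boolean cleanup(Collection<?> ownedRanges)
    {
        if (ownedRanges.isEmpty())
            return true; // silently "successful": nothing is removed

        // ... otherwise each sstable is rewritten, dropping non-owned keys ...
        return true;
    }
}
{code}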



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-13521) Add configurable upper bound for validation executor threads

2017-05-11 Thread Jeff Jirsa (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13521?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16007173#comment-16007173
 ] 

Jeff Jirsa commented on CASSANDRA-13521:


+1. Looks good to me.


> Add configurable upper bound for validation executor threads
> 
>
> Key: CASSANDRA-13521
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13521
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Blake Eggleston
>Assignee: Blake Eggleston
> Fix For: 4.0
>
>
> CompactionManager.validationExecutor has no upper limit on the maximum number 
> of threads it can use. This could cause a node to become overwhelmed with 
> simultaneous validation tasks in some cases.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-13521) Add configurable upper bound for validation executor threads

2017-05-11 Thread Jeff Jirsa (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13521?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeff Jirsa updated CASSANDRA-13521:
---
Status: Ready to Commit  (was: Patch Available)

> Add configurable upper bound for validation executor threads
> 
>
> Key: CASSANDRA-13521
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13521
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Blake Eggleston
>Assignee: Blake Eggleston
> Fix For: 4.0
>
>
> CompactionManager.validationExecutor has no upper limit on the maximum number 
> of threads it can use. This could cause a node to become overwhelmed with 
> simultaneous validation tasks in some cases.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-13521) Add configurable upper bound for validation executor threads

2017-05-11 Thread Blake Eggleston (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13521?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16007158#comment-16007158
 ] 

Blake Eggleston commented on CASSANDRA-13521:
-

[~jjirsa] fixed

> Add configurable upper bound for validation executor threads
> 
>
> Key: CASSANDRA-13521
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13521
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Blake Eggleston
>Assignee: Blake Eggleston
> Fix For: 4.0
>
>
> CompactionManager.validationExecutor has no upper limit on the maximum number 
> of threads it can use. This could cause a node to become overwhelmed with 
> simultaneous validation tasks in some cases.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-13526) nodetool cleanup on KS with no replicas should remove old data, not silently complete

2017-05-11 Thread Jeff Jirsa (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13526?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeff Jirsa updated CASSANDRA-13526:
---
Description: 
>From the user list:

https://lists.apache.org/thread.html/5d49cc6bbc6fd2e5f8b12f2308a3e24212a55afbb441af5cb8cd4167@%3Cuser.cassandra.apache.org%3E

If you have a multi-dc cluster, but some keyspaces not replicated to a given 
DC, you'll be unable to run cleanup on those keyspaces in that DC, because [the 
cleanup code will see no ranges and exit 
early|https://github.com/apache/cassandra/blob/4cfaf85/src/java/org/apache/cassandra/db/compaction/CompactionManager.java#L427-L441]

  was:
>From the user list:

https://lists.apache.org/thread.html/5d49cc6bbc6fd2e5f8b12f2308a3e24212a55afbb441af5cb8cd4167@%3Cuser.cassandra.apache.org%3E

If you have a multi-dc cluster, but some keyspaces not replicated to a given 
DC, you'll be unable to run cleanup on that DC, because [the cleanup code will 
see no ranges and exit 
early|https://github.com/apache/cassandra/blob/4cfaf85/src/java/org/apache/cassandra/db/compaction/CompactionManager.java#L427-L441]


> nodetool cleanup on KS with no replicas should remove old data, not silently 
> complete
> -
>
> Key: CASSANDRA-13526
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13526
> Project: Cassandra
>  Issue Type: Bug
>  Components: Compaction
>Reporter: Jeff Jirsa
>
> From the user list:
> https://lists.apache.org/thread.html/5d49cc6bbc6fd2e5f8b12f2308a3e24212a55afbb441af5cb8cd4167@%3Cuser.cassandra.apache.org%3E
> If you have a multi-dc cluster, but some keyspaces not replicated to a given 
> DC, you'll be unable to run cleanup on those keyspaces in that DC, because 
> [the cleanup code will see no ranges and exit 
> early|https://github.com/apache/cassandra/blob/4cfaf85/src/java/org/apache/cassandra/db/compaction/CompactionManager.java#L427-L441]



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Comment Edited] (CASSANDRA-13521) Add configurable upper bound for validation executor threads

2017-05-11 Thread Jeff Jirsa (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13521?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16007139#comment-16007139
 ] 

Jeff Jirsa edited comment on CASSANDRA-13521 at 5/11/17 8:38 PM:
-

{code}
# Number of simultaneous repair validations to allow. Default is unbounded
# Values less than one are interpreted as unbounded (the default)
# concurrent_validations: -1
{code}

For consistency with other config values, can we use {{0}} for 
disabled/default? Your code handles {{0}} the same as {{-1}}, but using {{0}} 
in the yaml would be nice.

Please use primitive 
[int|https://github.com/bdeggleston/cassandra/commit/8630c9de6ec572ba4bf8f91b25869d0a5ec3a898#diff-b66584c9ce7b64019b5db5a531deeda1R174]
 in Config 

Trivial spacing issue 
[here|https://github.com/bdeggleston/cassandra/commit/8630c9de6ec572ba4bf8f91b25869d0a5ec3a898#diff-b76a607445d53f18a98c9df14323c7ddR1370]
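
A sketch of the suggested semantics (a primitive {{int}} in Config, with 
{{0}} meaning unbounded); the names below are illustrative, not the patch:

{code}
// Illustrative only: mirror the suggested Config handling, where any value
// below 1 (including the 0 default) means "no upper bound".
class ValidationExecutorConfig
{
    int concurrentValidations = 0; // 0 = unbounded, consistent with other config values

    int maxValidationThreads()
    {
        return concurrentValidations < 1 ? Integer.MAX_VALUE : concurrentValidations;
    }
}
{code}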






was (Author: jjirsa):
{code}
# Number of simultaneous repair validations to allow. Default is unbounded
# Values less than one are interpreted as unbounded (the default)
# concurrent_validations: -1
{code}

For consistency with other config values, can we use {{0}} for 
disabled/default? Your code handles {{0}} the same as {{-1}}, but using {{0}} 
in the yaml would be nice.

Trivial spacing issue 
[here|https://github.com/bdeggleston/cassandra/commit/8630c9de6ec572ba4bf8f91b25869d0a5ec3a898#diff-b76a607445d53f18a98c9df14323c7ddR1370]





> Add configurable upper bound for validation executor threads
> 
>
> Key: CASSANDRA-13521
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13521
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Blake Eggleston
>Assignee: Blake Eggleston
> Fix For: 4.0
>
>
> CompactionManager.validationExecutor has no upper limit on the maximum number 
> of threads it can use. This could cause a node to become overwhelmed with 
> simultaneous validation tasks in some cases.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Comment Edited] (CASSANDRA-13521) Add configurable upper bound for validation executor threads

2017-05-11 Thread Jeff Jirsa (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13521?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16007139#comment-16007139
 ] 

Jeff Jirsa edited comment on CASSANDRA-13521 at 5/11/17 8:34 PM:
-

{code}
# Number of simultaneous repair validations to allow. Default is unbounded
# Values less than one are interpreted as unbounded (the default)
# concurrent_validations: -1
{code}

For consistency with other config values, can we use {{0}} for 
disabled/default? Your code handles {{0}} the same as {{-1}}, but using {{0}} 
in the yaml would be nice.

Trivial spacing issue 
[here|https://github.com/bdeggleston/cassandra/commit/8630c9de6ec572ba4bf8f91b25869d0a5ec3a898#diff-b76a607445d53f18a98c9df14323c7ddR1370]






was (Author: jjirsa):
{code}
+# Number of simultaneous repair validations to allow. Default is unbounded
 +# Values less than one are interpreted as unbounded (the default)
 +# concurrent_validations: -1
 {code}

For consistency with other config values, can we use {{0}} for 
disabled/default? Your code handles {{0}} the same as {{-1}}, but using {{0}} 
in the yaml would be nice.

Trivial spacing issue 
[here|https://github.com/bdeggleston/cassandra/commit/8630c9de6ec572ba4bf8f91b25869d0a5ec3a898#diff-b76a607445d53f18a98c9df14323c7ddR1370]





> Add configurable upper bound for validation executor threads
> 
>
> Key: CASSANDRA-13521
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13521
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Blake Eggleston
>Assignee: Blake Eggleston
> Fix For: 4.0
>
>
> CompactionManager.validationExecutor has no upper limit on the maximum number 
> of threads it can use. This could cause a node to become overwhelmed with 
> simultaneous validation tasks in some cases.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-13521) Add configurable upper bound for validation executor threads

2017-05-11 Thread Jeff Jirsa (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13521?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeff Jirsa updated CASSANDRA-13521:
---
Reviewer: Jeff Jirsa

> Add configurable upper bound for validation executor threads
> 
>
> Key: CASSANDRA-13521
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13521
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Blake Eggleston
>Assignee: Blake Eggleston
> Fix For: 4.0
>
>
> CompactionManager.validationExecutor has no upper limit on the maximum number 
> of threads it can use. This could cause a node to become overwhelmed with 
> simultaneous validation tasks in some cases.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-13521) Add configurable upper bound for validation executor threads

2017-05-11 Thread Jeff Jirsa (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13521?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16007139#comment-16007139
 ] 

Jeff Jirsa commented on CASSANDRA-13521:


{code}
+# Number of simultaneous repair validations to allow. Default is unbounded
 +# Values less than one are interpreted as unbounded (the default)
 +# concurrent_validations: -1
 {code}

For consistency with other config values, can we use {{0}} for 
disabled/default? Your code handles {{0}} the same as {{-1}}, but using {{0}} 
in the yaml would be nice.

Trivial spacing issue 
[here|https://github.com/bdeggleston/cassandra/commit/8630c9de6ec572ba4bf8f91b25869d0a5ec3a898#diff-b76a607445d53f18a98c9df14323c7ddR1370]





> Add configurable upper bound for validation executor threads
> 
>
> Key: CASSANDRA-13521
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13521
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Blake Eggleston
>Assignee: Blake Eggleston
> Fix For: 4.0
>
>
> CompactionManager.validationExecutor has no upper limit on the maximum number 
> of threads it can use. This could cause a node to become overwhelmed with 
> simultaneous validation tasks in some cases.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Resolved] (CASSANDRA-13372) dtest failure in repair_tests.incremental_repair_test.TestIncRepair.sstable_marking_test

2017-05-11 Thread Blake Eggleston (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13372?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Blake Eggleston resolved CASSANDRA-13372.
-
Resolution: Fixed

This hasn't failed since March, and CASSANDRA-13454 would have fixed any 
problems with sstables not being promoted to repaired after a repair

> dtest failure in 
> repair_tests.incremental_repair_test.TestIncRepair.sstable_marking_test
> 
>
> Key: CASSANDRA-13372
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13372
> Project: Cassandra
>  Issue Type: Bug
>  Components: Testing
>Reporter: Sean McCarthy
>Assignee: Blake Eggleston
>  Labels: dtest, test-failure
> Attachments: node1_debug.log, node1_gc.log, node1.log, 
> node2_debug.log, node2_gc.log, node2.log, node3_debug.log, node3_gc.log, 
> node3.log
>
>
> example failure:
> http://cassci.datastax.com/job/trunk_dtest/1525/testReport/repair_tests.incremental_repair_test/TestIncRepair/sstable_marking_test
> {code}
> Error Message
> 'Repaired at: 0' unexpectedly found in 'SSTable: 
> /tmp/dtest-qoNeEc/test/node1/data0/keyspace1/standard1-3674b7a00e7911e78a4625bec3430063/na-4-big
> Partitioner: org.apache.cassandra.dht.Murmur3Partitioner
> Bloom Filter FP chance: 0.01
> Minimum timestamp: 1490129948985001
> Maximum timestamp: 1490129952789002
> SSTable min local deletion time: 2147483647
> SSTable max local deletion time: 2147483647
> Compressor: -
> TTL min: 0
> TTL max: 0
> First token: -9222701292667950301 (key=5032394c323239385030)
> Last token: -3062233317334255711 (key=3032503434364f4e4f30)
> Estimated droppable tombstones: 0.0
> SSTable Level: 0
> Repaired at: 0
> Pending repair: 45a396b0-0e79-11e7-841e-2d88b3d470cf
> Replay positions covered: {CommitLogPosition(segmentId=1490129923946, 
> position=42824)=CommitLogPosition(segmentId=1490129923946, position=2605214)}
> totalColumnsSet: 16550
> totalRows: 3310
> Estimated tombstone drop times:
> Count   Row Size   Cell Count
> 5       0          3310
> 215     1          0
> 258     3309       0
> (all other histogram buckets from 1 through 51012 are 0)'
> {code}

[jira] [Updated] (CASSANDRA-13526) nodetool cleanup on KS with no replicas should remove old data, not silently complete

2017-05-11 Thread Jeff Jirsa (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13526?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeff Jirsa updated CASSANDRA-13526:
---
Summary: nodetool cleanup on KS with no replicas should remove old data, 
not silently complete  (was: nodetool cleanup on KS with no replicas should 
remove old data, not fail)

> nodetool cleanup on KS with no replicas should remove old data, not silently 
> complete
> -
>
> Key: CASSANDRA-13526
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13526
> Project: Cassandra
>  Issue Type: Bug
>  Components: Compaction
>Reporter: Jeff Jirsa
>
> From the user list:
> https://lists.apache.org/thread.html/5d49cc6bbc6fd2e5f8b12f2308a3e24212a55afbb441af5cb8cd4167@%3Cuser.cassandra.apache.org%3E
> If you have a multi-dc cluster, but some keyspaces not replicated to a given 
> DC, you'll be unable to run cleanup on that DC, because [the cleanup code 
> will see no ranges and exit 
> early|https://github.com/apache/cassandra/blob/4cfaf85/src/java/org/apache/cassandra/db/compaction/CompactionManager.java#L427-L441]



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Created] (CASSANDRA-13526) nodetool cleanup on KS with no replicas should remove old data, not fail

2017-05-11 Thread Jeff Jirsa (JIRA)
Jeff Jirsa created CASSANDRA-13526:
--

 Summary: nodetool cleanup on KS with no replicas should remove old 
data, not fail
 Key: CASSANDRA-13526
 URL: https://issues.apache.org/jira/browse/CASSANDRA-13526
 Project: Cassandra
  Issue Type: Bug
  Components: Compaction
Reporter: Jeff Jirsa


>From the user list:

https://lists.apache.org/thread.html/5d49cc6bbc6fd2e5f8b12f2308a3e24212a55afbb441af5cb8cd4167@%3Cuser.cassandra.apache.org%3E

If you have a multi-dc cluster, but some keyspaces not replicated to a given 
DC, you'll be unable to run cleanup on that DC, because [the cleanup code will 
see no ranges and exit 
early|https://github.com/apache/cassandra/blob/4cfaf85/src/java/org/apache/cassandra/db/compaction/CompactionManager.java#L427-L441]



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-13521) Add configurable upper bound for validation executor threads

2017-05-11 Thread Blake Eggleston (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13521?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Blake Eggleston updated CASSANDRA-13521:

Status: Patch Available  (was: Open)

|[trunk|https://github.com/bdeggleston/cassandra/tree/13521]|[utest|https://circleci.com/gh/bdeggleston/cassandra/42#tests/containers/2]|

> Add configurable upper bound for validation executor threads
> 
>
> Key: CASSANDRA-13521
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13521
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Blake Eggleston
>Assignee: Blake Eggleston
> Fix For: 4.0
>
>
> CompactionManager.validationExecutor has no upper limit on the maximum number 
> of threads it can use. This could cause a node to become overwhelmed with 
> simultaneous validation tasks in some cases.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Comment Edited] (CASSANDRA-10130) Node failure during 2i update after streaming can have incomplete 2i when restarted

2017-05-11 Thread Andrés de la Peña (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10130?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16007042#comment-16007042
 ] 

Andrés de la Peña edited comment on CASSANDRA-10130 at 5/11/17 7:40 PM:


[~sbtourist], I think you are right, this could make things much easier :)

Here is the updated patch:

||[trunk|https://github.com/apache/cassandra/compare/trunk...adelapena:10130-trunk]|[utests|http://cassci.datastax.com/view/Dev/view/adelapena/job/adelapena-10130-trunk-testall/]|[dtests|http://cassci.datastax.com/view/Dev/view/adelapena/job/adelapena-10130-trunk-dtest/]|

I have also updated [the 
dtest|https://github.com/riptano/cassandra-dtest/compare/master...adelapena:CASSANDRA-10130]
 to be sure that it is still possible to query the index when it is not marked 
as built.

[~pauloricardomg], what do you think? Are we missing something? The idea of 
adding a status to the index info table looked nice for future changes. At 
least we could have renamed the column containing the keyspace name to 
{{keyspace_name}} instead of {{table_name}}.


was (Author: adelapena):
[~sbtourist], I think you are right, this could make things much easier :)

Here is the updated patch:

||[trunk|https://github.com/apache/cassandra/compare/trunk...adelapena:10130-trunk]|[utests|http://cassci.datastax.com/view/Dev/view/adelapena/job/adelapena-10130-trunk-testall/]|[dtests|http://cassci.datastax.com/view/Dev/view/adelapena/job/adelapena-10130-trunk-dtest/]|

I have also updated [the 
dtest|https://github.com/riptano/cassandra-dtest/compare/master...adelapena:CASSANDRA-10130]
 to be sure that it is still possible to query the index when it is not marked 
as built.

> Node failure during 2i update after streaming can have incomplete 2i when 
> restarted
> ---
>
> Key: CASSANDRA-10130
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10130
> Project: Cassandra
>  Issue Type: Bug
>  Components: Coordination
>Reporter: Yuki Morishita
>Assignee: Andrés de la Peña
>Priority: Minor
>
> Since MV/2i update happens after SSTables are received, node failure during 
> MV/2i update can leave received SSTables live when restarted while MV/2i are 
> partially up to date.
> We can add some kind of tracking mechanism to automatically rebuild at the 
> startup, or at least warn user when the node restarts.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Comment Edited] (CASSANDRA-10130) Node failure during 2i update after streaming can have incomplete 2i when restarted

2017-05-11 Thread Andrés de la Peña (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10130?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16007042#comment-16007042
 ] 

Andrés de la Peña edited comment on CASSANDRA-10130 at 5/11/17 7:27 PM:


[~sbtourist], I think you are right, this could make things much easier :)

Here is the updated patch:

||[trunk|https://github.com/apache/cassandra/compare/trunk...adelapena:10130-trunk]|[utests|http://cassci.datastax.com/view/Dev/view/adelapena/job/adelapena-10130-trunk-testall/]|[dtests|http://cassci.datastax.com/view/Dev/view/adelapena/job/adelapena-10130-trunk-dtest/]|

I have also updated [the 
dtest|https://github.com/riptano/cassandra-dtest/compare/master...adelapena:CASSANDRA-10130]
 to be sure that it is still possible to query the index when it is not marked 
as built.


was (Author: adelapena):
[~sbtourist], I think you are right, this could make things much easier :)

There is the updated patch:

||[trunk|https://github.com/apache/cassandra/compare/trunk...adelapena:10130-trunk]|[utests|http://cassci.datastax.com/view/Dev/view/adelapena/job/adelapena-10130-trunk-testall/]|[dtests|http://cassci.datastax.com/view/Dev/view/adelapena/job/adelapena-10130-trunk-dtest/]|

I have also updated [the 
dtest|https://github.com/riptano/cassandra-dtest/compare/master...adelapena:CASSANDRA-10130]
 to be sure that it is still possible to query the index when it is not marked 
as built.

> Node failure during 2i update after streaming can have incomplete 2i when 
> restarted
> ---
>
> Key: CASSANDRA-10130
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10130
> Project: Cassandra
>  Issue Type: Bug
>  Components: Coordination
>Reporter: Yuki Morishita
>Assignee: Andrés de la Peña
>Priority: Minor
>
> Since MV/2i update happens after SSTables are received, node failure during 
> MV/2i update can leave received SSTables live when restarted while MV/2i are 
> partially up to date.
> We can add some kind of tracking mechanism to automatically rebuild at the 
> startup, or at least warn user when the node restarts.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-10130) Node failure during 2i update after streaming can have incomplete 2i when restarted

2017-05-11 Thread Andrés de la Peña (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10130?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16007042#comment-16007042
 ] 

Andrés de la Peña commented on CASSANDRA-10130:
---

[~sbtourist], I think you are right, this could make things much easier :)

Here is the updated patch:

||[trunk|https://github.com/apache/cassandra/compare/trunk...adelapena:10130-trunk]|[utests|http://cassci.datastax.com/view/Dev/view/adelapena/job/adelapena-10130-trunk-testall/]|[dtests|http://cassci.datastax.com/view/Dev/view/adelapena/job/adelapena-10130-trunk-dtest/]|

I have also updated [the 
dtest|https://github.com/riptano/cassandra-dtest/compare/master...adelapena:CASSANDRA-10130]
 to be sure that it is still possible to query the index when it is not marked 
as built.

> Node failure during 2i update after streaming can have incomplete 2i when 
> restarted
> ---
>
> Key: CASSANDRA-10130
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10130
> Project: Cassandra
>  Issue Type: Bug
>  Components: Coordination
>Reporter: Yuki Morishita
>Assignee: Andrés de la Peña
>Priority: Minor
>
> Since MV/2i update happens after SSTables are received, node failure during 
> MV/2i update can leave received SSTables live when restarted while MV/2i are 
> partially up to date.
> We can add some kind of tracking mechanism to automatically rebuild at the 
> startup, or at least warn user when the node restarts.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-8272) 2ndary indexes can return stale data

2017-05-11 Thread Andrés de la Peña (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8272?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16006896#comment-16006896
 ] 

Andrés de la Peña commented on CASSANDRA-8272:
--

Indeed, both returning entries that were recently valid but aren't anymore, 
and checking whether the custom expressions are still valid after 
reconciliation, are things that should be done by index implementations. 
Also, some implementations might not be able to do so, or even interested in 
doing so.

Regarding the {{LIMIT}} problem, the new {{DataLimits}} suggested by 
[~slebresne] could be provided by a new method 
{{Index#getPostIndexQueryLimits}}, similar to the existing 
[{{Index#getPostIndexQueryFilter}}|https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/index/Index.java#L341].
 Alternatively, we might just disable the limits at 
[{{ReadCommand#executeLocally}}|https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/db/ReadCommand.java#L371]
 and 
[{{DataResolver#resolve}}|https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/service/DataResolver.java#L79]
 for index queries, and let the indexes take care of restricting the limits at 
search time. This way the index implementations wouldn't be required to 
provide a {{CustomExpression#isSatisfiedBy}} implementation; they would 
discard the stale entries with their post-processor, which could also be used 
for other things, like sorting.
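
A sketch of the first alternative as a hypothetical default method; the name 
and signature below are speculative, mirroring the existing 
{{Index#getPostIndexQueryFilter}}:

{code}
// Hypothetical sketch only; DataLimits stands in for
// org.apache.cassandra.db.filter.DataLimits.
interface DataLimits {}

interface IndexWithQueryLimits
{
    // Proposed hook: the index may relax the user-supplied limits so that
    // stale entries discarded during post-processing don't leave the caller
    // short of LIMIT rows.
    default DataLimits getPostIndexQueryLimits(DataLimits userLimits)
    {
        return userLimits; // default: leave the limits untouched
    }
}
{code}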

> 2ndary indexes can return stale data
> 
>
> Key: CASSANDRA-8272
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8272
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Sylvain Lebresne
>Assignee: Andrés de la Peña
> Fix For: 3.0.x
>
>
> When replica return 2ndary index results, it's possible for a single replica 
> to return a stale result and that result will be sent back to the user, 
> potentially failing the CL contract.
> For instance, consider 3 replicas A, B and C, and the following situation:
> {noformat}
> CREATE TABLE test (k int PRIMARY KEY, v text);
> CREATE INDEX ON test(v);
> INSERT INTO test(k, v) VALUES (0, 'foo');
> {noformat}
> with every replica up to date. Now, suppose that the following queries are 
> done at {{QUORUM}}:
> {noformat}
> UPDATE test SET v = 'bar' WHERE k = 0;
> SELECT * FROM test WHERE v = 'foo';
> {noformat}
> then, if A and B acknowledge the insert but C respond to the read before 
> having applied the insert, then the now stale result will be returned (since 
> C will return it and A or B will return nothing).
> A potential solution would be that when we read a tombstone in the index (and 
> provided we make the index inherit the gcGrace of it's parent CF), instead of 
> skipping that tombstone, we'd insert in the result a corresponding range 
> tombstone.  



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-10130) Node failure during 2i update after streaming can have incomplete 2i when restarted

2017-05-11 Thread Sergio Bossa (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10130?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16006895#comment-16006895
 ] 

Sergio Bossa commented on CASSANDRA-10130:
--

I had a quick look at this one and I think we could avoid adding yet another 
system table by just reusing the {{BUILT_INDEXES}} table and following the same 
pattern used by {{SIM#rebuildIndexesBlocking()}}, that is:
* Call {{SystemKeyspace#setIndexRemoved()}}.
* Build index.
* Call {{SystemKeyspace#setIndexBuilt()}}.

Thoughts? Am I missing anything actually requiring a new table?
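
A minimal sketch of that sequence, with generic {{Runnable}}s standing in for 
the {{SystemKeyspace}} calls:

{code}
// Sketch of the suggested pattern: unmark, build, re-mark. A crash during
// the build leaves the index flagged as unbuilt, so it is rebuilt at startup.
class IndexMarkerSketch
{
    static void buildWithMarker(Runnable setIndexRemoved, Runnable build, Runnable setIndexBuilt)
    {
        setIndexRemoved.run(); // crash after this point => index not marked built
        build.run();           // the actual 2i/MV build
        setIndexBuilt.run();   // only reached if the build completed
    }
}
{code}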

> Node failure during 2i update after streaming can have incomplete 2i when 
> restarted
> ---
>
> Key: CASSANDRA-10130
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10130
> Project: Cassandra
>  Issue Type: Bug
>  Components: Coordination
>Reporter: Yuki Morishita
>Assignee: Andrés de la Peña
>Priority: Minor
>
> Since MV/2i update happens after SSTables are received, node failure during 
> MV/2i update can leave received SSTables live when restarted while MV/2i are 
> partially up to date.
> We can add some kind of tracking mechanism to automatically rebuild at the 
> startup, or at least warn user when the node restarts.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-13486) Upstream Power arch specific code

2017-05-11 Thread Robert Stupp (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13486?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Stupp updated CASSANDRA-13486:
-
Reviewer:   (was: Robert Stupp)

Sorry, not available for a review.

> Upstream Power arch specific code
> -
>
> Key: CASSANDRA-13486
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13486
> Project: Cassandra
>  Issue Type: Task
>  Components: Core
>Reporter: Amitkumar Ghatwal
> Attachments: CAPI_RowCache_JIRA_Description.doc
>
>
> Hi All/ [~snazy], [~jasobrown] , [~ReiOdaira] ,
> As was suggested to my previous JIRA ticket : 
> https://issues.apache.org/jira/browse/CASSANDRA-13345 to create a separate 
> development branch ( for power -ppc64le ) by forking github/apache/cassandra 
> . 
> I have created my own development branch : 
> https://github.com/ghatwala/cassandra/tree/ppc64le-CAPI/trunk and pushed in 
> power arch specific features of CAPI ( Row Cache ). 
> Please refer PR :https://github.com/apache/cassandra/pull/108 to see all the 
> code changes for CAPI-Row Cache . Please kindly review the same.
> Also please refer to the description document on CAPI-Row Cache.
> Regards,
> Amit



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-12847) cqlsh DESCRIBE output doesn't properly quote index names

2017-05-11 Thread Sam Tunnicliffe (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12847?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sam Tunnicliffe updated CASSANDRA-12847:

   Resolution: Fixed
Fix Version/s: (was: 3.11.x)
   (was: 3.0.x)
   4.0
   3.11.0
   3.0.14
   Status: Resolved  (was: Patch Available)

Thanks, committed (finally) to 3.0 in 
{{33344fae6622dc6624e01f7aa3b2b4d378f34d2d}} and merged to 3.11 and trunk

> cqlsh DESCRIBE output doesn't properly quote index names
> 
>
> Key: CASSANDRA-12847
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12847
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Sam Tunnicliffe
>Assignee: Sam Tunnicliffe
>  Labels: cqlsh
> Fix For: 3.0.14, 3.11.0, 4.0
>
>
> CASSANDRA-8365 fixed the CQL grammar so that quoting index names preserves 
> case. The output of DESCRIBE in cqlsh wasn't updated however so this doesn't 
> round-trip properly. 
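
For illustration, a generic Java sketch of the CQL identifier quoting rule 
the DESCRIBE output has to respect (the actual fix is in cqlsh's Python code):

{code}
// Generic sketch of CQL identifier quoting: lowercase identifiers are safe
// unquoted (reserved keywords aside), anything else must be double-quoted
// with embedded quotes doubled, or replaying DESCRIBE output won't re-create
// the index under the same name.
class CqlIdentifier
{
    static String maybeQuote(String name)
    {
        if (name.matches("[a-z][a-z0-9_]*"))
            return name;
        return '"' + name.replace("\"", "\"\"") + '"';
    }
}
{code}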



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[2/6] cassandra git commit: Cqlsh DESCRIBE output handles quoted index names incorrectly

2017-05-11 Thread samt
Cqlsh DESCRIBE output handles quoted index names incorrectly

Patch by Sam Tunnicliffe; reviewed by Adam Holmberg and Alex Petrov for 
CASSANDRA-12847


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/33344fae
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/33344fae
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/33344fae

Branch: refs/heads/cassandra-3.11
Commit: 33344fae6622dc6624e01f7aa3b2b4d378f34d2d
Parents: 263740d
Author: Sam Tunnicliffe 
Authored: Thu Jan 19 10:02:39 2017 -0800
Committer: Sam Tunnicliffe 
Committed: Thu May 11 17:43:52 2017 +0100

--
 CHANGES.txt   |   1 +
 ...a-driver-internal-only-3.5.0.post0-d8d0456.zip | Bin 245487 -> 0 bytes
 ...a-driver-internal-only-3.7.1.post0-19c1603.zip | Bin 0 -> 252027 bytes
 3 files changed, 1 insertion(+)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/33344fae/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 2ef5863..31d5800 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 3.0.14
+ * Properly handle quoted index names in cqlsh DESCRIBE output 
(CASSANDRA-12847)
  * Avoid reading static row twice from old format sstables (CASSANDRA-13236)
  * Fix NPE in StorageService.excise() (CASSANDRA-13163)
  * Expire OutboundTcpConnection messages by a single Thread (CASSANDRA-13265)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/33344fae/lib/cassandra-driver-internal-only-3.5.0.post0-d8d0456.zip
--
diff --git a/lib/cassandra-driver-internal-only-3.5.0.post0-d8d0456.zip 
b/lib/cassandra-driver-internal-only-3.5.0.post0-d8d0456.zip
deleted file mode 100644
index 7d23b48..000
Binary files a/lib/cassandra-driver-internal-only-3.5.0.post0-d8d0456.zip and 
/dev/null differ

http://git-wip-us.apache.org/repos/asf/cassandra/blob/33344fae/lib/cassandra-driver-internal-only-3.7.1.post0-19c1603.zip
--
diff --git a/lib/cassandra-driver-internal-only-3.7.1.post0-19c1603.zip 
b/lib/cassandra-driver-internal-only-3.7.1.post0-19c1603.zip
new file mode 100644
index 000..900d64d
Binary files /dev/null and 
b/lib/cassandra-driver-internal-only-3.7.1.post0-19c1603.zip differ


-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[6/6] cassandra git commit: Merge branch 'cassandra-3.11' into trunk

2017-05-11 Thread samt
Merge branch 'cassandra-3.11' into trunk


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/36b63ef5
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/36b63ef5
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/36b63ef5

Branch: refs/heads/trunk
Commit: 36b63ef51ae8384bfea8abf057b657570ba05cda
Parents: c43371b c0941cf
Author: Sam Tunnicliffe 
Authored: Thu May 11 18:14:32 2017 +0100
Committer: Sam Tunnicliffe 
Committed: Thu May 11 18:14:32 2017 +0100

--
 CHANGES.txt   |   1 +
 ...a-driver-internal-only-3.7.1.post0-19c1603.zip | Bin 0 -> 252027 bytes
 2 files changed, 1 insertion(+)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/36b63ef5/CHANGES.txt
--
diff --cc CHANGES.txt
index d9b6a71,1860fd8..321e356
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -97,6 -32,8 +97,7 @@@
   * Fix cqlsh automatic protocol downgrade regression (CASSANDRA-13307)
   * Tracing payload not passed from QueryMessage to tracing session 
(CASSANDRA-12835)
  Merged from 3.0:
+  * Properly handle quoted index names in cqlsh DESCRIBE output 
(CASSANDRA-12847)
 - * Avoid reading static row twice from old format sstables (CASSANDRA-13236)
   * Fix NPE in StorageService.excise() (CASSANDRA-13163)
   * Expire OutboundTcpConnection messages by a single Thread (CASSANDRA-13265)
   * Fail repair if insufficient responses received (CASSANDRA-13397)


-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[4/6] cassandra git commit: Merge branch 'cassandra-3.0' into cassandra-3.11

2017-05-11 Thread samt
Merge branch 'cassandra-3.0' into cassandra-3.11


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/c0941cf7
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/c0941cf7
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/c0941cf7

Branch: refs/heads/trunk
Commit: c0941cf780c2735e613dbdf53cbce2163433bf33
Parents: 3a06433 33344fa
Author: Sam Tunnicliffe 
Authored: Thu May 11 18:10:56 2017 +0100
Committer: Sam Tunnicliffe 
Committed: Thu May 11 18:10:56 2017 +0100

--
 CHANGES.txt   |   1 +
 ...a-driver-internal-only-3.7.1.post0-19c1603.zip | Bin 0 -> 252027 bytes
 2 files changed, 1 insertion(+)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/c0941cf7/CHANGES.txt
--
diff --cc CHANGES.txt
index bb13fc4,31d5800..1860fd8
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -1,37 -1,5 +1,38 @@@
 -3.0.14
 +3.11.0
 + * Fix duration type validation to prevent overflow (CASSANDRA-13218)
 + * Forbid unsupported creation of SASI indexes over partition key columns 
(CASSANDRA-13228)
 + * Reject multiple values for a key in CQL grammar. (CASSANDRA-13369)
 + * UDA fails without input rows (CASSANDRA-13399)
 + * Fix compaction-stress by using daemonInitialization (CASSANDRA-13188)
 + * V5 protocol flags decoding broken (CASSANDRA-13443)
 + * Use write lock not read lock for removing sstables from compaction 
strategies. (CASSANDRA-13422)
 + * Use corePoolSize equal to maxPoolSize in JMXEnabledThreadPoolExecutors 
(CASSANDRA-13329)
 + * Avoid rebuilding SASI indexes containing no values (CASSANDRA-12962)
 + * Add charset to Analyser input stream (CASSANDRA-13151)
 + * Fix testLimitSSTables flake caused by concurrent flush (CASSANDRA-12820)
 + * cdc column addition strikes again (CASSANDRA-13382)
 + * Fix static column indexes (CASSANDRA-13277)
 + * DataOutputBuffer.asNewBuffer broken (CASSANDRA-13298)
 + * unittest CipherFactoryTest failed on MacOS (CASSANDRA-13370)
 + * Forbid SELECT restrictions and CREATE INDEX over non-frozen UDT columns 
(CASSANDRA-13247)
 + * Default logging we ship will incorrectly print "?:?" for "%F:%L" pattern 
(CASSANDRA-13317)
 + * Possible AssertionError in UnfilteredRowIteratorWithLowerBound 
(CASSANDRA-13366)
 + * Support unaligned memory access for AArch64 (CASSANDRA-13326)
 + * Improve SASI range iterator efficiency on intersection with an empty range 
(CASSANDRA-12915).
 + * Fix equality comparisons of columns using the duration type 
(CASSANDRA-13174)
 + * Obfuscate password in stress-graphs (CASSANDRA-12233)
 + * Move to FastThreadLocalThread and FastThreadLocal (CASSANDRA-13034)
 + * nodetool stopdaemon errors out (CASSANDRA-13030)
 + * Tables in system_distributed should not use gcgs of 0 (CASSANDRA-12954)
 + * Fix primary index calculation for SASI (CASSANDRA-12910)
 + * More fixes to the TokenAllocator (CASSANDRA-12990)
 + * NoReplicationTokenAllocator should work with zero replication factor 
(CASSANDRA-12983)
 + * Address message coalescing regression (CASSANDRA-12676)
 + * Delete illegal character from StandardTokenizerImpl.jflex (CASSANDRA-13417)
 + * Fix cqlsh automatic protocol downgrade regression (CASSANDRA-13307)
 + * Tracing payload not passed from QueryMessage to tracing session 
(CASSANDRA-12835)
 +Merged from 3.0:
+  * Properly handle quoted index names in cqlsh DESCRIBE output 
(CASSANDRA-12847)
   * Avoid reading static row twice from old format sstables (CASSANDRA-13236)
   * Fix NPE in StorageService.excise() (CASSANDRA-13163)
   * Expire OutboundTcpConnection messages by a single Thread (CASSANDRA-13265)


-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[5/6] cassandra git commit: Merge branch 'cassandra-3.0' into cassandra-3.11

2017-05-11 Thread samt
Merge branch 'cassandra-3.0' into cassandra-3.11


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/c0941cf7
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/c0941cf7
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/c0941cf7

Branch: refs/heads/cassandra-3.11
Commit: c0941cf780c2735e613dbdf53cbce2163433bf33
Parents: 3a06433 33344fa
Author: Sam Tunnicliffe 
Authored: Thu May 11 18:10:56 2017 +0100
Committer: Sam Tunnicliffe 
Committed: Thu May 11 18:10:56 2017 +0100

--
 CHANGES.txt   |   1 +
 ...a-driver-internal-only-3.7.1.post0-19c1603.zip | Bin 0 -> 252027 bytes
 2 files changed, 1 insertion(+)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/c0941cf7/CHANGES.txt
--
diff --cc CHANGES.txt
index bb13fc4,31d5800..1860fd8
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -1,37 -1,5 +1,38 @@@
 -3.0.14
 +3.11.0
 + * Fix duration type validation to prevent overflow (CASSANDRA-13218)
 + * Forbid unsupported creation of SASI indexes over partition key columns 
(CASSANDRA-13228)
 + * Reject multiple values for a key in CQL grammar. (CASSANDRA-13369)
 + * UDA fails without input rows (CASSANDRA-13399)
 + * Fix compaction-stress by using daemonInitialization (CASSANDRA-13188)
 + * V5 protocol flags decoding broken (CASSANDRA-13443)
 + * Use write lock not read lock for removing sstables from compaction 
strategies. (CASSANDRA-13422)
 + * Use corePoolSize equal to maxPoolSize in JMXEnabledThreadPoolExecutors 
(CASSANDRA-13329)
 + * Avoid rebuilding SASI indexes containing no values (CASSANDRA-12962)
 + * Add charset to Analyser input stream (CASSANDRA-13151)
 + * Fix testLimitSSTables flake caused by concurrent flush (CASSANDRA-12820)
 + * cdc column addition strikes again (CASSANDRA-13382)
 + * Fix static column indexes (CASSANDRA-13277)
 + * DataOutputBuffer.asNewBuffer broken (CASSANDRA-13298)
 + * unittest CipherFactoryTest failed on MacOS (CASSANDRA-13370)
 + * Forbid SELECT restrictions and CREATE INDEX over non-frozen UDT columns 
(CASSANDRA-13247)
 + * Default logging we ship will incorrectly print "?:?" for "%F:%L" pattern 
(CASSANDRA-13317)
 + * Possible AssertionError in UnfilteredRowIteratorWithLowerBound 
(CASSANDRA-13366)
 + * Support unaligned memory access for AArch64 (CASSANDRA-13326)
 + * Improve SASI range iterator efficiency on intersection with an empty range 
(CASSANDRA-12915).
 + * Fix equality comparisons of columns using the duration type 
(CASSANDRA-13174)
 + * Obfuscate password in stress-graphs (CASSANDRA-12233)
 + * Move to FastThreadLocalThread and FastThreadLocal (CASSANDRA-13034)
 + * nodetool stopdaemon errors out (CASSANDRA-13030)
 + * Tables in system_distributed should not use gcgs of 0 (CASSANDRA-12954)
 + * Fix primary index calculation for SASI (CASSANDRA-12910)
 + * More fixes to the TokenAllocator (CASSANDRA-12990)
 + * NoReplicationTokenAllocator should work with zero replication factor 
(CASSANDRA-12983)
 + * Address message coalescing regression (CASSANDRA-12676)
 + * Delete illegal character from StandardTokenizerImpl.jflex (CASSANDRA-13417)
 + * Fix cqlsh automatic protocol downgrade regression (CASSANDRA-13307)
 + * Tracing payload not passed from QueryMessage to tracing session 
(CASSANDRA-12835)
 +Merged from 3.0:
+  * Properly handle quoted index names in cqlsh DESCRIBE output 
(CASSANDRA-12847)
   * Avoid reading static row twice from old format sstables (CASSANDRA-13236)
   * Fix NPE in StorageService.excise() (CASSANDRA-13163)
   * Expire OutboundTcpConnection messages by a single Thread (CASSANDRA-13265)


-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[3/6] cassandra git commit: Cqlsh DESCRIBE output handles quoted index names incorrectly

2017-05-11 Thread samt
Cqlsh DESCRIBE output handles quoted index names incorrectly

Patch by Sam Tunnicliffe; reviewed by Adam Holmberg and Alex Petrov for 
CASSANDRA-12847


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/33344fae
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/33344fae
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/33344fae

Branch: refs/heads/trunk
Commit: 33344fae6622dc6624e01f7aa3b2b4d378f34d2d
Parents: 263740d
Author: Sam Tunnicliffe 
Authored: Thu Jan 19 10:02:39 2017 -0800
Committer: Sam Tunnicliffe 
Committed: Thu May 11 17:43:52 2017 +0100

--
 CHANGES.txt   |   1 +
 ...a-driver-internal-only-3.5.0.post0-d8d0456.zip | Bin 245487 -> 0 bytes
 ...a-driver-internal-only-3.7.1.post0-19c1603.zip | Bin 0 -> 252027 bytes
 3 files changed, 1 insertion(+)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/33344fae/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 2ef5863..31d5800 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 3.0.14
+ * Properly handle quoted index names in cqlsh DESCRIBE output 
(CASSANDRA-12847)
  * Avoid reading static row twice from old format sstables (CASSANDRA-13236)
  * Fix NPE in StorageService.excise() (CASSANDRA-13163)
  * Expire OutboundTcpConnection messages by a single Thread (CASSANDRA-13265)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/33344fae/lib/cassandra-driver-internal-only-3.5.0.post0-d8d0456.zip
--
diff --git a/lib/cassandra-driver-internal-only-3.5.0.post0-d8d0456.zip 
b/lib/cassandra-driver-internal-only-3.5.0.post0-d8d0456.zip
deleted file mode 100644
index 7d23b48..000
Binary files a/lib/cassandra-driver-internal-only-3.5.0.post0-d8d0456.zip and 
/dev/null differ

http://git-wip-us.apache.org/repos/asf/cassandra/blob/33344fae/lib/cassandra-driver-internal-only-3.7.1.post0-19c1603.zip
--
diff --git a/lib/cassandra-driver-internal-only-3.7.1.post0-19c1603.zip 
b/lib/cassandra-driver-internal-only-3.7.1.post0-19c1603.zip
new file mode 100644
index 000..900d64d
Binary files /dev/null and 
b/lib/cassandra-driver-internal-only-3.7.1.post0-19c1603.zip differ


-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[1/6] cassandra git commit: Cqlsh DESCRIBE output handles quoted index names incorrectly

2017-05-11 Thread samt
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-3.0 263740daa -> 33344fae6
  refs/heads/cassandra-3.11 3a0643329 -> c0941cf78
  refs/heads/trunk c43371bf8 -> 36b63ef51


Cqlsh DESCRIBE output handles quoted index names incorrectly

Patch by Sam Tunnicliffe; reviewed by Adam Holmberg and Alex Petrov for 
CASSANDRA-12847


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/33344fae
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/33344fae
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/33344fae

Branch: refs/heads/cassandra-3.0
Commit: 33344fae6622dc6624e01f7aa3b2b4d378f34d2d
Parents: 263740d
Author: Sam Tunnicliffe 
Authored: Thu Jan 19 10:02:39 2017 -0800
Committer: Sam Tunnicliffe 
Committed: Thu May 11 17:43:52 2017 +0100

--
 CHANGES.txt   |   1 +
 ...a-driver-internal-only-3.5.0.post0-d8d0456.zip | Bin 245487 -> 0 bytes
 ...a-driver-internal-only-3.7.1.post0-19c1603.zip | Bin 0 -> 252027 bytes
 3 files changed, 1 insertion(+)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/33344fae/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 2ef5863..31d5800 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 3.0.14
+ * Properly handle quoted index names in cqlsh DESCRIBE output 
(CASSANDRA-12847)
  * Avoid reading static row twice from old format sstables (CASSANDRA-13236)
  * Fix NPE in StorageService.excise() (CASSANDRA-13163)
  * Expire OutboundTcpConnection messages by a single Thread (CASSANDRA-13265)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/33344fae/lib/cassandra-driver-internal-only-3.5.0.post0-d8d0456.zip
--
diff --git a/lib/cassandra-driver-internal-only-3.5.0.post0-d8d0456.zip 
b/lib/cassandra-driver-internal-only-3.5.0.post0-d8d0456.zip
deleted file mode 100644
index 7d23b48..000
Binary files a/lib/cassandra-driver-internal-only-3.5.0.post0-d8d0456.zip and 
/dev/null differ

http://git-wip-us.apache.org/repos/asf/cassandra/blob/33344fae/lib/cassandra-driver-internal-only-3.7.1.post0-19c1603.zip
--
diff --git a/lib/cassandra-driver-internal-only-3.7.1.post0-19c1603.zip 
b/lib/cassandra-driver-internal-only-3.7.1.post0-19c1603.zip
new file mode 100644
index 000..900d64d
Binary files /dev/null and 
b/lib/cassandra-driver-internal-only-3.7.1.post0-19c1603.zip differ


-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-13516) Message error for Duration types is confusing

2017-05-11 Thread Jaume M (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13516?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16006796#comment-16006796
 ] 

Jaume M commented on CASSANDRA-13516:
-

Looks like this is a duplicate of CASSANDRA-13218. Closing it.

> Message error for Duration types is confusing
> -
>
> Key: CASSANDRA-13516
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13516
> Project: Cassandra
>  Issue Type: Bug
> Environment: C* 3.11
>Reporter: Jaume M
>
> {code}
> from cassandra.util import Duration
> from cassandra.cluster import Cluster
> cluster = Cluster(['127.0.0.1'])
> session = cluster.connect()
> # Assume table was created earlier like this:
> # CREATE TABLE simplex.t1 (f1 int primary key, f2 duration);
> prepared = session.prepare("""
>  INSERT INTO simplex.t1 (f1, f2)
>  VALUES (?, ?)
>  """)
> d = Duration(int("7FFF", 16), int("8FF0", 16), 0)
> session.execute(prepared, (1, d))
> results = session.execute("SELECT * FROM simplex.t1")
> assert d == results[0][1]
> {code}
> In this example I get the error: cassandra.InvalidRequest: Error from server: 
> code=2200 [Invalid query] message="The duration months, days and nanoseconds 
> must be all of the same sign (2147483647, -1879048208, 0)",
> but maybe it should be something about the number being too big? Also if 
> we use the Duration value:
> {code}
> d = Duration(int("8FF0", 16), int("8FF0", 16), 0)
> {code}
> the script ends fine and there is no AssertionError, so not sure why the sign 
> is different in the first example.
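
The sign flip itself is plain 32-bit overflow: a duration's months and days are 
stored as signed 32-bit ints server-side, so 0x8FFFFFF0 wraps to the negative 
value quoted in the message. A minimal Java illustration (not driver or server 
code, just the narrowing behaviour):

{code}
public final class DurationOverflowSketch
{
    public static void main(String[] args)
    {
        // 0x8FFFFFF0 (2415919088) does not fit in a signed 32-bit int,
        // so narrowing wraps it to the value quoted in the error message.
        long raw = 0x8FFFFFF0L;
        int narrowed = (int) raw;
        System.out.println(narrowed);   // prints -1879048208
    }
}
{code}

So an error about the value being out of range would indeed be clearer than the 
same-sign message.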



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Resolved] (CASSANDRA-13516) Message error for Duration types is confusing

2017-05-11 Thread Jaume M (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13516?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jaume M resolved CASSANDRA-13516.
-
Resolution: Duplicate

> Message error for Duration types is confusing
> -
>
> Key: CASSANDRA-13516
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13516
> Project: Cassandra
>  Issue Type: Bug
> Environment: C* 3.11
>Reporter: Jaume M
>
> {code}
> from cassandra.util import Duration
> from cassandra.cluster import Cluster
> cluster = Cluster(['127.0.0.1'])
> session = cluster.connect()
> # Assume table was created earlier like this:
> # CREATE TABLE simplex.t1 (f1 int primary key, f2 duration);
> prepared = session.prepare("""
>  INSERT INTO simplex.t1 (f1, f2)
>  VALUES (?, ?)
>  """)
> d = Duration(int("7FFF", 16), int("8FF0", 16), 0)
> session.execute(prepared, (1, d))
> results = session.execute("SELECT * FROM simplex.t1")
> assert d == results[0][1]
> {code}
> In this example I get the error: cassandra.InvalidRequest: Error from server: 
> code=2200 [Invalid query] message="The duration months, days and nanoseconds 
> must be all of the same sign (2147483647, -1879048208, 0)",
> but maybe it should be something about the number being too big? Also if 
> we use the Duration value:
> {code}
> d = Duration(int("8FF0", 16), int("8FF0", 16), 0)
> {code}
> the script ends fine and there is no AssertionError, so not sure why the sign 
> is different in the first example.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Assigned] (CASSANDRA-13517) dtest failure in paxos_tests.TestPaxos.contention_test_many_threads

2017-05-11 Thread Jason Brown (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13517?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Brown reassigned CASSANDRA-13517:
---

Assignee: Jason Brown

> dtest failure in paxos_tests.TestPaxos.contention_test_many_threads
> ---
>
> Key: CASSANDRA-13517
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13517
> Project: Cassandra
>  Issue Type: Bug
>  Components: Testing
>Reporter: Ariel Weisberg
>Assignee: Jason Brown
>  Labels: dtest, test-failure, test-failure-fresh
> Attachments: test_failure.txt
>
>
> See attachment for details



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Assigned] (CASSANDRA-13507) dtest failure in paging_test.TestPagingWithDeletions.test_ttl_deletions

2017-05-11 Thread Jason Brown (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13507?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Brown reassigned CASSANDRA-13507:
---

Assignee: Jason Brown

> dtest failure in paging_test.TestPagingWithDeletions.test_ttl_deletions 
> 
>
> Key: CASSANDRA-13507
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13507
> Project: Cassandra
>  Issue Type: Bug
>  Components: Testing
>Reporter: Ariel Weisberg
>Assignee: Jason Brown
>  Labels: dtest, test-failure, test-failure-fresh
> Attachments: test_ttl_deletions_fail.txt
>
>
> {noformat}
> Failed 7 times in the last 30 runs. Flakiness: 34%, Stability: 76%
> Error Message
> 4 != 8
>  >> begin captured logging << 
> dtest: DEBUG: cluster ccm directory: /tmp/dtest-z1xodw
> dtest: DEBUG: Done setting configuration options:
> {   'initial_token': None,
> 'num_tokens': '32',
> 'phi_convict_threshold': 5,
> 'range_request_timeout_in_ms': 1,
> 'read_request_timeout_in_ms': 1,
> 'request_timeout_in_ms': 1,
> 'truncate_request_timeout_in_ms': 1,
> 'write_request_timeout_in_ms': 1}
> cassandra.pool: WARNING: Error attempting to reconnect to 127.0.0.5, 
> scheduling retry in 600.0 seconds: [Errno 111] Tried connecting to 
> [('127.0.0.5', 9042)]. Last error: Connection refused
> cassandra.pool: WARNING: Error attempting to reconnect to 127.0.0.3, 
> scheduling retry in 4.0 seconds: [Errno 111] Tried connecting to 
> [('127.0.0.3', 9042)]. Last error: Connection refused
> cassandra.pool: WARNING: Error attempting to reconnect to 127.0.0.3, 
> scheduling retry in 4.0 seconds: [Errno 111] Tried connecting to 
> [('127.0.0.3', 9042)]. Last error: Connection refused
> {noformat}
> Most output omitted. It's attached.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-13441) Schema version changes for each upgraded node in a rolling upgrade, causing migration storms

2017-05-11 Thread JIRA

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13441?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16006733#comment-16006733
 ] 

Julius Žaromskis commented on CASSANDRA-13441:
--

Here's a piece of the debug log; maybe it will help identify whether it's the same 
issue or not. 

{noformat}
DEBUG [MemtableFlushWriter:107525] 2017-05-11 16:04:05,896 Memtable.java:401 - 
Completed flushing 
/mnt/storage/cassandra/data/system_schema/indexes-0feb57ac311f382fba6d9024d305702f/mc-22120-big-Data.db
 (0.098KiB) for commitlog position ReplayPosition(segmentId=1483955071260, 
position=8644066)
DEBUG [InternalResponseStage:589] 2017-05-11 16:04:05,941 
MigrationManager.java:556 - Gossiping my schema version 
77a40699-8e9e-35aa-834e-68c32e40a45a
DEBUG [InternalResponseStage:588] 2017-05-11 16:04:05,944 
ColumnFamilyStore.java:850 - Enqueuing flush of keyspaces: 2079 (0%) on-heap, 0 
(0%) off-heap
DEBUG [MemtableFlushWriter:107524] 2017-05-11 16:04:05,944 Memtable.java:368 - 
Writing Memtable-keyspaces@1326973542(0.582KiB serialized bytes, 4 ops, 0%/0% 
of on/off-heap limit)
DEBUG [MemtableFlushWriter:107524] 2017-05-11 16:04:05,951 Memtable.java:401 - 
Completed flushing 
/mnt/storage/cassandra/data/system_schema/keyspaces-abac5682dea631c5b535b3d6cffd0fb6/mc-22200-big-Data.db
 (0.489KiB) for commitlog position ReplayPosition(segmentId=1483955071260, 
position=8685297)
DEBUG [InternalResponseStage:588] 2017-05-11 16:04:05,971 
ColumnFamilyStore.java:850 - Enqueuing flush of tables: 65895 (0%) on-heap, 0 
(0%) off-heap
DEBUG [MemtableFlushWriter:107525] 2017-05-11 16:04:05,972 Memtable.java:368 - 
Writing Memtable-tables@512792876(20.714KiB serialized bytes, 31 ops, 0%/0% of 
on/off-heap limit)
DEBUG [MemtableFlushWriter:107525] 2017-05-11 16:04:05,980 Memtable.java:401 - 
Completed flushing 
/mnt/storage/cassandra/data/system_schema/tables-afddfb9dbc1e30688056eed6c302ba09/mc-22197-big-Data.db
 (13.019KiB) for commitlog position ReplayPosition(segmentId=1483955071260, 
position=8700606)
DEBUG [InternalResponseStage:588] 2017-05-11 16:04:06,005 
ColumnFamilyStore.java:850 - Enqueuing flush of columns: 204664 (0%) on-heap, 0 
(0%) off-heap
DEBUG [MemtableFlushWriter:107524] 2017-05-11 16:04:06,006 Memtable.java:368 - 
Writing Memtable-columns@643217662(43.015KiB serialized bytes, 286 ops, 0%/0% 
of on/off-heap limit)
DEBUG [MemtableFlushWriter:107524] 2017-05-11 16:04:06,018 Memtable.java:401 - 
Completed flushing 
/mnt/storage/cassandra/data/system_schema/columns-24101c25a2ae3af787c1b40ee1aca33f/mc-22195-big-Data.db
 (20.975KiB) for commitlog position ReplayPosition(segmentId=1483955071260, 
position=8707212)
DEBUG [CompactionExecutor:47263] 2017-05-11 16:04:06,055 
CompactionTask.java:146 - Compacting (75968370-3663-11e7-ab7b-d7e32ecfc62d) 
[/mnt/storage/cassandra/data/system_schema/columns-24101c25a2ae3af787c1b40ee1aca33f/mc-22193-big-Data.db:level=0,
 
/mnt/storage/cassandra/data/system_schema/columns-24101c25a2ae3af787c1b40ee1aca33f/mc-22194-big-Data.db:level=0,
 
/mnt/storage/cassandra/data/system_schema/columns-24101c25a2ae3af787c1b40ee1aca33f/mc-22192-big-Data.db:level=0,
 
/mnt/storage/cassandra/data/system_schema/columns-24101c25a2ae3af787c1b40ee1aca33f/mc-22195-big-Data.db:level=0,
 ]
DEBUG [MemtablePostFlush:95264] 2017-05-11 16:04:06,057 
ColumnFamilyStore.java:903 - forceFlush requested but everything is clean in 
dropped_columns
DEBUG [MemtablePostFlush:95264] 2017-05-11 16:04:06,057 
ColumnFamilyStore.java:903 - forceFlush requested but everything is clean in 
triggers
DEBUG [MemtablePostFlush:95264] 2017-05-11 16:04:06,057 
ColumnFamilyStore.java:903 - forceFlush requested but everything is clean in 
views
DEBUG [MemtablePostFlush:95264] 2017-05-11 16:04:06,057 
ColumnFamilyStore.java:903 - forceFlush requested but everything is clean in 
types
DEBUG [MemtablePostFlush:95264] 2017-05-11 16:04:06,057 
ColumnFamilyStore.java:903 - forceFlush requested but everything is clean in 
functions
DEBUG [MemtablePostFlush:95264] 2017-05-11 16:04:06,057 
ColumnFamilyStore.java:903 - forceFlush requested but everything is clean in 
aggregates
DEBUG [InternalResponseStage:588] 2017-05-11 16:04:06,057 
ColumnFamilyStore.java:850 - Enqueuing flush of indexes: 598 (0%) on-heap, 0 
(0%) off-heap
DEBUG [MemtableFlushWriter:107525] 2017-05-11 16:04:06,057 Memtable.java:368 - 
Writing Memtable-indexes@1653592334(0.127KiB serialized bytes, 1 ops, 0%/0% of 
on/off-heap limit)
DEBUG [MemtableFlushWriter:107525] 2017-05-11 16:04:06,068 Memtable.java:401 - 
Completed flushing 
/mnt/storage/cassandra/data/system_schema/indexes-0feb57ac311f382fba6d9024d305702f/mc-22121-big-Data.db
 (0.098KiB) for commitlog position ReplayPosition(segmentId=1483955071260, 
position=8742156)
DEBUG [CompactionExecutor:47266] 2017-05-11 16:04:06,090 
CompactionTask.java:146 - Compacting (759bdaa0-3663-11e7-ab7b-d7e32ecfc62d) 

[jira] [Commented] (CASSANDRA-10726) Read repair inserts should not be blocking

2017-05-11 Thread Xiaolong Jiang (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10726?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16006704#comment-16006704
 ] 

Xiaolong Jiang commented on CASSANDRA-10726:


Regarding {{distinctHostNum}}:
  /**
   * When doing the read repair, the mutation is per partition key, so it's
   * possible we will repair multiple partitions onto different hosts. Say RF = 5
   * and we need to read partitions p1, p2, p3 and p4 from three nodes: n1, n2
   * and n3. If n1 contains the latest data, n2 is missing p1 and p2, and n3 is
   * missing p3 and p4, then we need to repair n2 by sending it the p1 and p2
   * partitions, and repair n3 by sending it the p3 and p4 partitions. If the p1
   * and p3 repairs are slow, distinctHostNum below will return 2; in this case I
   * will not retry a new node for read repair, since a read repair retry can
   * only handle one slow host. If p3 and p4 are fast but the p1 and p2 repairs
   * are slow (or just the p1 repair is slow), distinctHostNum below will return
   * 1; in this case I will retry 1 extra node and send it p1 and p2 (or just p1,
   * if only the p1 read repair times out).
   * On the same host we can have multiple partition read repairs, and we can
   * only handle one host's slowness, so we should get the distinct hosts from
   * the read repair response futures.
   */
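
A minimal, self-contained sketch of that retry decision (hypothetical types and 
names, not the actual patch):

{code}
import java.net.InetAddress;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

final class ReadRepairRetrySketch
{
    // Stand-in for the per-mutation repair response future (hypothetical).
    interface RepairResponse
    {
        boolean isDone();          // has the target replica acked its repair mutation?
        InetAddress recipient();   // the replica the repair mutation was sent to
    }

    // Retrying one extra node can only mask a single slow replica, so a retry
    // is only worthwhile when exactly one distinct host is behind.
    static boolean shouldRetryOnExtraNode(List<RepairResponse> repairResponses)
    {
        Set<InetAddress> slowHosts = new HashSet<>();
        for (RepairResponse response : repairResponses)
            if (!response.isDone())
                slowHosts.add(response.recipient());
        return slowHosts.size() == 1;   // the distinctHostNum check described above
    }
}
{code}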

> Read repair inserts should not be blocking
> --
>
> Key: CASSANDRA-10726
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10726
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Coordination
>Reporter: Richard Low
>Assignee: Xiaolong Jiang
> Fix For: 3.0.x
>
>
> Today, if there’s a digest mismatch in a foreground read repair, the insert 
> to update out of date replicas is blocking. This means, if it fails, the read 
> fails with a timeout. If a node is dropping writes (maybe it is overloaded or 
> the mutation stage is backed up for some other reason), all reads to a 
> replica set could fail. Further, replicas dropping writes get more out of 
> sync so will require more read repair.
> The comment on the code for why the writes are blocking is:
> {code}
> // wait for the repair writes to be acknowledged, to minimize impact on any 
> replica that's
> // behind on writes in case the out-of-sync row is read multiple times in 
> quick succession
> {code}
> but the bad side effect is that reads timeout. Either the writes should not 
> be blocking or we should return success for the read even if the write times 
> out.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-10726) Read repair inserts should not be blocking

2017-05-11 Thread Xiaolong Jiang (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10726?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16006701#comment-16006701
 ] 

Xiaolong Jiang commented on CASSANDRA-10726:


Regarding the response snapshot count:

We need to capture this count and save it in the data resolver. All returned 
results are based on this response count, and "responseCntSnapshot" below is 
used to calculate whether the read repair is ok or not. Since the response list 
can receive more responses later, we only iterate over "responseCntSnapshot" 
responses for any future operations, including read repair retry. So we have to 
save this count in the current state.
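
In other words, the count is frozen once and only that prefix of the response 
list is ever acted on; a tiny hedged sketch with assumed names:

{code}
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;

final class ResponseSnapshotSketch
{
    // Stand-in for the resolver's response list; replicas may keep appending
    // to it after resolution has started.
    private final List<Object> responses = new CopyOnWriteArrayList<>();

    int resolve()
    {
        // Freeze the count: anything arriving after this point is ignored by
        // resolution, read repair, and any read-repair retry.
        int responseCntSnapshot = responses.size();
        for (int i = 0; i < responseCntSnapshot; i++)
        {
            Object response = responses.get(i);
            // ... merge 'response' and compute repair mutations here
        }
        return responseCntSnapshot;   // kept as part of the resolver's state
    }
}
{code}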
  

> Read repair inserts should not be blocking
> --
>
> Key: CASSANDRA-10726
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10726
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Coordination
>Reporter: Richard Low
>Assignee: Xiaolong Jiang
> Fix For: 3.0.x
>
>
> Today, if there’s a digest mismatch in a foreground read repair, the insert 
> to update out of date replicas is blocking. This means, if it fails, the read 
> fails with a timeout. If a node is dropping writes (maybe it is overloaded or 
> the mutation stage is backed up for some other reason), all reads to a 
> replica set could fail. Further, replicas dropping writes get more out of 
> sync so will require more read repair.
> The comment on the code for why the writes are blocking is:
> {code}
> // wait for the repair writes to be acknowledged, to minimize impact on any 
> replica that's
> // behind on writes in case the out-of-sync row is read multiple times in 
> quick succession
> {code}
> but the bad side effect is that reads timeout. Either the writes should not 
> be blocking or we should return success for the read even if the write times 
> out.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-13523) StreamReceiveTask: java.lang.OutOfMemoryError: Map failed

2017-05-11 Thread Matthew O'Riordan (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13523?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16006700#comment-16006700
 ] 

Matthew O'Riordan commented on CASSANDRA-13523:
---

> Should check your limits ({{ulimit -a}})

They are all maxed out, i.e. we are not hitting file handle limits.

> Check {{cat /proc/sys/vm/max_map_count}}

Sure, but any advice on what that should be?  We've not tweaked that to date.

> but most likely "The system is out of physical RAM or swap space". Should 
> also check your JVM options and Docker settings to make sure your system can 
> handle the load you set it up with. Maybe decrease the heap size so your 
> physical system can match what you're giving the JVM.

I'm not following you.  I thought the JVM was running out of memory?

> That swap use isn't a good thing either, fwiw.

There pretty much isn't.  With 100 swap operations, the swap file is effectively 
not being used. When the system actually uses swap, it very quickly climbs into 
the 1000s or 10,000s of operations. As far as I can tell from the graphs at 
https://dl.dropboxusercontent.com/u/1575409/Ably/logs/2017-05-10-cassandra-crash/ap-southeast-1/Voila_Capture%202017-05-11_09-08-53_am.png,
 no swap was actually being used.


> StreamReceiveTask: java.lang.OutOfMemoryError: Map failed
> -
>
> Key: CASSANDRA-13523
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13523
> Project: Cassandra
>  Issue Type: Bug
>  Components: Streaming and Messaging
> Environment: Ubuntu 14.04.5 LTS, Docker version 1.9.1, run as a 
> container, 4 core server with 16GB memory.
>Reporter: Matthew O'Riordan
>  Labels: bug, crash
> Fix For: 2.1.13
>
>
> During a nodetool repair -par on one of our keyspaces, Cassandra crashed due 
> to what seems like memory exhaustion within the JVM.  The machine itself had 
> plenty of available memory at the time and did not appear to be under any 
> significant load.
> In the system log, before the crash, the following was logged:
> {code}
> ...
> INFO  [AntiEntropySessions:55] 2017-05-10 18:18:20,627  RepairJob.java:163 - 
> [repair #0330beb0-35ad-11e7-b7e4-091830ac5256] requesting merkle trees for 
> stats_day_aggregates (to [/54.162.66.114, /54.236.226.76, /52.221.228.170, 
> /54.154.35.144, /54.154.96.213, /52.221.217.27])
> INFO  [ValidationExecutor:54] 2017-05-10 18:18:20,628  
> ColumnFamilyStore.java:905 - Enqueuing flush of stats_day_aggregates: 7018 
> (0%) on-heap, 0 (0%) off-heap
> INFO  [MemtableFlushWriter:13608] 2017-05-10 18:18:20,628  Memtable.java:347 
> - Writing Memtable-stats_day_aggregates@3469792(2.734KiB serialized bytes, 14 
> ops, 0%/0% of on/off-heap limit)
> INFO  [MemtableFlushWriter:13608] 2017-05-10 18:18:20,629  Memtable.java:382 
> - Completed flushing 
> /var/lib/cassandra/data/ably_production_0_stats/stats_day_aggregates-b6e29201e3d111e5bbf3091830ac5256/ably_production_0_stats-stats_day_aggregates-tmp-ka-43026-Data.db
>  (0.000KiB) for commitlog position ReplayPosition(segmentId=1491420635101, 
> position=24224955)
> INFO  [StreamReceiveTask:638] 2017-05-10 18:18:21,220  
> StreamResultFuture.java:180 - [Stream #0008cac1-35ad-11e7-b7e4-091830ac5256] 
> Session with /52.203.21.193 is complete
> INFO  [StreamReceiveTask:638] 2017-05-10 18:18:21,220  
> StreamResultFuture.java:212 - [Stream #0008cac1-35ad-11e7-b7e4-091830ac5256] 
> All sessions completed
> INFO  [StreamReceiveTask:638] 2017-05-10 18:18:21,221  
> StreamingRepairTask.java:96 - [repair #fe7e3320-35ac-11e7-b7e4-091830ac5256] 
> streaming task succeed, returning response to /52.221.217.27
> INFO  [CqlSlowLog-Writer-thread-0] 2017-05-10 18:18:26,230  
> CqlSlowLogWriter.java:151 - Recording statements with duration of 4844 in 
> slow log
> INFO  [Service Thread] 2017-05-10 18:18:26,233  GCInspector.java:258 - G1 Old 
> Generation GC in 4781ms.  G1 Eden Space: 131072 -> 0; G1 Old Gen: 
> 2774539816 -> 1830851216; G1 Survivor Space: 37748736 -> 0; 
> INFO  [AntiEntropyStage:1] 2017-05-10 18:18:26,237  RepairSession.java:171 - 
> [repair #0330beb0-35ad-11e7-b7e4-091830ac5256] Received merkle tree for 
> stats_day_aggregates from /54.162.66.114
> INFO  [StreamConnectionEstablisher:1] 2017-05-10 18:18:26,239  
> StreamCoordinator.java:209 - [Stream #0293bb60-35ad-11e7-b7e4-091830ac5256, 
> ID#0] Beginning stream session with /54.154.226.20
> INFO  [AntiEntropyStage:1] 2017-05-10 18:18:26,294  RepairSession.java:171 - 
> [repair #0330beb0-35ad-11e7-b7e4-091830ac5256] Received merkle tree for 
> stats_day_aggregates from /54.236.226.76
> INFO  [Service Thread] 2017-05-10 18:18:26,298  StatusLogger.java:51 - Pool 
> NameActive   Pending  Completed   Blocked  All Time 
> Blocked
> INFO  [AntiEntropyStage:1] 2017-05-10 18:18:26,344  RepairSession.java:171 - 
> 

[jira] [Commented] (CASSANDRA-13441) Schema version changes for each upgraded node in a rolling upgrade, causing migration storms

2017-05-11 Thread Jeff Jirsa (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13441?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16006683#comment-16006683
 ] 

Jeff Jirsa commented on CASSANDRA-13441:


Hi [~juliuszaromskis] - if you're upgrading from 3.0.9 to 3.0.13, it's unlikely 
that this is your issue (this would mostly impact people going from 2.1 -> 3.0 
or 2.2 -> 3.0). Unless you're very confident that the schema version on 
{{10.240.0.6}} is different from and more desirable than that on the other two 
nodes, the most likely solution is to issue a {{nodetool resetlocalschema}} on 
10.240.0.6, allowing it to re-pull its schema from .7 and .8.



> Schema version changes for each upgraded node in a rolling upgrade, causing 
> migration storms
> 
>
> Key: CASSANDRA-13441
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13441
> Project: Cassandra
>  Issue Type: Bug
>  Components: Schema
>Reporter: Jeff Jirsa
>Assignee: Jeff Jirsa
> Fix For: 3.0.14, 3.11.0, 4.0
>
>
> In versions < 3.0, during a rolling upgrade (say 2.0 -> 2.1), the first node 
> to upgrade to 2.1 would add the new tables, setting the new 2.1 version ID, 
> and subsequently upgraded hosts would settle on that version.
> When a 3.0 node upgrades and writes its own new-in-3.0 system tables, it'll 
> write the same tables that exist in the schema with brand new timestamps. As 
> written, this will cause all nodes in the cluster to change schema (to the 
> version with the newest timestamp). On a sufficiently large cluster with a 
> non-trivial schema, this could cause (literally) millions of migration tasks 
> to needlessly bounce across the cluster.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Comment Edited] (CASSANDRA-13441) Schema version changes for each upgraded node in a rolling upgrade, causing migration storms

2017-05-11 Thread JIRA

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13441?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16006613#comment-16006613
 ] 

Julius Žaromskis edited comment on CASSANDRA-13441 at 5/11/17 3:24 PM:
---

Hi, any workaround for this issue? I've hit this after upgrading from 3.0.9 to 
3.0.13 and running sstableupgrade. I noticed weird disk write patterns and started 
seeing migration tasks bouncing around. I've only managed to upgrade the first of 
the 3 nodes. Migration tasks stopped after I rebooted the first node.

{noformat}
Cluster Information:
Name: cloud.zaromskis.lt cluster
Snitch: org.apache.cassandra.locator.DynamicEndpointSnitch
Partitioner: org.apache.cassandra.dht.Murmur3Partitioner
Schema versions:
600b7268-d42a-3b72-8706-093b6c8cfaff: [10.240.0.6]
77a40699-8e9e-35aa-834e-68c32e40a45a: [10.240.0.7, 10.240.0.8]
{noformat}

{noformat}
Datacenter: dc1
===
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
--  Address Load   Tokens   Owns (effective)  Host ID   
Rack
UN  10.240.0.6  284.95 GB  256  63.4% 
d0d83d9d-0dec-45cd-9ca9-93515fa131f3  rack1
UN  10.240.0.7  288.53 GB  256  64.1% 
6d9709a0-0e10-46a1-9afa-d106b74ca9e0  rack1
UN  10.240.0.8  326.31 GB  256  72.5% 
5c969700-8bd9-49a4-9772-1284439f8364  rack1
{noformat}

The migrations are in fact executed: I can see on disk that new files are 
created every second in the system keyspace. Why doesn't the cluster settle on 
the same schema version then?

The schema version of the first node would not propagate to the other nodes. I'm 
afraid further upgrades might create new schema versions? I can't afford to lose 
any data. Any advice?




was (Author: juliuszaromskis):
Hi, any workaround for this issue? I've hit this after upgrading from 3.0.9 to 
3.0.13 and running sstableupgrade. I noticed weird disk write patterns and started 
seeing migration tasks bouncing around. I've only managed to upgrade the first of 
the 3 nodes. Migration tasks stopped after I rebooted the first node.

{noformat}
Cluster Information:
Name: cloud.zaromskis.lt cluster
Snitch: org.apache.cassandra.locator.DynamicEndpointSnitch
Partitioner: org.apache.cassandra.dht.Murmur3Partitioner
Schema versions:
600b7268-d42a-3b72-8706-093b6c8cfaff: [10.240.0.6]
77a40699-8e9e-35aa-834e-68c32e40a45a: [10.240.0.7, 10.240.0.8]
{noformat}

{noformat}
Datacenter: dc1
===
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
--  Address Load   Tokens   Owns (effective)  Host ID   
Rack
UN  10.240.0.6  284.95 GB  256  63.4% 
d0d83d9d-0dec-45cd-9ca9-93515fa131f3  rack1
UN  10.240.0.7  288.53 GB  256  64.1% 
6d9709a0-0e10-46a1-9afa-d106b74ca9e0  rack1
UN  10.240.0.8  326.31 GB  256  72.5% 
5c969700-8bd9-49a4-9772-1284439f8364  rack1
{noformat}

The migrations are in fact executed: I can see on disk that new files are 
created every second in the system keyspace. Why doesn't the cluster settle on 
the same schema version then?

The schema version of the first node would not propagate to the other nodes. I'm 
afraid further upgrades might create new schema versions? I can't afford to lose 
any data. Any advice?



> Schema version changes for each upgraded node in a rolling upgrade, causing 
> migration storms
> 
>
> Key: CASSANDRA-13441
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13441
> Project: Cassandra
>  Issue Type: Bug
>  Components: Schema
>Reporter: Jeff Jirsa
>Assignee: Jeff Jirsa
> Fix For: 3.0.14, 3.11.0, 4.0
>
>
> In versions < 3.0, during a rolling upgrade (say 2.0 -> 2.1), the first node 
> to upgrade to 2.1 would add the new tables, setting the new 2.1 version ID, 
> and subsequently upgraded hosts would settle on that version.
> When a 3.0 node upgrades and writes its own new-in-3.0 system tables, it'll 
> write the same tables that exist in the schema with brand new timestamps. As 
> written, this will cause all nodes in the cluster to change schema (to the 
> version with the newest timestamp). On a sufficiently large cluster with a 
> non-trivial schema, this could cause (literally) millions of migration tasks 
> to needlessly bounce across the cluster.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Comment Edited] (CASSANDRA-13441) Schema version changes for each upgraded node in a rolling upgrade, causing migration storms

2017-05-11 Thread JIRA

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13441?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16006613#comment-16006613
 ] 

Julius Žaromskis edited comment on CASSANDRA-13441 at 5/11/17 3:22 PM:
---

Hi, any workaround for this issue? I've hit this after upgrading from 3.0.9 to 
3.0.13 and running sstableupgrade. I noticed weird disk write patterns and started 
seeing migration tasks bouncing around. I've only managed to upgrade the first of 
the 3 nodes. Migration tasks stopped after I rebooted the first node.

{noformat}
Cluster Information:
Name: cloud.zaromskis.lt cluster
Snitch: org.apache.cassandra.locator.DynamicEndpointSnitch
Partitioner: org.apache.cassandra.dht.Murmur3Partitioner
Schema versions:
600b7268-d42a-3b72-8706-093b6c8cfaff: [10.240.0.6]
77a40699-8e9e-35aa-834e-68c32e40a45a: [10.240.0.7, 10.240.0.8]
{noformat}

{noformat}
Datacenter: dc1
===
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
--  Address Load   Tokens   Owns (effective)  Host ID   
Rack
UN  10.240.0.6  284.95 GB  256  63.4% 
d0d83d9d-0dec-45cd-9ca9-93515fa131f3  rack1
UN  10.240.0.7  288.53 GB  256  64.1% 
6d9709a0-0e10-46a1-9afa-d106b74ca9e0  rack1
UN  10.240.0.8  326.31 GB  256  72.5% 
5c969700-8bd9-49a4-9772-1284439f8364  rack1
{noformat}

The migrations are in fact executed: I can see on disk that new files are 
created every second in the system keyspace. Why doesn't the cluster settle on 
the same schema version then?

The schema version of the first node would not propagate to the other nodes. I'm 
afraid further upgrades might create new schema versions? I can't afford to lose 
any data. Any advice?




was (Author: juliuszaromskis):
Hi, any workaround for this issue? I've hit this after upgrading from 3.0.9 to 
3.0.13 and running sstableupgrade. I noticed weird disk write patterns and started 
seeing migration tasks bouncing around. I've only managed to upgrade the first of 
the 3 nodes. Migration tasks stopped after I rebooted the first node.

{noformat}
Cluster Information:
Name: cloud.zaromskis.lt cluster
Snitch: org.apache.cassandra.locator.DynamicEndpointSnitch
Partitioner: org.apache.cassandra.dht.Murmur3Partitioner
Schema versions:
600b7268-d42a-3b72-8706-093b6c8cfaff: [10.240.0.6]
77a40699-8e9e-35aa-834e-68c32e40a45a: [10.240.0.7, 10.240.0.8]
{noformat}

{noformat}
Datacenter: dc1
===
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
--  Address Load   Tokens   Owns (effective)  Host ID   
Rack
UN  10.240.0.6  284.95 GB  256  63.4% 
d0d83d9d-0dec-45cd-9ca9-93515fa131f3  rack1
UN  10.240.0.7  288.53 GB  256  64.1% 
6d9709a0-0e10-46a1-9afa-d106b74ca9e0  rack1
UN  10.240.0.8  326.31 GB  256  72.5% 
5c969700-8bd9-49a4-9772-1284439f8364  rack1
{noformat}

The migrations are in fact executed: I can see on disk that new files are 
created every second in the system keyspace. Why would the cluster settle on 
the same schema version then?

The schema version of the first node would not propagate to the other nodes. I'm 
afraid further upgrades might create new schema versions? I can't afford to lose 
any data. Any advice?



> Schema version changes for each upgraded node in a rolling upgrade, causing 
> migration storms
> 
>
> Key: CASSANDRA-13441
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13441
> Project: Cassandra
>  Issue Type: Bug
>  Components: Schema
>Reporter: Jeff Jirsa
>Assignee: Jeff Jirsa
> Fix For: 3.0.14, 3.11.0, 4.0
>
>
> In versions < 3.0, during a rolling upgrade (say 2.0 -> 2.1), the first node 
> to upgrade to 2.1 would add the new tables, setting the new 2.1 version ID, 
> and subsequently upgraded hosts would settle on that version.
> When a 3.0 node upgrades and writes its own new-in-3.0 system tables, it'll 
> write the same tables that exist in the schema with brand new timestamps. As 
> written, this will cause all nodes in the cluster to change schema (to the 
> version with the newest timestamp). On a sufficiently large cluster with a 
> non-trivial schema, this could cause (literally) millions of migration tasks 
> to needlessly bounce across the cluster.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Comment Edited] (CASSANDRA-13441) Schema version changes for each upgraded node in a rolling upgrade, causing migration storms

2017-05-11 Thread JIRA

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13441?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16006613#comment-16006613
 ] 

Julius Žaromskis edited comment on CASSANDRA-13441 at 5/11/17 3:22 PM:
---

Hi, any workaround for this issue? I've hit this after upgrading from 3.0.9 to 
3.0.13 and running sstableupgrade. I noticed weird disk write patterns and started 
seeing migration tasks bouncing around. I've only managed to upgrade the first of 
the 3 nodes. Migration tasks stopped after I rebooted the first node.

{noformat}
Cluster Information:
Name: cloud.zaromskis.lt cluster
Snitch: org.apache.cassandra.locator.DynamicEndpointSnitch
Partitioner: org.apache.cassandra.dht.Murmur3Partitioner
Schema versions:
600b7268-d42a-3b72-8706-093b6c8cfaff: [10.240.0.6]
77a40699-8e9e-35aa-834e-68c32e40a45a: [10.240.0.7, 10.240.0.8]
{noformat}

{noformat}
Datacenter: dc1
===
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
--  Address Load   Tokens   Owns (effective)  Host ID   
Rack
UN  10.240.0.6  284.95 GB  256  63.4% 
d0d83d9d-0dec-45cd-9ca9-93515fa131f3  rack1
UN  10.240.0.7  288.53 GB  256  64.1% 
6d9709a0-0e10-46a1-9afa-d106b74ca9e0  rack1
UN  10.240.0.8  326.31 GB  256  72.5% 
5c969700-8bd9-49a4-9772-1284439f8364  rack1
{noformat}

The migrations are in fact executed: I can see on disk that new files are 
created every second in the system keyspace. Why would the cluster settle on 
the same schema version then?

The schema version of the first node would not propagate to the other nodes. I'm 
afraid further upgrades might create new schema versions? I can't afford to lose 
any data. Any advice?




was (Author: juliuszaromskis):
Hi, any workaround for this issue? I've hit this after upgrading from 3.0.9 to 
3.0.13 and running sstableupgrade. I noticed weird disk write patterns and started 
seeing migration tasks bouncing around. I've only managed to upgrade the first of 
the 3 nodes. Migration tasks stopped after I rebooted the first node.

{noformat}
Cluster Information:
Name: cloud.zaromskis.lt cluster
Snitch: org.apache.cassandra.locator.DynamicEndpointSnitch
Partitioner: org.apache.cassandra.dht.Murmur3Partitioner
Schema versions:
600b7268-d42a-3b72-8706-093b6c8cfaff: [10.240.0.6]
77a40699-8e9e-35aa-834e-68c32e40a45a: [10.240.0.7, 10.240.0.8]
{noformat}

{noformat}
Datacenter: dc1
===
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
--  Address Load   Tokens   Owns (effective)  Host ID   
Rack
UN  10.240.0.6  284.95 GB  256  63.4% 
d0d83d9d-0dec-45cd-9ca9-93515fa131f3  rack1
UN  10.240.0.7  288.53 GB  256  64.1% 
6d9709a0-0e10-46a1-9afa-d106b74ca9e0  rack1
UN  10.240.0.8  326.31 GB  256  72.5% 
5c969700-8bd9-49a4-9772-1284439f8364  rack1
{noformat}

The schema version of the first node would not propagate to the other nodes. I'm 
afraid further upgrades might create new schema versions? I can't afford to lose 
any data. Any advice?

> Schema version changes for each upgraded node in a rolling upgrade, causing 
> migration storms
> 
>
> Key: CASSANDRA-13441
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13441
> Project: Cassandra
>  Issue Type: Bug
>  Components: Schema
>Reporter: Jeff Jirsa
>Assignee: Jeff Jirsa
> Fix For: 3.0.14, 3.11.0, 4.0
>
>
> In versions < 3.0, during a rolling upgrade (say 2.0 -> 2.1), the first node 
> to upgrade to 2.1 would add the new tables, setting the new 2.1 version ID, 
> and subsequently upgraded hosts would settle on that version.
> When a 3.0 node upgrades and writes its own new-in-3.0 system tables, it'll 
> write the same tables that exist in the schema with brand new timestamps. As 
> written, this will cause all nodes in the cluster to change schema (to the 
> version with the newest timestamp). On a sufficiently large cluster with a 
> non-trivial schema, this could cause (literally) millions of migration tasks 
> to needlessly bounce across the cluster.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-13523) StreamReceiveTask: java.lang.OutOfMemoryError: Map failed

2017-05-11 Thread Chris Lohfink (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13523?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16006620#comment-16006620
 ] 

Chris Lohfink commented on CASSANDRA-13523:
---

This isn't really a bug so much as an operational/config issue. Repairs and 
streaming are pretty memory intensive and will use all of your heap + some.

You should check your limits ({{ulimit -a}}) and the max mapped files setting 
({{cat /proc/sys/vm/max_map_count}}), but most likely "The system is out of 
physical RAM or swap space". You should also check your JVM options and Docker 
settings to make sure your system can handle the load you set it up with. Maybe 
decrease the heap size so your physical system can match what you're giving the 
JVM.

That swap use isn't a good thing either, fwiw.

> StreamReceiveTask: java.lang.OutOfMemoryError: Map failed
> -
>
> Key: CASSANDRA-13523
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13523
> Project: Cassandra
>  Issue Type: Bug
>  Components: Streaming and Messaging
> Environment: Ubuntu 14.04.5 LTS, Docker version 1.9.1, run as a 
> container, 4 core server with 16GB memory.
>Reporter: Matthew O'Riordan
>  Labels: bug, crash
> Fix For: 2.1.13
>
>
> During a nodetool repair -par on one of our keyspaces, Cassandra crashed due 
> to what seems like memory exhaustion within the JVM.  The machine itself had 
> plenty of available memory at the time and did not appear to be under any 
> significant load.
> In the system log, before the crash, the following was logged:
> {code}
> ...
> INFO  [AntiEntropySessions:55] 2017-05-10 18:18:20,627  RepairJob.java:163 - 
> [repair #0330beb0-35ad-11e7-b7e4-091830ac5256] requesting merkle trees for 
> stats_day_aggregates (to [/54.162.66.114, /54.236.226.76, /52.221.228.170, 
> /54.154.35.144, /54.154.96.213, /52.221.217.27])
> INFO  [ValidationExecutor:54] 2017-05-10 18:18:20,628  
> ColumnFamilyStore.java:905 - Enqueuing flush of stats_day_aggregates: 7018 
> (0%) on-heap, 0 (0%) off-heap
> INFO  [MemtableFlushWriter:13608] 2017-05-10 18:18:20,628  Memtable.java:347 
> - Writing Memtable-stats_day_aggregates@3469792(2.734KiB serialized bytes, 14 
> ops, 0%/0% of on/off-heap limit)
> INFO  [MemtableFlushWriter:13608] 2017-05-10 18:18:20,629  Memtable.java:382 
> - Completed flushing 
> /var/lib/cassandra/data/ably_production_0_stats/stats_day_aggregates-b6e29201e3d111e5bbf3091830ac5256/ably_production_0_stats-stats_day_aggregates-tmp-ka-43026-Data.db
>  (0.000KiB) for commitlog position ReplayPosition(segmentId=1491420635101, 
> position=24224955)
> INFO  [StreamReceiveTask:638] 2017-05-10 18:18:21,220  
> StreamResultFuture.java:180 - [Stream #0008cac1-35ad-11e7-b7e4-091830ac5256] 
> Session with /52.203.21.193 is complete
> INFO  [StreamReceiveTask:638] 2017-05-10 18:18:21,220  
> StreamResultFuture.java:212 - [Stream #0008cac1-35ad-11e7-b7e4-091830ac5256] 
> All sessions completed
> INFO  [StreamReceiveTask:638] 2017-05-10 18:18:21,221  
> StreamingRepairTask.java:96 - [repair #fe7e3320-35ac-11e7-b7e4-091830ac5256] 
> streaming task succeed, returning response to /52.221.217.27
> INFO  [CqlSlowLog-Writer-thread-0] 2017-05-10 18:18:26,230  
> CqlSlowLogWriter.java:151 - Recording statements with duration of 4844 in 
> slow log
> INFO  [Service Thread] 2017-05-10 18:18:26,233  GCInspector.java:258 - G1 Old 
> Generation GC in 4781ms.  G1 Eden Space: 131072 -> 0; G1 Old Gen: 
> 2774539816 -> 1830851216; G1 Survivor Space: 37748736 -> 0; 
> INFO  [AntiEntropyStage:1] 2017-05-10 18:18:26,237  RepairSession.java:171 - 
> [repair #0330beb0-35ad-11e7-b7e4-091830ac5256] Received merkle tree for 
> stats_day_aggregates from /54.162.66.114
> INFO  [StreamConnectionEstablisher:1] 2017-05-10 18:18:26,239  
> StreamCoordinator.java:209 - [Stream #0293bb60-35ad-11e7-b7e4-091830ac5256, 
> ID#0] Beginning stream session with /54.154.226.20
> INFO  [AntiEntropyStage:1] 2017-05-10 18:18:26,294  RepairSession.java:171 - 
> [repair #0330beb0-35ad-11e7-b7e4-091830ac5256] Received merkle tree for 
> stats_day_aggregates from /54.236.226.76
> INFO  [Service Thread] 2017-05-10 18:18:26,298  StatusLogger.java:51 - Pool 
> NameActive   Pending  Completed   Blocked  All Time 
> Blocked
> INFO  [AntiEntropyStage:1] 2017-05-10 18:18:26,344  RepairSession.java:171 - 
> [repair #0330beb0-35ad-11e7-b7e4-091830ac5256] Received merkle tree for 
> stats_day_aggregates from /54.154.35.144
> INFO  [CqlSlowLog-Writer-thread-0] 2017-05-10 18:18:26,344  
> CqlSlowLogWriter.java:151 - Recording statements with duration of 5035 in 
> slow log
> WARN  [GossipTasks:1] 2017-05-10 18:18:26,344  FailureDetector.java:258 - Not 
> marking nodes down due to local pause of 5109502584 > 50
> INFO  [AntiEntropyStage:1] 2017-05-10 18:18:26,344  

[jira] [Commented] (CASSANDRA-13441) Schema version changes for each upgraded node in a rolling upgrade, causing migration storms

2017-05-11 Thread JIRA

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13441?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16006613#comment-16006613
 ] 

Julius Žaromskis commented on CASSANDRA-13441:
--

Hi, any workaround for this issue? I've hit this after upgrading from 3.0.9 to 
3.0.13 and running sstableupgrade. I noticed weird disk write patterns and started 
seeing migration tasks bouncing around. I've only managed to upgrade the first of 
the 3 nodes. Migration tasks stopped after I rebooted the first node.

{noformat}
Cluster Information:
Name: cloud.zaromskis.lt cluster
Snitch: org.apache.cassandra.locator.DynamicEndpointSnitch
Partitioner: org.apache.cassandra.dht.Murmur3Partitioner
Schema versions:
600b7268-d42a-3b72-8706-093b6c8cfaff: [10.240.0.6]
77a40699-8e9e-35aa-834e-68c32e40a45a: [10.240.0.7, 10.240.0.8]
{noformat}

{noformat}
Datacenter: dc1
===
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
--  Address Load   Tokens   Owns (effective)  Host ID   
Rack
UN  10.240.0.6  284.95 GB  256  63.4% 
d0d83d9d-0dec-45cd-9ca9-93515fa131f3  rack1
UN  10.240.0.7  288.53 GB  256  64.1% 
6d9709a0-0e10-46a1-9afa-d106b74ca9e0  rack1
UN  10.240.0.8  326.31 GB  256  72.5% 
5c969700-8bd9-49a4-9772-1284439f8364  rack1
{noformat}

The schema version of the first node would not propagate to the other nodes. I'm 
afraid further upgrades might create new schema versions? I can't afford to lose 
any data. Any advice?

> Schema version changes for each upgraded node in a rolling upgrade, causing 
> migration storms
> 
>
> Key: CASSANDRA-13441
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13441
> Project: Cassandra
>  Issue Type: Bug
>  Components: Schema
>Reporter: Jeff Jirsa
>Assignee: Jeff Jirsa
> Fix For: 3.0.14, 3.11.0, 4.0
>
>
> In versions < 3.0, during a rolling upgrade (say 2.0 -> 2.1), the first node 
> to upgrade to 2.1 would add the new tables, setting the new 2.1 version ID, 
> and subsequently upgraded hosts would settle on that version.
> When a 3.0 node upgrades and writes its own new-in-3.0 system tables, it'll 
> write the same tables that exist in the schema with brand new timestamps. As 
> written, this will cause all nodes in the cluster to change schema (to the 
> version with the newest timestamp). On a sufficiently large cluster with a 
> non-trivial schema, this could cause (literally) millions of migration tasks 
> to needlessly bounce across the cluster.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-13426) Make all DDL statements idempotent and not dependent on global state

2017-05-11 Thread Aleksey Yeschenko (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13426?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16006594#comment-16006594
 ] 

Aleksey Yeschenko commented on CASSANDRA-13426:
---

Pushed a WIP branch [here|https://github.com/iamaleksey/cassandra/commits/13426]. 
A few statements still need to be swapped in, but they are already written and 
in place. Unit tests are currently passing; dtests haven't been run yet.

> Make all DDL statements idempotent and not dependent on global state
> 
>
> Key: CASSANDRA-13426
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13426
> Project: Cassandra
>  Issue Type: Sub-task
>  Components: Distributed Metadata
>Reporter: Aleksey Yeschenko
>Assignee: Aleksey Yeschenko
> Fix For: 4.0
>
>
> A follow-up to CASSANDRA-9425 and a pre-requisite for CASSANDRA-10699.
> It's necessary for the latter to be able to apply any DDL statement several 
> times without side-effects. As part of the ticket I think we should also 
> clean up validation logic and our error texts. One example is varying 
> treatment of missing keyspace for DROP TABLE/INDEX/etc. statements with IF 
> EXISTS.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Comment Edited] (CASSANDRA-10130) Node failure during 2i update after streaming can have incomplete 2i when restarted

2017-05-11 Thread Paulo Motta (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10130?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16006528#comment-16006528
 ] 

Paulo Motta edited comment on CASSANDRA-10130 at 5/11/17 2:44 PM:
--

The approach looks mostly good, great job! I was thinking we could perhaps 
merge the {{IndexInfo}} and {{indexes_to_rebuild}} tables into a single 
{{index_build_status}} table, which has the same schema as {{IndexInfo}} plus 
an additional {{status}} field with the following possible values and 
implications:
* {{not_built}} or not present: throws an unavailable exception when queried
* {{built}}: serves requests normally
* {{rebuilding}}: serves requests normally on an already started node; rebuilds 
on restart.
* {{needs_rebuild}}: serves requests normally on an already started node while 
printing a periodic message asking the user to run a manual rebuild; rebuilds 
on restart.

WDYT?

We would need to deprecate the {{IndexInfo}} table for a couple of major 
versions and also migrate values from {{IndexInfo}} to the 
{{index_build_status}} table though; see {{LegacyHintsMigrator}} and 
{{LegacyBatchlogMigrator}} - we should at some point generalize these 
migrations.
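
For illustration, the proposed states could be modelled like this (a hedged 
sketch; names assumed, not actual Cassandra code):

{code}
// Hypothetical per-index states and their effect on queries.
enum IndexBuildStatus
{
    NOT_BUILT,      // or no row present: index queries throw an unavailable exception
    BUILT,          // serves requests normally
    REBUILDING,     // serves requests on a running node; triggers a rebuild on restart
    NEEDS_REBUILD   // serves requests, periodically asks for a manual rebuild, rebuilds on restart
}
{code}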


was (Author: pauloricardomg):
The approach looks mostly good, great job! I was thinking we could perhaps 
merge the {{IndexInfo}} and {{indexes_to_rebuild}} tables into a single 
{{index_build_status}}, which has the same schema as {{indexes_to_rebuild}} 
plus an additional {{status}} field with the following possible values and 
implications:
* {{not_built}} or not present: throws an unavailable exception when queried
* {{built}}: serves requests normally
* {{rebuilding}}: serves requests normally on an already started node; rebuilds 
on restart.
* {{needs_rebuild}}: serves requests normally on an already started node while 
printing a periodic message asking the user to run a manual rebuild; rebuilds 
on restart.

WDYT?

We would need to deprecate the {{IndexInfo}} table for a couple of major 
versions and also migrate values from {{IndexInfo}} to the 
{{index_build_status}} table though; see {{LegacyHintsMigrator}} and 
{{LegacyBatchlogMigrator}} - we should at some point generalize these 
migrations.

> Node failure during 2i update after streaming can have incomplete 2i when 
> restarted
> ---
>
> Key: CASSANDRA-10130
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10130
> Project: Cassandra
>  Issue Type: Bug
>  Components: Coordination
>Reporter: Yuki Morishita
>Assignee: Andrés de la Peña
>Priority: Minor
>
> Since MV/2i update happens after SSTables are received, node failure during 
> MV/2i update can leave received SSTables live when restarted while MV/2i are 
> partially up to date.
> We can add some kind of tracking mechanism to automatically rebuild at the 
> startup, or at least warn user when the node restarts.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-8272) 2ndary indexes can return stale data

2017-05-11 Thread Sergio Bossa (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8272?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16006545#comment-16006545
 ] 

Sergio Bossa commented on CASSANDRA-8272:
-

bq. my main point is that on principle we should be careful to look at the 
whole solution before comparing it to alternatives and deciding which one we 
"stick with". I've seen simple solutions get pretty messy once you fix all edge 
cases to the point that it wasn't the best solution anymore.

Of course. And I did indeed give some thought of my own to the tombstones 
solution (as I'm sure [~adelapena] did as well), and I found it quite a bit more 
complex than the current one, with little/no gain in return and, something I 
didn't mention before, not really complete for indexes covering multiple 
columns, or if we ever want to support multiple indexes per row: in such cases, 
mixing tombstones and valid column values for all combinations would easily turn 
into a mess IMHO, while actually returning the row and post-filtering later is 
IMHO cleaner and less error prone. To be noted, we could still "skim" the row 
when we detect it's related to a stale entry and only keep the index-related 
columns (and easily add a merging step in the future for the multiple-index 
case): this would buy us the performance optimization you mentioned above, but I 
see it as slightly error prone and I'd rather go with a functionally complete 
solution first.

bq. It's in particular not true that fixing this bug will be "invalidated when 
filtering is applied"

I disagree here: if filtering is applied on top of index results, you'll still 
get wrong results, which is confusing to me (as a user). I understand filtering 
is also orthogonal, so what about fixing filtering (that is, moving to 
coordinator-side filtering) only when indexes are present?

bq. That [fixing other index implementations] I agree is something we should 
consider. Though tbh, I have doubts we can have a solution that is completely 
index agnostic. 

Of course. But we can still provide some API (i.e. the {{isSatisfiedBy()}} you 
mentioned) they can leverage. And if we do this kind of work on the 
SASI-enabled branches, we'll have two different index implementations to test 
the goodness of our API.

bq. One thing that hasn't been mentioned is that the fix has impact on 
upgrades. Namely, in a mixed cluster, some replica will start to return invalid 
results and if the coordinator isn't upgraded yet, it won't filter those, which 
means we'll return invalid entries.

Excellent point! And definitely something to avoid.

bq. That does mean we should consider starting to filter entries on index 
queries coordinator-side in 3.0/3.11 (even though we never return them), and 
only do the replica-side parts in 4.0, with a fat warning that you need to only 
upgrade to 4.0 from a 3.X version that has the coordinator-side fix.

Mmm... clunky. And error prone, as the 3.X code would probably be 
untestable. Couldn't the replica detect the coordinator version and return 
results accordingly?

bq. Worth noting that this doesn't entirely fly for indexes using custom 
expressions: we'd need to have them implement the 
CustomExpression#isSatisfiedBy method in 3.X in that scheme since we need it 
for the coordinator-side filtering as well, but making that method abstract in 
3.X is, as said above, a breaking change.

I'm not sure I get why you _have to_ make that abstract: I think it's fine to 
leave it as it is and warn users they'll have to override it on upgrade if they 
want consistent results. And for those implementations that can't implement it, 
we should maybe add an {{isConsistent}} predicate to disable "consistent 
filtering" altogether, as in the sketch below.

> 2ndary indexes can return stale data
> 
>
> Key: CASSANDRA-8272
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8272
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Sylvain Lebresne
>Assignee: Andrés de la Peña
> Fix For: 3.0.x
>
>
> When replicas return 2ndary index results, it's possible for a single replica 
> to return a stale result, and that result will be sent back to the user, 
> potentially failing the CL contract.
> For instance, consider 3 replicas A, B and C, and the following situation:
> {noformat}
> CREATE TABLE test (k int PRIMARY KEY, v text);
> CREATE INDEX ON test(v);
> INSERT INTO test(k, v) VALUES (0, 'foo');
> {noformat}
> with every replica up to date. Now, suppose that the following queries are 
> done at {{QUORUM}}:
> {noformat}
> UPDATE test SET v = 'bar' WHERE k = 0;
> SELECT * FROM test WHERE v = 'foo';
> {noformat}
> then, if A and B acknowledge the insert but C responds to the read before 
> having applied the insert, then the now-stale result will be returned (since 
> C will return it and A or B will return nothing).
> A potential solution would be that when we read a tombstone in the index (and 
> provided we make the index inherit the gcGrace of its parent CF), instead of 
> skipping that tombstone, we'd insert in the result a corresponding range 
> tombstone.

[jira] [Commented] (CASSANDRA-10130) Node failure during 2i update after streaming can have incomplete 2i when restarted

2017-05-11 Thread Paulo Motta (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10130?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16006528#comment-16006528
 ] 

Paulo Motta commented on CASSANDRA-10130:
-

The approach looks mostly good, great job! I was thinking we could perhaps 
merge the {{IndexInfo}} and {{indexes_to_rebuild}} tables into a single 
{{index_build_status}} table, which has the same schema as 
{{indexes_to_rebuild}} plus an additional {{status}} field with the following 
possible values and implications:
* {{not_built}} or not present: throws an unavailable exception when queried
* {{built}}: serves requests normally
* {{rebuilding}}: serves requests normally on an already-started node, rebuilds 
on restart.
* {{needs_rebuild}}: serves requests normally on an already-started node while 
printing a periodic message asking the user to run a manual rebuild, rebuilds 
on restart.

WDYT?

We would need to deprecate the {{IndexInfo}} table for a couple of major 
versions and also require a migration of values from the {{IndexInfo}} to the 
{{index_build_status}} table though, see {{LegacyHintsMigrator}} and 
{{LegacyBatchlogMigrator}} - we should at some point generalize these 
migrations.

> Node failure during 2i update after streaming can have incomplete 2i when 
> restarted
> ---
>
> Key: CASSANDRA-10130
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10130
> Project: Cassandra
>  Issue Type: Bug
>  Components: Coordination
>Reporter: Yuki Morishita
>Assignee: Andrés de la Peña
>Priority: Minor
>
> Since MV/2i updates happen after SSTables are received, a node failure during 
> the MV/2i update can leave the received SSTables live on restart while the 
> MV/2i are only partially up to date.
> We can add some kind of tracking mechanism to automatically rebuild at 
> startup, or at least warn the user when the node restarts.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-10130) Node failure during 2i update after streaming can have incomplete 2i when restarted

2017-05-11 Thread Paulo Motta (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10130?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Paulo Motta updated CASSANDRA-10130:

Status: Open  (was: Patch Available)

> Node failure during 2i update after streaming can have incomplete 2i when 
> restarted
> ---
>
> Key: CASSANDRA-10130
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10130
> Project: Cassandra
>  Issue Type: Bug
>  Components: Coordination
>Reporter: Yuki Morishita
>Assignee: Andrés de la Peña
>Priority: Minor
>
> Since MV/2i updates happen after SSTables are received, a node failure during 
> the MV/2i update can leave the received SSTables live on restart while the 
> MV/2i are only partially up to date.
> We can add some kind of tracking mechanism to automatically rebuild at 
> startup, or at least warn the user when the node restarts.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-13525) ReverseIndexedReader may drop rows during 2.1 to 3.0 upgrade

2017-05-11 Thread Sam Tunnicliffe (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13525?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16006512#comment-16006512
 ] 

Sam Tunnicliffe commented on CASSANDRA-13525:
-

Fixed the 3.0 -> 3.11 merge issue, so both branches are up now; I'll update 
with CI results shortly.
Trunk obviously isn't affected, as it has removed support for pre-3.0 sstables.

||branch||circle-ci||
|[13525-3.0|https://github.com/beobal/cassandra/tree/13525-3.0]|
|[13525-3.11|https://github.com/beobal/cassandra/tree/13525-3.11]|



> ReverseIndexedReader may drop rows during 2.1 to 3.0 upgrade
> 
>
> Key: CASSANDRA-13525
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13525
> Project: Cassandra
>  Issue Type: Bug
>  Components: Local Write-Read Paths
>Reporter: Sam Tunnicliffe
>Assignee: Sam Tunnicliffe
>
> During an upgrade from 2.1 (or 2.2) to 3.0 (or 3.x) queries which perform 
> reverse iteration may silently drop rows from their results. This can happen 
> before sstableupgrade is run and when the sstables are indexed.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[3/5] cassandra git commit: Merge branch 'cassandra-3.0' into cassandra-3.11

2017-05-11 Thread samt
Merge branch 'cassandra-3.0' into cassandra-3.11


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/3a064332
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/3a064332
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/3a064332

Branch: refs/heads/trunk
Commit: 3a06433299aae8f48b7a84ca6aa42e612dd9d2df
Parents: 8693357 263740d
Author: Sam Tunnicliffe 
Authored: Thu May 11 15:16:31 2017 +0100
Committer: Sam Tunnicliffe 
Committed: Thu May 11 15:16:31 2017 +0100

--

--



-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[4/5] cassandra git commit: Merge branch 'cassandra-3.0' into cassandra-3.11

2017-05-11 Thread samt
Merge branch 'cassandra-3.0' into cassandra-3.11


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/3a064332
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/3a064332
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/3a064332

Branch: refs/heads/cassandra-3.11
Commit: 3a06433299aae8f48b7a84ca6aa42e612dd9d2df
Parents: 8693357 263740d
Author: Sam Tunnicliffe 
Authored: Thu May 11 15:16:31 2017 +0100
Committer: Sam Tunnicliffe 
Committed: Thu May 11 15:16:31 2017 +0100

--

--



-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[5/5] cassandra git commit: Merge branch 'cassandra-3.11' into trunk

2017-05-11 Thread samt
Merge branch 'cassandra-3.11' into trunk


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/c43371bf
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/c43371bf
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/c43371bf

Branch: refs/heads/trunk
Commit: c43371bf8fb7e1ec761a37a5f9938df9731fac4e
Parents: 6deafff 3a06433
Author: Sam Tunnicliffe 
Authored: Thu May 11 15:18:00 2017 +0100
Committer: Sam Tunnicliffe 
Committed: Thu May 11 15:18:00 2017 +0100

--

--



-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[2/5] cassandra git commit: Fix 2ndary indexes on primary key columns so they don't create expiring entries (CASSANDRA-13412)

2017-05-11 Thread samt
Fix 2ndary indexes on primary key columns so they don't create expiring entries 
(CASSANDRA-13412)


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/263740da
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/263740da
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/263740da

Branch: refs/heads/trunk
Commit: 263740daa4c8162a157aa6fbb97793f158d142d1
Parents: 415d06b
Author: adelapena 
Authored: Fri Apr 21 09:33:16 2017 +0100
Committer: adelapena 
Committed: Wed May 10 11:55:45 2017 +0100

--
 .../validation/entities/SecondaryIndexTest.java | 42 
 1 file changed, 42 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/263740da/test/unit/org/apache/cassandra/cql3/validation/entities/SecondaryIndexTest.java
--
diff --git 
a/test/unit/org/apache/cassandra/cql3/validation/entities/SecondaryIndexTest.java
 
b/test/unit/org/apache/cassandra/cql3/validation/entities/SecondaryIndexTest.java
index 8376652..1a1b881 100644
--- 
a/test/unit/org/apache/cassandra/cql3/validation/entities/SecondaryIndexTest.java
+++ 
b/test/unit/org/apache/cassandra/cql3/validation/entities/SecondaryIndexTest.java
@@ -1231,4 +1231,46 @@ public class SecondaryIndexTest extends CQLTester
             return super.getInvalidateTask();
         }
     }
+
+    @Test
+    public void testIndexOnPartitionKeyInsertExpiringColumn() throws Throwable
+    {
+        createTable("CREATE TABLE %s (k1 int, k2 int, a int, b int, PRIMARY KEY ((k1, k2)))");
+        createIndex("CREATE INDEX on %s(k1)");
+        execute("INSERT INTO %s (k1, k2, a, b) VALUES (1, 2, 3, 4)");
+        assertRows(execute("SELECT * FROM %s WHERE k1 = 1"), row(1, 2, 3, 4));
+        execute("UPDATE %s USING TTL 1 SET b = 10 WHERE k1 = 1 AND k2 = 2");
+        Thread.sleep(1000);
+        assertRows(execute("SELECT * FROM %s WHERE k1 = 1"), row(1, 2, 3, null));
+    }
+
+    @Test
+    public void testIndexOnClusteringKeyInsertExpiringColumn() throws Throwable
+    {
+        createTable("CREATE TABLE %s (pk int, ck int, a int, b int, PRIMARY KEY (pk, ck))");
+        createIndex("CREATE INDEX on %s(ck)");
+        execute("INSERT INTO %s (pk, ck, a, b) VALUES (1, 2, 3, 4)");
+        assertRows(execute("SELECT * FROM %s WHERE ck = 2"), row(1, 2, 3, 4));
+        execute("UPDATE %s USING TTL 1 SET b = 10 WHERE pk = 1 AND ck = 2");
+        Thread.sleep(1000);
+        assertRows(execute("SELECT * FROM %s WHERE ck = 2"), row(1, 2, 3, null));
+    }
+
+    @Test
+    public void testIndexOnRegularColumnInsertExpiringColumn() throws Throwable
+    {
+        createTable("CREATE TABLE %s (pk int, ck int, a int, b int, PRIMARY KEY (pk, ck))");
+        createIndex("CREATE INDEX on %s(a)");
+        execute("INSERT INTO %s (pk, ck, a, b) VALUES (1, 2, 3, 4)");
+        assertRows(execute("SELECT * FROM %s WHERE a = 3"), row(1, 2, 3, 4));
+
+        execute("UPDATE %s USING TTL 1 SET b = 10 WHERE pk = 1 AND ck = 2");
+        Thread.sleep(1000);
+        assertRows(execute("SELECT * FROM %s WHERE a = 3"), row(1, 2, 3, null));
+
+        execute("UPDATE %s USING TTL 1 SET a = 5 WHERE pk = 1 AND ck = 2");
+        Thread.sleep(1000);
+        assertEmpty(execute("SELECT * FROM %s WHERE a = 3"));
+        assertEmpty(execute("SELECT * FROM %s WHERE a = 5"));
+    }
 }


-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[1/5] cassandra git commit: Fix 2ndary indexes on primary key columns so they don't create expiring entries (CASSANDRA-13412)

2017-05-11 Thread samt
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-3.11 869335710 -> 3a0643329
  refs/heads/trunk 6deafff5a -> c43371bf8


Fix 2ndary indexes on primary key columns so they don't create expiring entries 
(CASSANDRA-13412)


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/263740da
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/263740da
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/263740da

Branch: refs/heads/cassandra-3.11
Commit: 263740daa4c8162a157aa6fbb97793f158d142d1
Parents: 415d06b
Author: adelapena 
Authored: Fri Apr 21 09:33:16 2017 +0100
Committer: adelapena 
Committed: Wed May 10 11:55:45 2017 +0100

--
 .../validation/entities/SecondaryIndexTest.java | 42 
 1 file changed, 42 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/263740da/test/unit/org/apache/cassandra/cql3/validation/entities/SecondaryIndexTest.java
--
diff --git 
a/test/unit/org/apache/cassandra/cql3/validation/entities/SecondaryIndexTest.java
 
b/test/unit/org/apache/cassandra/cql3/validation/entities/SecondaryIndexTest.java
index 8376652..1a1b881 100644
--- 
a/test/unit/org/apache/cassandra/cql3/validation/entities/SecondaryIndexTest.java
+++ 
b/test/unit/org/apache/cassandra/cql3/validation/entities/SecondaryIndexTest.java
@@ -1231,4 +1231,46 @@ public class SecondaryIndexTest extends CQLTester
             return super.getInvalidateTask();
         }
     }
+
+    @Test
+    public void testIndexOnPartitionKeyInsertExpiringColumn() throws Throwable
+    {
+        createTable("CREATE TABLE %s (k1 int, k2 int, a int, b int, PRIMARY KEY ((k1, k2)))");
+        createIndex("CREATE INDEX on %s(k1)");
+        execute("INSERT INTO %s (k1, k2, a, b) VALUES (1, 2, 3, 4)");
+        assertRows(execute("SELECT * FROM %s WHERE k1 = 1"), row(1, 2, 3, 4));
+        execute("UPDATE %s USING TTL 1 SET b = 10 WHERE k1 = 1 AND k2 = 2");
+        Thread.sleep(1000);
+        assertRows(execute("SELECT * FROM %s WHERE k1 = 1"), row(1, 2, 3, null));
+    }
+
+    @Test
+    public void testIndexOnClusteringKeyInsertExpiringColumn() throws Throwable
+    {
+        createTable("CREATE TABLE %s (pk int, ck int, a int, b int, PRIMARY KEY (pk, ck))");
+        createIndex("CREATE INDEX on %s(ck)");
+        execute("INSERT INTO %s (pk, ck, a, b) VALUES (1, 2, 3, 4)");
+        assertRows(execute("SELECT * FROM %s WHERE ck = 2"), row(1, 2, 3, 4));
+        execute("UPDATE %s USING TTL 1 SET b = 10 WHERE pk = 1 AND ck = 2");
+        Thread.sleep(1000);
+        assertRows(execute("SELECT * FROM %s WHERE ck = 2"), row(1, 2, 3, null));
+    }
+
+    @Test
+    public void testIndexOnRegularColumnInsertExpiringColumn() throws Throwable
+    {
+        createTable("CREATE TABLE %s (pk int, ck int, a int, b int, PRIMARY KEY (pk, ck))");
+        createIndex("CREATE INDEX on %s(a)");
+        execute("INSERT INTO %s (pk, ck, a, b) VALUES (1, 2, 3, 4)");
+        assertRows(execute("SELECT * FROM %s WHERE a = 3"), row(1, 2, 3, 4));
+
+        execute("UPDATE %s USING TTL 1 SET b = 10 WHERE pk = 1 AND ck = 2");
+        Thread.sleep(1000);
+        assertRows(execute("SELECT * FROM %s WHERE a = 3"), row(1, 2, 3, null));
+
+        execute("UPDATE %s USING TTL 1 SET a = 5 WHERE pk = 1 AND ck = 2");
+        Thread.sleep(1000);
+        assertEmpty(execute("SELECT * FROM %s WHERE a = 3"));
+        assertEmpty(execute("SELECT * FROM %s WHERE a = 5"));
+    }
 }


-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-8272) 2ndary indexes can return stale data

2017-05-11 Thread Sylvain Lebresne (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8272?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16006428#comment-16006428
 ] 

Sylvain Lebresne commented on CASSANDRA-8272:
-

bq. 1) Stick with the current approach! It's good and I do not think using 
special tombstones would buy us anything.

It would solve the {{LIMIT}} issue, for instance. It's also theoretically a bit 
more efficient, as it won't ship full rows to the coordinator that it ends up 
discarding.

Not that I'm suggesting we use special tombstones: thinking about it more, I 
think it's actually broken in some cases where we delete/re-insert an entry in 
rapid succession. Basically, the deletion could end up deleting a valid entry 
post-re-insert if one of the nodes hasn't seen that re-insert yet.

Besides, the {{LIMIT}} issue isn't that hard to fix, and the performance impact 
is unlikely to be big in most cases. But my main point is that on principle we 
should be careful to look at the whole solution before comparing it to 
alternatives and deciding which one we "stick with". I've seen simple solutions 
get pretty messy once all the edge cases were fixed, to the point that they 
weren't the best solution anymore.

bq. 3) Generalize the approach so we can fix filtering

I wouldn't make that a requirement to commit this ticket. Filtering is 
genuinely orthogonal to indexing: it's as valid to use with and without 
indexing, and the code for both is mainly orthogonal. It's in particular not 
true that fixing this bug will be "invalidated when filtering is applied", 
especially when 2i is used but filtering isn't (hopefully the most common case 
in production since filtering is what it is).

Don't get me wrong, both problems are certainly related, in that the underlying 
problem is that replicas can return stale data while up-to-date replicas don't 
return anything to indicate that it is stale.

But in the indexing case, we have more information: we have the index 
tombstones to know which entries might be stale on another node, so we know *a 
relatively minimal* set of info (rows) to return to "fix" potentially stale 
entries from other replicas.

In the filtering case, we don't have similar information to help\[1\]: the 
equivalent of the solution from [~adelapena]'s patch would be to say that when 
a replica sees a row it should filter out, it still returns it in case it may 
be needed to "fix" a stale replica. But doing so amounts exactly to not doing 
any filtering replica-side and moving it all coordinator-side, which is what 
I'm suggesting in CASSANDRA-8273.

In other words, this ticket has a mostly-replica-side based solution but the 
filtering one probably doesn't. That's enough difference imo to not wed 
ourselves to fixing both problems in the same ticket, and to keep discussion 
around filtering on CASSANDRA-8273 (doesn't mean we can't cross-reference of 
course when appropriate).

bq. and any other indexing implementation (most notably SASI)

That I agree is something we should consider. Though tbh, I have doubts we can 
have a solution that is completely index agnostic. What we need is for indexes 
to return any entries that _were_ recently valid but aren't anymore (up to 
gc_grace) so that the result from any replica having the old entry can be 
properly reconciled and skipped. That has to be something the index 
implementation itself provides, and the best we can do is simply specify that 
index implementations have to do that to be correct. Though I suspect not all 
custom index implementations will be able to provide that at all in practice.
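
Spelling that requirement out as a hypothetical contract (all names here are 
made up for illustration, this is not a proposed API):

{code}
// Hypothetical sketch: an index able to surface entries that matched
// recently (within gc_grace) but no longer do, so a replica can return them
// and the coordinator can reconcile away stale results from other replicas.
interface StaleEntryAwareIndex<E>
{
    /** Entries currently matching the queried value. */
    Iterable<E> liveEntries(Object queriedValue);

    /** Entries that matched within the last gcGraceSeconds but no longer do. */
    Iterable<E> recentlyInvalidatedEntries(Object queriedValue, int gcGraceSeconds);
}
{code}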

Don't know about SASI in particular though. I assume it kind of has to keep 
tombstones for old entries in the first place (not sure how it handles deletes 
otherwise) and if so, we should certainly update it to implement the new 
requirement described above.

There is the question of the {{LIMIT}} problem. I think the proper way to fix 
that is to create a new {{DataLimits}} that keeps the {{RowFilter}} around and 
doesn't count entries that don't match it (which will have a small cost btw, 
but I don't see an easy way out). In that case, any "normal" CQL expression 
will be covered, and that will include SASI for this particular part. For 
custom expressions however, we currently have 
{{RowFilter.CustomExpression#isSatisfiedBy()}} always return {{true}}, so we'd 
have to make that method abstract and require indexes using custom expressions 
to implement it (which is, strictly speaking, a breaking change and implies 4.0 
at this point; more on that below).
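
To sketch that counting change (simplified and self-contained; the 
{{Predicate}} stands in for whatever hook the new {{DataLimits}} would get to 
the {{RowFilter}}):

{code}
import java.util.function.Predicate;

// Simplified sketch, not the real DataLimits: a LIMIT counter that only
// counts rows matching the filter, so rows returned solely to "fix" stale
// replica entries don't consume the user's limit.
class FilteringRowCounter<R>
{
    private final Predicate<R> matchesFilter;
    private final int limit;
    private int counted = 0;

    FilteringRowCounter(Predicate<R> matchesFilter, int limit)
    {
        this.matchesFilter = matchesFilter;
        this.limit = limit;
    }

    /** Count only matching rows; return true while more rows are wanted. */
    boolean onRow(R row)
    {
        if (matchesFilter.test(row))
            counted++;
        return counted < limit;
    }
}
{code}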


bq. I'd also suggest to only apply it to 3.11 onwards

One thing that hasn't been mentioned is that the fix has an impact on upgrades. 
Namely, in a mixed cluster, some replicas will start to return invalid results 
and if the coordinator isn't upgraded yet, it won't filter those, which means 
we'll return invalid entries. More precisely, we may return up-to-date 

[jira] [Commented] (CASSANDRA-8272) 2ndary indexes can return stale data

2017-05-11 Thread Sergio Bossa (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8272?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16006297#comment-16006297
 ] 

Sergio Bossa commented on CASSANDRA-8272:
-

One more thing I'm realizing we should fix is paging, which seems broken in a 
similar way to limiting: that is, if the page size is less than the number of 
mismatched rows we might end up going through "empty" pages until we get to the 
valid results.

> 2ndary indexes can return stale data
> 
>
> Key: CASSANDRA-8272
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8272
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Sylvain Lebresne
>Assignee: Andrés de la Peña
> Fix For: 3.0.x
>
>
> When replicas return 2ndary index results, it's possible for a single replica 
> to return a stale result, and that result will be sent back to the user, 
> potentially failing the CL contract.
> For instance, consider 3 replicas A, B and C, and the following situation:
> {noformat}
> CREATE TABLE test (k int PRIMARY KEY, v text);
> CREATE INDEX ON test(v);
> INSERT INTO test(k, v) VALUES (0, 'foo');
> {noformat}
> with every replica up to date. Now, suppose that the following queries are 
> done at {{QUORUM}}:
> {noformat}
> UPDATE test SET v = 'bar' WHERE k = 0;
> SELECT * FROM test WHERE v = 'foo';
> {noformat}
> then, if A and B acknowledge the insert but C responds to the read before 
> having applied the insert, then the now-stale result will be returned (since 
> C will return it and A or B will return nothing).
> A potential solution would be that when we read a tombstone in the index (and 
> provided we make the index inherit the gcGrace of its parent CF), instead of 
> skipping that tombstone, we'd insert in the result a corresponding range 
> tombstone.  



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-13525) ReverseIndexedReader may drop rows during 2.1 to 3.0 upgrade

2017-05-11 Thread Sam Tunnicliffe (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13525?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sam Tunnicliffe updated CASSANDRA-13525:

Reproduced In: 3.10, 3.0.13  (was: 3.0.13, 3.10)
   Status: Patch Available  (was: Open)


The problem occurs when a row spans a block in the row index. When deciding 
whether to continue reading from disk or move to the next index block, 
{{UnfilteredDeserializer.OldFormatDeserializer}} accounts for the fact that 
{{hasNext()}} has bumped the file pointer past the end of the last consumed 
{{Unfiltered}}. But it doesn't account for reading the legacy atoms from disk 
to feed the unfiltered iterator. The legacy atom iterator actually reads the 
first atom of the {{Unfiltered}} *after* the next one, so that needs to be 
included when calculating the {{lastConsumedPosition}}, which in turn 
determines whether the current index block has been exhausted. This only 
affects the reverse iterator, as its mitigation for rows which cross index 
block boundaries is more complex than the forward iterator's.
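
Put schematically (purely illustrative, not the actual deserializer code):

{code}
// Illustrative only: the block-exhaustion check must include the read-ahead,
// because the legacy atom iterator has already consumed the first atom of
// the following unfiltered from disk.
final class BlockBoundaryCheck
{
    static boolean indexBlockExhausted(long positionAfterLastUnfiltered,
                                       long readAheadIntoFollowingAtom,
                                       long indexBlockEnd)
    {
        long lastConsumedPosition = positionAfterLastUnfiltered + readAheadIntoFollowingAtom;
        return lastConsumedPosition >= indexBlockEnd;
    }
}
{code}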

For 3.0, I've pushed [a 
branch|https://github.com/beobal/cassandra/tree/13525-3.0] with a fix and unit 
test. It also adds a unit test for CASSANDRA-13236 (there's a [dtest 
PR|https://github.com/riptano/cassandra-dtest/pull/1469] open already); this 
issue was discovered during the investigation into that one, but I struggled to 
figure out a decent unit test for it at the time. 

The 3.11 branch should be pretty much identical, but merging 3.0 -> 3.11 is 
broken at the moment (looks like 
[263740daa4|https://github.com/apache/cassandra/commit/263740daa4c8162a157aa6fbb97793f158d142d1]
 needs to be merged with {{-s ours}}, but I'll double check). When that's fixed 
I'll push a 3.11 branch.


> ReverseIndexedReader may drop rows during 2.1 to 3.0 upgrade
> 
>
> Key: CASSANDRA-13525
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13525
> Project: Cassandra
>  Issue Type: Bug
>  Components: Local Write-Read Paths
>Reporter: Sam Tunnicliffe
>Assignee: Sam Tunnicliffe
>
> During an upgrade from 2.1 (or 2.2) to 3.0 (or 3.x) queries which perform 
> reverse iteration may silently drop rows from their results. This can happen 
> before sstableupgrade is run and when the sstables are indexed.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Created] (CASSANDRA-13525) ReverseIndexedReader may drop rows during 2.1 to 3.0 upgrade

2017-05-11 Thread Sam Tunnicliffe (JIRA)
Sam Tunnicliffe created CASSANDRA-13525:
---

 Summary: ReverseIndexedReader may drop rows during 2.1 to 3.0 
upgrade
 Key: CASSANDRA-13525
 URL: https://issues.apache.org/jira/browse/CASSANDRA-13525
 Project: Cassandra
  Issue Type: Bug
  Components: Local Write-Read Paths
Reporter: Sam Tunnicliffe
Assignee: Sam Tunnicliffe


During an upgrade from 2.1 (or 2.2) to 3.0 (or 3.x) queries which perform 
reverse iteration may silently drop rows from their results. This can happen 
before sstableupgrade is run and when the sstables are indexed.




--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-8272) 2ndary indexes can return stale data

2017-05-11 Thread Sergio Bossa (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8272?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16006245#comment-16006245
 ] 

Sergio Bossa commented on CASSANDRA-8272:
-

[~adelapena], I gave a first review pass and the approach looks sensible, so +1 
on that.

Unfortunately, the problem is actually quite subtle and there are at least a 
couple of cases where it doesn't fully work.

First of all, when a {{LIMIT}} clause is provided, the query might return no 
results when there actually are some valid ones: this is because the rows 
returned as a result of an "index mismatch" are still counted against the limit 
(by {{CQLCounter}}), which means the coordinator might end up with fewer valid 
rows than the requested limit, simply because some replicas returned only 
mismatched rows. Here's a simple scenario with two nodes (sketched in CQL 
below):
1) Write row {{key=1,index=1}}.
2) Write row {{key=2,index=1}}.
3) Shutdown node 2.
4) Delete column {{index}} from row {{key=1}}: the delete will go to node 1, 
while node 2 will miss it.
5) Restart node 2 (hints need to be disabled).
6) Query for {{index=1}}.
7) Node 1 will return the first row found, i.e. the "mismatched" one {{key=1}}.
8) Node 2 will return the "missed delete" with {{key=1}}.
9) The coordinator will merge/post-process the rows, realize there's a mismatch 
and return no results, while it should instead have returned {{key=2}}.
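
For reference, the same scenario in CQL (schema and names are illustrative; 
the {{LIMIT}} makes the counting problem visible):

{code}
CREATE TABLE t (k int PRIMARY KEY, v int);
CREATE INDEX ON t (v);
INSERT INTO t (k, v) VALUES (1, 1);   -- step 1
INSERT INTO t (k, v) VALUES (2, 1);   -- step 2
-- stop node 2, then:
DELETE v FROM t WHERE k = 1;          -- step 4: applied on node 1 only
-- restart node 2 (hints disabled), then:
SELECT * FROM t WHERE v = 1 LIMIT 1;  -- steps 6-9: node 1's mismatched row
                                      -- consumes the limit, so k=2 is lost
{code}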

Second, this patch doesn't fix filtering; while it's true we have a different 
issue for that ({{CASSANDRA-8273}}), and while we could argue filtering isn't 
exactly a form of indexing, it is still used in conjunction with indexing, and 
fixing indexing just to have its results invalidated when filtering is applied 
seems quite confusing to me.

In the end, I'd suggest the following:
1) Stick with the current approach! It's good and I do not think using special 
tombstones would buy us anything.
2) Fix the first problem above.
3) Generalize the approach so we can fix filtering and any other indexing 
implementation (most notably SASI).
4) To ease the burden of porting between versions, and given this is not a 
trivial bug fix at all, I'd also suggest to only apply it to 3.11 onwards.

Thoughts?

> 2ndary indexes can return stale data
> 
>
> Key: CASSANDRA-8272
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8272
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Sylvain Lebresne
>Assignee: Andrés de la Peña
> Fix For: 3.0.x
>
>
> When replicas return 2ndary index results, it's possible for a single replica 
> to return a stale result, and that result will be sent back to the user, 
> potentially failing the CL contract.
> For instance, consider 3 replicas A, B and C, and the following situation:
> {noformat}
> CREATE TABLE test (k int PRIMARY KEY, v text);
> CREATE INDEX ON test(v);
> INSERT INTO test(k, v) VALUES (0, 'foo');
> {noformat}
> with every replica up to date. Now, suppose that the following queries are 
> done at {{QUORUM}}:
> {noformat}
> UPDATE test SET v = 'bar' WHERE k = 0;
> SELECT * FROM test WHERE v = 'foo';
> {noformat}
> then, if A and B acknowledge the insert but C responds to the read before 
> having applied the insert, then the now-stale result will be returned (since 
> C will return it and A or B will return nothing).
> A potential solution would be that when we read a tombstone in the index (and 
> provided we make the index inherit the gcGrace of its parent CF), instead of 
> skipping that tombstone, we'd insert in the result a corresponding range 
> tombstone.  



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-13523) StreamReceiveTask: java.lang.OutOfMemoryError: Map failed

2017-05-11 Thread Matthew O'Riordan (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13523?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16006236#comment-16006236
 ] 

Matthew O'Riordan commented on CASSANDRA-13523:
---

Note: rerunning the same repair caused Cassandra to exit again.

> StreamReceiveTask: java.lang.OutOfMemoryError: Map failed
> -
>
> Key: CASSANDRA-13523
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13523
> Project: Cassandra
>  Issue Type: Bug
>  Components: Streaming and Messaging
> Environment: Ubuntu 14.04.5 LTS, Docker version 1.9.1, run as a 
> container, 4 core server with 16GB memory.
>Reporter: Matthew O'Riordan
>  Labels: bug, crash
> Fix For: 2.1.13
>
>
> During a nodetool repair -par on one of our keyspaces, Cassandra crashed due 
> to what seems like memory exhaustion within the JVM.  The machine itself had 
> plenty of available memory at the time and did not appear to be under any 
> significant load.
> In the system log, before the crash, the following was logged:
> {code}
> ...
> INFO  [AntiEntropySessions:55] 2017-05-10 18:18:20,627  RepairJob.java:163 - 
> [repair #0330beb0-35ad-11e7-b7e4-091830ac5256] requesting merkle trees for 
> stats_day_aggregates (to [/54.162.66.114, /54.236.226.76, /52.221.228.170, 
> /54.154.35.144, /54.154.96.213, /52.221.217.27])
> INFO  [ValidationExecutor:54] 2017-05-10 18:18:20,628  
> ColumnFamilyStore.java:905 - Enqueuing flush of stats_day_aggregates: 7018 
> (0%) on-heap, 0 (0%) off-heap
> INFO  [MemtableFlushWriter:13608] 2017-05-10 18:18:20,628  Memtable.java:347 
> - Writing Memtable-stats_day_aggregates@3469792(2.734KiB serialized bytes, 14 
> ops, 0%/0% of on/off-heap limit)
> INFO  [MemtableFlushWriter:13608] 2017-05-10 18:18:20,629  Memtable.java:382 
> - Completed flushing 
> /var/lib/cassandra/data/ably_production_0_stats/stats_day_aggregates-b6e29201e3d111e5bbf3091830ac5256/ably_production_0_stats-stats_day_aggregates-tmp-ka-43026-Data.db
>  (0.000KiB) for commitlog position ReplayPosition(segmentId=1491420635101, 
> position=24224955)
> INFO  [StreamReceiveTask:638] 2017-05-10 18:18:21,220  
> StreamResultFuture.java:180 - [Stream #0008cac1-35ad-11e7-b7e4-091830ac5256] 
> Session with /52.203.21.193 is complete
> INFO  [StreamReceiveTask:638] 2017-05-10 18:18:21,220  
> StreamResultFuture.java:212 - [Stream #0008cac1-35ad-11e7-b7e4-091830ac5256] 
> All sessions completed
> INFO  [StreamReceiveTask:638] 2017-05-10 18:18:21,221  
> StreamingRepairTask.java:96 - [repair #fe7e3320-35ac-11e7-b7e4-091830ac5256] 
> streaming task succeed, returning response to /52.221.217.27
> INFO  [CqlSlowLog-Writer-thread-0] 2017-05-10 18:18:26,230  
> CqlSlowLogWriter.java:151 - Recording statements with duration of 4844 in 
> slow log
> INFO  [Service Thread] 2017-05-10 18:18:26,233  GCInspector.java:258 - G1 Old 
> Generation GC in 4781ms.  G1 Eden Space: 131072 -> 0; G1 Old Gen: 
> 2774539816 -> 1830851216; G1 Survivor Space: 37748736 -> 0; 
> INFO  [AntiEntropyStage:1] 2017-05-10 18:18:26,237  RepairSession.java:171 - 
> [repair #0330beb0-35ad-11e7-b7e4-091830ac5256] Received merkle tree for 
> stats_day_aggregates from /54.162.66.114
> INFO  [StreamConnectionEstablisher:1] 2017-05-10 18:18:26,239  
> StreamCoordinator.java:209 - [Stream #0293bb60-35ad-11e7-b7e4-091830ac5256, 
> ID#0] Beginning stream session with /54.154.226.20
> INFO  [AntiEntropyStage:1] 2017-05-10 18:18:26,294  RepairSession.java:171 - 
> [repair #0330beb0-35ad-11e7-b7e4-091830ac5256] Received merkle tree for 
> stats_day_aggregates from /54.236.226.76
> INFO  [Service Thread] 2017-05-10 18:18:26,298  StatusLogger.java:51 - Pool 
> Name   Active   Pending  Completed   Blocked  All Time 
> Blocked
> INFO  [AntiEntropyStage:1] 2017-05-10 18:18:26,344  RepairSession.java:171 - 
> [repair #0330beb0-35ad-11e7-b7e4-091830ac5256] Received merkle tree for 
> stats_day_aggregates from /54.154.35.144
> INFO  [CqlSlowLog-Writer-thread-0] 2017-05-10 18:18:26,344  
> CqlSlowLogWriter.java:151 - Recording statements with duration of 5035 in 
> slow log
> WARN  [GossipTasks:1] 2017-05-10 18:18:26,344  FailureDetector.java:258 - Not 
> marking nodes down due to local pause of 5109502584 > 50
> INFO  [AntiEntropyStage:1] 2017-05-10 18:18:26,344  RepairSession.java:171 - 
> [repair #0330beb0-35ad-11e7-b7e4-091830ac5256] Received merkle tree for 
> stats_day_aggregates from /54.154.96.213
> INFO  [AntiEntropyStage:1] 2017-05-10 18:18:26,344  RepairSession.java:171 - 
> [repair #0330beb0-35ad-11e7-b7e4-091830ac5256] Received merkle tree for 
> stats_day_aggregates from /52.221.228.170
> INFO  [Service Thread] 2017-05-10 18:18:26,345  StatusLogger.java:66 - 
> MutationStage 0 0 1199223432 0
>  0
> 

[jira] [Resolved] (CASSANDRA-13524) cassandra core connector - Guava incompatibility: Detected incompatible version of Guava in the classpath. You need 16.0.1 or higher

2017-05-11 Thread Michael Hornung (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13524?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Hornung resolved CASSANDRA-13524.
-
Resolution: Fixed

> cassandra core connector - Guava incompatibility: Detected incompatible 
> version of Guava in the classpath. You need 16.0.1 or higher
> 
>
> Key: CASSANDRA-13524
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13524
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Michael Hornung
> Attachments: build.sbt, error_log.txt
>
>
> Hello,
> My application is an Akka HTTP microservice which wants to access a 
> Cassandra database table from Scala.
> Therefore I included this dependency in SBT:
>  "com.datastax.cassandra" % "cassandra-driver-core"   % "3.2.0"
> In my Scala file I have this code:
> --
> ….
> import com.datastax.driver.core.Cluster
> import com.google.common.util.concurrent._
> ….
> val cassandraHost = "localhost"
> val keyStore = "data4service"
> //setup cassandra
> val cluster = {
> Cluster.builder()
>   .addContactPoint(cassandraHost)
>   .withCredentials("user", "password")
>   .build()
>   }
> //connect to the cassandra keyspace
> val session = cluster.connect(keyStore)
> val product = "123e4567-e89b-12d3-a456-426655440003"
> val select = s"SELECT quantity FROM stock WHERE product = $product"
> val result = session.execute(select)   
> ….
> --
> During the build, Guava version 19.0 is downloaded automatically.
> Unfortunately, when I run my application I get this error at runtime: 
> --
> com.datastax.driver.core.exceptions.DriverInternalError: Detected 
> incompatible version of Guava in the classpath. You need 16.0.1 or higher.
> at 
> com.datastax.driver.core.GuavaCompatibility.selectImplementation(GuavaCompatibility.java:138)
> at 
> com.datastax.driver.core.GuavaCompatibility.(GuavaCompatibility.java:52)
> --
> That is not logical because Guava 19.0 is on the system. Can you help me?
> Kind regards,
> Michael



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-13218) Duration validation error is unclear in case of overflow.

2017-05-11 Thread Benjamin Lerer (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13218?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Benjamin Lerer updated CASSANDRA-13218:
---
Fix Version/s: (was: 3.11.x)
   4.0
   3.11.0

> Duration validation error is unclear in case of overflow.
> -
>
> Key: CASSANDRA-13218
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13218
> Project: Cassandra
>  Issue Type: Bug
>  Components: CQL
>Reporter: Benjamin Lerer
>Assignee: Benjamin Lerer
> Fix For: 3.11.0, 4.0
>
>
> If a user tries to insert a {{duration}} with a number of months or days that 
> cannot fit in an {{int}} (for example: {{9223372036854775807mo1d}}), the 
> error message is confusing.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-13218) Duration validation error is unclear in case of overflow.

2017-05-11 Thread Benjamin Lerer (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13218?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Benjamin Lerer updated CASSANDRA-13218:
---
Resolution: Fixed
Status: Resolved  (was: Ready to Commit)

Committed into 3.11 at 8693357109a6e59117a641e109c3865501e3eee6 and merged into 
trunk.

> Duration validation error is unclear in case of overflow.
> -
>
> Key: CASSANDRA-13218
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13218
> Project: Cassandra
>  Issue Type: Bug
>  Components: CQL
>Reporter: Benjamin Lerer
>Assignee: Benjamin Lerer
> Fix For: 3.11.x
>
>
> If a user tries to insert a {{duration}} with a number of months or days that 
> cannot fit in an {{int}} (for example: {{9223372036854775807mo1d}}), the 
> error message is confusing.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Created] (CASSANDRA-13524) cassandra core connector - Guava incompatibility: Detected incompatible version of Guava in the classpath. You need 16.0.1 or higher

2017-05-11 Thread Michael Hornung (JIRA)
Michael Hornung created CASSANDRA-13524:
---

 Summary: cassandra core connector - Guava incompatibility: 
Detected incompatible version of Guava in the classpath. You need 16.0.1 or 
higher
 Key: CASSANDRA-13524
 URL: https://issues.apache.org/jira/browse/CASSANDRA-13524
 Project: Cassandra
  Issue Type: Bug
Reporter: Michael Hornung
 Attachments: build.sbt, error_log.txt

Hello,

My application is an Akka HTTP microservice which wants to access a 
Cassandra database table from Scala.
Therefore I included this dependency in SBT:
 "com.datastax.cassandra" % "cassandra-driver-core"   % "3.2.0"
In my Scala file I have this code:
--
….
import com.datastax.driver.core.Cluster
import com.google.common.util.concurrent._
….
val cassandraHost = "localhost"
val keyStore = "data4service"
//setup cassandra
val cluster = {
Cluster.builder()
  .addContactPoint(cassandraHost)
  .withCredentials("user", "password")
  .build()
  }
//connect to the cassandra keyspace
val session = cluster.connect(keyStore)
val product = "123e4567-e89b-12d3-a456-426655440003"
val select = s"SELECT quantity FROM stock WHERE product = $product"
val result = session.execute(select)   
….
--

During the build, Guava version 19.0 is downloaded automatically.

Unfortunately, when I run my application I get this error at runtime: 
--
com.datastax.driver.core.exceptions.DriverInternalError: Detected incompatible 
version of Guava in the classpath. You need 16.0.1 or higher.
at 
com.datastax.driver.core.GuavaCompatibility.selectImplementation(GuavaCompatibility.java:138)
at 
com.datastax.driver.core.GuavaCompatibility.(GuavaCompatibility.java:52)
--

That is not logical because Guava 19.0 is on the system. Can you help me?

Kind regards,
Michael




--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-13218) Duration validation error is unclear in case of overflow.

2017-05-11 Thread Benjamin Lerer (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13218?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16006103#comment-16006103
 ] 

Benjamin Lerer commented on CASSANDRA-13218:


Thanks for the review.

> Duration validation error is unclear in case of overflow.
> -
>
> Key: CASSANDRA-13218
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13218
> Project: Cassandra
>  Issue Type: Bug
>  Components: CQL
>Reporter: Benjamin Lerer
>Assignee: Benjamin Lerer
> Fix For: 3.11.x
>
>
> If a user tries to insert a {{duration}} with a number of months or days that 
> cannot fit in an {{int}} (for example: {{9223372036854775807mo1d}}), the 
> error message is confusing.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[3/3] cassandra git commit: Merge branch cassandra-3.11 into trunk

2017-05-11 Thread blerer
Merge branch cassandra-3.11 into trunk


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/6deafff5
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/6deafff5
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/6deafff5

Branch: refs/heads/trunk
Commit: 6deafff5aedbfc1da317c97b668a02f360444521
Parents: 122bf06 8693357
Author: Benjamin Lerer 
Authored: Thu May 11 10:29:08 2017 +0200
Committer: Benjamin Lerer 
Committed: Thu May 11 10:40:51 2017 +0200

--
 CHANGES.txt |  1 +
 doc/native_protocol_v5.spec |  2 ++
 doc/source/cql/types.rst|  2 ++
 .../cassandra/db/marshal/DurationType.java  |  1 -
 .../serializers/DurationSerializer.java | 25 ++--
 .../cql3/validation/operations/CreateTest.java  |  8 ++-
 6 files changed, 35 insertions(+), 4 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/6deafff5/CHANGES.txt
--
diff --cc CHANGES.txt
index ae8de54,bb13fc4..d9b6a71
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -1,70 -1,5 +1,71 @@@
 +4.0
 + * Log time elapsed for each incremental repair phase (CASSANDRA-13498)
 + * Add multiple table operation support to cassandra-stress (CASSANDRA-8780)
 + * Fix incorrect cqlsh results when selecting same columns multiple times 
(CASSANDRA-13262)
 + * Fix WriteResponseHandlerTest is sensitive to test execution order 
(CASSANDRA-13421)
 + * Improve incremental repair logging (CASSANDRA-13468)
 + * Start compaction when incremental repair finishes (CASSANDRA-13454)
 + * Add repair streaming preview (CASSANDRA-13257)
 + * Cleanup isIncremental/repairedAt usage (CASSANDRA-13430)
 + * Change protocol to allow sending key space independent of query string 
(CASSANDRA-10145)
 + * Make gc_log and gc_warn settable at runtime (CASSANDRA-12661)
 + * Take number of files in L0 in account when estimating remaining compaction 
tasks (CASSANDRA-13354)
 + * Skip building views during base table streams on range movements 
(CASSANDRA-13065)
 + * Improve error messages for +/- operations on maps and tuples 
(CASSANDRA-13197)
 + * Remove deprecated repair JMX APIs (CASSANDRA-11530)
 + * Fix version check to enable streaming keep-alive (CASSANDRA-12929)
 + * Make it possible to monitor an ideal consistency level separate from 
actual consistency level (CASSANDRA-13289)
 + * Outbound TCP connections ignore internode authenticator (CASSANDRA-13324)
 + * Upgrade junit from 4.6 to 4.12 (CASSANDRA-13360)
 + * Cleanup ParentRepairSession after repairs (CASSANDRA-13359)
 + * Incremental repair not streaming correct sstables (CASSANDRA-13328)
 + * Upgrade the jna version to 4.3.0 (CASSANDRA-13300)
 + * Add the currentTimestamp, currentDate, currentTime and currentTimeUUID 
functions (CASSANDRA-13132)
 + * Remove config option index_interval (CASSANDRA-10671)
 + * Reduce lock contention for collection types and serializers 
(CASSANDRA-13271)
 + * Make it possible to override MessagingService.Verb ids (CASSANDRA-13283)
 + * Avoid synchronized on prepareForRepair in ActiveRepairService 
(CASSANDRA-9292)
 + * Adds the ability to use uncompressed chunks in compressed files 
(CASSANDRA-10520)
 + * Don't flush sstables when streaming for incremental repair 
(CASSANDRA-13226)
 + * Remove unused method (CASSANDRA-13227)
 + * Fix minor bugs related to #9143 (CASSANDRA-13217)
 + * Output warning if user increases RF (CASSANDRA-13079)
 + * Remove pre-3.0 streaming compatibility code for 4.0 (CASSANDRA-13081)
 + * Add support for + and - operations on dates (CASSANDRA-11936)
 + * Fix consistency of incrementally repaired data (CASSANDRA-9143)
 + * Increase commitlog version (CASSANDRA-13161)
 + * Make TableMetadata immutable, optimize Schema (CASSANDRA-9425)
 + * Refactor ColumnCondition (CASSANDRA-12981)
 + * Parallelize streaming of different keyspaces (CASSANDRA-4663)
 + * Improved compactions metrics (CASSANDRA-13015)
 + * Speed-up start-up sequence by avoiding un-needed flushes (CASSANDRA-13031)
 + * Use Caffeine (W-TinyLFU) for on-heap caches (CASSANDRA-10855)
 + * Thrift removal (CASSANDRA-5)
 + * Remove pre-3.0 compatibility code for 4.0 (CASSANDRA-12716)
 + * Add column definition kind to dropped columns in schema (CASSANDRA-12705)
 + * Add (automate) Nodetool Documentation (CASSANDRA-12672)
 + * Update bundled cqlsh python driver to 3.7.0 (CASSANDRA-12736)
 + * Reject invalid replication settings when creating or altering a keyspace 
(CASSANDRA-12681)
 + * Clean up the SSTableReader#getScanner API wrt removal of RateLimiter 
(CASSANDRA-12422)
 + * Use new token allocation for non bootstrap case as well (CASSANDRA-13080)
 + * Avoid byte-array copy 

[2/3] cassandra git commit: Fix duration type validation to prevent overflow

2017-05-11 Thread blerer
Fix duration type validation to prevent overflow

patch by Benjamin Lerer; reviewed by Andrés de la Peña for CASSANDRA-13218


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/86933571
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/86933571
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/86933571

Branch: refs/heads/trunk
Commit: 8693357109a6e59117a641e109c3865501e3eee6
Parents: f33cdcb
Author: Benjamin Lerer 
Authored: Thu May 11 10:26:36 2017 +0200
Committer: Benjamin Lerer 
Committed: Thu May 11 10:26:36 2017 +0200

--
 CHANGES.txt |  1 +
 doc/native_protocol_v5.spec |  2 ++
 doc/source/cql/types.rst|  2 ++
 .../cassandra/db/marshal/DurationType.java  |  1 -
 .../serializers/DurationSerializer.java | 25 ++--
 .../cql3/validation/operations/CreateTest.java  |  8 ++-
 6 files changed, 35 insertions(+), 4 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/86933571/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index b8146af..bb13fc4 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 3.11.0
+ * Fix duration type validation to prevent overflow (CASSANDRA-13218)
  * Forbid unsupported creation of SASI indexes over partition key columns 
(CASSANDRA-13228)
  * Reject multiple values for a key in CQL grammar. (CASSANDRA-13369)
  * UDA fails without input rows (CASSANDRA-13399)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/86933571/doc/native_protocol_v5.spec
--
diff --git a/doc/native_protocol_v5.spec b/doc/native_protocol_v5.spec
index ac3373c..36ffc4c 100644
--- a/doc/native_protocol_v5.spec
+++ b/doc/native_protocol_v5.spec
@@ -905,6 +905,8 @@ Table of Contents
   A duration is composed of 3 signed variable length integers ([vint]s).
   The first [vint] represents a number of months, the second [vint] represents
   a number of days, and the last [vint] represents a number of nanoseconds.
+  The number of months and days must be valid 32 bits integers whereas the
+  number of nanoseconds must be a valid 64 bits integer.
   A duration can either be positive or negative. If a duration is positive
   all the integers must be positive or zero. If a duration is
   negative all the numbers must be negative or zero.

http://git-wip-us.apache.org/repos/asf/cassandra/blob/86933571/doc/source/cql/types.rst
--
diff --git a/doc/source/cql/types.rst b/doc/source/cql/types.rst
index 8954d87..509a756 100644
--- a/doc/source/cql/types.rst
+++ b/doc/source/cql/types.rst
@@ -189,6 +189,8 @@ Working with durations
 Values of the ``duration`` type are encoded as 3 signed integer of variable 
lengths. The first integer represents the
 number of months, the second the number of days and the third the number of 
nanoseconds. This is due to the fact that
 the number of days in a month can change, and a day can have 23 or 25 hours 
depending on the daylight saving.
+Internally, the number of months and days are decoded as 32 bits integers 
whereas the number of nanoseconds is decoded
+as a 64 bits integer.
 
 A duration can be input as:
 

http://git-wip-us.apache.org/repos/asf/cassandra/blob/86933571/src/java/org/apache/cassandra/db/marshal/DurationType.java
--
diff --git a/src/java/org/apache/cassandra/db/marshal/DurationType.java 
b/src/java/org/apache/cassandra/db/marshal/DurationType.java
index 63e634c..b6f2062 100644
--- a/src/java/org/apache/cassandra/db/marshal/DurationType.java
+++ b/src/java/org/apache/cassandra/db/marshal/DurationType.java
@@ -28,7 +28,6 @@ import org.apache.cassandra.serializers.MarshalException;
 import org.apache.cassandra.serializers.TypeSerializer;
 import org.apache.cassandra.transport.ProtocolVersion;
 import org.apache.cassandra.utils.ByteBufferUtil;
-import org.apache.cassandra.utils.FastByteOperations;
 
 /**
  * Represents a duration. The duration is stored as  months, days, and 
nanoseconds. This is done

http://git-wip-us.apache.org/repos/asf/cassandra/blob/86933571/src/java/org/apache/cassandra/serializers/DurationSerializer.java
--
diff --git a/src/java/org/apache/cassandra/serializers/DurationSerializer.java 
b/src/java/org/apache/cassandra/serializers/DurationSerializer.java
index 03d08ae..c752c40 100644
--- a/src/java/org/apache/cassandra/serializers/DurationSerializer.java
+++ 

[1/3] cassandra git commit: Fix duration type validation to prevent overflow

2017-05-11 Thread blerer
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-3.11 f33cdcb8a -> 869335710
  refs/heads/trunk 122bf06e4 -> 6deafff5a


Fix duration type validation to prevent overflow

patch by Benjamin Lerer; reviewed by Andrés de la Peña for CASSANDRA-13218


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/86933571
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/86933571
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/86933571

Branch: refs/heads/cassandra-3.11
Commit: 8693357109a6e59117a641e109c3865501e3eee6
Parents: f33cdcb
Author: Benjamin Lerer 
Authored: Thu May 11 10:26:36 2017 +0200
Committer: Benjamin Lerer 
Committed: Thu May 11 10:26:36 2017 +0200

--
 CHANGES.txt |  1 +
 doc/native_protocol_v5.spec |  2 ++
 doc/source/cql/types.rst|  2 ++
 .../cassandra/db/marshal/DurationType.java  |  1 -
 .../serializers/DurationSerializer.java | 25 ++--
 .../cql3/validation/operations/CreateTest.java  |  8 ++-
 6 files changed, 35 insertions(+), 4 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/86933571/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index b8146af..bb13fc4 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 3.11.0
+ * Fix duration type validation to prevent overflow (CASSANDRA-13218)
  * Forbid unsupported creation of SASI indexes over partition key columns 
(CASSANDRA-13228)
  * Reject multiple values for a key in CQL grammar. (CASSANDRA-13369)
  * UDA fails without input rows (CASSANDRA-13399)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/86933571/doc/native_protocol_v5.spec
--
diff --git a/doc/native_protocol_v5.spec b/doc/native_protocol_v5.spec
index ac3373c..36ffc4c 100644
--- a/doc/native_protocol_v5.spec
+++ b/doc/native_protocol_v5.spec
@@ -905,6 +905,8 @@ Table of Contents
   A duration is composed of 3 signed variable length integers ([vint]s).
   The first [vint] represents a number of months, the second [vint] represents
   a number of days, and the last [vint] represents a number of nanoseconds.
+  The number of months and days must be valid 32 bits integers whereas the
+  number of nanoseconds must be a valid 64 bits integer.
   A duration can either be positive or negative. If a duration is positive
   all the integers must be positive or zero. If a duration is
   negative all the numbers must be negative or zero.

http://git-wip-us.apache.org/repos/asf/cassandra/blob/86933571/doc/source/cql/types.rst
--
diff --git a/doc/source/cql/types.rst b/doc/source/cql/types.rst
index 8954d87..509a756 100644
--- a/doc/source/cql/types.rst
+++ b/doc/source/cql/types.rst
@@ -189,6 +189,8 @@ Working with durations
 Values of the ``duration`` type are encoded as 3 signed integer of variable 
lengths. The first integer represents the
 number of months, the second the number of days and the third the number of 
nanoseconds. This is due to the fact that
 the number of days in a month can change, and a day can have 23 or 25 hours 
depending on the daylight saving.
+Internally, the number of months and days are decoded as 32 bits integers 
whereas the number of nanoseconds is decoded
+as a 64 bits integer.
 
 A duration can be input as:
 

http://git-wip-us.apache.org/repos/asf/cassandra/blob/86933571/src/java/org/apache/cassandra/db/marshal/DurationType.java
--
diff --git a/src/java/org/apache/cassandra/db/marshal/DurationType.java 
b/src/java/org/apache/cassandra/db/marshal/DurationType.java
index 63e634c..b6f2062 100644
--- a/src/java/org/apache/cassandra/db/marshal/DurationType.java
+++ b/src/java/org/apache/cassandra/db/marshal/DurationType.java
@@ -28,7 +28,6 @@ import org.apache.cassandra.serializers.MarshalException;
 import org.apache.cassandra.serializers.TypeSerializer;
 import org.apache.cassandra.transport.ProtocolVersion;
 import org.apache.cassandra.utils.ByteBufferUtil;
-import org.apache.cassandra.utils.FastByteOperations;
 
 /**
  * Represents a duration. The duration is stored as  months, days, and 
nanoseconds. This is done

http://git-wip-us.apache.org/repos/asf/cassandra/blob/86933571/src/java/org/apache/cassandra/serializers/DurationSerializer.java
--
diff --git a/src/java/org/apache/cassandra/serializers/DurationSerializer.java 
b/src/java/org/apache/cassandra/serializers/DurationSerializer.java
index 

[jira] [Commented] (CASSANDRA-12996) update slf4j dependency to 1.7.21

2017-05-11 Thread Tomas Repik (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12996?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16006064#comment-16006064
 ] 

Tomas Repik commented on CASSANDRA-12996:
-

Let's assume we've integrated your official RPM and a happy Fedora user 
installed it. What they get is Cassandra, but also many other jars in the lib 
folder as dependencies (slf4j, airline and more). Those jars may already be 
present on the system before the installation. What we want is to avoid 
duplication and confusion, so we package each jar as a separate package and 
have the packages depend on one another.
So yes, as close as possible to upstream, BUT without bundling.

> update slf4j dependency to 1.7.21
> -
>
> Key: CASSANDRA-12996
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12996
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Libraries
>Reporter: Tomas Repik
> Fix For: 4.0
>
> Attachments: cassandra-3.9-slf4j.patch
>
>
> We want to include Cassandra in Fedora, and there are some tweaks to the 
> cassandra sources we need to make. The slf4j dependency is one of those 
> tweaks. Cassandra depends on slf4j 1.7.7, but in Fedora we have the latest 
> upstream version, 1.7.21, released on April 6, 2016. I attached a patch 
> updating the cassandra sources to depend on the newest slf4j sources. The 
> only actual change is the number of parameters accepted by the 
> SubstituteLogger class. Please consider updating.
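
For context, the constructor change the quoted description refers to looks roughly like this (a sketch from memory of the slf4j API, worth verifying against the 1.7.21 sources):

{code}
// Assumption: slf4j 1.7.21's SubstituteLogger constructor takes an event
// queue and a flag, where 1.7.7's took only the logger name.
import java.util.LinkedList;
import org.slf4j.event.SubstituteLoggingEvent;
import org.slf4j.helpers.SubstituteLogger;

public class SubstituteLoggerAritySketch
{
    public static void main(String[] args)
    {
        // 1.7.7:   new SubstituteLogger("name");
        // 1.7.21:
        SubstituteLogger logger =
            new SubstituteLogger("name", new LinkedList<SubstituteLoggingEvent>(), true);
        System.out.println(logger.getName());
    }
}
{code}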



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Created] (CASSANDRA-13523) StreamReceiveTask: java.lang.OutOfMemoryError: Map failed

2017-05-11 Thread Matthew O'Riordan (JIRA)
Matthew O'Riordan created CASSANDRA-13523:
-

 Summary: StreamReceiveTask: java.lang.OutOfMemoryError: Map failed
 Key: CASSANDRA-13523
 URL: https://issues.apache.org/jira/browse/CASSANDRA-13523
 Project: Cassandra
  Issue Type: Bug
  Components: Streaming and Messaging
 Environment: Ubuntu 14.04.5 LTS, Docker version 1.9.1, run as a 
container, 4 core server with 16GB memory.
Reporter: Matthew O'Riordan
 Fix For: 2.1.13


During a nodetool repair -par on one of our keyspaces, Cassandra crashed due to 
what seems like memory exhaustion within the JVM.  The machine itself had 
plenty of available memory at the time and did not appear to be under any 
significant load.

In the system log, before the crash, the following was logged:

{code}
...
INFO  [AntiEntropySessions:55] 2017-05-10 18:18:20,627  RepairJob.java:163 - 
[repair #0330beb0-35ad-11e7-b7e4-091830ac5256] requesting merkle trees for 
stats_day_aggregates (to [/54.162.66.114, /54.236.226.76, /52.221.228.170, 
/54.154.35.144, /54.154.96.213, /52.221.217.27])
INFO  [ValidationExecutor:54] 2017-05-10 18:18:20,628  
ColumnFamilyStore.java:905 - Enqueuing flush of stats_day_aggregates: 7018 (0%) 
on-heap, 0 (0%) off-heap
INFO  [MemtableFlushWriter:13608] 2017-05-10 18:18:20,628  Memtable.java:347 - 
Writing Memtable-stats_day_aggregates@3469792(2.734KiB serialized bytes, 14 
ops, 0%/0% of on/off-heap limit)
INFO  [MemtableFlushWriter:13608] 2017-05-10 18:18:20,629  Memtable.java:382 - 
Completed flushing 
/var/lib/cassandra/data/ably_production_0_stats/stats_day_aggregates-b6e29201e3d111e5bbf3091830ac5256/ably_production_0_stats-stats_day_aggregates-tmp-ka-43026-Data.db
 (0.000KiB) for commitlog position ReplayPosition(segmentId=1491420635101, 
position=24224955)
INFO  [StreamReceiveTask:638] 2017-05-10 18:18:21,220  
StreamResultFuture.java:180 - [Stream #0008cac1-35ad-11e7-b7e4-091830ac5256] 
Session with /52.203.21.193 is complete
INFO  [StreamReceiveTask:638] 2017-05-10 18:18:21,220  
StreamResultFuture.java:212 - [Stream #0008cac1-35ad-11e7-b7e4-091830ac5256] 
All sessions completed
INFO  [StreamReceiveTask:638] 2017-05-10 18:18:21,221  
StreamingRepairTask.java:96 - [repair #fe7e3320-35ac-11e7-b7e4-091830ac5256] 
streaming task succeed, returning response to /52.221.217.27
INFO  [CqlSlowLog-Writer-thread-0] 2017-05-10 18:18:26,230  
CqlSlowLogWriter.java:151 - Recording statements with duration of 4844 in slow 
log
INFO  [Service Thread] 2017-05-10 18:18:26,233  GCInspector.java:258 - G1 Old 
Generation GC in 4781ms.  G1 Eden Space: 131072 -> 0; G1 Old Gen: 
2774539816 -> 1830851216; G1 Survivor Space: 37748736 -> 0; 
INFO  [AntiEntropyStage:1] 2017-05-10 18:18:26,237  RepairSession.java:171 - 
[repair #0330beb0-35ad-11e7-b7e4-091830ac5256] Received merkle tree for 
stats_day_aggregates from /54.162.66.114
INFO  [StreamConnectionEstablisher:1] 2017-05-10 18:18:26,239  
StreamCoordinator.java:209 - [Stream #0293bb60-35ad-11e7-b7e4-091830ac5256, 
ID#0] Beginning stream session with /54.154.226.20
INFO  [AntiEntropyStage:1] 2017-05-10 18:18:26,294  RepairSession.java:171 - 
[repair #0330beb0-35ad-11e7-b7e4-091830ac5256] Received merkle tree for 
stats_day_aggregates from /54.236.226.76
INFO  [Service Thread] 2017-05-10 18:18:26,298  StatusLogger.java:51 - Pool 
NameActive   Pending  Completed   Blocked  All Time 
Blocked
INFO  [AntiEntropyStage:1] 2017-05-10 18:18:26,344  RepairSession.java:171 - 
[repair #0330beb0-35ad-11e7-b7e4-091830ac5256] Received merkle tree for 
stats_day_aggregates from /54.154.35.144
INFO  [CqlSlowLog-Writer-thread-0] 2017-05-10 18:18:26,344  
CqlSlowLogWriter.java:151 - Recording statements with duration of 5035 in slow 
log
WARN  [GossipTasks:1] 2017-05-10 18:18:26,344  FailureDetector.java:258 - Not 
marking nodes down due to local pause of 5109502584 > 50
INFO  [AntiEntropyStage:1] 2017-05-10 18:18:26,344  RepairSession.java:171 - 
[repair #0330beb0-35ad-11e7-b7e4-091830ac5256] Received merkle tree for 
stats_day_aggregates from /54.154.96.213
INFO  [AntiEntropyStage:1] 2017-05-10 18:18:26,344  RepairSession.java:171 - 
[repair #0330beb0-35ad-11e7-b7e4-091830ac5256] Received merkle tree for 
stats_day_aggregates from /52.221.228.170
INFO  [Service Thread] 2017-05-10 18:18:26,345  StatusLogger.java:66 - 
MutationStage 0 0 1199223432 0  
   0
INFO  [CqlSlowLog-Writer-thread-0] 2017-05-10 18:18:26,345  
CqlSlowLogWriter.java:151 - Recording statements with duration of 5095 in slow 
log
ERROR [StreamReceiveTask:640] 2017-05-10 18:18:26,346  
StreamReceiveTask.java:183 - Error applying streamed data: 
org.apache.cassandra.io.FSReadError: java.io.IOException: Map failed
at 
org.apache.cassandra.io.util.MmappedSegmentedFile$Builder.createSegments(MmappedSegmentedFile.java:399)
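
"Map failed" from mmap usually indicates the process ran out of memory-mapped 
regions rather than Java heap, which would be consistent with a host that 
otherwise has plenty of free memory. A quick check of that hypothesis (an 
illustrative sketch, not a confirmed diagnosis of this report) is to compare 
the JVM's live mapping count against the kernel limit:

{code}
// Illustrative Linux-only check: count this JVM's memory mappings and print
// the kernel's per-process limit (the Cassandra docs recommend raising
// vm.max_map_count, e.g. to 1048575, for heavily mmapped workloads).
import java.nio.file.Files;
import java.nio.file.Paths;

public class MapCountCheck
{
    public static void main(String[] args) throws Exception
    {
        long mappings = Files.readAllLines(Paths.get("/proc/self/maps")).size();
        String limit  = Files.readAllLines(Paths.get("/proc/sys/vm/max_map_count")).get(0).trim();
        System.out.println("current mappings: " + mappings + " / limit: " + limit);
    }
}
{code}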
 

[jira] [Updated] (CASSANDRA-13522) AbstractTracingAwareExecutorService - Uncaught exception on thread - leads to JVM exit

2017-05-11 Thread Matthew O'Riordan (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13522?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthew O'Riordan updated CASSANDRA-13522:
--
Description: 
Initially saw the following exception numerous times:

{code}
WARN  [SharedPool-Worker-8] 2017-05-09 23:04:00,018  
AbstractTracingAwareExecutorService.java:169 - Uncaught exception on thread 
Thread[SharedPool-Worker-8,5,main]: {}
java.lang.NullPointerException: null
at java.lang.Double.compareTo(Double.java:49) ~[na:1.8.0_101]
at 
java.util.concurrent.ConcurrentSkipListMap.cpr(ConcurrentSkipListMap.java:655) 
~[na:1.8.0_101]
at 
java.util.concurrent.ConcurrentSkipListMap.doPut(ConcurrentSkipListMap.java:835)
 ~[na:1.8.0_101]
at 
java.util.concurrent.ConcurrentSkipListMap.putIfAbsent(ConcurrentSkipListMap.java:1962)
 ~[na:1.8.0_101]
at 
com.yammer.metrics.stats.ExponentiallyDecayingSample.update(ExponentiallyDecayingSample.java:104)
 ~[metrics-core-2.2.0.jar:na]
at 
com.yammer.metrics.stats.ExponentiallyDecayingSample.update(ExponentiallyDecayingSample.java:81)
 ~[metrics-core-2.2.0.jar:na]
at com.yammer.metrics.core.Histogram.update(Histogram.java:110) 
~[metrics-core-2.2.0.jar:na]
at com.yammer.metrics.core.Timer.update(Timer.java:198) 
~[metrics-core-2.2.0.jar:na]
at com.yammer.metrics.core.Timer.update(Timer.java:76) 
~[metrics-core-2.2.0.jar:na]
at 
org.apache.cassandra.metrics.LatencyMetrics.addNano(LatencyMetrics.java:108) 
~[cassandra-all-2.1.13.1218.jar:2.1.13.1218]
at 
org.apache.cassandra.metrics.LatencyMetrics.addNano(LatencyMetrics.java:114) 
~[cassandra-all-2.1.13.1218.jar:2.1.13.1218]
at 
org.apache.cassandra.db.ColumnFamilyStore.getColumnFamily(ColumnFamilyStore.java:1863)
 ~[cassandra-all-2.1.13.1218.jar:2.1.13.1218]
at org.apache.cassandra.db.Keyspace.getRow(Keyspace.java:353) 
~[cassandra-all-2.1.13.1218.jar:2.1.13.1218]
at 
org.apache.cassandra.db.SliceByNamesReadCommand.getRow(SliceByNamesReadCommand.java:53)
 ~[cassandra-all-2.1.13.1218.jar:2.1.13.1218]
at 
org.apache.cassandra.db.ReadVerbHandler.doVerb(ReadVerbHandler.java:47) 
~[cassandra-all-2.1.13.1218.jar:2.1.13.1218]
at 
org.apache.cassandra.net.MessageDeliveryTask.run(MessageDeliveryTask.java:64) 
~[cassandra-all-2.1.13.1218.jar:2.1.13.1218]
at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
~[na:1.8.0_101]
at 
org.apache.cassandra.concurrent.AbstractTracingAwareExecutorService$FutureTask.run(AbstractTracingAwareExecutorService.java:164)
 ~[cassandra-all-2.1.13.1218.jar:2.1.13.1218]
at org.apache.cassandra.concurrent.SEPWorker.run(SEPWorker.java:105) 
[cassandra-all-2.1.13.1218.jar:2.1.13.1218]
at java.lang.Thread.run(Thread.java:745) [na:1.8.0_101]
{code}

Then this led to a high rate of these warnings:

{code}
WARN  [SharedPool-Worker-91] 2017-05-09 23:04:14,682  
AbstractTracingAwareExecutorService.java:169 - Uncaught exception on thread 
Thread[SharedPool-Worker-91,5,main]: {}
java.lang.ClassCastException: null
WARN  [SharedPool-Worker-92] 2017-05-09 23:04:14,704  
AbstractTracingAwareExecutorService.java:169 - Uncaught exception on thread 
Thread[SharedPool-Worker-92,5,main]: {}
java.lang.RuntimeException: java.lang.ClassCastException
at 
org.apache.cassandra.service.StorageProxy$DroppableRunnable.run(StorageProxy.java:2244)
 ~[cassandra-all-2.1.13.1218.jar:2.1.13.1218]
at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
~[na:1.8.0_101]
at 
org.apache.cassandra.concurrent.AbstractTracingAwareExecutorService$FutureTask.run(AbstractTracingAwareExecutorService.java:164)
 ~[cassandra-all-2.1.13.1218.jar:2.1.13.1218]
at org.apache.cassandra.concurrent.SEPWorker.run(SEPWorker.java:105) 
[cassandra-all-2.1.13.1218.jar:2.1.13.1218]
at java.lang.Thread.run(Thread.java:745) [na:1.8.0_101]
{code}

The same errors continued until the last error reported:

{code}
WARN  [SharedPool-Worker-161] 2017-05-09 23:06:18,617  
AbstractTracingAwareExecutorService.java:169 - Uncaught exception on thread 
Thread[SharedPool-Worker-161,5,main]: {}
java.lang.RuntimeException: java.lang.ClassCastException
at 
org.apache.cassandra.service.StorageProxy$DroppableRunnable.run(StorageProxy.java:2244)
 ~[cassandra-all-2.1.13.1218.jar:2.1.13.1218]
at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
~[na:1.8.0_101]
at 
org.apache.cassandra.concurrent.AbstractTracingAwareExecutorService$FutureTask.run(AbstractTracingAwareExecutorService.java:164)
 ~[cassandra-all-2.1.13.1218.jar:2.1.13.1218]
at org.apache.cassandra.concurrent.SEPWorker.run(SEPWorker.java:105) 
[cassandra-all-2.1.13.1218.jar:2.1.13.1218]
at java.lang.Thread.run(Thread.java:745) [na:1.8.0_101]
Caused by: java.lang.ClassCastException: null
{code}

At 
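
The NullPointerException in the first trace surfaces from a key comparison 
inside the metrics library's ConcurrentSkipListMap, and a ClassCastException 
is the other typical way such a comparison can fail. As a purely illustrative 
sketch (this is not the metrics-core bug itself, only the failure modes of the 
underlying data structure), a skip list with Double keys throws exactly these 
exception types when an invalid key reaches it:

{code}
// Demonstrates the two exception types from the traces above; illustrative only.
import java.util.concurrent.ConcurrentSkipListMap;

public class SkipListFailureModes
{
    @SuppressWarnings({"rawtypes", "unchecked"})
    public static void main(String[] args)
    {
        ConcurrentSkipListMap map = new ConcurrentSkipListMap();
        map.put(1.0, "a");
        try {
            map.put("not a double", "b"); // String compared to Double -> ClassCastException
        } catch (ClassCastException e) {
            System.out.println("CCE: " + e);
        }
        try {
            map.putIfAbsent(null, "c");   // null keys are rejected -> NullPointerException
        } catch (NullPointerException e) {
            System.out.println("NPE: " + e);
        }
    }
}
{code}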

[jira] [Created] (CASSANDRA-13522) AbstractTracingAwareExecutorService - Uncaught exception on thread - leads to JVM exit

2017-05-11 Thread Matthew O'Riordan (JIRA)
Matthew O'Riordan created CASSANDRA-13522:
-

 Summary: AbstractTracingAwareExecutorService - Uncaught exception 
on thread - leads to JVM exit
 Key: CASSANDRA-13522
 URL: https://issues.apache.org/jira/browse/CASSANDRA-13522
 Project: Cassandra
  Issue Type: Bug
  Components: Core
 Environment: Ubuntu 14.04.5 LTS, Docker version 1.9.1, run as a 
container, 4 core server with 16GB memory.
Reporter: Matthew O'Riordan
 Fix For: 2.1.13


Initially saw the following exception numerous times:

{code}
WARN  [SharedPool-Worker-8] 2017-05-09 23:04:00,018  
AbstractTracingAwareExecutorService.java:169 - Uncaught exception on thread 
Thread[SharedPool-Worker-8,5,main]: {}
java.lang.NullPointerException: null
at java.lang.Double.compareTo(Double.java:49) ~[na:1.8.0_101]
at 
java.util.concurrent.ConcurrentSkipListMap.cpr(ConcurrentSkipListMap.java:655) 
~[na:1.8.0_101]
at 
java.util.concurrent.ConcurrentSkipListMap.doPut(ConcurrentSkipListMap.java:835)
 ~[na:1.8.0_101]
at 
java.util.concurrent.ConcurrentSkipListMap.putIfAbsent(ConcurrentSkipListMap.java:1962)
 ~[na:1.8.0_101]
at 
com.yammer.metrics.stats.ExponentiallyDecayingSample.update(ExponentiallyDecayingSample.java:104)
 ~[metrics-core-2.2.0.jar:na]
at 
com.yammer.metrics.stats.ExponentiallyDecayingSample.update(ExponentiallyDecayingSample.java:81)
 ~[metrics-core-2.2.0.jar:na]
at com.yammer.metrics.core.Histogram.update(Histogram.java:110) 
~[metrics-core-2.2.0.jar:na]
at com.yammer.metrics.core.Timer.update(Timer.java:198) 
~[metrics-core-2.2.0.jar:na]
at com.yammer.metrics.core.Timer.update(Timer.java:76) 
~[metrics-core-2.2.0.jar:na]
at 
org.apache.cassandra.metrics.LatencyMetrics.addNano(LatencyMetrics.java:108) 
~[cassandra-all-2.1.13.1218.jar:2.1.13.1218]
at 
org.apache.cassandra.metrics.LatencyMetrics.addNano(LatencyMetrics.java:114) 
~[cassandra-all-2.1.13.1218.jar:2.1.13.1218]
at 
org.apache.cassandra.db.ColumnFamilyStore.getColumnFamily(ColumnFamilyStore.java:1863)
 ~[cassandra-all-2.1.13.1218.jar:2.1.13.1218]
at org.apache.cassandra.db.Keyspace.getRow(Keyspace.java:353) 
~[cassandra-all-2.1.13.1218.jar:2.1.13.1218]
at 
org.apache.cassandra.db.SliceByNamesReadCommand.getRow(SliceByNamesReadCommand.java:53)
 ~[cassandra-all-2.1.13.1218.jar:2.1.13.1218]
at 
org.apache.cassandra.db.ReadVerbHandler.doVerb(ReadVerbHandler.java:47) 
~[cassandra-all-2.1.13.1218.jar:2.1.13.1218]
at 
org.apache.cassandra.net.MessageDeliveryTask.run(MessageDeliveryTask.java:64) 
~[cassandra-all-2.1.13.1218.jar:2.1.13.1218]
at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
~[na:1.8.0_101]
at 
org.apache.cassandra.concurrent.AbstractTracingAwareExecutorService$FutureTask.run(AbstractTracingAwareExecutorService.java:164)
 ~[cassandra-all-2.1.13.1218.jar:2.1.13.1218]
at org.apache.cassandra.concurrent.SEPWorker.run(SEPWorker.java:105) 
[cassandra-all-2.1.13.1218.jar:2.1.13.1218]
at java.lang.Thread.run(Thread.java:745) [na:1.8.0_101]
{code}

Then this led to a high rate of these warnings:

{code}
WARN  [SharedPool-Worker-91] 2017-05-09 23:04:14,682  
AbstractTracingAwareExecutorService.java:169 - Uncaught exception on thread 
Thread[SharedPool-Worker-91,5,main]: {}
java.lang.ClassCastException: null
WARN  [SharedPool-Worker-92] 2017-05-09 23:04:14,704  
AbstractTracingAwareExecutorService.java:169 - Uncaught exception on thread 
Thread[SharedPool-Worker-92,5,main]: {}
java.lang.RuntimeException: java.lang.ClassCastException
at 
org.apache.cassandra.service.StorageProxy$DroppableRunnable.run(StorageProxy.java:2244)
 ~[cassandra-all-2.1.13.1218.jar:2.1.13.1218]
at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
~[na:1.8.0_101]
at 
org.apache.cassandra.concurrent.AbstractTracingAwareExecutorService$FutureTask.run(AbstractTracingAwareExecutorService.java:164)
 ~[cassandra-all-2.1.13.1218.jar:2.1.13.1218]
at org.apache.cassandra.concurrent.SEPWorker.run(SEPWorker.java:105) 
[cassandra-all-2.1.13.1218.jar:2.1.13.1218]
at java.lang.Thread.run(Thread.java:745) [na:1.8.0_101]
{code}

The same errors continued until the last error reported:

{code}
WARN  [SharedPool-Worker-161] 2017-05-09 23:06:18,617  
AbstractTracingAwareExecutorService.java:169 - Uncaught exception on thread 
Thread[SharedPool-Worker-161,5,main]: {}
java.lang.RuntimeException: java.lang.ClassCastException
at 
org.apache.cassandra.service.StorageProxy$DroppableRunnable.run(StorageProxy.java:2244)
 ~[cassandra-all-2.1.13.1218.jar:2.1.13.1218]
at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
~[na:1.8.0_101]
at 

[jira] [Commented] (CASSANDRA-12996) update slf4j dependency to 1.7.21

2017-05-11 Thread Stefan Podkowinski (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12996?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16006036#comment-16006036
 ] 

Stefan Podkowinski commented on CASSANDRA-12996:


If having Cassandra as "close as possible to upstream" is desired, why not 
integrate our official RPM (see CASSANDRA-13433)?

> update slf4j dependency to 1.7.21
> -
>
> Key: CASSANDRA-12996
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12996
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Libraries
>Reporter: Tomas Repik
> Fix For: 4.0
>
> Attachments: cassandra-3.9-slf4j.patch
>
>
> We want to include Cassandra in Fedora, and there are some tweaks to the 
> cassandra sources we need to make. The slf4j dependency is one of those 
> tweaks. Cassandra depends on slf4j 1.7.7, but in Fedora we have the latest 
> upstream version, 1.7.21, released on April 6, 2016. I attached a patch 
> updating the cassandra sources to depend on the newest slf4j sources. The 
> only actual change is the number of parameters accepted by the 
> SubstituteLogger class. Please consider updating.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org