[jira] [Created] (CASSANDRA-15947) nodetool gossipinfo doc does not document the output

2020-07-15 Thread Jens Rantil (Jira)
Jens Rantil created CASSANDRA-15947:
---

 Summary: nodetool gossipinfo doc does not document the output
 Key: CASSANDRA-15947
 URL: https://issues.apache.org/jira/browse/CASSANDRA-15947
 Project: Cassandra
  Issue Type: Improvement
Reporter: Jens Rantil


[https://cassandra.apache.org/doc/latest/tools/nodetool/gossipinfo.html] does 
not contain any sample output, nor does it explain what the fields mean.
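
For reference, something like the following would help (illustrative output 
with made-up values; the exact set of fields varies by version):

{noformat}
$ nodetool gossipinfo
/10.0.0.1
  generation:1594800000
  heartbeat:2045
  STATUS:14:NORMAL,-9223372036854775808
  LOAD:2001:1.2345678E9
  SCHEMA:10:ea63e099-37c5-3d7b-9ace-32f4c833653d
  DC:6:datacenter1
  RACK:8:rack1
  RELEASE_VERSION:4:3.11.6
  RPC_ADDRESS:3:10.0.0.1
  TOKENS:13:<hidden>
{noformat}

The docs should then explain that each block is one endpoint's gossip state: 
`generation` is the node's start time (epoch seconds), `heartbeat` is a 
monotonically increasing version counter, and the remaining lines are 
application states in NAME:version:value form (STATUS carries the node state 
plus a token, LOAD the on-disk data size in bytes, DC/RACK come from the 
snitch, and so on).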



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Created] (CASSANDRA-11720) Changing `max_hint_window_in_ms` at runtime

2016-05-05 Thread Jens Rantil (JIRA)
Jens Rantil created CASSANDRA-11720:
---

 Summary: Changing `max_hint_window_in_ms` at runtime
 Key: CASSANDRA-11720
 URL: https://issues.apache.org/jira/browse/CASSANDRA-11720
 Project: Cassandra
  Issue Type: Wish
  Components: Coordination
Reporter: Jens Rantil
Priority: Minor


Scenario: A large node (in terms of the data it holds) goes down. You realize 
that it will take slightly more than `max_hint_window_in_ms` to fix it. You 
have the disk space to store some additional hints.

Proposal: Support changing `max_hint_window_in_ms` at runtime. The change 
doesn't have to be persisted anywhere. I'm thinking of something similar to 
changing `compactionthroughput` etc. using `nodetool`.
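
For illustration, this could piggyback on JMX the way the other runtime knobs 
do. A minimal client-side sketch, assuming a `setMaxHintWindow(int)` operation 
on the StorageProxy MBean (that operation name is my assumption, not a 
documented API):

{noformat}
import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

public class SetMaxHintWindow
{
    public static void main(String[] args) throws Exception
    {
        // Connect to the standard Cassandra JMX port on one node.
        JMXServiceURL url = new JMXServiceURL(
                "service:jmx:rmi:///jndi/rmi://127.0.0.1:7199/jmxrmi");
        try (JMXConnector connector = JMXConnectorFactory.connect(url))
        {
            MBeanServerConnection mbs = connector.getMBeanServerConnection();
            ObjectName storageProxy = new ObjectName(
                    "org.apache.cassandra.db:type=StorageProxy");
            // Hypothetical setter; value in milliseconds (here: 6 hours).
            mbs.invoke(storageProxy, "setMaxHintWindow",
                       new Object[]{ 6 * 3600 * 1000 },
                       new String[]{ "int" });
        }
    }
}
{noformat}

Like `compactionthroughput`, this would only affect the node you connect to, 
so it would have to be applied to every node, and would revert to the yaml 
value on restart.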

Workaround: Change the value in the configuration file and do a rolling restart 
of all the nodes.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-11583) Exception when streaming sstables using `sstableloader`

2016-04-15 Thread Jens Rantil (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11583?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15243694#comment-15243694
 ] 

Jens Rantil commented on CASSANDRA-11583:
-

I'm speculating here, but could the issue be that we open the sstable once, but 
decrement the reference once per host we are streaming to?
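
To illustrate the imbalance I mean (a hypothetical sketch, not the actual 
SSTableLoader code): if a reference is taken once but released once per 
target host, the count goes negative as soon as the second host finishes, and 
an assert on it fails:

{noformat}
import java.util.Arrays;
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;

// Hypothetical sketch of the suspected imbalance: one acquire, N releases.
// Run with `java -ea RefCountSketch` so the assert is enabled.
public class RefCountSketch
{
    private final AtomicInteger refs = new AtomicInteger();

    void acquire() { refs.incrementAndGet(); }

    void release()
    {
        int now = refs.decrementAndGet();
        // With one acquire but one release per target host, this fires
        // as soon as the second host finishes.
        assert now >= 0 : "reference count went negative: " + now;
    }

    static void stream(RefCountSketch sstable, List<String> hosts)
    {
        sstable.acquire();           // the sstable is opened once...
        for (String host : hosts)
        {
            // ... stream the relevant sections to `host` ...
            sstable.release();       // ...but released once per host
        }
    }

    public static void main(String[] args)
    {
        stream(new RefCountSketch(), Arrays.asList("X.X.X.20", "X.X.X.113"));
    }
}
{noformat}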

> Exception when streaming sstables using `sstableloader`
> ---
>
> Key: CASSANDRA-11583
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11583
> Project: Cassandra
>  Issue Type: Bug
>  Components: Tools
> Environment: $ uname -a
> Linux bigdb-100 3.2.0-99-virtual #139-Ubuntu SMP Mon Feb 1 23:52:21 UTC 2016 
> x86_64 x86_64 x86_64 GNU/Linux
> I am using Datastax Enterprise 4.7.8-1 which is based on 2.1.13.
>Reporter: Jens Rantil
>
> This bug came out of CASSANDRA-11562.
> I have a keyspace snapshotted from a 2.1.11 (DSE 4.7.5-1) node. When I run 
> `sstableloader` I get the following output/exception:
> {noformat}
> # sstableloader --nodes X.X.X.20 --username YYY --password ZZZ --ignore XXX 
> /var/lib/cassandra/data/XXX/ZZZ-f7ebdf0daa3a3062828fddebc109a3b2
> Established connection to initial hosts
> Opening sstables and calculating sections to stream
> Streaming relevant part of 
> /var/lib/cassandra/data/XXX/ZZZ-f7ebdf0daa3a3062828fddebc109a3b2/XXX-ZZZ-ka-6463-Data.db
>  
> /var/lib/cassandra/data/XXX/ZZZ-f7ebdf0daa3a3062828fddebc109a3b2/tink-ZZZ-ka-6464-Data.db
>  to [/X.X.X.33, /X.X.X.113, /X.X.X.32, /X.X.X.20, /X.X.X.122, /X.X.X.176, 
> /X.X.X.143, /X.X.X.172, /X.X.X.50, /X.X.X.51, /X.X.X.52, /X.X.X.71, 
> /X.X.X.53, /X.X.X.54, /X.X.X.47, /X.X.X.31, /X.X.X.8]
> progress: [/X.X.X.113]0:0/2 0  % [/X.X.X.143]0:0/2 0  % [/X.X.X.172]0:0/2 0  
> % [/X.X.X.20]0:0/2 0  % [/X.X.X.71]0:0/2 0  % [/X.X.X.122]0:0/2 0  % 
> [/X.X.X.47]0:0/2 
> [...]
> progress: [/X.X.X.113]0:2/2 100% [/X.X.X.143]0:2/2 100% [/X.X.X.172]0:0/2 78 
> % [/

[jira] [Comment Edited] (CASSANDRA-11583) Exception when streaming sstables using `sstableloader`

2016-04-15 Thread Jens Rantil (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11583?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15243634#comment-15243634
 ] 

Jens Rantil edited comment on CASSANDRA-11583 at 4/15/16 9:26 PM:
--

Hm, rereading 
https://github.com/apache/cassandra/blob/cassandra-2.1.13/src/java/org/apache/cassandra/io/sstable/SSTableLoader.java#L208,
 it looks like the assert is only checked once the streaming is done, right? 
Can I be comfortable that `sstableloader` finished successfully if it doesn't 
print any error before the exception (and the assert fails in the "onSuccess" 
method)?


was (Author: ztyx):
Hm, rereading 
https://github.com/apache/cassandra/blob/cassandra-2.1.13/src/java/org/apache/cassandra/io/sstable/SSTableLoader.java#L208,
 it looks like the assert is only checked once the streaming is done, right? 
Can I be comfortable that `sstableloader` finished successfully if it doesn't 
print any error before the exception?

> Exception when streaming sstables using `sstableloader`
> ---
>
> Key: CASSANDRA-11583
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11583
> Project: Cassandra
>  Issue Type: Bug
>  Components: Tools
> Environment: $ uname -a
> Linux bigdb-100 3.2.0-99-virtual #139-Ubuntu SMP Mon Feb 1 23:52:21 UTC 2016 
> x86_64 x86_64 x86_64 GNU/Linux
> I am using Datastax Enterprise 4.7.8-1 which is based on 2.1.13.
>Reporter: Jens Rantil
>
> This bug came out of CASSANDRA-11562.
> I have a keyspace snapshotted from a 2.1.11 (DSE 4.7.5-1) node. When I run 
> `sstableloader` I get the following output/exception:
> {noformat}
> # sstableloader --nodes X.X.X.20 --username YYY --password ZZZ --ignore XXX 
> /var/lib/cassandra/data/XXX/ZZZ-f7ebdf0daa3a3062828fddebc109a3b2
> Established connection to initial hosts
> Opening sstables and calculating sections to stream
> Streaming relevant part of 
> /var/lib/cassandra/data/XXX/ZZZ-f7ebdf0daa3a3062828fddebc109a3b2/XXX-ZZZ-ka-6463-Data.db
>  
> /var/lib/cassandra/data/XXX/ZZZ-f7ebdf0daa3a3062828fddebc109a3b2/tink-ZZZ-ka-6464-Data.db
>  to [/X.X.X.33, /X.X.X.113, /X.X.X.32, /X.X.X.20, /X.X.X.122, /X.X.X.176, 
> /X.X.X.143, /X.X.X.172, /X.X.X.50, /X.X.X.51, /X.X.X.52, /X.X.X.71, 
> /X.X.X.53, /X.X.X.54, /X.X.X.47, /X.X.X.31, /X.X.X.8]
> progress: [/X.X.X.113]0:0/2 0  % [/X.X.X.143]0:0/2 0  % [/X.X.X.172]0:0/2 0  
> % [/X.X.X.20]0:0/2 0  % [/X.X.X.71]0:0/2 0  % [/X.X.X.122]0:0/2 0  % 
> [/X.X.X.47]0:0/2 
> [...]

[jira] [Commented] (CASSANDRA-11583) Exception when streaming sstables using `sstableloader`

2016-04-15 Thread Jens Rantil (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11583?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15243634#comment-15243634
 ] 

Jens Rantil commented on CASSANDRA-11583:
-

Hm, rereading 
https://github.com/apache/cassandra/blob/cassandra-2.1.13/src/java/org/apache/cassandra/io/sstable/SSTableLoader.java#L208,
 it looks like the assert is only checked once the streaming is done, right? 
Can I be comfortable that `sstableloader` finished successfully if it doesn't 
print any error before the exception?

> Exception when streaming sstables using `sstableloader`
> ---
>
> Key: CASSANDRA-11583
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11583
> Project: Cassandra
>  Issue Type: Bug
>  Components: Tools
> Environment: $ uname -a
> Linux bigdb-100 3.2.0-99-virtual #139-Ubuntu SMP Mon Feb 1 23:52:21 UTC 2016 
> x86_64 x86_64 x86_64 GNU/Linux
> I am using Datastax Enterprise 4.7.8-1 which is based on 2.1.13.
>Reporter: Jens Rantil
>
> This bug came out of CASSANDRA-11562.
> I have a keyspace snapshotted from a 2.1.11 (DSE 4.7.5-1) node. When I run 
> `sstableloader` I get the following output/exception:
> {noformat}
> # sstableloader --nodes X.X.X.20 --username YYY --password ZZZ --ignore XXX 
> /var/lib/cassandra/data/XXX/ZZZ-f7ebdf0daa3a3062828fddebc109a3b2
> Established connection to initial hosts
> Opening sstables and calculating sections to stream
> Streaming relevant part of 
> /var/lib/cassandra/data/XXX/ZZZ-f7ebdf0daa3a3062828fddebc109a3b2/XXX-ZZZ-ka-6463-Data.db
>  
> /var/lib/cassandra/data/XXX/ZZZ-f7ebdf0daa3a3062828fddebc109a3b2/tink-ZZZ-ka-6464-Data.db
>  to [/X.X.X.33, /X.X.X.113, /X.X.X.32, /X.X.X.20, /X.X.X.122, /X.X.X.176, 
> /X.X.X.143, /X.X.X.172, /X.X.X.50, /X.X.X.51, /X.X.X.52, /X.X.X.71, 
> /X.X.X.53, /X.X.X.54, /X.X.X.47, /X.X.X.31, /X.X.X.8]
> progress: [/X.X.X.113]0:0/2 0  % [/X.X.X.143]0:0/2 0  % [/X.X.X.172]0:0/2 0  
> % [/X.X.X.20]0:0/2 0  % [/X.X.X.71]0:0/2 0  % [/X.X.X.122]0:0/2 0  % 
> [/X.X.X.47]0:0/2 
> [...]

[jira] [Commented] (CASSANDRA-11583) Exception when streaming sstables using `sstableloader`

2016-04-15 Thread Jens Rantil (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11583?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15243406#comment-15243406
 ] 

Jens Rantil commented on CASSANDRA-11583:
-

I've now upgraded the full cluster to 2.1.13 and am still receiving the same 
exception, so this does not seem to be a version incompatibility issue.

Interestingly, I also set up a temporary one-node (2.1.13) cluster, and 
importing the same sstables into it worked without any exceptions. I've also 
ruled out the firewall as a cause (I temporarily disabled it).

> Exception when streaming sstables using `sstableloader`
> ---
>
> Key: CASSANDRA-11583
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11583
> Project: Cassandra
>  Issue Type: Bug
>  Components: Tools
> Environment: $ uname -a
> Linux bigdb-100 3.2.0-99-virtual #139-Ubuntu SMP Mon Feb 1 23:52:21 UTC 2016 
> x86_64 x86_64 x86_64 GNU/Linux
> I am using Datastax Enterprise 4.7.8-1 which is based on 2.1.13.
>Reporter: Jens Rantil
>
> This bug came out of CASSANDRA-11562.
> I have a keyspace snapshotted from a 2.1.11 (DSE 4.7.5-1) node. When I run 
> `sstableloader` I get the following output/exception:
> {noformat}
> # sstableloader --nodes X.X.X.20 --username YYY --password ZZZ --ignore XXX 
> /var/lib/cassandra/data/XXX/ZZZ-f7ebdf0daa3a3062828fddebc109a3b2
> Established connection to initial hosts
> Opening sstables and calculating sections to stream
> Streaming relevant part of 
> /var/lib/cassandra/data/XXX/ZZZ-f7ebdf0daa3a3062828fddebc109a3b2/XXX-ZZZ-ka-6463-Data.db
>  
> /var/lib/cassandra/data/XXX/ZZZ-f7ebdf0daa3a3062828fddebc109a3b2/tink-ZZZ-ka-6464-Data.db
>  to [/X.X.X.33, /X.X.X.113, /X.X.X.32, /X.X.X.20, /X.X.X.122, /X.X.X.176, 
> /X.X.X.143, /X.X.X.172, /X.X.X.50, /X.X.X.51, /X.X.X.52, /X.X.X.71, 
> /X.X.X.53, /X.X.X.54, /X.X.X.47, /X.X.X.31, /X.X.X.8]
> progress: [/X.X.X.113]0:0/2 0  % [/X.X.X.143]0:0/2 0  % [/X.X.X.172]0:0/2 0  
> % [/X.X.X.20]0:0/2 0  % [/X.X.X.71]0:0/2 0  % [/X.X.X.122]0:0/2 0  % 
> [/X.X.X.47]0:0/2 
> [...]

[jira] [Updated] (CASSANDRA-11583) Exception when streaming sstables using `sstableloader`

2016-04-15 Thread Jens Rantil (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11583?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jens Rantil updated CASSANDRA-11583:

Description: 
This bug came out of CASSANDRA-11562.

I have a keyspace snapshotted from a 2.1.11 (DSE 4.7.5-1) node. When I run 
`sstableloader` I get the following output/exception:

{noformat}
# sstableloader --nodes X.X.X.20 --username YYY --password ZZZ --ignore XXX 
/var/lib/cassandra/data/XXX/ZZZ-f7ebdf0daa3a3062828fddebc109a3b2
Established connection to initial hosts
Opening sstables and calculating sections to stream
Streaming relevant part of 
/var/lib/cassandra/data/XXX/ZZZ-f7ebdf0daa3a3062828fddebc109a3b2/XXX-ZZZ-ka-6463-Data.db
 
/var/lib/cassandra/data/XXX/ZZZ-f7ebdf0daa3a3062828fddebc109a3b2/tink-ZZZ-ka-6464-Data.db
 to [/X.X.X.33, /X.X.X.113, /X.X.X.32, /X.X.X.20, /X.X.X.122, /X.X.X.176, 
/X.X.X.143, /X.X.X.172, /X.X.X.50, /X.X.X.51, /X.X.X.52, /X.X.X.71, /X.X.X.53, 
/X.X.X.54, /X.X.X.47, /X.X.X.31, /X.X.X.8]
progress: [/X.X.X.113]0:0/2 0  % [/X.X.X.143]0:0/2 0  % [/X.X.X.172]0:0/2 0  % 
[/X.X.X.20]0:0/2 0  % [/X.X.X.71]0:0/2 0  % [/X.X.X.122]0:0/2 0  % 
[/X.X.X.47]0:0/2 progress: [/X.X.X.113]0:0/2 0  % [/X.X.X.143]0:0/2 0  % 
[/X.X.X.172]0:0/2 0  % [/X.X.X.20]0:1/2 1  % [/X.X.X.71]0:0/2 0  % 
[/X.X.X.122]0:0/2 0  % [/X.X.X.47]0:0/2 progress: [/X.X.X.113]0:0/2 0  % 
[/X.X.X.143]0:0/2 1  % [/X.X.X.172]0:0/2 0  % [/X.X.X.20]0:1/2 1  % 
[/X.X.X.71]0:0/2 0  % [/X.X.X.122]0:0/2 0  % [/X.X.X.47]0:0/2 progress: 
[/X.X.X.113]0:0/2 0  % [/X.X.X.143]0:1/2 1  % [/X.X.X.172]0:0/2 0  % 
[/X.X.X.20]0:1/2 1  % [/X.X.X.71]0:0/2 0  % [/X.X.X.122]0:0/2 0  % 
[/X.X.X.47]0:0/2 progress: [/X.X.X.113]0:0/2 0  % [/X.X.X.143]0:1/2 1  % 
[/X.X.X.172]0:0/2 0  % [/X.X.X.20]0:1/2 1  % [/X.X.X.71]0:1/2 1  % 
[/X.X.X.122]0:0/2 0  % [/X.X.X.47]0:0/2 progress: [/X.X.X.113]0:0/2 0  % 
[/X.X.X.143]0:1/2 1  % [/X.X.X.172]0:0/2 0  % [/X.X.X.20]0:1/2 1  % 
[/X.X.X.71]0:1/2 1  % [/X.X.X.122]0:1/2 1  % [/X.X.X.47]0:0/2 progress: 
[/X.X.X.113]0:0/2 0  % [/X.X.X.143]0:1/2 1  % [/X.X.X.172]0:0/2 0  % 
[/X.X.X.20]0:1/2 1  % [/X.X.X.71]0:1/2 1  % [/X.X.X.122]0:1/2 1  % 
[/X.X.X.47]0:0/2 progress: [/X.X.X.113]0:0/2 7  % [/X.X.X.143]0:1/2 1  % 
[/X.X.X.172]0:0/2 0  % [/X.X.X.20]0:1/2 1  % [/X.X.X.71]0:1/2 1  % 
[/X.X.X.122]0:1/2 1  % [/X.X.X.47]0:0/2 progress: [/X.X.X.113]0:0/2 7  % 
[/X.X.X.143]0:1/2 6  % [/X.X.X.172]0:0/2 0  % [/X.X.X.20]0:1/2 1  % 
[/X.X.X.71]0:1/2 1  % [/X.X.X.122]0:1/2 1  % [/X.X.X.47]0:0/2 progress: 
[/X.X.X.113]0:0/2 12 % [/X.X.X.143]0:1/2 6  % [/X.X.X.172]0:0/2 0  % 
[/X.X.X.20]0:1/2 1  % [/X.X.X.71]0:1/2 1  % [/X.X.X.122]0:1/2 1  % 
[/X.X.X.47]0:0/2 progress: [/X.X.X.113]0:0/2 12 % [/X.X.X.143]0:1/2 11 % 
[/X.X.X.172]0:0/2 0  % [/X.X.X.20]0:1/2 1  % [/X.X.X.71]0:1/2 1  % 
[/X.X.X.122]0:1/2 1  % [/X.X.X.47]0:0/2 progress: [/X.X.X.113]0:0/2 19 % 
[/X.X.X.143]0:1/2 11 % [/X.X.X.172]0:0/2 0  % [/X.X.X.20]0:1/2 1  % 
[/X.X.X.71]0:1/2 1  % [/X.X.X.122]0:1/2 1  % [/X.X.X.47]0:0/2 progress: 
[/X.X.X.113]0:0/2 19 % [/X.X.X.143]0:1/2 15 % [/X.X.X.172]0:0/2 0  % 
[/X.X.X.20]0:1/2 1  % [/X.X.X.71]0:1/2 1  % [/X.X.X.122]0:1/2 1  % 
[/X.X.X.47]0:0/2 progress: [/X.X.X.113]0:0/2 26 % [/X.X.X.143]0:1/2 15 % 
[/X.X.X.172]0:0/2 0  % [/X.X.X.20]0:1/2 1  % [/X.X.X.71]0:1/2 1  % 
[/X.X.X.122]0:1/2 1  % [/X.X.X.47]0:0/2 progress: [/X.X.X.113]0:0/2 26 % 
[/X.X.X.143]0:1/2 20 % [/X.X.X.172]0:0/2 0  % [/X.X.X.20]0:1/2 1  % 
[/X.X.X.71]0:1/2 1  % [/X.X.X.122]0:1/2 1  % [/X.X.X.47]0:0/2 progress: 
[/X.X.X.113]0:0/2 26 % [/X.X.X.143]0:1/2 21 % [/X.X.X.172]0:0/2 0  % 
[/X.X.X.20]0:1/2 1  % [/X.X.X.71]0:1/2 1  % [/X.X.X.122]0:1/2 1  % 
[/X.X.X.47]0:0/2 progress: [/X.X.X.113]0:0/2 26 % [/X.X.X.143]0:1/2 21 % 
[/X.X.X.172]0:0/2 0  % [/X.X.X.20]0:1/2 3  % [/X.X.X.71]0:1/2 1  % 
[/X.X.X.122]0:1/2 1  % [/X.X.X.47]0:0/2 progress: [/X.X.X.113]0:0/2 42 % 
[/X.X.X.143]0:1/2 27 % [/X.X.X.172]0:0/2 0  % [/X.X.X.20]0:1/2 3  % 
[/X.X.X.71]0:1/2 6  % [/X.X.X.122]0:1/2 1  % [/X.X.X.47]0:0/2 
[...]
progress: [/X.X.X.113]0:2/2 100% [/X.X.X.143]0:2/2 100% [/X.X.X.172]0:0/2 78 % 
[/X.X.X.20]0:2/2 100% [/X.X.X.71]0:2/2 100% [/X.X.X.122]0:1/2 97 % 
[/X.X.X.47]0:0/2 progress: [/X.X.X.113]0:2/2 100% [/X.X.X.143]0:2/2 100% 
[/X.X.X.172]0:0/2 78 % [/X.X.X.20]0:2/2 100% [/X.X.X.71]0:2/2 100% 
[/X.X.X.122]0:1/2 97 % [/X.X.X.47]0:0/2 progress: [/X.X.X.113]0:2/2 100% 
[/X.X.X.143]0:2/2 100% [/X.X.X.172]0:0/2 86 % [/X.X.X.20]0:2/2 100% 
[/X.X.X.71]0:2/2 100% [/X.X.X.122]0:2/2 100% [/X.X.X.47]0:0/2 progress: 
[/X.X.X.113]0:2/2 100% [/X.X.X.143]0:2/2 100% [/X.X.X.172]0:0/2 86 % 
[/X.X.X.20]0:2/2 100% [/X.X.X.71]0:2/2 100% [/X.X.X.122]0:2/2 100% 
[/X.X.X.47]0:0/2 progress: [/X.X.X.113]0:2/2 100% [/X.X.X.143]0:2/2 100% 
[/X.X.X.172]0:0/2 86 % [/X.X.X.20]0:2/2 100% [/X.X.X.71]0:2/2 100% 
[/X.X.X.122]0:2/2 100% [/X.X.X.47]0:0/2 progress: [/X.X.X.113]0:2/2 100% 
[/X.X.X.143]0:2/2 100% [/X.X.X.172]0:0/2 86 % [/X.X.X.20]0:2/2 100% 
[/X.X.X.71]0:2/2 100% [/X.

[jira] [Commented] (CASSANDRA-11583) Exception when streaming sstables using `sstableloader`

2016-04-15 Thread Jens Rantil (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11583?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15242689#comment-15242689
 ] 

Jens Rantil commented on CASSANDRA-11583:
-

For the record, this is the assertion that fails: 
https://github.com/apache/cassandra/blob/cassandra-2.1.13/src/java/org/apache/cassandra/io/sstable/SSTableLoader.java#L208
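
Which matches the timing discussed above: the assertion lives in the 
completion callback, so it only runs after every per-host session has 
finished its transfers. A rough sketch of that ordering (hypothetical names, 
not the actual Cassandra classes):

{noformat}
import java.util.concurrent.CompletableFuture;

// Hypothetical sketch: the reference release lives in the success
// callback, which runs only after all per-host sessions have completed.
public class StreamOrderingSketch
{
    static CompletableFuture<Void> streamTo(String host)
    {
        return CompletableFuture.runAsync(
                () -> System.out.println("streamed sstables to /" + host));
    }

    public static void main(String[] args)
    {
        CompletableFuture<Void> allSessions = CompletableFuture.allOf(
                streamTo("X.X.X.20"),
                streamTo("X.X.X.113"),
                streamTo("X.X.X.143"));

        // SSTableLoader's equivalent of this callback is where it releases
        // its sstable references; an over-release there fails an assert
        // even though all bytes were already sent.
        allSessions.whenComplete((ok, err) ->
                System.out.println("releasing references; transfers done"))
                   .join();
    }
}
{noformat}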

> Exception when streaming sstables using `sstableloader`
> ---
>
> Key: CASSANDRA-11583
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11583
> Project: Cassandra
>  Issue Type: Bug
>  Components: Tools
> Environment: $ uname -a
> Linux bigdb-100 3.2.0-99-virtual #139-Ubuntu SMP Mon Feb 1 23:52:21 UTC 2016 
> x86_64 x86_64 x86_64 GNU/Linux
> I am using Datastax Enterprise 4.7.8-1 which is based on 2.1.13.
>Reporter: Jens Rantil
>
> This bug came out of CASSANDRA-11562.
> I have a keyspace snapshotted from a 2.1.11 (DSE 4.7.5-1) node. When I run 
> `sstableloader` I get the following output/exception:
> {noformat}
> # sstableloader --nodes X.X.X.20 --username YYY --password ZZZ --ignore XXX 
> /var/lib/cassandra/data/XXX/ZZZ-f7ebdf0daa3a3062828fddebc109a3b2
> Established connection to initial hosts
> Opening sstables and calculating sections to stream
> Streaming relevant part of 
> /var/lib/cassandra/data/XXX/ZZZ-f7ebdf0daa3a3062828fddebc109a3b2/XXX-ZZZ-ka-6463-Data.db
>  
> /var/lib/cassandra/data/XXX/ZZZ-f7ebdf0daa3a3062828fddebc109a3b2/tink-ZZZ-ka-6464-Data.db
>  to [/X.X.X.33, /X.X.X.113, /X.X.X.32, /X.X.X.20, /X.X.X.122, /X.X.X.176, 
> /X.X.X.143, /X.X.X.172, /X.X.X.50, /X.X.X.51, /X.X.X.52, /X.X.X.71, 
> /X.X.X.53, /X.X.X.54, /X.X.X.47, /X.X.X.31, /X.X.X.8]
> progress: [/X.X.X.113]0:0/2 0  % [/X.X.X.143]0:0/2 0  % [/X.X.X.172]0:0/2 0  
> % [/X.X.X.20]0:0/2 0  % [/X.X.X.71]0:0/2 0  % [/X.X.X.122]0:0/2 0  % 
> [/X.X.X.47]0:0/2 
> [...]
> progress: [/X.X.X.113]0:2/2 100% [/X.X.X.143]0:2/2

[jira] [Updated] (CASSANDRA-11583) Exception when streaming sstables using `sstableloader`

2016-04-15 Thread Jens Rantil (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11583?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jens Rantil updated CASSANDRA-11583:

Description: 
This bug came out of CASSANDRA-11562.

I have a keyspace snapshotted from a 2.1.11 (DSE 4.7.5-1) node. When I run 
`sstableloader` I get the following output/exception:

{noformat}
# sstableloader --nodes X.X.X.20 --username YYY --password ZZZ --ignore XXX 
/var/lib/cassandra/data/XXX/ZZZ-f7ebdf0daa3a3062828fddebc109a3b2
Established connection to initial hosts
Opening sstables and calculating sections to stream
Streaming relevant part of 
/var/lib/cassandra/data/XXX/ZZZ-f7ebdf0daa3a3062828fddebc109a3b2/XXX-ZZZ-ka-6463-Data.db
 
/var/lib/cassandra/data/XXX/ZZZ-f7ebdf0daa3a3062828fddebc109a3b2/tink-ZZZ-ka-6464-Data.db
 to [/X.X.X.33, /X.X.X.113, /X.X.X.32, /X.X.X.20, /X.X.X.122, /X.X.X.176, 
/X.X.X.143, /X.X.X.172, /X.X.X.50, /X.X.X.51, /X.X.X.52, /X.X.X.71, /X.X.X.53, 
/X.X.X.54, /X.X.X.47, /X.X.X.31, /X.X.X.8]
progress: [/X.X.X.113]0:0/2 0  % [/X.X.X.143]0:0/2 0  % [/X.X.X.172]0:0/2 0  % 
[/X.X.X.20]0:0/2 0  % [/X.X.X.71]0:0/2 0  % [/X.X.X.122]0:0/2 0  % 
[/X.X.X.47]0:0/2 
[...]
progress: [/X.X.X.113]0:2/2 100% [/X.X.X.143]0:2/2 100% [/X.X.X.172]0:0/2 78 % 
[/X.X.X.20]0:2/2 100% [/X.X.X.71]0:2/2 100% [/X.X.X.122]0:1/2 97 % 
[/X.X.X.47]0:0/2 progress: [/X.X.X.113]0:2/2 100% [/X.X.X.143]0:2/2 100% 
[/X.X.X.172]0:0/2 78 % [/X.X.X.20]0:2/2 100% [/X.X.X.71]0:2/2 100% 
[/X.X.X.122]0:1/2 97 % [/X.X.X.47]0:0/2 progress: [/X.X.X.113]0:2/2 100% 
[/X.X.X.143]0:2/2 100% [/X.X.X.172]0:0/2 86 % [/X.X.X.20]0:2/2 100% 
[/X.X.X.71]0:2/2 100% [/X.X.X.122]0:2/2 100% [/X.X.X.47]0:0/2 progress: 
[/X.X.X.113]0:2/2 100% [/X.X.X.143]0:2/2 100% [/X.X.X.172]0:0/2 86 % 
[/X.X.X.20]0:2/2 100% [/X.X.X.71]0:2/2 100% [/X.X.X.122]0:2/2 100% 
[/X.X.X.47]0:0/2 progress: [/X.X.X.113]0:2/2 100% [/X.X.X.143]0:2/2 100% 
[/X.X.X.172]0:0/2 86 % [/X.X.X.20]0:2/2 100% [/X.X.X.71]0:2/2 100% 
[/X.X.X.122]0:2/2 100% [/X.X.X.47]0:0/2 progress: [/X.X.X.113]0:2/2 100% 
[/X.X.X.143]0:2/2 100% [/X.X.X.172]0:0/2 86 % [/X.X.X.20]0:2/2 100% 
[/X.X.X.71]0:2/2 100% [/X.

[jira] [Resolved] (CASSANDRA-11562) "Could not retrieve endpoint ranges" for sstableloader

2016-04-15 Thread Jens Rantil (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11562?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jens Rantil resolved CASSANDRA-11562.
-
   Resolution: Duplicate
Reproduced In: 2.1.11

> "Could not retrieve endpoint ranges" for sstableloader
> --
>
> Key: CASSANDRA-11562
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11562
> Project: Cassandra
>  Issue Type: Bug
>  Components: Tools
> Environment: $ uname -a
> Linux bigdb-100 3.2.0-99-virtual #139-Ubuntu SMP Mon Feb 1 23:52:21 UTC 2016 
> x86_64 x86_64 x86_64 GNU/Linux
> I am using Datastax Enterprise 4.7.5-1 which is based on 2.1.11.
>Reporter: Jens Rantil
>
> I am setting up a second datacenter and have a very slow and shaky VPN 
> connection to my old datacenter. To speed up the import process I am trying 
> to seed the new datacenter with a backup (that has been transferred 
> encrypted out of band, outside the VPN). When this is done I will issue a 
> final clusterwide repair.
> However...sstableloader crashes with the following:
> {noformat}
> sstableloader -v --nodes XXX --username MYUSERNAME --password MYPASSWORD 
> --ignore YYY,ZZZ ./backupdir/MYKEYSPACE/MYTABLE/
> Could not retrieve endpoint ranges:
> java.lang.IllegalArgumentException
> java.lang.RuntimeException: Could not retrieve endpoint ranges:
> at 
> org.apache.cassandra.tools.BulkLoader$ExternalClient.init(BulkLoader.java:338)
> at 
> org.apache.cassandra.io.sstable.SSTableLoader.stream(SSTableLoader.java:156)
> at org.apache.cassandra.tools.BulkLoader.main(BulkLoader.java:106)
> Caused by: java.lang.IllegalArgumentException
> at java.nio.Buffer.limit(Buffer.java:267)
> at 
> org.apache.cassandra.utils.ByteBufferUtil.readBytes(ByteBufferUtil.java:543)
> at 
> org.apache.cassandra.serializers.CollectionSerializer.readValue(CollectionSerializer.java:124)
> at 
> org.apache.cassandra.serializers.MapSerializer.deserializeForNativeProtocol(MapSerializer.java:101)
> at 
> org.apache.cassandra.serializers.MapSerializer.deserializeForNativeProtocol(MapSerializer.java:30)
> at 
> org.apache.cassandra.serializers.CollectionSerializer.deserialize(CollectionSerializer.java:50)
> at 
> org.apache.cassandra.db.marshal.AbstractType.compose(AbstractType.java:68)
> at 
> org.apache.cassandra.cql3.UntypedResultSet$Row.getMap(UntypedResultSet.java:287)
> at 
> org.apache.cassandra.config.CFMetaData.fromSchemaNoTriggers(CFMetaData.java:1833)
> at 
> org.apache.cassandra.config.CFMetaData.fromThriftCqlRow(CFMetaData.java:1126)
> at 
> org.apache.cassandra.tools.BulkLoader$ExternalClient.init(BulkLoader.java:330)
> ... 2 more
> {noformat}
> (where YYY,ZZZ are nodes in the old DC)
> The files in ./backupdir/MYKEYSPACE/MYTABLE/ are an exact copy of a snapshot 
> from the older datacenter that has been taken with the exact same version of 
> Datastax Enterprise/Cassandra. The backup was taken 2-3 days ago.
> Question: ./backupdir/MYKEYSPACE/MYTABLE/ contains the non-"*.db" file 
> "manifest.json". Is that an issue?
> My workaround will probably be to copy the snapshot directories out to the 
> nodes of the new datacenter and do a DC-local repair+cleanup.
> Let me know if I can assist in debugging this further.
> References:
>  * This _might_ be a duplicate of 
> https://issues.apache.org/jira/browse/CASSANDRA-10629.
>  * http://stackoverflow.com/q/34757922/260805. 
> http://stackoverflow.com/a/35213418/260805 claims this could happen when 
> dropping a column, but I don't think I've ever dropped any column from this 
> column family.
>  * http://stackoverflow.com/q/28632555/260805
>  * http://stackoverflow.com/q/34487567/260805
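
For context on the proximate failure in the trace above: `Buffer.limit(int)` 
throws IllegalArgumentException when the requested limit exceeds the buffer's 
capacity, which is exactly what happens if a serialized collection value 
declares more bytes than the row actually carries (consistent with the 
dropped-column theory in the references). A minimal standalone reproduction 
of just that low-level behavior, not the Cassandra code path itself:

{noformat}
import java.nio.ByteBuffer;

public class LimitBeyondCapacity
{
    public static void main(String[] args)
    {
        // Pretend a serialized map value declared a 16-byte element...
        int declaredLength = 16;
        // ...but only 8 bytes are actually present in the buffer.
        ByteBuffer value = ByteBuffer.allocate(8);

        // Mimics ByteBufferUtil.readBytes: carve out `declaredLength`
        // bytes by moving the limit. With declaredLength > capacity this
        // throws java.lang.IllegalArgumentException from Buffer.limit,
        // matching the top "Caused by" frame in the stack trace above.
        ByteBuffer slice = value.duplicate();
        slice.limit(slice.position() + declaredLength);  // throws here
        System.out.println(slice);
    }
}
{noformat}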



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (CASSANDRA-11583) Exception when streaming sstables using `sstableloader`

2016-04-15 Thread Jens Rantil (JIRA)
Jens Rantil created CASSANDRA-11583:
---

 Summary: Exception when streaming sstables using `sstableloader`
 Key: CASSANDRA-11583
 URL: https://issues.apache.org/jira/browse/CASSANDRA-11583
 Project: Cassandra
  Issue Type: Bug
  Components: Tools
 Environment: $ uname -a
Linux bigdb-100 3.2.0-99-virtual #139-Ubuntu SMP Mon Feb 1 23:52:21 UTC 2016 
x86_64 x86_64 x86_64 GNU/Linux
I am using Datastax Enterprise 4.7.8-1 which is based on 2.1.13.
Reporter: Jens Rantil


This bug came out of CASSANDRA-11562.

I have a keyspace snapshotted from a 2.1.11 (DSE 4.7.5-1) node. When I run 
`sstableloader` I get the following output/exception:

{noformat}
# sstableloader --nodes X.X.X.20 --username YYY --password ZZZ --ignore XXX 
/var/lib/cassandra/data/XXX/ZZZ-f7ebdf0daa3a3062828fddebc109a3b2
Established connection to initial hosts
Opening sstables and calculating sections to stream
Streaming relevant part of 
/var/lib/cassandra/data/XXX/ZZZ-f7ebdf0daa3a3062828fddebc109a3b2/XXX-ZZZ-ka-6463-Data.db
 
/var/lib/cassandra/data/XXX/ZZZ-f7ebdf0daa3a3062828fddebc109a3b2/tink-ZZZ-ka-6464-Data.db
 to [/X.X.X.33, /X.X.X.113, /X.X.X.32, /X.X.X.20, /X.X.X.122, /X.X.X.176, 
/X.X.X.143, /X.X.X.172, /X.X.X.50, /X.X.X.51, /X.X.X.52, /X.X.X.71, /X.X.X.53, 
/X.X.X.54, /X.X.X.47, /X.X.X.31, /X.X.X.8]
progress: [/X.X.X.113]0:0/2 0  % [/X.X.X.143]0:0/2 0  % [/X.X.X.172]0:0/2 0  % 
[/X.X.X.20]0:0/2 0  % [/X.X.X.71]0:0/2 0  % [/X.X.X.122]0:0/2 0  % 
[/X.X.X.47]0:0/2 
[...]
progress: [/X.X.X.113]0:2/2 100% [/X.X.X.143]0:2/2 100% [/X.X.X.172]0:0/2 78 % 
[/X.X.X.20]0:2/2 100% [/X.X.X.71]0:2/2 100% [/X.X.X.122]0:1/2 97 % 
[/X.X.X.47]0:0/2 progress: [/X.X.X.113]0:2/2 100% [/X.X.X.143]0:2/2 100% 
[/X.X.X.172]0:0/2 78 % [/X.X.X.20]0:2/2 100% [/X.X.X.71]0:2/2 100% 
[/X.X.X.122]0:1/2 97 % [/X.X.X.47]0:0/2 progress: [/X.X.X.113]0:2/2 100% 
[/X.X.X.143]0:2/2 100% [/X.X.X.172]0:0/2 86 % [/X.X.X.20]0:2/2 100% 
[/X.X.X.71]0:2/2 100% [/X.X.X.122]0:2/2 100% [/X.X.X.47]0:0/2 progress: 
[/X.X.X.113]0:2/2 100% [/X.X.X.143]0:2/2 100% [/X.X.X.172]0:0/2 86 % 
[/X.X

[jira] [Commented] (CASSANDRA-11562) "Could not retrieve endpoint ranges" for sstableloader

2016-04-15 Thread Jens Rantil (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11562?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15242686#comment-15242686
 ] 

Jens Rantil commented on CASSANDRA-11562:
-

I can verify that upgrading from DSE 4.7.5-1 to 4.7.8-1 (which bundles 
Cassandra 2.1.13) no longer throws the above exception. I hit another issue, 
but have created a separate ticket for that (CASSANDRA-11583).

> "Could not retrieve endpoint ranges" for sstableloader
> --
>
> Key: CASSANDRA-11562
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11562
> Project: Cassandra
>  Issue Type: Bug
>  Components: Tools
> Environment: $ uname -a
> Linux bigdb-100 3.2.0-99-virtual #139-Ubuntu SMP Mon Feb 1 23:52:21 UTC 2016 
> x86_64 x86_64 x86_64 GNU/Linux
> I am using Datastax Enterprise 4.7.5-1 which is based on 2.1.11.
>Reporter: Jens Rantil
>
> I am setting up a second datacenter and have a very slow and shaky VPN 
> connection to my old datacenter. To speed up the import process I am trying 
> to seed the new datacenter with a backup (that has been transferred 
> encrypted out of band, outside the VPN). When this is done I will issue a 
> final clusterwide repair.
> However...sstableloader crashes with the following:
> {noformat}
> sstableloader -v --nodes XXX --username MYUSERNAME --password MYPASSWORD 
> --ignore YYY,ZZZ ./backupdir/MYKEYSPACE/MYTABLE/
> Could not retrieve endpoint ranges:
> java.lang.IllegalArgumentException
> java.lang.RuntimeException: Could not retrieve endpoint ranges:
> at 
> org.apache.cassandra.tools.BulkLoader$ExternalClient.init(BulkLoader.java:338)
> at 
> org.apache.cassandra.io.sstable.SSTableLoader.stream(SSTableLoader.java:156)
> at org.apache.cassandra.tools.BulkLoader.main(BulkLoader.java:106)
> Caused by: java.lang.IllegalArgumentException
> at java.nio.Buffer.limit(Buffer.java:267)
> at 
> org.apache.cassandra.utils.ByteBufferUtil.readBytes(ByteBufferUtil.java:543)
> at 
> org.apache.cassandra.serializers.CollectionSerializer.readValue(CollectionSerializer.java:124)
> at 
> org.apache.cassandra.serializers.MapSerializer.deserializeForNativeProtocol(MapSerializer.java:101)
> at 
> org.apache.cassandra.serializers.MapSerializer.deserializeForNativeProtocol(MapSerializer.java:30)
> at 
> org.apache.cassandra.serializers.CollectionSerializer.deserialize(CollectionSerializer.java:50)
> at 
> org.apache.cassandra.db.marshal.AbstractType.compose(AbstractType.java:68)
> at 
> org.apache.cassandra.cql3.UntypedResultSet$Row.getMap(UntypedResultSet.java:287)
> at 
> org.apache.cassandra.config.CFMetaData.fromSchemaNoTriggers(CFMetaData.java:1833)
> at 
> org.apache.cassandra.config.CFMetaData.fromThriftCqlRow(CFMetaData.java:1126)
> at 
> org.apache.cassandra.tools.BulkLoader$ExternalClient.init(BulkLoader.java:330)
> ... 2 more
> {noformat}
> (where YYY,ZZZ are nodes in the old DC)
> The files in ./backupdir/MYKEYSPACE/MYTABLE/ are an exact copy of a snapshot 
> from the older datacenter that has been taken with the exact same version of 
> Datastax Enterprise/Cassandra. The backup was taken 2-3 days ago.
> Question: ./backupdir/MYKEYSPACE/MYTABLE/ contains the non-"*.db" file 
> "manifest.json". Is that an issue?
> My workaround will probably be to copy the snapshot directories out to the 
> nodes of the new datacenter and do a DC-local repair+cleanup.
> Let me know if I can assist in debugging this further.
> References:
>  * This _might_ be a duplicate of 
> https://issues.apache.org/jira/browse/CASSANDRA-10629.
>  * http://stackoverflow.com/q/34757922/260805. 
> http://stackoverflow.com/a/35213418/260805 claims this could happen when 
> dropping a column, but I don't think I've ever dropped any column from this 
> column family.
>  * http://stackoverflow.com/q/28632555/260805
>  * http://stackoverflow.com/q/34487567/260805



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-11562) "Could not retrieve endpoint ranges" for sstableloader

2016-04-15 Thread Jens Rantil (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11562?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15242671#comment-15242671
 ] 

Jens Rantil commented on CASSANDRA-11562:
-

> My workaround will probably be to copy the snapshot directories out to the 
> nodes of the new datacenter and do a DC-local repair+cleanup.

For the record, this doesn't work. See 
http://stackoverflow.com/q/36638830/260805.

> "Could not retrieve endpoint ranges" for sstableloader
> --
>
> Key: CASSANDRA-11562
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11562
> Project: Cassandra
>  Issue Type: Bug
>  Components: Tools
> Environment: $ uname -a
> Linux bigdb-100 3.2.0-99-virtual #139-Ubuntu SMP Mon Feb 1 23:52:21 UTC 2016 
> x86_64 x86_64 x86_64 GNU/Linux
> I am using Datastax Enterprise 4.7.5-1 which is based on 2.1.11.
>Reporter: Jens Rantil
>
> I am setting up a second datacenter and have a very slow and shaky VPN 
> connection to my old datacenter. To speed up the import process I am trying 
> to seed the new datacenter with a backup (that has been transferred 
> encrypted out of band, outside the VPN). When this is done I will issue a 
> final clusterwide repair.
> However...sstableloader crashes with the following:
> {noformat}
> sstableloader -v --nodes XXX --username MYUSERNAME --password MYPASSWORD 
> --ignore YYY,ZZZ ./backupdir/MYKEYSPACE/MYTABLE/
> Could not retrieve endpoint ranges:
> java.lang.IllegalArgumentException
> java.lang.RuntimeException: Could not retrieve endpoint ranges:
> at 
> org.apache.cassandra.tools.BulkLoader$ExternalClient.init(BulkLoader.java:338)
> at 
> org.apache.cassandra.io.sstable.SSTableLoader.stream(SSTableLoader.java:156)
> at org.apache.cassandra.tools.BulkLoader.main(BulkLoader.java:106)
> Caused by: java.lang.IllegalArgumentException
> at java.nio.Buffer.limit(Buffer.java:267)
> at 
> org.apache.cassandra.utils.ByteBufferUtil.readBytes(ByteBufferUtil.java:543)
> at 
> org.apache.cassandra.serializers.CollectionSerializer.readValue(CollectionSerializer.java:124)
> at 
> org.apache.cassandra.serializers.MapSerializer.deserializeForNativeProtocol(MapSerializer.java:101)
> at 
> org.apache.cassandra.serializers.MapSerializer.deserializeForNativeProtocol(MapSerializer.java:30)
> at 
> org.apache.cassandra.serializers.CollectionSerializer.deserialize(CollectionSerializer.java:50)
> at 
> org.apache.cassandra.db.marshal.AbstractType.compose(AbstractType.java:68)
> at 
> org.apache.cassandra.cql3.UntypedResultSet$Row.getMap(UntypedResultSet.java:287)
> at 
> org.apache.cassandra.config.CFMetaData.fromSchemaNoTriggers(CFMetaData.java:1833)
> at 
> org.apache.cassandra.config.CFMetaData.fromThriftCqlRow(CFMetaData.java:1126)
> at 
> org.apache.cassandra.tools.BulkLoader$ExternalClient.init(BulkLoader.java:330)
> ... 2 more
> {noformat}
> (where YYY,ZZZ are nodes in the old DC)
> The files in ./backupdir/MYKEYSPACE/MYTABLE/ are an exact copy of a snapshot 
> from the older datacenter that has been taken with the exact same version of 
> Datastax Enterprise/Cassandra. The backup was taken 2-3 days ago.
> Question: ./backupdir/MYKEYSPACE/MYTABLE/ contains the non-"*.db" file 
> "manifest.json". Is that an issue?
> My workaround will probably be to copy the snapshot directories out to the 
> nodes of the new datacenter and do a DC-local repair+cleanup.
> Let me know if I can assist in debugging this further.
> References:
>  * This _might_ be a duplicate of 
> https://issues.apache.org/jira/browse/CASSANDRA-10629.
>  * http://stackoverflow.com/q/34757922/260805. 
> http://stackoverflow.com/a/35213418/260805 claims this could happen when 
> dropping a column, but I don't think I've ever dropped any column from this 
> column family.
>  * http://stackoverflow.com/q/28632555/260805
>  * http://stackoverflow.com/q/34487567/260805



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-11562) "Could not retrieve endpoint ranges" for sstableloader

2016-04-13 Thread Jens Rantil (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11562?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15239488#comment-15239488
 ] 

Jens Rantil commented on CASSANDRA-11562:
-

Tested moving "manifest.json" out of the directory. Still getting the same 
error message.

> "Could not retrieve endpoint ranges" for sstableloader
> --
>
> Key: CASSANDRA-11562
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11562
> Project: Cassandra
>  Issue Type: Bug
>  Components: Tools
> Environment: $ uname -a
> Linux bigdb-100 3.2.0-99-virtual #139-Ubuntu SMP Mon Feb 1 23:52:21 UTC 2016 
> x86_64 x86_64 x86_64 GNU/Linux
> I am using Datastax Enterprise 4.7.5-1 which is based on 2.1.11.
>Reporter: Jens Rantil
>
> I am setting up a second datacenter and have a very slow and shaky VPN 
> connection to my old datacenter. To speed up the import process I am trying 
> to seed the new datacenter with a backup (that has been transferred 
> encrypted out of band, outside the VPN). When this is done I will issue a 
> final clusterwide repair.
> However...sstableloader crashes with the following:
> {noformat}
> sstableloader -v --nodes XXX --username MYUSERNAME --password MYPASSWORD 
> --ignore YYY,ZZZ ./backupdir/MYKEYSPACE/MYTABLE/
> Could not retrieve endpoint ranges:
> java.lang.IllegalArgumentException
> java.lang.RuntimeException: Could not retrieve endpoint ranges:
> at 
> org.apache.cassandra.tools.BulkLoader$ExternalClient.init(BulkLoader.java:338)
> at 
> org.apache.cassandra.io.sstable.SSTableLoader.stream(SSTableLoader.java:156)
> at org.apache.cassandra.tools.BulkLoader.main(BulkLoader.java:106)
> Caused by: java.lang.IllegalArgumentException
> at java.nio.Buffer.limit(Buffer.java:267)
> at 
> org.apache.cassandra.utils.ByteBufferUtil.readBytes(ByteBufferUtil.java:543)
> at 
> org.apache.cassandra.serializers.CollectionSerializer.readValue(CollectionSerializer.java:124)
> at 
> org.apache.cassandra.serializers.MapSerializer.deserializeForNativeProtocol(MapSerializer.java:101)
> at 
> org.apache.cassandra.serializers.MapSerializer.deserializeForNativeProtocol(MapSerializer.java:30)
> at 
> org.apache.cassandra.serializers.CollectionSerializer.deserialize(CollectionSerializer.java:50)
> at 
> org.apache.cassandra.db.marshal.AbstractType.compose(AbstractType.java:68)
> at 
> org.apache.cassandra.cql3.UntypedResultSet$Row.getMap(UntypedResultSet.java:287)
> at 
> org.apache.cassandra.config.CFMetaData.fromSchemaNoTriggers(CFMetaData.java:1833)
> at 
> org.apache.cassandra.config.CFMetaData.fromThriftCqlRow(CFMetaData.java:1126)
> at 
> org.apache.cassandra.tools.BulkLoader$ExternalClient.init(BulkLoader.java:330)
> ... 2 more
> {noformat}
> (where YYY,ZZZ are nodes in the old DC)
> The files in ./backupdir/MYKEYSPACE/MYTABLE/ are an exact copy of a snapshot 
> from the older datacenter that has been taken with the exact same version of 
> Datastax Enterprise/Cassandra. The backup was taken 2-3 days ago.
> Question: ./backupdir/MYKEYSPACE/MYTABLE/ contains the non-"*.db" file  
> "manifest.json". Is that an issue?
> My workaround for my quest will probably be to copy the snapshot directories 
> out to the nodes of the new datacenter and do a DC-local repair+cleanup.
> Let me know if I can assist in debugging this further.
> References:
>  * This _might_ be a duplicate of 
> https://issues.apache.org/jira/browse/CASSANDRA-10629.
>  * http://stackoverflow.com/q/34757922/260805. 
> http://stackoverflow.com/a/35213418/260805 claims this could happen when 
> dropping a column, but I don't think I've ever dropped a column from this 
> column family.
>  * http://stackoverflow.com/q/28632555/260805
>  * http://stackoverflow.com/q/34487567/260805



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (CASSANDRA-11562) "Could not retrieve endpoint ranges" for sstableloader

2016-04-13 Thread Jens Rantil (JIRA)
Jens Rantil created CASSANDRA-11562:
---

 Summary: "Could not retrieve endpoint ranges" for sstableloader
 Key: CASSANDRA-11562
 URL: https://issues.apache.org/jira/browse/CASSANDRA-11562
 Project: Cassandra
  Issue Type: Bug
  Components: Tools
 Environment: $ uname -a
Linux bigdb-100 3.2.0-99-virtual #139-Ubuntu SMP Mon Feb 1 23:52:21 UTC 2016 
x86_64 x86_64 x86_64 GNU/Linux

I am using Datastax Enterprise 4.7.5-1 which is based on 2.1.11.
Reporter: Jens Rantil


I am setting up a second datacenter and have a very slow and shaky VPN 
connection to my old datacenter. To speed up the import process I am trying to 
seed the new datacenter with a backup (that has been transferred encrypted, out 
of band rather than over the VPN). When this is done I will issue a final 
cluster-wide repair.

However...sstableloader crashes with the following:

{noformat}
sstableloader -v --nodes XXX --username MYUSERNAME --password MYPASSWORD 
--ignore YYY,ZZZ ./backupdir/MYKEYSPACE/MYTABLE/
Could not retrieve endpoint ranges:
java.lang.IllegalArgumentException
java.lang.RuntimeException: Could not retrieve endpoint ranges:
at 
org.apache.cassandra.tools.BulkLoader$ExternalClient.init(BulkLoader.java:338)
at 
org.apache.cassandra.io.sstable.SSTableLoader.stream(SSTableLoader.java:156)
at org.apache.cassandra.tools.BulkLoader.main(BulkLoader.java:106)
Caused by: java.lang.IllegalArgumentException
at java.nio.Buffer.limit(Buffer.java:267)
at 
org.apache.cassandra.utils.ByteBufferUtil.readBytes(ByteBufferUtil.java:543)
at 
org.apache.cassandra.serializers.CollectionSerializer.readValue(CollectionSerializer.java:124)
at 
org.apache.cassandra.serializers.MapSerializer.deserializeForNativeProtocol(MapSerializer.java:101)
at 
org.apache.cassandra.serializers.MapSerializer.deserializeForNativeProtocol(MapSerializer.java:30)
at 
org.apache.cassandra.serializers.CollectionSerializer.deserialize(CollectionSerializer.java:50)
at 
org.apache.cassandra.db.marshal.AbstractType.compose(AbstractType.java:68)
at 
org.apache.cassandra.cql3.UntypedResultSet$Row.getMap(UntypedResultSet.java:287)
at 
org.apache.cassandra.config.CFMetaData.fromSchemaNoTriggers(CFMetaData.java:1833)
at 
org.apache.cassandra.config.CFMetaData.fromThriftCqlRow(CFMetaData.java:1126)
at 
org.apache.cassandra.tools.BulkLoader$ExternalClient.init(BulkLoader.java:330)
... 2 more
{noformat}
(where YYY,ZZZ are nodes in the old DC)

The files in ./backupdir/MYKEYSPACE/MYTABLE/ are an exact copy of a snapshot 
from the older datacenter that has been taken with the exact same version of 
Datastax Enterprise/Cassandra. The backup was taken 2-3 days ago.

Question: ./backupdir/MYKEYSPACE/MYTABLE/ contains the non-"*.db" file 
"manifest.json". Is that an issue?

My workaround will probably be to copy the snapshot directories out to the 
nodes of the new datacenter and do a DC-local repair+cleanup (sketched below).

Let me know if I can assist in debugging this further.

References:
 * This _might_ be a duplicate of 
https://issues.apache.org/jira/browse/CASSANDRA-10629.
 * http://stackoverflow.com/q/34757922/260805. 
http://stackoverflow.com/a/35213418/260805 claims this could happen when 
dropping a column, but I don't think I've ever dropped a column from this 
column family.
 * http://stackoverflow.com/q/28632555/260805
 * http://stackoverflow.com/q/34487567/260805
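
For completeness, the workaround mentioned above would look roughly like the 
following. This is only a sketch: the host name, paths and keyspace/table names 
are placeholders, and I haven't verified the exact flags against this DSE 
version.

{noformat}
# copy the snapshotted sstables straight into the new node's data directory
rsync -av ./backupdir/MYKEYSPACE/MYTABLE/ \
    newnode:/var/lib/cassandra/data/MYKEYSPACE/MYTABLE/

# on the new node: load the newly placed sstables without a restart
nodetool refresh MYKEYSPACE MYTABLE

# make the data consistent within the DC, then drop ranges the node doesn't own
nodetool repair -local MYKEYSPACE
nodetool cleanup MYKEYSPACE
{noformat}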



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (CASSANDRA-10767) Checking version of Cassandra command creates `cassandra.logdir_IS_UNDEFINED/`

2015-11-24 Thread Jens Rantil (JIRA)
Jens Rantil created CASSANDRA-10767:
---

 Summary: Checking version of Cassandra command creates 
`cassandra.logdir_IS_UNDEFINED/`
 Key: CASSANDRA-10767
 URL: https://issues.apache.org/jira/browse/CASSANDRA-10767
 Project: Cassandra
  Issue Type: Bug
  Components: Tools
 Environment: $ cassandra -v

2.1.2

MacOSX 10.9.5

$ brew info cassandra
cassandra: stable 2.2.3 (bottled)
Eventually consistent, distributed key-value store
https://cassandra.apache.org
/usr/local/Cellar/cassandra/2.1.2 (3975 files, 92M) *
  Built from source
From: 
https://github.com/Homebrew/homebrew/blob/master/Library/Formula/cassandra.rb
==> Caveats
To have launchd start cassandra at login:
  ln -sfv /usr/local/opt/cassandra/*.plist ~/Library/LaunchAgents
Then to load cassandra now:
  launchctl load ~/Library/LaunchAgents/homebrew.mxcl.cassandra.plist
Reporter: Jens Rantil


When I execute `cassandra -v` on the terminal the directory 
`cassandra.logdir_IS_UNDEFINED` is created in my CWD:

{noformat}
$ tree cassandra.logdir_IS_UNDEFINED
cassandra.logdir_IS_UNDEFINED
└── system.log

0 directories, 1 file
{noformat}

Expected: that neither a log file nor a directory is created when I'm simply 
checking the version of Cassandra. Feels a bit ridiculous.

Additional: Just double-checking, is this a bundling issue that should be 
reported to Homebrew? Probably not, right?
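
A possible workaround until this is fixed (an untested sketch: it assumes the 
stock logback.xml, which expands ${cassandra.logdir}, and a launcher script 
that keeps a pre-set JVM_OPTS; "_IS_UNDEFINED" is logback's marker for an 
unset property):

{noformat}
# point the log directory somewhere harmless just for the version check
JVM_OPTS="-Dcassandra.logdir=/tmp" cassandra -v
{noformat}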



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (CASSANDRA-9589) Unclear difference between "Improvement" and "Wish" in JIRA

2015-06-12 Thread Jens Rantil (JIRA)
Jens Rantil created CASSANDRA-9589:
--

 Summary: Unclear difference between "Improvement" and "Wish" in 
JIRA
 Key: CASSANDRA-9589
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9589
 Project: Cassandra
  Issue Type: Bug
  Components: Documentation & website, Tools
Reporter: Jens Rantil
Priority: Trivial


The JIRA issue types "Wish" and "Improvement" sound the same to me. Every time, 
I have no idea which of them I should choose. Filing this bug to 1) get 
clarity, 2) propose that one of them be merged into the other, or 3) propose 
renaming them to make it clear how they differ.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (CASSANDRA-9588) Make sstableofflinerelevel print stats before relevel

2015-06-12 Thread Jens Rantil (JIRA)
Jens Rantil created CASSANDRA-9588:
--

 Summary: Make sstableofflinerelevel print stats before relevel
 Key: CASSANDRA-9588
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9588
 Project: Cassandra
  Issue Type: Improvement
  Components: Tools
Reporter: Jens Rantil
Priority: Trivial


The current version of sstableofflinerelevel prints the new level hierarchy. 
While "nodetool cfstats ..." will show the current hierarchy, it would be nice 
to have "sstableofflinerelevel" also output the current level histogram, for 
easy comparison of what changes will be made. This matters especially because 
sstableofflinerelevel needs to run when the node isn't running, which means 
"nodetool cfstats ..." isn't available at that point.
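
The closest thing available today, as far as I can tell, is the tool's dry-run 
mode, which prints the proposed new leveling without rewriting anything 
(keyspace/table names below are placeholders):

{noformat}
# run with the node stopped; shows the releveling it *would* apply
sstableofflinerelevel --dry-run mykeyspace mytable
{noformat}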



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-9360) > 100% progress in compaction statistics

2015-05-13 Thread Jens Rantil (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9360?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14541740#comment-14541740
 ] 

Jens Rantil commented on CASSANDRA-9360:


Oh, sorry; I'm running an unaltered cqlsh:

{noformat}
$ cqlsh ...
Connected to XXX cluster at XXX:9160.
[cqlsh 4.1.1 | Cassandra 2.0.11.83 | DSE 4.6.0 | CQL spec 3.1.1 | Thrift 
protocol 19.39.0]
Use HELP for help.
cqlsh>
{noformat}

> > 100% progress in compaction statistics
> 
>
> Key: CASSANDRA-9360
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9360
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core, Tools
>Reporter: Jens Rantil
>Priority: Minor
>
> When issuing `nodetool compactionstats` I am seeing a progress that has 
> surpassed 100%:
> {noformat}
> $ nodetool compactionstats
> pending tasks: 12
>   compaction typekeyspace   table   completed 
>   total  unit  progress
> ...
>Validationmykeyspacemytable   580783515
>434846187 bytes   133.56%
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-9360) > 100% progress in compaction statistics

2015-05-12 Thread Jens Rantil (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-9360?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jens Rantil updated CASSANDRA-9360:
---
Summary: > 100% progress in compaction statistics  (was: > 100% progress in 
compaction statis)

> > 100% progress in compaction statistics
> 
>
> Key: CASSANDRA-9360
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9360
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core, Tools
>Reporter: Jens Rantil
>Priority: Minor
>
> When issuing `nodetool compactionstats` I am seeing a progress that has 
> surpassed 100%:
> {noformat}
> $ nodetool compactionstats
> pending tasks: 12
>   compaction typekeyspace   table   completed 
>   total  unit  progress
> ...
>Validationmykeyspacemytable   580783515
>434846187 bytes   133.56%
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (CASSANDRA-9360) > 100% progress in compaction statis

2015-05-12 Thread Jens Rantil (JIRA)
Jens Rantil created CASSANDRA-9360:
--

 Summary: > 100% progress in compaction statis
 Key: CASSANDRA-9360
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9360
 Project: Cassandra
  Issue Type: Bug
  Components: Core, Tools
Reporter: Jens Rantil
Priority: Minor


When issuing `nodetool compactionstats` I am seeing a progress that has 
surpassed 100%:

{noformat}
$ nodetool compactionstats
pending tasks: 12
  compaction typekeyspace   table   completed   
total  unit  progress
...
   Validationmykeyspacemytable   580783515  
 434846187 bytes   133.56%
{noformat}
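
For reference, the progress column appears to be simply completed/total, so a 
figure above 100% suggests the total estimate was too low rather than the 
counter having overflowed:

{noformat}
580783515 / 434846187 ≈ 1.3356, i.e. 133.56%
{noformat}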



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8574) Gracefully degrade SELECT when there are lots of tombstones

2015-05-08 Thread Jens Rantil (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8574?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14534029#comment-14534029
 ] 

Jens Rantil commented on CASSANDRA-8574:


I apologize for my lack of knowledge - what does "TOE" stand for? "Theory of 
everything" sounds out of place ;)

> Gracefully degrade SELECT when there are lots of tombstones
> ---
>
> Key: CASSANDRA-8574
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8574
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Jens Rantil
> Fix For: 3.x
>
>
> *Background:* There's lots of tooling out there to do BigData analysis on 
> Cassandra clusters. Examples are Spark and Hadoop, which is offered by DSE. 
> The problem with both of these so far, is that a single partition key with 
> too many tombstones can make the query job fail hard.
> The described scenario happens despite the user setting a rather small 
> FetchSize. I assume this is a common scenario if you have larger rows.
> *Proposal:* To allow a CQL SELECT to gracefully degrade to only return a 
> smaller batch of results if there are too many tombstones. The tombstones are 
> ordered according to clustering key and one should be able to page through 
> them. Potentially:
> SELECT * FROM mytable LIMIT 1000 TOMBSTONES;
> would page through maximum 1000 tombstones, _or_ 1000 (CQL) rows.
> I understand that this obviously would degrade performance, but it would at 
> least yield a result.
> *Additional comment:* I haven't dug into Cassandra code, but conceptually I 
> guess this would be doable. Let me know what you think.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (CASSANDRA-8574) Gracefully degrade SELECT when there are lots of tombstones

2015-05-03 Thread Jens Rantil (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8574?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14291297#comment-14291297
 ] 

Jens Rantil edited comment on CASSANDRA-8574 at 5/3/15 7:57 PM:


I guess to do this, one would also have to be able to receive tombstones in the 
result in order to page over them...


was (Author: ztyx):
I guess to do this one one also have to be able to receive tombstones in the 
result to be able to page over them...

> Gracefully degrade SELECT when there are lots of tombstones
> ---
>
> Key: CASSANDRA-8574
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8574
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Jens Rantil
> Fix For: 3.x
>
>
> *Background:* There's lots of tooling out there to do BigData analysis on 
> Cassandra clusters. Examples are Spark and Hadoop, which is offered by DSE. 
> The problem with both of these so far, is that a single partition key with 
> too many tombstones can make the query job fail hard.
> The described scenario happens despite the user setting a rather small 
> FetchSize. I assume this is a common scenario if you have larger rows.
> *Proposal:* To allow a CQL SELECT to gracefully degrade to only return a 
> smaller batch of results if there are too many tombstones. The tombstones are 
> ordered according to clustering key and one should be able to page through 
> them. Potentially:
> SELECT * FROM mytable LIMIT 1000 TOMBSTONES;
> would page through maximum 1000 tombstones, _or_ 1000 (CQL) rows.
> I understand that this obviously would degrade performance, but it would at 
> least yield a result.
> *Additional comment:* I haven't dug into Cassandra code, but conceptually I 
> guess this would be doable. Let me know what you think.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8561) Tombstone log warning does not log partition key

2015-03-24 Thread Jens Rantil (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8561?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14378813#comment-14378813
 ] 

Jens Rantil commented on CASSANDRA-8561:


Is this perhaps something that's worth making configurable in `cassandra.yaml`?
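
For context, the closest existing knobs in cassandra.yaml are the tombstone 
thresholds, which control when the warning/error fires rather than what gets 
logged (the values below are the defaults as I understand them):

{noformat}
tombstone_warn_threshold: 1000
tombstone_failure_threshold: 100000
{noformat}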

> Tombstone log warning does not log partition key
> 
>
> Key: CASSANDRA-8561
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8561
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Core
> Environment: Datastax DSE 4.5
>Reporter: Jens Rantil
>Assignee: Lyuben Todorov
>  Labels: logging
> Fix For: 2.1.4
>
> Attachments: cassandra-2.1-1427196372-8561-v2.diff, 
> cassandra-2.1-8561.diff, cassandra-2.1-head-1427124485-8561.diff, 
> cassandra-trunk-head-1427125869-8561.diff, trunk-1427195046-8561-v2.diff
>
>
> AFAIK, the tombstone warning in system.log does not contain the primary key. 
> See: https://gist.github.com/JensRantil/44204676f4dbea79ea3a
> Including it would help a lot in diagnosing why the (CQL) row has so many 
> tombstones.
> Let me know if I have misunderstood something.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8574) Gracefully degrade SELECT when there are lots of tombstones

2015-03-03 Thread Jens Rantil (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8574?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14345850#comment-14345850
 ] 

Jens Rantil commented on CASSANDRA-8574:


I'd be fine with that solution as long as the underlying problem can be solved 
-- the fact that it's really hard to reliably page through results that have a 
large number of tombstones.

> Gracefully degrade SELECT when there are lots of tombstones
> ---
>
> Key: CASSANDRA-8574
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8574
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Jens Rantil
> Fix For: 3.0
>
>
> *Background:* There's lots of tooling out there to do BigData analysis on 
> Cassandra clusters. Examples are Spark and Hadoop, which is offered by DSE. 
> The problem with both of these so far, is that a single partition key with 
> too many tombstones can make the query job fail hard.
> The described scenario happens despite the user setting a rather small 
> FetchSize. I assume this is a common scenario if you have larger rows.
> *Proposal:* To allow a CQL SELECT to gracefully degrade to only return a 
> smaller batch of results if there are too many tombstones. The tombstones are 
> ordered according to clustering key and one should be able to page through 
> them. Potentially:
> SELECT * FROM mytable LIMIT 1000 TOMBSTONES;
> would page through maximum 1000 tombstones, _or_ 1000 (CQL) rows.
> I understand that this obviously would degrade performance, but it would at 
> least yield a result.
> *Additional comment:* I haven't dug into Cassandra code, but conceptually I 
> guess this would be doable. Let me know what you think.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8574) Gracefully degrade SELECT when there are lots of tombstones

2015-01-25 Thread Jens Rantil (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8574?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14291297#comment-14291297
 ] 

Jens Rantil commented on CASSANDRA-8574:


I guess to do this, one would also have to be able to receive tombstones in the 
result in order to page over them...

> Gracefully degrade SELECT when there are lots of tombstones
> ---
>
> Key: CASSANDRA-8574
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8574
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Jens Rantil
> Fix For: 3.0
>
>
> *Background:* There's lots of tooling out there to do BigData analysis on 
> Cassandra clusters. Examples are Spark and Hadoop, which is offered by DSE. 
> The problem with both of these so far, is that a single partition key with 
> too many tombstones can make the query job fail hard.
> The described scenario happens despite the user setting a rather small 
> FetchSize. I assume this is a common scenario if you have larger rows.
> *Proposal:* To allow a CQL SELECT to gracefully degrade to only return a 
> smaller batch of results if there are too many tombstones. The tombstones are 
> ordered according to clustering key and one should be able to page through 
> them. Potentially:
> SELECT * FROM mytable LIMIT 1000 TOMBSTONES;
> would page through maximum 1000 tombstones, _or_ 1000 (CQL) rows.
> I understand that this obviously would degrade performance, but it would at 
> least yield a result.
> *Additional comment:* I haven't dug into Cassandra code, but conceptually I 
> guess this would be doable. Let me know what you think.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-8574) Gracefully degrade SELECT when there are lots of tombstones

2015-01-25 Thread Jens Rantil (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8574?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jens Rantil updated CASSANDRA-8574:
---
Description: 
*Background:* There's lots of tooling out there to do BigData analysis on 
Cassandra clusters. Examples are Spark and Hadoop, which is offered by DSE. The 
problem with both of these so far, is that a single partition key with too many 
tombstones can make the query job fail hard.

The described scenario happens despite the user setting a rather small 
FetchSize. I assume this is a common scenario if you have larger rows.

*Proposal:* To allow a CQL SELECT to gracefully degrade to only return a 
smaller batch of results if there are too many tombstones. The tombstones are 
ordered according to clustering key and one should be able to page through 
them. Potentially:

SELECT * FROM mytable LIMIT 1000 TOMBSTONES;

would page through maximum 1000 tombstones, _or_ 1000 (CQL) rows.

I understand that this obviously would degrade performance, but it would at 
least yield a result.

*Additional comment:* I haven't dug into Cassandra code, but conceptually I 
guess this would be doable. Let me know what you think.

  was:
*Background:* There's lots of tooling out there to do BigData analysis on 
Cassandra clusters. Examples are Spark and Hadoop, which is offered by DSE. The 
problem with both of these so far, is that a single partition key with too many 
tombstones can make the query job fail hard.

The described scenario happens despite the user setting a rather small 
FetchSize. I assume this is a common scenario if you have larger rows.

*Proposal:* To allow a CQL SELECT to gracefully degrade to only return a 
smaller batch of results if there are too many tombstones. The tombstones are 
ordered according to clustering key and one should be able to page through 
them. Potentially:

SELECT * FROM mytable LIMIT 1000 TOMBSTONES;

would page through maximum 1000 tombstones, _or_ 1000 (CQL) rows.

I understand that this obviously would degrade performance, but it would at 
least yield a result.

Additional comment: I haven't dug into Cassandra code, but conceptually I guess 
this would be doable. Let me know what you think.


> Gracefully degrade SELECT when there are lots of tombstones
> ---
>
> Key: CASSANDRA-8574
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8574
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Jens Rantil
> Fix For: 3.0
>
>
> *Background:* There's lots of tooling out there to do BigData analysis on 
> Cassandra clusters. Examples are Spark and Hadoop, which is offered by DSE. 
> The problem with both of these so far, is that a single partition key with 
> too many tombstones can make the query job fail hard.
> The described scenario happens despite the user setting a rather small 
> FetchSize. I assume this is a common scenario if you have larger rows.
> *Proposal:* To allow a CQL SELECT to gracefully degrade to only return a 
> smaller batch of results if there are too many tombstones. The tombstones are 
> ordered according to clustering key and one should be able to page through 
> them. Potentially:
> SELECT * FROM mytable LIMIT 1000 TOMBSTONES;
> would page through maximum 1000 tombstones, _or_ 1000 (CQL) rows.
> I understand that this obviously would degrade performance, but it would at 
> least yield a result.
> *Additional comment:* I haven't dug into Cassandra code, but conceptually I 
> guess this would be doable. Let me know what you think.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-8574) Gracefully degrade SELECT when there are lots of tombstones

2015-01-25 Thread Jens Rantil (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8574?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jens Rantil updated CASSANDRA-8574:
---
Description: 
*Background:* There's lots of tooling out there to do BigData analysis on 
Cassandra clusters. Examples are Spark and Hadoop, which is offered by DSE. The 
problem with both of these so far, is that a single partition key with too many 
tombstones can make the query job fail hard.

The described scenario happens despite the user setting a rather small 
FetchSize. I assume this is a common scenario if you have larger rows.

*Proposal:* To allow a CQL SELECT to gracefully degrade to only return a 
smaller batch of results if there are too many tombstones. The tombstones are 
ordered according to clustering key and one should be able to page through 
them. Potentially:

SELECT * FROM mytable LIMIT 1000 TOMBSTONES;

would page through maximum 1000 tombstones, _or_ 1000 (CQL) rows.

I understand that this obviously would degrade performance, but it would at 
least yield a result.

Additional comment: I haven't dug into Cassandra code, but conceptually I guess 
this would be doable. Let me know what you think.

  was:
*Background:* There's lots of tooling out there to do BigData analysis on 
Cassandra clusters. Examples are Spark and Hadoop, which is offered by DSE. The 
problem with both of these so far, is that a single partition key with too many 
tombstones can make the query job fail hard.

The describe scenario happens despite the user setting a rather small PageSize. 
I assume this is a common scenario if you have a larger rows.

*Proposal:* To allow a CQL SELECT to gracefully degrade to only return a 
smaller batch of results if there are too many tombstones. The tombstones are 
ordered according to clustering key and one should be able to page through 
them. Potentially:

SELECT * FROM mytable LIMIT 1000 TOMBSTONES;

would page through maximum 1000 tombstones, _or_ 1000 (CQL) rows.

I understand that this obviously would degrade performance, but it would at 
least yield a result.

Additional comment: I haven't dug into Cassandra code, but conceptually I guess 
this would be doable. Let me know what you think.


> Gracefully degrade SELECT when there are lots of tombstones
> ---
>
> Key: CASSANDRA-8574
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8574
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Jens Rantil
> Fix For: 3.0
>
>
> *Background:* There's lots of tooling out there to do BigData analysis on 
> Cassandra clusters. Examples are Spark and Hadoop, which is offered by DSE. 
> The problem with both of these so far, is that a single partition key with 
> too many tombstones can make the query job fail hard.
> The described scenario happens despite the user setting a rather small 
> FetchSize. I assume this is a common scenario if you have larger rows.
> *Proposal:* To allow a CQL SELECT to gracefully degrade to only return a 
> smaller batch of results if there are too many tombstones. The tombstones are 
> ordered according to clustering key and one should be able to page through 
> them. Potentially:
> SELECT * FROM mytable LIMIT 1000 TOMBSTONES;
> would page through maximum 1000 tombstones, _or_ 1000 (CQL) rows.
> I understand that this obviously would degrade performance, but it would at 
> least yield a result.
> Additional comment: I haven't dug into Cassandra code, but conceptually I 
> guess this would be doable. Let me know what you think.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8561) Tombstone log warning does not log partition key

2015-01-25 Thread Jens Rantil (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8561?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14291295#comment-14291295
 ] 

Jens Rantil commented on CASSANDRA-8561:


Robert: Exposing the number of shadowed columns sounds like a nice thing to 
have in tracing at least, but I suggest you file that as a separate issue to 
keep this issue focused on one thing.

> Tombstone log warning does not log partition key
> 
>
> Key: CASSANDRA-8561
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8561
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Core
> Environment: Datastax DSE 4.5
>Reporter: Jens Rantil
>  Labels: logging
> Fix For: 2.1.3, 2.0.13
>
>
> AFAIK, the tombstone warning in system.log does not contain the primary key. 
> See: https://gist.github.com/JensRantil/44204676f4dbea79ea3a
> Including it would help a lot in diagnosing why the (CQL) row has so many 
> tombstones.
> Let me know if I have misunderstood something.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8128) Exception when executing UPSERT

2015-01-10 Thread Jens Rantil (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8128?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14272696#comment-14272696
 ] 

Jens Rantil commented on CASSANDRA-8128:


Thanks for your feedback, Sylvain. Since I can't reproduce this fully, I will 
not pursue reporting it to spring-data-cassandra or Datastax.

> Exception when executing UPSERT
> ---
>
> Key: CASSANDRA-8128
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8128
> Project: Cassandra
>  Issue Type: Bug
>  Components: API
>Reporter: Jens Rantil
>Priority: Critical
>  Labels: cql3
> Fix For: 2.0.12
>
>
> I am putting a bunch of (CQL) rows into Datastax DSE 4.5.1-1. Each upsert is 
> for a single partition key with up to ~3000 clustering keys. I understand 
> that large upserts aren't recommended, but I wouldn't expect to be getting 
> the following exception anyway:
> {noformat}
> ERROR [Native-Transport-Requests:4205136] 2014-10-16 12:00:38,668 
> ErrorMessage.java (line 222) Unexpected exception during request
> java.lang.IndexOutOfBoundsException: Index: 1749, Size: 1749
> at java.util.ArrayList.rangeCheck(ArrayList.java:635)
> at java.util.ArrayList.get(ArrayList.java:411)
> at 
> org.apache.cassandra.cql3.Constants$Marker.bindAndGet(Constants.java:278)
> at 
> org.apache.cassandra.cql3.Constants$Setter.execute(Constants.java:307)
> at 
> org.apache.cassandra.cql3.statements.UpdateStatement.addUpdateForKey(UpdateStatement.java:99)
> at 
> org.apache.cassandra.cql3.statements.BatchStatement.addStatementMutations(BatchStatement.java:200)
> at 
> org.apache.cassandra.cql3.statements.BatchStatement.getMutations(BatchStatement.java:145)
> at 
> org.apache.cassandra.cql3.statements.BatchStatement.execute(BatchStatement.java:251)
> at 
> org.apache.cassandra.cql3.statements.BatchStatement.execute(BatchStatement.java:232)
> at 
> org.apache.cassandra.cql3.QueryProcessor.processStatement(QueryProcessor.java:158)
> at 
> com.datastax.bdp.cassandra.cql3.DseQueryHandler.statementExecution(DseQueryHandler.java:207)
> at 
> com.datastax.bdp.cassandra.cql3.DseQueryHandler.process(DseQueryHandler.java:86)
> at 
> org.apache.cassandra.transport.messages.QueryMessage.execute(QueryMessage.java:119)
> at 
> org.apache.cassandra.transport.Message$Dispatcher.messageReceived(Message.java:304)
> at 
> org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)
> at 
> org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
> at 
> org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791)
> at 
> org.jboss.netty.handler.execution.ChannelUpstreamEventRunnable.doRun(ChannelUpstreamEventRunnable.java:43)
> at 
> org.jboss.netty.handler.execution.ChannelEventRunnable.run(ChannelEventRunnable.java:67)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> at java.lang.Thread.run(Thread.java:745)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (CASSANDRA-8574) Gracefully degrade when there are lots of tombstones

2015-01-07 Thread Jens Rantil (JIRA)
Jens Rantil created CASSANDRA-8574:
--

 Summary: Gracefully degrade when there are lots of tombstones
 Key: CASSANDRA-8574
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8574
 Project: Cassandra
  Issue Type: Improvement
Reporter: Jens Rantil


*Background:* There's lots of tooling out there to do BigData analysis on 
Cassandra clusters. Examples are Spark and Hadoop, which is offered by DSE. The 
problem with both of these so far, is that a single partition key with too many 
tombstones can make the query job fail hard.

The described scenario happens despite the user setting a rather small 
PageSize. I assume this is a common scenario if you have larger rows.

*Proposal:* To allow a CQL SELECT to gracefully degrade to only return a 
smaller batch of results if there are too many tombstones. The tombstones are 
ordered according to clustering key and one should be able to page through 
them. Potentially:

SELECT * FROM mytable LIMIT 1000 TOMBSTONES;

would page through maximum 1000 tombstones, _or_ 1000 (CQL) rows.

I understand that this obviously would degrade performance, but it would at 
least yield a result.

Additional comment: I haven't dug into Cassandra code, but conceptually I guess 
this would be doable. Let me know what you think.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-8574) Gracefully degrade SELECT when there are lots of tombstones

2015-01-07 Thread Jens Rantil (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8574?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jens Rantil updated CASSANDRA-8574:
---
Summary: Gracefully degrade SELECT when there are lots of tombstones  (was: 
Gracefully degrade when there are lots of tombstones)

> Gracefully degrade SELECT when there are lots of tombstones
> ---
>
> Key: CASSANDRA-8574
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8574
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Jens Rantil
>
> *Background:* There's lots of tooling out there to do BigData analysis on 
> Cassandra clusters. Examples are Spark and Hadoop, which is offered by DSE. 
> The problem with both of these so far, is that a single partition key with 
> too many tombstones can make the query job fail hard.
> The described scenario happens despite the user setting a rather small 
> PageSize. I assume this is a common scenario if you have larger rows.
> *Proposal:* To allow a CQL SELECT to gracefully degrade to only return a 
> smaller batch of results if there are too many tombstones. The tombstones are 
> ordered according to clustering key and one should be able to page through 
> them. Potentially:
> SELECT * FROM mytable LIMIT 1000 TOMBSTONES;
> would page through maximum 1000 tombstones, _or_ 1000 (CQL) rows.
> I understand that this obviously would degrade performance, but it would at 
> least yield a result.
> Additional comment: I haven't dug into Cassandra code, but conceptually I 
> guess this would be doable. Let me know what you think.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8573) Lack of compaction tooling for LeveledCompactionStrategy

2015-01-07 Thread Jens Rantil (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8573?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14267814#comment-14267814
 ] 

Jens Rantil commented on CASSANDRA-8573:


Good times. I'll be closing this then!

> Lack of compaction tooling for LeveledCompactionStrategy
> 
>
> Key: CASSANDRA-8573
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8573
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Jens Rantil
>
> This is a highly frustration-driven ticket. Apologies for the roughness in 
> tone ;-)
> *Background:* I happen to have a partition key with lots of tombstones. 
> Sadly, I happen to run LeveledCompactionStrategy (LCS). Yes, it's probably my 
> mistake to have put them there, but running into tombstone issues seems to be 
> common for Cassandra, so I don't think this ticket can be discarded as simply 
> user error. In fact, I believe this could happen to the best of us. And when 
> it does, there should be a quick way of correcting this.
> *Problem:* How does one handle this? Well, for DTCS one could issue a 
> compaction using `nodetool compact`, or one could use the 
> forceUserDefinedCompaction MBean. Neither of these works for LCS (shall I 
> also say DTCS?).
> *Workaround:* The only options AFAIK are
>  1. to lower "gc_grace_seconds" and "wait it out" until the Cassandra node(s) 
> have garbage collected the sstables. This can take days.
>  2. possibly lower `tombstone_threshold` to something tiny, optionally 
> lowering `tombstone_compaction_interval` (for recent deletes). This has the 
> implication that nodes might start garbage collecting a ton of unrelated 
> stuff.
>  3. variations of "delete some or all your sstables" and run a full repair. 
> Takes ages.
> *Proposed solution:* Either
>  - Make forceUserDefinedCompaction support LCS, or create an equivalent 
> endpoint.
>  - make something like `nodetool compact` work with LCS.
> *Additional comments:* I read somewhere that someone proposed making LCS the 
> default compaction strategy. Before this ticket is fixed, I don't see that as 
> an option.
> Let me know what you think (or close if not relevant).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-8573) Lack of compaction tooling for LeveledCompactionStrategy

2015-01-07 Thread Jens Rantil (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8573?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jens Rantil updated CASSANDRA-8573:
---
Description: 
This is a highly frustration-driven ticket. Apologies for the roughness in tone ;-)

*Background:* I happen to have a partition key with lots of tombstones. Sadly, 
I happen to run LeveledCompactionStrategy (LCS). Yes, it's probably my mistake 
to have put them there, but running into tombstone issues seems to be common 
for Cassandra, so I don't think this ticket can be discarded as simply user 
error. In fact, I believe this could happen to the best of us. And when it 
does, there should be a quick way of correcting this.

*Problem:* How does one handle this? Well, for DTCS one could issue a 
compaction using `nodetool compact`, or one could use the 
forceUserDefinedCompaction MBean. Neither of these works for LCS (shall I also 
say DTCS?).

*Workaround:* The only options AFAIK are

 1. to lower "gc_grace_seconds" and "wait it out" until the Cassandra node(s) 
have garbage collected the sstables. This can take days.
 2. possibly lower `tombstone_threshold` to something tiny, optionally lowering 
`tombstone_compaction_interval` (for recent deletes). This has the implication 
that nodes might start garbage collecting a ton of unrelated stuff.
 3. variations of "delete some or all your sstables" and run a full repair. 
Takes ages.

*Proposed solution:* Either
 - Make forceUserDefinedCompaction support LCS, or create an equivalent endpoint.
 - make something like `nodetool compact` work with LCS.

*Additional comments:* I read somewhere that someone proposed making LCS the 
default compaction strategy. Before this ticket is fixed, I don't see that as 
an option.

Let me know what you think (or close if not relevant).

  was:
This is a highly frustration-driven ticket. Apologies for the roughness in tone ;-)

*Background:* I happen to have a partition key with lots of tombstones. Sadly, 
I happen to run LeveledCompactionStrategy (LCS). Yes, it's probably my mistake 
to have put them there, but running into tombstone issues seems to be common 
for Cassandra, so I don't think this ticket can be discarded as simply user 
error. In fact, I believe this could happen to the best of us. And when it 
does, there should be a quick way of correcting this.

*Problem:* How does one handle this? Well, for DTCS one could issue a 
compaction using `nodetool compact`, or one could use the 
forceUserDefinedCompaction MBean. Neither of these works for LCS (shall I also 
say DTCS?).

*Workaround:* The only options AFAIK are

 1. to lower "gc_grace_seconds" and "wait it out" until the Cassandra node(s) 
have garbage collected the sstables. This can take days.
 2. possibly lower `tombstone_threshold` to something tiny, optionally lowering 
`tombstone_compaction_interval` (for recent deletes). This has the implication 
that nodes might start garbage collecting a ton of unrelated stuff.
 3. variations of "delete some or all your sstables" and run a full repair. 
Takes ages.

*Proposed solution:* Make forceUserDefinedCompaction support LCS, or create an 
equivalent endpoint.

*Additional comments:* I read somewhere that someone proposed making LCS the 
default compaction strategy. Before this ticket is fixed, I don't see that as 
an option.

Let me know what you think (or close if not relevant).


> Lack of compaction tooling for LeveledCompactionStrategy
> 
>
> Key: CASSANDRA-8573
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8573
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Jens Rantil
>
> This is a highly frustration-driven ticket. Apologies for the roughness in 
> tone ;-)
> *Background:* I happen to have a partition key with lots of tombstones. 
> Sadly, I happen to run LeveledCompactionStrategy (LCS). Yes, it's probably my 
> mistake to have put them there, but running into tombstone issues seems to be 
> common for Cassandra, so I don't think this ticket can be discarded as simply 
> user error. In fact, I believe this could happen to the best of us. And when 
> it does, there should be a quick way of correcting this.
> *Problem:* How does one handle this? Well, for DTCS one could issue a 
> compaction using `nodetool compact`, or one could use the 
> forceUserDefinedCompaction MBean. Neither of these works for LCS (shall I 
> also say DTCS?).
> *Workaround:* The only options AFAIK are
>  1. to lower "gc_grace_seconds" and "wait it out" until the Cassandra node(s) 
> have garbage collected the sstables. This can take days.
>  2. possibly lower `tombstone_threshold` to something tiny, optionally 
> lowering `tombstone_compaction_interval` (for recent deletes). This has the 
> implication that nodes might start garbage collecting a ton of unrelated 

[jira] [Created] (CASSANDRA-8573) Lack of compaction tooling for LeveledCompactionStrategy

2015-01-07 Thread Jens Rantil (JIRA)
Jens Rantil created CASSANDRA-8573:
--

 Summary: Lack of compaction tooling for LeveledCompactionStrategy
 Key: CASSANDRA-8573
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8573
 Project: Cassandra
  Issue Type: Bug
Reporter: Jens Rantil


This is a highly frustration-driven ticket. Apologies for the roughness in tone ;-)

*Background:* I happen to have a partition key with lots of tombstones. Sadly, 
I happen to run LeveledCompactionStrategy (LCS). Yes, it's probably my mistake 
to have put them there, but running into tombstone issues seems to be common 
for Cassandra, so I don't think this ticket can be discarded as simply user 
error. In fact, I believe this could happen to the best of us. And when it 
does, there should be a quick way of correcting this.

*Problem:* How does one handle this? Well, for DTCS one could issue a 
compaction using `nodetool compact`, or one could use the 
forceUserDefinedCompaction MBean. Neither of these works for LCS (shall I also 
say DTCS?).

*Workaround:* The only options AFAIK are (1 and 2 are sketched below)

 1. to lower "gc_grace_seconds" and "wait it out" until the Cassandra node(s) 
have garbage collected the sstables. This can take days.
 2. possibly lower `tombstone_threshold` to something tiny, optionally lowering 
`tombstone_compaction_interval` (for recent deletes). This has the implication 
that nodes might start garbage collecting a ton of unrelated stuff.
 3. variations of "delete some or all your sstables" and run a full repair. 
Takes ages.

*Proposed solution:* Make forceUserDefinedCompaction support LCS, or create an 
equivalent endpoint.

*Additional comments:* I read somewhere that someone proposed making LCS the 
default compaction strategy. Before this ticket is fixed, I don't see that as 
an option.

Let me know what you think (or close if not relevant).
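
For concreteness, workarounds 1 and 2 could look something like the following. 
A sketch only: the keyspace/table names and the numbers are illustrative, not 
recommendations.

{noformat}
-- workaround 1: shrink gc_grace_seconds so tombstones become purgeable sooner
ALTER TABLE mykeyspace.mytable WITH gc_grace_seconds = 3600;

-- workaround 2: make single-sstable tombstone compactions much more eager
ALTER TABLE mykeyspace.mytable WITH compaction = {
    'class': 'LeveledCompactionStrategy',
    'tombstone_threshold': '0.05',
    'tombstone_compaction_interval': '300'};
{noformat}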



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8561) Tombstone log warning does not log partition key

2015-01-05 Thread Jens Rantil (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8561?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14264580#comment-14264580
 ] 

Jens Rantil commented on CASSANDRA-8561:


I updated the gist to also include the ERROR log line.

> Tombstone log warning does not log partition key
> 
>
> Key: CASSANDRA-8561
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8561
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Core
> Environment: Datastax DSE 4.5
>Reporter: Jens Rantil
>  Labels: logging
>
> AFAIK, the tombstone warning in system.log does not contain the primary key. 
> See: https://gist.github.com/JensRantil/44204676f4dbea79ea3a
> Including it would help a lot in diagnosing why the (CQL) row has so many 
> tombstones.
> Let me know if I have misunderstood something.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (CASSANDRA-8561) Tombstone log warning does not log partition key

2015-01-05 Thread Jens Rantil (JIRA)
Jens Rantil created CASSANDRA-8561:
--

 Summary: Tombstone log warning does not log partition key
 Key: CASSANDRA-8561
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8561
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
 Environment: Datastax DSE 4.5
Reporter: Jens Rantil


AFAIK, the tombstone warning in system.log does not contain the primary key. 
See: https://gist.github.com/JensRantil/44204676f4dbea79ea3a

Including it would help a lot in diagnosing why the (CQL) row has so many 
tombstones.

Let me know if I have misunderstood something.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8397) Support UPDATE with IN requirement for clustering key

2014-12-09 Thread Jens Rantil (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8397?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14239211#comment-14239211
 ] 

Jens Rantil commented on CASSANDRA-8397:


> My plan is to reuse the restriction code of SelectStatement for update and 
> delete statements. If it works well it would make it easy to support for 
> delete and update all the possible restriction cases that are currently 
> supported by the select.

Just curious, performance-wise: would a multi-update require a single commitlog 
entry or multiple commitlog entries?

> Support UPDATE with IN requirement for clustering key
> -
>
> Key: CASSANDRA-8397
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8397
> Project: Cassandra
>  Issue Type: Wish
>Reporter: Jens Rantil
>Assignee: Benjamin Lerer
>Priority: Minor
>
> {noformat}
> CREATE TABLE events (
> userid uuid,
> id timeuuid,
> content text,
> type text,
> PRIMARY KEY (userid, id)
> )
> # Add data
> cqlsh:mykeyspace> UPDATE events SET content='Hello' WHERE 
> userid=57b47f85-56c4-4968-83cf-4c4e533944e9 AND id IN 
> (046e9da0-7945-11e4-a76f-770773bbbf7e, 046e0160-7945-11e4-a76f-770773bbbf7e);
> code=2200 [Invalid query] message="Invalid operator IN for PRIMARY KEY part 
> id"
> {noformat}
> I was surprised this doesn't work.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-8397) Support UPDATE with IN requirement for clustering key

2014-12-01 Thread Jens Rantil (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8397?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jens Rantil updated CASSANDRA-8397:
---
Description: 
{noformat}
CREATE TABLE events (
userid uuid,
id timeuuid,
content text,
type text,
PRIMARY KEY (userid, id)
)

# Add data

cqlsh:mykeyspace> UPDATE events SET content='Hello' WHERE 
userid=57b47f85-56c4-4968-83cf-4c4e533944e9 AND id IN 
(046e9da0-7945-11e4-a76f-770773bbbf7e, 046e0160-7945-11e4-a76f-770773bbbf7e);
code=2200 [Invalid query] message="Invalid operator IN for PRIMARY KEY part id"
{noformat}

I was surprised this doesn't work.

  was:
{noformat}
CREATE TABLE tink.events (
userid uuid,
id timeuuid,
content text,
type text,
PRIMARY KEY (userid, id)
)

# Add data

cqlsh:tink> UPDATE events SET content='Hello' WHERE 
userid=57b47f85-56c4-4968-83cf-4c4e533944e9 AND id IN 
(046e9da0-7945-11e4-a76f-770773bbbf7e, 046e0160-7945-11e4-a76f-770773bbbf7e);
code=2200 [Invalid query] message="Invalid operator IN for PRIMARY KEY part id"
{noformat}

I was surprised this doesn't work.


> Support UPDATE with IN requirement for clustering key
> -
>
> Key: CASSANDRA-8397
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8397
> Project: Cassandra
>  Issue Type: Wish
>Reporter: Jens Rantil
>Priority: Minor
>
> {noformat}
> CREATE TABLE events (
> userid uuid,
> id timeuuid,
> content text,
> type text,
> PRIMARY KEY (userid, id)
> )
> # Add data
> cqlsh:mykeyspace> UPDATE events SET content='Hello' WHERE 
> userid=57b47f85-56c4-4968-83cf-4c4e533944e9 AND id IN 
> (046e9da0-7945-11e4-a76f-770773bbbf7e, 046e0160-7945-11e4-a76f-770773bbbf7e);
> code=2200 [Invalid query] message="Invalid operator IN for PRIMARY KEY part 
> id"
> {noformat}
> I was surprised this doesn't work.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (CASSANDRA-8397) Support UPDATE with IN for clustering key

2014-12-01 Thread Jens Rantil (JIRA)
Jens Rantil created CASSANDRA-8397:
--

 Summary: Support UPDATE with IN for clustering key
 Key: CASSANDRA-8397
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8397
 Project: Cassandra
  Issue Type: Wish
Reporter: Jens Rantil
Priority: Minor


{noformat}
CREATE TABLE tink.events (
userid uuid,
id timeuuid,
content text,
type text,
PRIMARY KEY (userid, id)
)

# Add data

cqlsh:tink> UPDATE events SET content='Hello' WHERE 
userid=57b47f85-56c4-4968-83cf-4c4e533944e9 AND id IN 
(046e9da0-7945-11e4-a76f-770773bbbf7e, 046e0160-7945-11e4-a76f-770773bbbf7e);
code=2200 [Invalid query] message="Invalid operator IN for PRIMARY KEY part id"
{noformat}

I was surprised this doesn't work.
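
In the meantime, the same effect can be had with one statement per clustering 
key, e.g. grouped into a batch (a sketch reusing the ids from above; since all 
statements hit the same partition, even an unlogged batch applies atomically):

{noformat}
BEGIN UNLOGGED BATCH
  UPDATE events SET content='Hello' WHERE 
userid=57b47f85-56c4-4968-83cf-4c4e533944e9 AND 
id=046e9da0-7945-11e4-a76f-770773bbbf7e;
  UPDATE events SET content='Hello' WHERE 
userid=57b47f85-56c4-4968-83cf-4c4e533944e9 AND 
id=046e0160-7945-11e4-a76f-770773bbbf7e;
APPLY BATCH;
{noformat}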



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-8397) Support UPDATE with IN requirement for clustering key

2014-12-01 Thread Jens Rantil (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8397?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jens Rantil updated CASSANDRA-8397:
---
Summary: Support UPDATE with IN requirement for clustering key  (was: 
Support UPDATE with IN for clustering key)

> Support UPDATE with IN requirement for clustering key
> -
>
> Key: CASSANDRA-8397
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8397
> Project: Cassandra
>  Issue Type: Wish
>Reporter: Jens Rantil
>Priority: Minor
>
> {noformat}
> CREATE TABLE tink.events (
> userid uuid,
> id timeuuid,
> content text,
> type text,
> PRIMARY KEY (userid, id)
> )
> # Add data
> cqlsh:tink> UPDATE events SET content='Hello' WHERE 
> userid=57b47f85-56c4-4968-83cf-4c4e533944e9 AND id IN 
> (046e9da0-7945-11e4-a76f-770773bbbf7e, 046e0160-7945-11e4-a76f-770773bbbf7e);
> code=2200 [Invalid query] message="Invalid operator IN for PRIMARY KEY part 
> id"
> {noformat}
> I was surprised this doesn't work.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (CASSANDRA-8318) Unable to replace a node

2014-11-16 Thread Jens Rantil (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8318?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jens Rantil resolved CASSANDRA-8318.

Resolution: Cannot Reproduce

> Unable to replace a node
> 
>
> Key: CASSANDRA-8318
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8318
> Project: Cassandra
>  Issue Type: Bug
> Environment: 2.0.8.39 (Datastax DSE 4.5.3)
>Reporter: Jens Rantil
> Attachments: X.X.X.56.log
>
>
> Had a hardware failure of a node. I followed the Datastax documentation[1] on 
> how to replace the node X.X.X.51 using a brand new node with the same IP. 
> Since it didn't come up after waiting for ~5 minutes or so, I decided to 
> replace X.X.X.51 with a brand new unused IP X.X.X.56 instead. It now seems 
> like my gossip is in some weird state. When I start the replacement node I 
> see lines like
> {noformat}
>  INFO [GossipStage:1] 2014-11-14 14:57:03,025 Gossiper.java (line 901) 
> InetAddress /X.X.X.51 is now DOWN
>  INFO [GossipStage:1] 2014-11-14 14:57:03,042 Gossiper.java (line 901) 
> InetAddress /X.X.X.56 is now DOWN
> {noformat}
> . The latter is somewhat surprising since that is the IP of the actual 
> replacement node. It doesn't surprise me it can't talk to itself if it hasn't 
> started!
> Eventually the replacement node shuts down with
> {noformat}
> ERROR [main] 2014-11-14 14:58:06,031 CassandraDaemon.java (line 513) 
> Exception encountered during startup
> java.lang.UnsupportedOperationException: Cannot replace token -2 which does 
> not exist!
>   at 
> org.apache.cassandra.service.StorageService.joinTokenRing(StorageService.java:782)
>   at 
> org.apache.cassandra.service.StorageService.initServer(StorageService.java:614)
>   at 
> org.apache.cassandra.service.StorageService.initServer(StorageService.java:503)
>   at 
> org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:378)
>   at com.datastax.bdp.server.DseDaemon.setup(DseDaemon.java:374)
>   at 
> org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:496)
>   at com.datastax.bdp.server.DseDaemon.main(DseDaemon.java:615)
>  INFO [Thread-2] 2014-11-14 14:58:06,035 DseDaemon.java (line 461) DSE 
> shutting down...
>  INFO [StorageServiceShutdownHook] 2014-11-14 14:58:06,037 Gossiper.java 
> (line 1307) Announcing shutdown
>  INFO [Thread-2] 2014-11-14 14:58:06,046 PluginManager.java (line 355) All 
> plugins are stopped.
>  INFO [Thread-2] 2014-11-14 14:58:06,047 CassandraDaemon.java (line 463) 
> Cassandra shutting down...
> ERROR [Thread-2] 2014-11-14 14:58:06,047 CassandraDaemon.java (line 199) 
> Exception in thread Thread[Thread-2,5,main]
> java.lang.NullPointerException
>   at 
> org.apache.cassandra.service.CassandraDaemon.stop(CassandraDaemon.java:464)
>   at com.datastax.bdp.server.DseDaemon.stop(DseDaemon.java:464)
>   at com.datastax.bdp.server.DseDaemon$1.run(DseDaemon.java:364){noformat}
> All nodes are showing
> {noformat}
> root@machine-2:~# nodetool status company
> Datacenter: Analytics
> =
> Status=Up/Down
> |/ State=Normal/Leaving/Joining/Moving
> --  Address Load   Tokens  Owns (effective)  Host ID  
>  Rack
> UN  X.X.X.50  18.35 GB   1   16.7% 
> 25efdbcd-14d3-4e9c-803a-3db5795d4efa  rack1
> DN  X.X.X.51  195.67 KB  1   16.7% 
> d97cf86f-bfaf-4488-b716-26d71635a8fc  rack1
> UN  X.X.X.52  18.7 GB1   16.7% 
> caa32f68-5a6b-4d87-80bd-baa66a9b61ce  rack1
> UN  X.X.X.53  18.56 GB   1   16.7% 
> e219321e-a6d5-48c4-9bad-d2e25429b1d2  rack1
> UN  X.X.X.54  19.69 GB   1   16.7% 
> 3cd36895-ee47-41c1-a5f5-41cb0f8526a6  rack1
> UN  X.X.X.55  18.88 GB   1   16.7% 
> 7d3f73c4-724e-45a6-bcf9-d3072dfc157f  rack1
> Datacenter: Cassandra
> =
> Status=Up/Down
> |/ State=Normal/Leaving/Joining/Moving
> --  Address Load   Tokens  Owns (effective)  Host ID  
>  Rack
> UN  X.X.X.33  128.95 GB  256 100.0%
> 871968c9-1d6b-4f06-ba90-8b3a8d92dcf0  rack1
> UN  X.X.X.32  115.3 GB   256 100.0%
> d7cacd89-8613-4de5-8a5e-a2c53c41ea45  rack1
> UN  X.X.X.31  130.45 GB  256 100.0%
> 48cb0782-6c9a-4805-9330-38e192b6b680  rack1
> {noformat}
> , but when X.X.X.56 is starting it shows
> {noformat}
> root@machine-1:/var/lib/cassandra# nodetool status
> Note: Ownership information does not include topology; for complete 
> information, specify a keyspace
> Datacenter: Analytics
> =
> Status=Up/Down
> |/ State=Normal/Leaving/Joining/Moving
> --  Address Load   Tokens  Owns   Host ID 
>   Rack
> UN  X.X.X.50  18.41 GB   1   0.2%   25efdbcd-14d3-4e9c-803a-3db5795d4efa  
> rac

[jira] [Commented] (CASSANDRA-8318) Unable to replace a node

2014-11-16 Thread Jens Rantil (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8318?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14213924#comment-14213924
 ] 

Jens Rantil commented on CASSANDRA-8318:


bq. Would you care to elaborate on how to do such a removal?

My apologies. I did not know of `nodetool removenode ...`. I managed to force 
removal of the node by issuing `nodetool removenode UUID-for-X.X.X.51 &&  && 
nodetool removenode force`.

I will be closing this issue since, like you said, there have been numerous 
fixes to issues like this. If I experience this with a later version of 
Cassandra, I'll open a new issue.
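
For the record, a reconstruction of that sequence -- the host ID is the one 
`nodetool status` showed for the dead X.X.X.51, and the middle step is only my 
best guess, since part of the original command was lost above:

{noformat}
nodetool removenode d97cf86f-bfaf-4488-b716-26d71635a8fc
nodetool removenode status   # check on progress
nodetool removenode force    # force completion if it stalls
{noformat}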

> Unable to replace a node
> 
>
> Key: CASSANDRA-8318
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8318
> Project: Cassandra
>  Issue Type: Bug
> Environment: 2.0.8.39 (Datastax DSE 4.5.3)
>Reporter: Jens Rantil
> Attachments: X.X.X.56.log
>
>
> Had a hardware failure of a node. I followed the Datastax documentation[1] on 
> how to replace the node X.X.X.51 using a brand new node with the same IP. 
> Since it didn't come up after waiting for ~5 minutes or so, I decided to 
> replace X.X.X.51 with a brand new unused IP X.X.X.56 instead. It now seems 
> like my gossip is in some weird state. When I start the replacement node I 
> see lines like
> {noformat}
>  INFO [GossipStage:1] 2014-11-14 14:57:03,025 Gossiper.java (line 901) 
> InetAddress /X.X.X.51 is now DOWN
>  INFO [GossipStage:1] 2014-11-14 14:57:03,042 Gossiper.java (line 901) 
> InetAddress /X.X.X.56 is now DOWN
> {noformat}
> . The latter is somewhat surprising since that is the IP of the actual 
> replacement node. It doesn't surprise me it can't talk to itself if it hasn't 
> started!
> Eventually the replacement node shuts down with
> {noformat}
> ERROR [main] 2014-11-14 14:58:06,031 CassandraDaemon.java (line 513) 
> Exception encountered during startup
> java.lang.UnsupportedOperationException: Cannot replace token -2 which does 
> not exist!
>   at 
> org.apache.cassandra.service.StorageService.joinTokenRing(StorageService.java:782)
>   at 
> org.apache.cassandra.service.StorageService.initServer(StorageService.java:614)
>   at 
> org.apache.cassandra.service.StorageService.initServer(StorageService.java:503)
>   at 
> org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:378)
>   at com.datastax.bdp.server.DseDaemon.setup(DseDaemon.java:374)
>   at 
> org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:496)
>   at com.datastax.bdp.server.DseDaemon.main(DseDaemon.java:615)
>  INFO [Thread-2] 2014-11-14 14:58:06,035 DseDaemon.java (line 461) DSE 
> shutting down...
>  INFO [StorageServiceShutdownHook] 2014-11-14 14:58:06,037 Gossiper.java 
> (line 1307) Announcing shutdown
>  INFO [Thread-2] 2014-11-14 14:58:06,046 PluginManager.java (line 355) All 
> plugins are stopped.
>  INFO [Thread-2] 2014-11-14 14:58:06,047 CassandraDaemon.java (line 463) 
> Cassandra shutting down...
> ERROR [Thread-2] 2014-11-14 14:58:06,047 CassandraDaemon.java (line 199) 
> Exception in thread Thread[Thread-2,5,main]
> java.lang.NullPointerException
>   at 
> org.apache.cassandra.service.CassandraDaemon.stop(CassandraDaemon.java:464)
>   at com.datastax.bdp.server.DseDaemon.stop(DseDaemon.java:464)
>   at com.datastax.bdp.server.DseDaemon$1.run(DseDaemon.java:364){noformat}
> All nodes are showing
> {noformat}
> root@machine-2:~# nodetool status company
> Datacenter: Analytics
> =
> Status=Up/Down
> |/ State=Normal/Leaving/Joining/Moving
> --  Address Load   Tokens  Owns (effective)  Host ID  
>  Rack
> UN  X.X.X.50  18.35 GB   1   16.7% 
> 25efdbcd-14d3-4e9c-803a-3db5795d4efa  rack1
> DN  X.X.X.51  195.67 KB  1   16.7% 
> d97cf86f-bfaf-4488-b716-26d71635a8fc  rack1
> UN  X.X.X.52  18.7 GB1   16.7% 
> caa32f68-5a6b-4d87-80bd-baa66a9b61ce  rack1
> UN  X.X.X.53  18.56 GB   1   16.7% 
> e219321e-a6d5-48c4-9bad-d2e25429b1d2  rack1
> UN  X.X.X.54  19.69 GB   1   16.7% 
> 3cd36895-ee47-41c1-a5f5-41cb0f8526a6  rack1
> UN  X.X.X.55  18.88 GB   1   16.7% 
> 7d3f73c4-724e-45a6-bcf9-d3072dfc157f  rack1
> Datacenter: Cassandra
> =
> Status=Up/Down
> |/ State=Normal/Leaving/Joining/Moving
> --  Address Load   Tokens  Owns (effective)  Host ID  
>  Rack
> UN  X.X.X.33  128.95 GB  256 100.0%
> 871968c9-1d6b-4f06-ba90-8b3a8d92dcf0  rack1
> UN  X.X.X.32  115.3 GB   256 100.0%
> d7cacd89-8613-4de5-8a5e-a2c53c41ea45  rack1
> UN  X.X.X.31  130.45 GB  256 100.0%
> 48cb0782-6c9a-4805-9330-38e192b6b680  rack1
> {noformat}
> , but when X.X.X.56 is starting it shows
> {nof

[jira] [Commented] (CASSANDRA-8318) Unable to replace a node

2014-11-16 Thread Jens Rantil (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8318?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14213916#comment-14213916
 ] 

Jens Rantil commented on CASSANDRA-8318:


bq. Just remove that node and bootstrap a new one.

Thanks Brandon for getting back to me on this. Would you care to elaborate on 
how to do such a removal? Is it simply a matter of removing the peer from 
system.peers manually on every Cassandra node? The node itself is dead, so I 
can't decommission it. Feel free to point me to some documentation if there is 
any.

> Unable to replace a node
> 
>
> Key: CASSANDRA-8318
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8318
> Project: Cassandra
>  Issue Type: Bug
> Environment: 2.0.8.39 (Datastax DSE 4.5.3)
>Reporter: Jens Rantil
> Attachments: X.X.X.56.log
>
>
> Had a hardware failure of a node. I followed the Datastax documentation[1] on 
> how to replace the node X.X.X.51 using a brand new node with the same IP. 
> Since it didn't come up after waiting for ~5 minutes or so, I decided to 
> replace X.X.X.51 with a brand new unused IP X.X.X.56 instead. It now seems 
> like my gossip is in some weird state. When I start the replacement node I see 
> lines like
> {noformat}
>  INFO [GossipStage:1] 2014-11-14 14:57:03,025 Gossiper.java (line 901) 
> InetAddress /X.X.X.51 is now DOWN
>  INFO [GossipStage:1] 2014-11-14 14:57:03,042 Gossiper.java (line 901) 
> InetAddress /X.X.X.56 is now DOWN
> {noformat}
> . The latter is somewhat surprising since that is the IP of the actual 
> replacement node. It doesn't surprise me it can't talk to itself if it hasn't 
> started!
> Eventually the replacement node shuts down with
> {noformat}
> ERROR [main] 2014-11-14 14:58:06,031 CassandraDaemon.java (line 513) 
> Exception encountered during startup
> java.lang.UnsupportedOperationException: Cannot replace token -2 which does 
> not exist!
>   at 
> org.apache.cassandra.service.StorageService.joinTokenRing(StorageService.java:782)
>   at 
> org.apache.cassandra.service.StorageService.initServer(StorageService.java:614)
>   at 
> org.apache.cassandra.service.StorageService.initServer(StorageService.java:503)
>   at 
> org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:378)
>   at com.datastax.bdp.server.DseDaemon.setup(DseDaemon.java:374)
>   at 
> org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:496)
>   at com.datastax.bdp.server.DseDaemon.main(DseDaemon.java:615)
>  INFO [Thread-2] 2014-11-14 14:58:06,035 DseDaemon.java (line 461) DSE 
> shutting down...
>  INFO [StorageServiceShutdownHook] 2014-11-14 14:58:06,037 Gossiper.java 
> (line 1307) Announcing shutdown
>  INFO [Thread-2] 2014-11-14 14:58:06,046 PluginManager.java (line 355) All 
> plugins are stopped.
>  INFO [Thread-2] 2014-11-14 14:58:06,047 CassandraDaemon.java (line 463) 
> Cassandra shutting down...
> ERROR [Thread-2] 2014-11-14 14:58:06,047 CassandraDaemon.java (line 199) 
> Exception in thread Thread[Thread-2,5,main]
> java.lang.NullPointerException
>   at 
> org.apache.cassandra.service.CassandraDaemon.stop(CassandraDaemon.java:464)
>   at com.datastax.bdp.server.DseDaemon.stop(DseDaemon.java:464)
>   at com.datastax.bdp.server.DseDaemon$1.run(DseDaemon.java:364){noformat}
> All nodes are showing
> {noformat}
> root@machine-2:~# nodetool status company
> Datacenter: Analytics
> =
> Status=Up/Down
> |/ State=Normal/Leaving/Joining/Moving
> --  Address Load   Tokens  Owns (effective)  Host ID  
>  Rack
> UN  X.X.X.50  18.35 GB   1   16.7% 
> 25efdbcd-14d3-4e9c-803a-3db5795d4efa  rack1
> DN  X.X.X.51  195.67 KB  1   16.7% 
> d97cf86f-bfaf-4488-b716-26d71635a8fc  rack1
> UN  X.X.X.52  18.7 GB1   16.7% 
> caa32f68-5a6b-4d87-80bd-baa66a9b61ce  rack1
> UN  X.X.X.53  18.56 GB   1   16.7% 
> e219321e-a6d5-48c4-9bad-d2e25429b1d2  rack1
> UN  X.X.X.54  19.69 GB   1   16.7% 
> 3cd36895-ee47-41c1-a5f5-41cb0f8526a6  rack1
> UN  X.X.X.55  18.88 GB   1   16.7% 
> 7d3f73c4-724e-45a6-bcf9-d3072dfc157f  rack1
> Datacenter: Cassandra
> =
> Status=Up/Down
> |/ State=Normal/Leaving/Joining/Moving
> --  Address Load   Tokens  Owns (effective)  Host ID  
>  Rack
> UN  X.X.X.33  128.95 GB  256 100.0%
> 871968c9-1d6b-4f06-ba90-8b3a8d92dcf0  rack1
> UN  X.X.X.32  115.3 GB   256 100.0%
> d7cacd89-8613-4de5-8a5e-a2c53c41ea45  rack1
> UN  X.X.X.31  130.45 GB  256 100.0%
> 48cb0782-6c9a-4805-9330-38e192b6b680  rack1
> {noformat}
> , but when X.X.X.56 is starting it shows
> {noformat}
> root@machine-1:/var/lib/cassandra# nodetool status
> No

[jira] [Commented] (CASSANDRA-8318) Unable to replace a node

2014-11-15 Thread Jens Rantil (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8318?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14213770#comment-14213770
 ] 

Jens Rantil commented on CASSANDRA-8318:


bq. Numerous bugs around this have been fixed since 2.0.8.

Upgrading Cassandra is not an option for me right now. :-/ Is there _any_ 
possible workaround to replace my node? It's been down for days now. Would 
disabling gossip and manually removing X.X.X.51 (and possibly X.X.X.56) from 
system.peers on every node (and then enabling gossip again) do it? Any other 
way? I'm even willing to sacrifice uptime if that's what it takes to move 
onwards with this.
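
Concretely, something like this untested sketch is what I have in mind, run on 
every node in the cluster (assuming direct deletes against system.peers are 
even allowed):

{noformat}
# Untested sketch of the proposed workaround -- run locally on each node.
nodetool disablegossip
echo "DELETE FROM system.peers WHERE peer = 'X.X.X.51';" | cqlsh
echo "DELETE FROM system.peers WHERE peer = 'X.X.X.56';" | cqlsh
nodetool enablegossip
{noformat}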

> Unable to replace a node
> 
>
> Key: CASSANDRA-8318
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8318
> Project: Cassandra
>  Issue Type: Bug
> Environment: 2.0.8.39 (Datastax DSE 4.5.3)
>Reporter: Jens Rantil
> Attachments: X.X.X.56.log
>
>
> Had a hardware failure of a node. I followed the Datastax documentation[1] on 
> how to replace the node X.X.X.51 using a brand new node with the same IP. 
> Since it didn't come up after waiting for ~5 minutes or so, I decided to 
> replace X.X.X.51 with a brand new unused IP X.X.X.56 instead. It now seems 
> like my gossip is in some weird state. When I start the replacement node I see 
> lines like
> {noformat}
>  INFO [GossipStage:1] 2014-11-14 14:57:03,025 Gossiper.java (line 901) 
> InetAddress /X.X.X.51 is now DOWN
>  INFO [GossipStage:1] 2014-11-14 14:57:03,042 Gossiper.java (line 901) 
> InetAddress /X.X.X.56 is now DOWN
> {noformat}
> . The latter is somewhat surprising since that is the IP of the actual 
> replacement node. It doesn't surprise me it can't talk to itself if it hasn't 
> started!
> Eventually the replacement node shuts down with
> {noformat}
> ERROR [main] 2014-11-14 14:58:06,031 CassandraDaemon.java (line 513) 
> Exception encountered during startup
> java.lang.UnsupportedOperationException: Cannot replace token -2 which does 
> not exist!
>   at 
> org.apache.cassandra.service.StorageService.joinTokenRing(StorageService.java:782)
>   at 
> org.apache.cassandra.service.StorageService.initServer(StorageService.java:614)
>   at 
> org.apache.cassandra.service.StorageService.initServer(StorageService.java:503)
>   at 
> org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:378)
>   at com.datastax.bdp.server.DseDaemon.setup(DseDaemon.java:374)
>   at 
> org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:496)
>   at com.datastax.bdp.server.DseDaemon.main(DseDaemon.java:615)
>  INFO [Thread-2] 2014-11-14 14:58:06,035 DseDaemon.java (line 461) DSE 
> shutting down...
>  INFO [StorageServiceShutdownHook] 2014-11-14 14:58:06,037 Gossiper.java 
> (line 1307) Announcing shutdown
>  INFO [Thread-2] 2014-11-14 14:58:06,046 PluginManager.java (line 355) All 
> plugins are stopped.
>  INFO [Thread-2] 2014-11-14 14:58:06,047 CassandraDaemon.java (line 463) 
> Cassandra shutting down...
> ERROR [Thread-2] 2014-11-14 14:58:06,047 CassandraDaemon.java (line 199) 
> Exception in thread Thread[Thread-2,5,main]
> java.lang.NullPointerException
>   at 
> org.apache.cassandra.service.CassandraDaemon.stop(CassandraDaemon.java:464)
>   at com.datastax.bdp.server.DseDaemon.stop(DseDaemon.java:464)
>   at com.datastax.bdp.server.DseDaemon$1.run(DseDaemon.java:364){noformat}
> All nodes are showing
> {noformat}
> root@machine-2:~# nodetool status company
> Datacenter: Analytics
> =
> Status=Up/Down
> |/ State=Normal/Leaving/Joining/Moving
> --  Address Load   Tokens  Owns (effective)  Host ID  
>  Rack
> UN  X.X.X.50  18.35 GB   1   16.7% 
> 25efdbcd-14d3-4e9c-803a-3db5795d4efa  rack1
> DN  X.X.X.51  195.67 KB  1   16.7% 
> d97cf86f-bfaf-4488-b716-26d71635a8fc  rack1
> UN  X.X.X.52  18.7 GB1   16.7% 
> caa32f68-5a6b-4d87-80bd-baa66a9b61ce  rack1
> UN  X.X.X.53  18.56 GB   1   16.7% 
> e219321e-a6d5-48c4-9bad-d2e25429b1d2  rack1
> UN  X.X.X.54  19.69 GB   1   16.7% 
> 3cd36895-ee47-41c1-a5f5-41cb0f8526a6  rack1
> UN  X.X.X.55  18.88 GB   1   16.7% 
> 7d3f73c4-724e-45a6-bcf9-d3072dfc157f  rack1
> Datacenter: Cassandra
> =
> Status=Up/Down
> |/ State=Normal/Leaving/Joining/Moving
> --  Address Load   Tokens  Owns (effective)  Host ID  
>  Rack
> UN  X.X.X.33  128.95 GB  256 100.0%
> 871968c9-1d6b-4f06-ba90-8b3a8d92dcf0  rack1
> UN  X.X.X.32  115.3 GB   256 100.0%
> d7cacd89-8613-4de5-8a5e-a2c53c41ea45  rack1
> UN  X.X.X.31  130.45 GB  256 100.0%
> 48cb0782-6c9a-4805-9330-38e192b6b680  rack1
> {noformat}
> , but whe

[jira] [Comment Edited] (CASSANDRA-8318) Unable to replace a node

2014-11-14 Thread Jens Rantil (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8318?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14212751#comment-14212751
 ] 

Jens Rantil edited comment on CASSANDRA-8318 at 11/14/14 8:04 PM:
--

bq. What actual token does .51 have?

It has a token of -2.

{noformat}
$ nodetool ring
Note: Ownership information does not include topology; for complete 
information, specify a keyspace

Datacenter: Analytics
==
Address RackStatus State   LoadOwnsToken
   
614891469123651
X.X.X.52  rack1   Up Normal  18.34 GB0.02%   
-922337203685477
X.X.X.50  rack1   Up Normal  18.36 GB0.24%   
-614891469123651
X.X.X.55  rack1   Up Normal  18.51 GB0.19%   
-307445734561825
X.X.X.51  rack1   Down   Normal  195.67 KB   0.02%   -2
X.X.X.54  rack1   Up Normal  19.09 GB0.04%   
3074457345618258600
X.X.X.53  rack1   Up Normal  18.5 GB 0.07%   
614891469123651

Datacenter: Cassandra
==
Address RackStatus State   LoadOwnsToken
   
9219239585832170071
...

  Warning: "nodetool ring" is used to output all the tokens of a node.
  To view status related info of a node use "nodetool status" instead.
{noformat}


was (Author: ztyx):
> What actual token does .51 have?

It has a token of -2.

{noformat}
$ nodetool ring
Note: Ownership information does not include topology; for complete 
information, specify a keyspace

Datacenter: Analytics
==
Address RackStatus State   LoadOwnsToken
   
614891469123651
X.X.X.52  rack1   Up Normal  18.34 GB0.02%   
-922337203685477
X.X.X.50  rack1   Up Normal  18.36 GB0.24%   
-614891469123651
X.X.X.55  rack1   Up Normal  18.51 GB0.19%   
-307445734561825
X.X.X.51  rack1   Down   Normal  195.67 KB   0.02%   -2
X.X.X.54  rack1   Up Normal  19.09 GB0.04%   
3074457345618258600
X.X.X.53  rack1   Up Normal  18.5 GB 0.07%   
614891469123651

Datacenter: Cassandra
==
Address RackStatus State   LoadOwnsToken
   
9219239585832170071
...

  Warning: "nodetool ring" is used to output all the tokens of a node.
  To view status related info of a node use "nodetool status" instead.
{noformat}

> Unable to replace a node
> 
>
> Key: CASSANDRA-8318
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8318
> Project: Cassandra
>  Issue Type: Bug
> Environment: 2.0.8.39 (Datastax DSE 4.5.3)
>Reporter: Jens Rantil
> Attachments: X.X.X.56.log
>
>
> Had a hardware failure of a node. I followed the Datastax documentation[1] on 
> how to replace the node X.X.X.51 using a brand new node with the same IP. 
> Since it didn't come up after waiting for ~5 minutes or so, I decided to 
> replace X.X.X.51 with a brand new unused IP X.X.X.56 instead. It now seems 
> like my gossip is in some weird state. When I start the replacement node I see 
> lines like
> {noformat}
>  INFO [GossipStage:1] 2014-11-14 14:57:03,025 Gossiper.java (line 901) 
> InetAddress /X.X.X.51 is now DOWN
>  INFO [GossipStage:1] 2014-11-14 14:57:03,042 Gossiper.java (line 901) 
> InetAddress /X.X.X.56 is now DOWN
> {noformat}
> . The latter is somewhat surprising since that is the IP of the actual 
> replacement node. It doesn't surprise me it can't talk to itself if it hasn't 
> started!
> Eventually the replacement node shuts down with
> {noformat}
> ERROR [main] 2014-11-14 14:58:06,031 CassandraDaemon.java (line 513) 
> Exception encountered during startup
> java.lang.UnsupportedOperationException: Cannot replace token -2 which does 
> not exist!
>   at 
> org.apache.cassandra.service.StorageService.joinTokenRing(StorageService.java:782)
>   at 
> org.apache.cassandra.service.StorageService.initServer(StorageService.java:614)
>   at 
> org.apache.cassandra.service.StorageService.initServer(StorageService.java:503)
>   at 
> org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:378)
>   at com.datastax.bdp.server.DseDaemon.setup(DseDaemon.java:374)
>   at 
> org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:496)
>   at com.datastax.bdp.server.DseDaemon.main

[jira] [Commented] (CASSANDRA-8318) Unable to replace a node

2014-11-14 Thread Jens Rantil (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8318?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14212751#comment-14212751
 ] 

Jens Rantil commented on CASSANDRA-8318:


> What actual token does .51 have?

It has a token of -2.

{noformat}
$ nodetool ring
Note: Ownership information does not include topology; for complete 
information, specify a keyspace

Datacenter: Analytics
==
Address RackStatus State   LoadOwnsToken
   
614891469123651
X.X.X.52  rack1   Up Normal  18.34 GB0.02%   
-922337203685477
X.X.X.50  rack1   Up Normal  18.36 GB0.24%   
-614891469123651
X.X.X.55  rack1   Up Normal  18.51 GB0.19%   
-307445734561825
X.X.X.51  rack1   Down   Normal  195.67 KB   0.02%   -2
X.X.X.54  rack1   Up Normal  19.09 GB0.04%   
3074457345618258600
X.X.X.53  rack1   Up Normal  18.5 GB 0.07%   
614891469123651

Datacenter: Cassandra
==
Address RackStatus State   LoadOwnsToken
   
9219239585832170071
...

  Warning: "nodetool ring" is used to output all the tokens of a node.
  To view status related info of a node use "nodetool status" instead.
{noformat}

> Unable to replace a node
> 
>
> Key: CASSANDRA-8318
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8318
> Project: Cassandra
>  Issue Type: Bug
> Environment: 2.0.8.39 (Datastax DSE 4.5.3)
>Reporter: Jens Rantil
> Attachments: X.X.X.56.log
>
>
> Had a hardware failure of a node. I followed the Datastax documentation[1] on 
> how to replace the node X.X.X.51 using a brand new node with the same IP. 
> Since it didn't come up after waiting for ~5 minutes or so, I decided to 
> replace X.X.X.51 with a brand new unused IP X.X.X.56 instead. It now seems 
> like my gossip is in some weird state. When I start the replacement node I see 
> lines like
> {noformat}
>  INFO [GossipStage:1] 2014-11-14 14:57:03,025 Gossiper.java (line 901) 
> InetAddress /X.X.X.51 is now DOWN
>  INFO [GossipStage:1] 2014-11-14 14:57:03,042 Gossiper.java (line 901) 
> InetAddress /X.X.X.56 is now DOWN
> {noformat}
> . The latter is somewhat surprising since that is the IP of the actual 
> replacement node. It doesn't surprise me it can't talk to itself if it hasn't 
> started!
> Eventually the replacement node shuts down with
> {noformat}
> ERROR [main] 2014-11-14 14:58:06,031 CassandraDaemon.java (line 513) 
> Exception encountered during startup
> java.lang.UnsupportedOperationException: Cannot replace token -2 which does 
> not exist!
>   at 
> org.apache.cassandra.service.StorageService.joinTokenRing(StorageService.java:782)
>   at 
> org.apache.cassandra.service.StorageService.initServer(StorageService.java:614)
>   at 
> org.apache.cassandra.service.StorageService.initServer(StorageService.java:503)
>   at 
> org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:378)
>   at com.datastax.bdp.server.DseDaemon.setup(DseDaemon.java:374)
>   at 
> org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:496)
>   at com.datastax.bdp.server.DseDaemon.main(DseDaemon.java:615)
>  INFO [Thread-2] 2014-11-14 14:58:06,035 DseDaemon.java (line 461) DSE 
> shutting down...
>  INFO [StorageServiceShutdownHook] 2014-11-14 14:58:06,037 Gossiper.java 
> (line 1307) Announcing shutdown
>  INFO [Thread-2] 2014-11-14 14:58:06,046 PluginManager.java (line 355) All 
> plugins are stopped.
>  INFO [Thread-2] 2014-11-14 14:58:06,047 CassandraDaemon.java (line 463) 
> Cassandra shutting down...
> ERROR [Thread-2] 2014-11-14 14:58:06,047 CassandraDaemon.java (line 199) 
> Exception in thread Thread[Thread-2,5,main]
> java.lang.NullPointerException
>   at 
> org.apache.cassandra.service.CassandraDaemon.stop(CassandraDaemon.java:464)
>   at com.datastax.bdp.server.DseDaemon.stop(DseDaemon.java:464)
>   at com.datastax.bdp.server.DseDaemon$1.run(DseDaemon.java:364){noformat}
> All nodes are showing
> {noformat}
> root@machine-2:~# nodetool status company
> Datacenter: Analytics
> =
> Status=Up/Down
> |/ State=Normal/Leaving/Joining/Moving
> --  Address Load   Tokens  Owns (effective)  Host ID  
>  Rack
> UN  X.X.X.50  18.35 GB   1   16.7% 
> 25efdbcd-14d3-4e9c-803a-3db5795d4efa  rack1
> DN  X.X.X.51  195.67 KB  1   16.7% 
> d97cf86f-bfaf-4488-b716-26d71635a8fc  rack1
> UN  X.X.X.52  18.7 GB1   16.7% 
> caa32f68-5a6b-4d87-80

[jira] [Comment Edited] (CASSANDRA-8318) Unable to replace a node

2014-11-14 Thread Jens Rantil (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8318?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14212344#comment-14212344
 ] 

Jens Rantil edited comment on CASSANDRA-8318 at 11/14/14 3:16 PM:
--

Just opened up a healthy node and listed peers. Interestingly, X.X.X.56 is in 
the listing:

{noformat}
cqlsh> SELECT peer, data_center, host_id FROM system.peers;

 peer   | data_center | host_id
+-+--
 X.X.X.33 |   Cassandra | 871968c9-1d6b-4f06-ba90-8b3a8d92dcf0
 X.X.X.54 |   Analytics | 3cd36895-ee47-41c1-a5f5-41cb0f8526a6
 X.X.X.51 |   Analytics | d97cf86f-bfaf-4488-b716-26d71635a8fc
 X.X.X.52 |   Analytics | caa32f68-5a6b-4d87-80bd-baa66a9b61ce
 X.X.X.55 |   Analytics | 7d3f73c4-724e-45a6-bcf9-d3072dfc157f
 X.X.X.50 |   Analytics | 25efdbcd-14d3-4e9c-803a-3db5795d4efa
 X.X.X.31 |   Cassandra | 48cb0782-6c9a-4805-9330-38e192b6b680
 X.X.X.56 |   Analytics | null
 X.X.X.53 |   Analytics | e219321e-a6d5-48c4-9bad-d2e25429b1d2

(9 rows)

cqlsh> SELECT * FROM system.peers WHERE peer='X.X.X.56';

 peer   | data_center | host_id | preferred_ip | rack | release_version | 
rpc_address | schema_version | tokens | workload
+-+-+--+--+-+-+++---
 X.X.X.56 |   Analytics |null | null | null |null | 
   null |   null |   null | Analytics

(1 rows)

cqlsh> SELECT * FROM system.peers WHERE peer='X.X.X.51';

 peer   | data_center | host_id  | preferred_ip 
| rack  | release_version | rpc_address | schema_version   
| tokens | workload
+-+--+--+---+-+-+--++---
 X.X.X.51 |   Analytics | d97cf86f-bfaf-4488-b716-26d71635a8fc | null | 
rack1 |   2.0.10.71 |  X.X.X.51 | cc6357e2-db00-3f93-8dab-17036d4f6ff7 | 
{'-2'} | Analytics

(1 rows)
{noformat}

Should I expect it to be there?


was (Author: ztyx):
Just opened up a healthy node and listed peers. Interestingly, X.X.X.56 is in 
the listing:

{noformat}
cqlsh> SELECT peer, data_center, host_id FROM system.peers;

 peer   | data_center | host_id
+-+--
 X.X.X.33 |   Cassandra | 871968c9-1d6b-4f06-ba90-8b3a8d92dcf0
 X.X.X.54 |   Analytics | 3cd36895-ee47-41c1-a5f5-41cb0f8526a6
 X.X.X.51 |   Analytics | d97cf86f-bfaf-4488-b716-26d71635a8fc
 X.X.X.52 |   Analytics | caa32f68-5a6b-4d87-80bd-baa66a9b61ce
 X.X.X.55 |   Analytics | 7d3f73c4-724e-45a6-bcf9-d3072dfc157f
 X.X.X.50 |   Analytics | 25efdbcd-14d3-4e9c-803a-3db5795d4efa
 X.X.X.31 |   Cassandra | 48cb0782-6c9a-4805-9330-38e192b6b680
 X.X.X.56 |   Analytics | null
 X.X.X.53 |   Analytics | e219321e-a6d5-48c4-9bad-d2e25429b1d2

(9 rows)

cqlsh> SELECT * FROM system.peers WHERE peer='X.X.X.56';

 peer   | data_center | host_id | preferred_ip | rack | release_version | 
rpc_address | schema_version | tokens | workload
+-+-+--+--+-+-+++---
 X.X.X.56 |   Analytics |null | null | null |null | 
   null |   null |   null | Analytics

(1 rows)

cqlsh> SELECT * FROM system.peers WHERE peer='X.X.X.51';

 peer   | data_center | host_id  | preferred_ip 
| rack  | release_version | rpc_address | schema_version   
| tokens | workload
+-+--+--+---+-+-+--++---
 X.X.X.51 |   Analytics | d97cf86f-bfaf-4488-b716-26d71635a8fc | null | 
rack1 |   2.0.10.71 |  X.X.X.51 | cc6357e2-db00-3f93-8dab-17036d4f6ff7 | 
{'-2'} | Analytics

(1 rows)
{noformat}

> Unable to replace a node
> 
>
> Key: CASSANDRA-8318
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8318
> Project: Cassandra
>  Issue Type: Bug
> Environment: 2.0.8.39 (Datastax DSE 4.5.3)
>Reporter: Jens Rantil
> Attachments: X.X.X.56.log
>
>
> Had a hardware failure of a node. I followed the Datastax documentation[1] on 
> how to replace the node X.X.X.51 using a brand new node with the same IP. 
> Since it didn't come up after waiting for ~5 minutes or so, I decided to 
> replace X.X.X.51 with a brand new unused IP X.X.X.56 instead. It now seems 
> like my gossip is in some weird state. When I start the replacement node I see 
> lines like
> {noformat}
>  INFO [GossipStage:1

[jira] [Commented] (CASSANDRA-8318) Unable to replace a node

2014-11-14 Thread Jens Rantil (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8318?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14212344#comment-14212344
 ] 

Jens Rantil commented on CASSANDRA-8318:


Just opened up a healthy node and listed peers. Interestingly, X.X.X.56 is in 
the listing:

{noformat}
cqlsh> SELECT peer, data_center, host_id FROM system.peers;

 peer   | data_center | host_id
+-+--
 X.X.X.33 |   Cassandra | 871968c9-1d6b-4f06-ba90-8b3a8d92dcf0
 X.X.X.54 |   Analytics | 3cd36895-ee47-41c1-a5f5-41cb0f8526a6
 X.X.X.51 |   Analytics | d97cf86f-bfaf-4488-b716-26d71635a8fc
 X.X.X.52 |   Analytics | caa32f68-5a6b-4d87-80bd-baa66a9b61ce
 X.X.X.55 |   Analytics | 7d3f73c4-724e-45a6-bcf9-d3072dfc157f
 X.X.X.50 |   Analytics | 25efdbcd-14d3-4e9c-803a-3db5795d4efa
 X.X.X.31 |   Cassandra | 48cb0782-6c9a-4805-9330-38e192b6b680
 X.X.X.56 |   Analytics | null
 X.X.X.53 |   Analytics | e219321e-a6d5-48c4-9bad-d2e25429b1d2

(9 rows)

cqlsh> SELECT * FROM system.peers WHERE peer='X.X.X.56';

 peer   | data_center | host_id | preferred_ip | rack | release_version | 
rpc_address | schema_version | tokens | workload
+-+-+--+--+-+-+++---
 X.X.X.56 |   Analytics |null | null | null |null | 
   null |   null |   null | Analytics

(1 rows)

cqlsh> SELECT * FROM system.peers WHERE peer='X.X.X.51';

 peer   | data_center | host_id  | preferred_ip 
| rack  | release_version | rpc_address | schema_version   
| tokens | workload
+-+--+--+---+-+-+--++---
 X.X.X.51 |   Analytics | d97cf86f-bfaf-4488-b716-26d71635a8fc | null | 
rack1 |   2.0.10.71 |  X.X.X.51 | cc6357e2-db00-3f93-8dab-17036d4f6ff7 | 
{'-2'} | Analytics

(1 rows)
{noformat}
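
To see which nodes still carry these entries, a hypothetical check along these 
lines (host list assumed) could be run against each node:

{noformat}
# Hypothetical check: how each node sees the two suspicious peers.
for host in X.X.X.50 X.X.X.52 X.X.X.53 X.X.X.54 X.X.X.55; do
  echo "== $host =="
  echo "SELECT peer, host_id, tokens FROM system.peers WHERE peer IN ('X.X.X.51', 'X.X.X.56');" | cqlsh "$host"
done
{noformat}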

> Unable to replace a node
> 
>
> Key: CASSANDRA-8318
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8318
> Project: Cassandra
>  Issue Type: Bug
> Environment: 2.0.8.39 (Datastax DSE 4.5.3)
>Reporter: Jens Rantil
> Attachments: X.X.X.56.log
>
>
> Had a hardware failure of a node. I followed the Datastax documentation[1] on 
> how to replace the node X.X.X.51 using a brand new node with the same IP. 
> Since it didn't come up after waiting for ~5 minutes or so, I decided to 
> replace X.X.X.51 with a brand new unused IP X.X.X.56 instead. It now seems 
> like my gossip is in some weird state. When I start the replacement node I see 
> lines like
> {noformat}
>  INFO [GossipStage:1] 2014-11-14 14:57:03,025 Gossiper.java (line 901) 
> InetAddress /X.X.X.51 is now DOWN
>  INFO [GossipStage:1] 2014-11-14 14:57:03,042 Gossiper.java (line 901) 
> InetAddress /X.X.X.56 is now DOWN
> {noformat}
> . The latter is somewhat surprising since that is the IP of the actual 
> replacement node. It doesn't surprise me it can't talk to itself if it hasn't 
> started!
> Eventually the replacement node shuts down with
> {noformat}
> ERROR [main] 2014-11-14 14:58:06,031 CassandraDaemon.java (line 513) 
> Exception encountered during startup
> java.lang.UnsupportedOperationException: Cannot replace token -2 which does 
> not exist!
>   at 
> org.apache.cassandra.service.StorageService.joinTokenRing(StorageService.java:782)
>   at 
> org.apache.cassandra.service.StorageService.initServer(StorageService.java:614)
>   at 
> org.apache.cassandra.service.StorageService.initServer(StorageService.java:503)
>   at 
> org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:378)
>   at com.datastax.bdp.server.DseDaemon.setup(DseDaemon.java:374)
>   at 
> org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:496)
>   at com.datastax.bdp.server.DseDaemon.main(DseDaemon.java:615)
>  INFO [Thread-2] 2014-11-14 14:58:06,035 DseDaemon.java (line 461) DSE 
> shutting down...
>  INFO [StorageServiceShutdownHook] 2014-11-14 14:58:06,037 Gossiper.java 
> (line 1307) Announcing shutdown
>  INFO [Thread-2] 2014-11-14 14:58:06,046 PluginManager.java (line 355) All 
> plugins are stopped.
>  INFO [Thread-2] 2014-11-14 14:58:06,047 CassandraDaemon.java (line 463) 
> Cassandra shutting down...
> ERROR [Thread-2] 2014-11-14 14:58:06,047 CassandraDaemon.java (line 199) 
> Exception in thread Thread[Thread-2,5,main]
> java.lang.NullPointerException
>   at 
> org.apache.cassandra.service.CassandraDaemon.stop(CassandraDaemon.java:464)
>   at com.datastax.bdp.server.DseDaemon.stop(DseDaemon.java:464)
>   at com.datastax.bdp.server.DseDaemon$

[jira] [Updated] (CASSANDRA-8318) Unable to replace a node

2014-11-14 Thread Jens Rantil (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8318?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jens Rantil updated CASSANDRA-8318:
---
Description: 
Had a hardware failure of a node. I followed the Datastax documentation[1] on 
how to replace the node X.X.X.51 using a brand new node with the same IP. Since 
it didn't come up after waiting for ~5 minutes or so, I decided to replace 
X.X.X.51 with a brand new unused IP X.X.X.56 instead. It now seems like my 
gossip is in some weird state. When I start the replacement node I see lines like

{noformat}
 INFO [GossipStage:1] 2014-11-14 14:57:03,025 Gossiper.java (line 901) 
InetAddress /X.X.X.51 is now DOWN
 INFO [GossipStage:1] 2014-11-14 14:57:03,042 Gossiper.java (line 901) 
InetAddress /X.X.X.56 is now DOWN
{noformat}
. The latter is somewhat surprising since that is the IP of the actual 
replacement node. It doesn't surprise me it can't talk to itself if it hasn't 
started!

Eventually the replacement node shuts down with
{noformat}
ERROR [main] 2014-11-14 14:58:06,031 CassandraDaemon.java (line 513) Exception 
encountered during startup
java.lang.UnsupportedOperationException: Cannot replace token -2 which does not 
exist!
at 
org.apache.cassandra.service.StorageService.joinTokenRing(StorageService.java:782)
at 
org.apache.cassandra.service.StorageService.initServer(StorageService.java:614)
at 
org.apache.cassandra.service.StorageService.initServer(StorageService.java:503)
at 
org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:378)
at com.datastax.bdp.server.DseDaemon.setup(DseDaemon.java:374)
at 
org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:496)
at com.datastax.bdp.server.DseDaemon.main(DseDaemon.java:615)
 INFO [Thread-2] 2014-11-14 14:58:06,035 DseDaemon.java (line 461) DSE shutting 
down...
 INFO [StorageServiceShutdownHook] 2014-11-14 14:58:06,037 Gossiper.java (line 
1307) Announcing shutdown
 INFO [Thread-2] 2014-11-14 14:58:06,046 PluginManager.java (line 355) All 
plugins are stopped.
 INFO [Thread-2] 2014-11-14 14:58:06,047 CassandraDaemon.java (line 463) 
Cassandra shutting down...
ERROR [Thread-2] 2014-11-14 14:58:06,047 CassandraDaemon.java (line 199) 
Exception in thread Thread[Thread-2,5,main]
java.lang.NullPointerException
at 
org.apache.cassandra.service.CassandraDaemon.stop(CassandraDaemon.java:464)
at com.datastax.bdp.server.DseDaemon.stop(DseDaemon.java:464)
at com.datastax.bdp.server.DseDaemon$1.run(DseDaemon.java:364){noformat}

All nodes are showing
{noformat}
root@machine-2:~# nodetool status company
Datacenter: Analytics
=
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
--  Address Load   Tokens  Owns (effective)  Host ID
   Rack
UN  X.X.X.50  18.35 GB   1   16.7% 
25efdbcd-14d3-4e9c-803a-3db5795d4efa  rack1
DN  X.X.X.51  195.67 KB  1   16.7% 
d97cf86f-bfaf-4488-b716-26d71635a8fc  rack1
UN  X.X.X.52  18.7 GB1   16.7% 
caa32f68-5a6b-4d87-80bd-baa66a9b61ce  rack1
UN  X.X.X.53  18.56 GB   1   16.7% 
e219321e-a6d5-48c4-9bad-d2e25429b1d2  rack1
UN  X.X.X.54  19.69 GB   1   16.7% 
3cd36895-ee47-41c1-a5f5-41cb0f8526a6  rack1
UN  X.X.X.55  18.88 GB   1   16.7% 
7d3f73c4-724e-45a6-bcf9-d3072dfc157f  rack1
Datacenter: Cassandra
=
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
--  Address Load   Tokens  Owns (effective)  Host ID
   Rack
UN  X.X.X.33  128.95 GB  256 100.0%
871968c9-1d6b-4f06-ba90-8b3a8d92dcf0  rack1
UN  X.X.X.32  115.3 GB   256 100.0%
d7cacd89-8613-4de5-8a5e-a2c53c41ea45  rack1
UN  X.X.X.31  130.45 GB  256 100.0%
48cb0782-6c9a-4805-9330-38e192b6b680  rack1
{noformat}
, but when X.X.X.56 is starting it shows
{noformat}
root@machine-1:/var/lib/cassandra# nodetool status
Note: Ownership information does not include topology; for complete 
information, specify a keyspace
Datacenter: Analytics
=
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
--  Address Load   Tokens  Owns   Host ID   
Rack
UN  X.X.X.50  18.41 GB   1   0.2%   25efdbcd-14d3-4e9c-803a-3db5795d4efa  
rack1
UN  X.X.X.52  19.07 GB   1   0.0%   caa32f68-5a6b-4d87-80bd-baa66a9b61ce  
rack1
UN  X.X.X.53  18.65 GB   1   0.1%   e219321e-a6d5-48c4-9bad-d2e25429b1d2  
rack1
UN  X.X.X.54  19.69 GB   1   0.0%   3cd36895-ee47-41c1-a5f5-41cb0f8526a6  
rack1
UN  X.X.X.55  18.97 GB   1   0.2%   7d3f73c4-724e-45a6-bcf9-d3072dfc157f  
rack1
Datacenter: Cassandra
=
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
--  Address Load   Tokens  Owns   Host ID   
Rack
UN  X.X.X.33  129

[jira] [Updated] (CASSANDRA-8318) Unable to replace a node

2014-11-14 Thread Jens Rantil (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8318?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jens Rantil updated CASSANDRA-8318:
---
Description: 
Had a hardware failure of a node. I followed the Datastax documentation[1] on 
how to replace the node X.X.X.51 using a brand new node with the same IP. Since 
it didn't come up after waiting for ~5 minutes or so, I decided to replace 
X.X.X.51 with a brand new unused IP X.X.X.56 instead. It now seems like my 
gossip is in some weird state. When I start the replacement node I see lines like

{noformat}
 INFO [GossipStage:1] 2014-11-14 14:57:03,025 Gossiper.java (line 901) 
InetAddress /X.X.X.51 is now DOWN
 INFO [GossipStage:1] 2014-11-14 14:57:03,042 Gossiper.java (line 901) 
InetAddress /X.X.X.56 is now DOWN
{noformat}
. The latter is somewhat surprising since that is the IP of the actual 
replacement node. It doesn't surprise me it can't talk to itself if it hasn't 
started!

Eventually the replacement node shuts down with
{noformat}
ERROR [main] 2014-11-14 14:58:06,031 CassandraDaemon.java (line 513) Exception 
encountered during startup
java.lang.UnsupportedOperationException: Cannot replace token -2 which does not 
exist!
at 
org.apache.cassandra.service.StorageService.joinTokenRing(StorageService.java:782)
at 
org.apache.cassandra.service.StorageService.initServer(StorageService.java:614)
at 
org.apache.cassandra.service.StorageService.initServer(StorageService.java:503)
at 
org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:378)
at com.datastax.bdp.server.DseDaemon.setup(DseDaemon.java:374)
at 
org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:496)
at com.datastax.bdp.server.DseDaemon.main(DseDaemon.java:615)
 INFO [Thread-2] 2014-11-14 14:58:06,035 DseDaemon.java (line 461) DSE shutting 
down...
 INFO [StorageServiceShutdownHook] 2014-11-14 14:58:06,037 Gossiper.java (line 
1307) Announcing shutdown
 INFO [Thread-2] 2014-11-14 14:58:06,046 PluginManager.java (line 355) All 
plugins are stopped.
 INFO [Thread-2] 2014-11-14 14:58:06,047 CassandraDaemon.java (line 463) 
Cassandra shutting down...
ERROR [Thread-2] 2014-11-14 14:58:06,047 CassandraDaemon.java (line 199) 
Exception in thread Thread[Thread-2,5,main]
java.lang.NullPointerException
at 
org.apache.cassandra.service.CassandraDaemon.stop(CassandraDaemon.java:464)
at com.datastax.bdp.server.DseDaemon.stop(DseDaemon.java:464)
at com.datastax.bdp.server.DseDaemon$1.run(DseDaemon.java:364){noformat}

All nodes are showing
{noformat}
jrantil@machine-2:~$ nodetool status company
Datacenter: Analytics
=
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
--  Address Load   Tokens  Owns (effective)  Host ID
   Rack
UN  X.X.X.50  18.35 GB   1   16.7% 
25efdbcd-14d3-4e9c-803a-3db5795d4efa  rack1
DN  X.X.X.51  195.67 KB  1   16.7% 
d97cf86f-bfaf-4488-b716-26d71635a8fc  rack1
UN  X.X.X.52  18.7 GB1   16.7% 
caa32f68-5a6b-4d87-80bd-baa66a9b61ce  rack1
UN  X.X.X.53  18.56 GB   1   16.7% 
e219321e-a6d5-48c4-9bad-d2e25429b1d2  rack1
UN  X.X.X.54  19.69 GB   1   16.7% 
3cd36895-ee47-41c1-a5f5-41cb0f8526a6  rack1
UN  X.X.X.55  18.88 GB   1   16.7% 
7d3f73c4-724e-45a6-bcf9-d3072dfc157f  rack1
Datacenter: Cassandra
=
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
--  Address Load   Tokens  Owns (effective)  Host ID
   Rack
UN  X.X.X.33  128.95 GB  256 100.0%
871968c9-1d6b-4f06-ba90-8b3a8d92dcf0  rack1
UN  X.X.X.32  115.3 GB   256 100.0%
d7cacd89-8613-4de5-8a5e-a2c53c41ea45  rack1
UN  X.X.X.31  130.45 GB  256 100.0%
48cb0782-6c9a-4805-9330-38e192b6b680  rack1
{noformat}
, but when X.X.X.56 is starting it shows
{noformat}
root@machine-1:/var/lib/cassandra# nodetool status
Note: Ownership information does not include topology; for complete 
information, specify a keyspace
Datacenter: Analytics
=
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
--  Address Load   Tokens  Owns   Host ID   
Rack
UN  X.X.X.50  18.41 GB   1   0.2%   25efdbcd-14d3-4e9c-803a-3db5795d4efa  
rack1
UN  X.X.X.52  19.07 GB   1   0.0%   caa32f68-5a6b-4d87-80bd-baa66a9b61ce  
rack1
UN  X.X.X.53  18.65 GB   1   0.1%   e219321e-a6d5-48c4-9bad-d2e25429b1d2  
rack1
UN  X.X.X.54  19.69 GB   1   0.0%   3cd36895-ee47-41c1-a5f5-41cb0f8526a6  
rack1
UN  X.X.X.55  18.97 GB   1   0.2%   7d3f73c4-724e-45a6-bcf9-d3072dfc157f  
rack1
Datacenter: Cassandra
=
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
--  Address Load   Tokens  Owns   Host ID   
Rack
UN  X.X.X.33  

[jira] [Created] (CASSANDRA-8318) Unable to replace a node

2014-11-14 Thread Jens Rantil (JIRA)
Jens Rantil created CASSANDRA-8318:
--

 Summary: Unable to replace a node
 Key: CASSANDRA-8318
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8318
 Project: Cassandra
  Issue Type: Bug
 Environment: 2.0.8.39 (Datastax DSE 4.5.3)
Reporter: Jens Rantil
 Attachments: X.X.X.56.log

Had a hardware failure of a node. I followed the Datastax documentation[1] on 
how to replace the node X.X.X.51 using a brand new node with the same IP. Since 
it didn't come up after waiting for ~5 minutes or so, I decided to replace 
X.X.X.51 with a brand new unused IP X.X.X.56 instead. It now seems like my 
gossip is in some weird state. When I start the replacement node I see lines like

{noformat}
 INFO [GossipStage:1] 2014-11-14 14:57:03,025 Gossiper.java (line 901) 
InetAddress /X.X.X.51 is now DOWN
 INFO [GossipStage:1] 2014-11-14 14:57:03,042 Gossiper.java (line 901) 
InetAddress /X.X.X.56 is now DOWN
{noformat}
. The latter is somewhat surprising since that is the IP of the actual 
replacement node. It doesn't surprise me it can't talk to itself if it hasn't 
started!

Eventually the replacement node shuts down with
{noformat}
ERROR [main] 2014-11-14 14:58:06,031 CassandraDaemon.java (line 513) Exception 
encountered during startup
java.lang.UnsupportedOperationException: Cannot replace token -2 which does not 
exist!
at 
org.apache.cassandra.service.StorageService.joinTokenRing(StorageService.java:782)
at 
org.apache.cassandra.service.StorageService.initServer(StorageService.java:614)
at 
org.apache.cassandra.service.StorageService.initServer(StorageService.java:503)
at 
org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:378)
at com.datastax.bdp.server.DseDaemon.setup(DseDaemon.java:374)
at 
org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:496)
at com.datastax.bdp.server.DseDaemon.main(DseDaemon.java:615)
 INFO [Thread-2] 2014-11-14 14:58:06,035 DseDaemon.java (line 461) DSE shutting 
down...
 INFO [StorageServiceShutdownHook] 2014-11-14 14:58:06,037 Gossiper.java (line 
1307) Announcing shutdown
 INFO [Thread-2] 2014-11-14 14:58:06,046 PluginManager.java (line 355) All 
plugins are stopped.
 INFO [Thread-2] 2014-11-14 14:58:06,047 CassandraDaemon.java (line 463) 
Cassandra shutting down...
ERROR [Thread-2] 2014-11-14 14:58:06,047 CassandraDaemon.java (line 199) 
Exception in thread Thread[Thread-2,5,main]
java.lang.NullPointerException
at 
org.apache.cassandra.service.CassandraDaemon.stop(CassandraDaemon.java:464)
at com.datastax.bdp.server.DseDaemon.stop(DseDaemon.java:464)
at com.datastax.bdp.server.DseDaemon$1.run(DseDaemon.java:364){noformat}

All nodes are showing
{noformat}
jrantil@analytics-2:~$ nodetool status company
Datacenter: Analytics
=
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
--  Address Load   Tokens  Owns (effective)  Host ID
   Rack
UN  X.X.X.50  18.35 GB   1   16.7% 
25efdbcd-14d3-4e9c-803a-3db5795d4efa  rack1
DN  X.X.X.51  195.67 KB  1   16.7% 
d97cf86f-bfaf-4488-b716-26d71635a8fc  rack1
UN  X.X.X.52  18.7 GB1   16.7% 
caa32f68-5a6b-4d87-80bd-baa66a9b61ce  rack1
UN  X.X.X.53  18.56 GB   1   16.7% 
e219321e-a6d5-48c4-9bad-d2e25429b1d2  rack1
UN  X.X.X.54  19.69 GB   1   16.7% 
3cd36895-ee47-41c1-a5f5-41cb0f8526a6  rack1
UN  X.X.X.55  18.88 GB   1   16.7% 
7d3f73c4-724e-45a6-bcf9-d3072dfc157f  rack1
Datacenter: Cassandra
=
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
--  Address Load   Tokens  Owns (effective)  Host ID
   Rack
UN  X.X.X.33  128.95 GB  256 100.0%
871968c9-1d6b-4f06-ba90-8b3a8d92dcf0  rack1
UN  X.X.X.32  115.3 GB   256 100.0%
d7cacd89-8613-4de5-8a5e-a2c53c41ea45  rack1
UN  X.X.X.31  130.45 GB  256 100.0%
48cb0782-6c9a-4805-9330-38e192b6b680  rack1
{noformat}
, but when X.X.X.56 is starting it shows
{noformat}
root@analytics-1:/var/lib/cassandra# nodetool status
Note: Ownership information does not include topology; for complete 
information, specify a keyspace
Datacenter: Analytics
=
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
--  Address Load   Tokens  Owns   Host ID   
Rack
UN  X.X.X.50  18.41 GB   1   0.2%   25efdbcd-14d3-4e9c-803a-3db5795d4efa  
rack1
UN  X.X.X.52  19.07 GB   1   0.0%   caa32f68-5a6b-4d87-80bd-baa66a9b61ce  
rack1
UN  X.X.X.53  18.65 GB   1   0.1%   e219321e-a6d5-48c4-9bad-d2e25429b1d2  
rack1
UN  X.X.X.54  19.69 GB   1   0.0%   3cd36895-ee47-41c1-a5f5-41cb0f8526a6  
rack1
UN  X.X.X.55  18.97 GB   1   0.2%   7d3f73c4-724e-45a6-bcf9-d3072dfc157f  
rack1
D

[jira] [Commented] (CASSANDRA-8128) Exception when executing UPSERT

2014-10-17 Thread Jens Rantil (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8128?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14174866#comment-14174866
 ] 

Jens Rantil commented on CASSANDRA-8128:


> Can you paste the schema for the keyspace and table to help with 
> reproduction? Obfuscating the names of the keyspace, table, and columns is 
> fine.

Sure,

{noformat}
cqlsh:mykeyspace> DESCRIBE TABLE mytable;

CREATE TABLE mytable (
  col1 uuid,
  col2 uuid,
  col3 uuid,
  col4 double,
  col5 uuid,
  col6 text,
  col7 uuid,
  col8 timestamp,
  col9 text,
  col10 text,
  col11 bigint,
  col12 timestamp,
  col13 double,
  col14 text,
  col15 text,
  col16 text,
  col17 double,
  col18 double,
  col19 uuid,
  col20 text,
  col21 double,
  col22 timestamp,
  col23 text,
  col24 text,
  col25 boolean,
  "col62" bigint,
  col27 text,
  col28 boolean,
  col29 boolean,
  col30 boolean,
  col31 boolean,
  col32 boolean,
  PRIMARY KEY ((col1), col2)
) WITH
  bloom_filter_fp_chance=0.10 AND
  caching='KEYS_ONLY' AND
  comment='' AND
  dclocal_read_repair_chance=0.10 AND
  gc_grace_seconds=864000 AND
  index_interval=128 AND
  read_repair_chance=0.00 AND
  replicate_on_write='true' AND
  populate_io_cache_on_flush='false' AND
  default_time_to_live=0 AND
  speculative_retry='99.0PERCENTILE' AND
  memtable_flush_period_in_ms=0 AND
  compaction={'class': 'LeveledCompactionStrategy'} AND
  compression={'sstable_compression': 'LZ4Compressor'};
{noformat}

> What do you mean by UPSERT? We have no such keyword in CQL. Do you mean 
> INSERT? or UPDATE? or INSERT ... IF NOT EXISTS? How many rows in the batch? 
> How are you building it?

Sorry, there's no logical difference between INSERT and UPDATE (right?), but I 
should obviously be clearer. I am using spring-data-cassandra to store lists 
of objects. spring-data-cassandra uses the Datastax Java Driver and generates 
the CQL itself. The exception I am getting on the client end can be found here: 
https://jira.spring.io/browse/DATACASS-161. Based on it, I am doing an INSERT 
(the rows don't exist previously in the database). Batches are usually around 
1000-3000 rows. Like I said, smaller batches work.
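
As a workaround on my side, chunking the writes helps; here is a rough 
illustration using plain cqlsh instead of the driver (the file name is 
hypothetical, and each line of rows.cql is assumed to hold one complete INSERT 
statement):

{noformat}
# Hypothetical illustration: split one huge batch into chunks of 100 rows.
split -l 100 rows.cql chunk_
for f in chunk_*; do
  { echo "BEGIN UNLOGGED BATCH"; cat "$f"; echo "APPLY BATCH;"; } | cqlsh X.X.X.50
done
{noformat}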

> Exception when executing UPSERT
> ---
>
> Key: CASSANDRA-8128
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8128
> Project: Cassandra
>  Issue Type: Bug
>  Components: API
>Reporter: Jens Rantil
>Priority: Critical
>  Labels: cql3
>
> I am putting a bunch of (CQL) rows into Datastax DSE 4.5.1-1. Each upsert is 
> for a single partition key with up to ~3000 clustering keys. I understand too 
> large upserts aren't recommended, but I wouldn't expect to be getting the 
> following exception anyway:
> {noformat}
> ERROR [Native-Transport-Requests:4205136] 2014-10-16 12:00:38,668 
> ErrorMessage.java (line 222) Unexpected exception during request
> java.lang.IndexOutOfBoundsException: Index: 1749, Size: 1749
> at java.util.ArrayList.rangeCheck(ArrayList.java:635)
> at java.util.ArrayList.get(ArrayList.java:411)
> at 
> org.apache.cassandra.cql3.Constants$Marker.bindAndGet(Constants.java:278)
> at 
> org.apache.cassandra.cql3.Constants$Setter.execute(Constants.java:307)
> at 
> org.apache.cassandra.cql3.statements.UpdateStatement.addUpdateForKey(UpdateStatement.java:99)
> at 
> org.apache.cassandra.cql3.statements.BatchStatement.addStatementMutations(BatchStatement.java:200)
> at 
> org.apache.cassandra.cql3.statements.BatchStatement.getMutations(BatchStatement.java:145)
> at 
> org.apache.cassandra.cql3.statements.BatchStatement.execute(BatchStatement.java:251)
> at 
> org.apache.cassandra.cql3.statements.BatchStatement.execute(BatchStatement.java:232)
> at 
> org.apache.cassandra.cql3.QueryProcessor.processStatement(QueryProcessor.java:158)
> at 
> com.datastax.bdp.cassandra.cql3.DseQueryHandler.statementExecution(DseQueryHandler.java:207)
> at 
> com.datastax.bdp.cassandra.cql3.DseQueryHandler.process(DseQueryHandler.java:86)
> at 
> org.apache.cassandra.transport.messages.QueryMessage.execute(QueryMessage.java:119)
> at 
> org.apache.cassandra.transport.Message$Dispatcher.messageReceived(Message.java:304)
> at 
> org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)
> at 
> org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
> at 
> org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791)
> at 
> org.jboss.netty.handler.execution.ChannelUpstreamEventRunnable.doRun(ChannelUpstreamEventRunnable.java:43)
> at 
> org.jboss.netty.handler.execution.ChannelEventRunnable.run(Channe

[jira] [Commented] (CASSANDRA-8127) Support vertical listing in cqlsh

2014-10-17 Thread Jens Rantil (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8127?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14174858#comment-14174858
 ] 

Jens Rantil commented on CASSANDRA-8127:


> This has been supported for ages using EXPAND ON.

Good news! I just found the documentation (for anyone curious): 
http://www.datastax.com/documentation/cql/3.1/cql/cql_reference/expand.html
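
For the record, usage is simply the following (sketch of a cqlsh session; each 
row then prints vertically as its own block):

{noformat}
cqlsh> EXPAND ON;
cqlsh> SELECT * FROM testtable;  -- each row now prints one column per line
cqlsh> EXPAND OFF;
{noformat}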

> Support vertical listing in cqlsh
> -
>
> Key: CASSANDRA-8127
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8127
> Project: Cassandra
>  Issue Type: Wish
>  Components: Tools
>Reporter: Jens Rantil
>Priority: Minor
>  Labels: cqlsh
>
> MySQL CLI has this neat feature that you can end queries with `\G` and it 
> will list each result row vertically. For tables with many columns, or for users 
> with vertical screen orientation or smaller resolution, this is highly 
> useful. Every time I start `cqlsh` I feel this feature would be highly useful 
> for some of the tables that have many columns. See example below:
> {noformat}
> mysql> SELECT * FROM testtable;
> +--+--+--+
> | a| b| c|
> +--+--+--+
> |1 |2 |3 |
> |4 |5 |6 |
> |6 |7 |8 |
> +--+--+--+
> 3 rows in set (0.00 sec)
> mysql> SELECT * FROM testtable\G
> *** 1. row ***
> a: 1
> b: 2
> c: 3
> *** 2. row ***
> a: 4
> b: 5
> c: 6
> *** 3. row ***
> a: 6
> b: 7
> c: 8
> 3 rows in set (0.00 sec)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-8128) Exception when executing UPSERT

2014-10-16 Thread Jens Rantil (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8128?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jens Rantil updated CASSANDRA-8128:
---
Description: 
I am putting a bunch of (CQL) rows into Datastax DSE 4.5.1-1. Each upsert is 
for a single partition key with up to ~3000 clustering keys. I understand too 
large upserts aren't recommended, but I wouldn't expect to be getting the 
following exception anyway:

{noformat}
ERROR [Native-Transport-Requests:4205136] 2014-10-16 12:00:38,668 
ErrorMessage.java (line 222) Unexpected exception during request
java.lang.IndexOutOfBoundsException: Index: 1749, Size: 1749
at java.util.ArrayList.rangeCheck(ArrayList.java:635)
at java.util.ArrayList.get(ArrayList.java:411)
at 
org.apache.cassandra.cql3.Constants$Marker.bindAndGet(Constants.java:278)
at 
org.apache.cassandra.cql3.Constants$Setter.execute(Constants.java:307)
at 
org.apache.cassandra.cql3.statements.UpdateStatement.addUpdateForKey(UpdateStatement.java:99)
at 
org.apache.cassandra.cql3.statements.BatchStatement.addStatementMutations(BatchStatement.java:200)
at 
org.apache.cassandra.cql3.statements.BatchStatement.getMutations(BatchStatement.java:145)
at 
org.apache.cassandra.cql3.statements.BatchStatement.execute(BatchStatement.java:251)
at 
org.apache.cassandra.cql3.statements.BatchStatement.execute(BatchStatement.java:232)
at 
org.apache.cassandra.cql3.QueryProcessor.processStatement(QueryProcessor.java:158)
at 
com.datastax.bdp.cassandra.cql3.DseQueryHandler.statementExecution(DseQueryHandler.java:207)
at 
com.datastax.bdp.cassandra.cql3.DseQueryHandler.process(DseQueryHandler.java:86)
at 
org.apache.cassandra.transport.messages.QueryMessage.execute(QueryMessage.java:119)
at 
org.apache.cassandra.transport.Message$Dispatcher.messageReceived(Message.java:304)
at 
org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)
at 
org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
at 
org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791)
at 
org.jboss.netty.handler.execution.ChannelUpstreamEventRunnable.doRun(ChannelUpstreamEventRunnable.java:43)
at 
org.jboss.netty.handler.execution.ChannelEventRunnable.run(ChannelEventRunnable.java:67)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
{noformat}

  was:
I am putting a bunch of (CQL) rows into Datastax DSE 4.5.1-1. Each upsert is 
for a single partition key with up to 6000 clustering keys. I understand too 
large upserts aren't recommended, but I wouldn't expect to be getting the 
following exception anyway:

{noformat}
ERROR [Native-Transport-Requests:4205136] 2014-10-16 12:00:38,668 
ErrorMessage.java (line 222) Unexpected exception during request
java.lang.IndexOutOfBoundsException: Index: 1749, Size: 1749
at java.util.ArrayList.rangeCheck(ArrayList.java:635)
at java.util.ArrayList.get(ArrayList.java:411)
at 
org.apache.cassandra.cql3.Constants$Marker.bindAndGet(Constants.java:278)
at 
org.apache.cassandra.cql3.Constants$Setter.execute(Constants.java:307)
at 
org.apache.cassandra.cql3.statements.UpdateStatement.addUpdateForKey(UpdateStatement.java:99)
at 
org.apache.cassandra.cql3.statements.BatchStatement.addStatementMutations(BatchStatement.java:200)
at 
org.apache.cassandra.cql3.statements.BatchStatement.getMutations(BatchStatement.java:145)
at 
org.apache.cassandra.cql3.statements.BatchStatement.execute(BatchStatement.java:251)
at 
org.apache.cassandra.cql3.statements.BatchStatement.execute(BatchStatement.java:232)
at 
org.apache.cassandra.cql3.QueryProcessor.processStatement(QueryProcessor.java:158)
at 
com.datastax.bdp.cassandra.cql3.DseQueryHandler.statementExecution(DseQueryHandler.java:207)
at 
com.datastax.bdp.cassandra.cql3.DseQueryHandler.process(DseQueryHandler.java:86)
at 
org.apache.cassandra.transport.messages.QueryMessage.execute(QueryMessage.java:119)
at 
org.apache.cassandra.transport.Message$Dispatcher.messageReceived(Message.java:304)
at 
org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)
at 
org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
at 
org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791)
at 
org.jboss.netty.handler.execution.ChannelUpstreamEventRunnable.doRun(ChannelUpstreamEv

[jira] [Commented] (CASSANDRA-8128) Exception when executing UPSERT

2014-10-16 Thread Jens Rantil (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8128?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14173693#comment-14173693
 ] 

Jens Rantil commented on CASSANDRA-8128:


FYI, writing the rows in batches of 100 seems not to trigger the above exception.

> Exception when executing UPSERT
> ---
>
> Key: CASSANDRA-8128
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8128
> Project: Cassandra
>  Issue Type: Bug
>  Components: API
>Reporter: Jens Rantil
>Priority: Critical
>  Labels: cql3
>
> I am putting a bunch of (CQL) rows into Datastax DSE 4.5.1-1. Each upsert is 
> for a single partition key with up to 6000 clustering keys. I understand too 
> large upserts aren't recommended, but I wouldn't expect to be getting the 
> following exception anyway:
> {noformat}
> ERROR [Native-Transport-Requests:4205136] 2014-10-16 12:00:38,668 ErrorMessage.java (line 222) Unexpected exception during request
> java.lang.IndexOutOfBoundsException: Index: 1749, Size: 1749
> at java.util.ArrayList.rangeCheck(ArrayList.java:635)
> at java.util.ArrayList.get(ArrayList.java:411)
> at org.apache.cassandra.cql3.Constants$Marker.bindAndGet(Constants.java:278)
> at org.apache.cassandra.cql3.Constants$Setter.execute(Constants.java:307)
> at org.apache.cassandra.cql3.statements.UpdateStatement.addUpdateForKey(UpdateStatement.java:99)
> at org.apache.cassandra.cql3.statements.BatchStatement.addStatementMutations(BatchStatement.java:200)
> at org.apache.cassandra.cql3.statements.BatchStatement.getMutations(BatchStatement.java:145)
> at org.apache.cassandra.cql3.statements.BatchStatement.execute(BatchStatement.java:251)
> at org.apache.cassandra.cql3.statements.BatchStatement.execute(BatchStatement.java:232)
> at org.apache.cassandra.cql3.QueryProcessor.processStatement(QueryProcessor.java:158)
> at com.datastax.bdp.cassandra.cql3.DseQueryHandler.statementExecution(DseQueryHandler.java:207)
> at com.datastax.bdp.cassandra.cql3.DseQueryHandler.process(DseQueryHandler.java:86)
> at org.apache.cassandra.transport.messages.QueryMessage.execute(QueryMessage.java:119)
> at org.apache.cassandra.transport.Message$Dispatcher.messageReceived(Message.java:304)
> at org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)
> at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
> at org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791)
> at org.jboss.netty.handler.execution.ChannelUpstreamEventRunnable.doRun(ChannelUpstreamEventRunnable.java:43)
> at org.jboss.netty.handler.execution.ChannelEventRunnable.run(ChannelEventRunnable.java:67)
> at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> at java.lang.Thread.run(Thread.java:745)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (CASSANDRA-8128) Exception when executing UPSERT

2014-10-16 Thread Jens Rantil (JIRA)
Jens Rantil created CASSANDRA-8128:
--

 Summary: Exception when executing UPSERT
 Key: CASSANDRA-8128
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8128
 Project: Cassandra
  Issue Type: Bug
  Components: API
Reporter: Jens Rantil
Priority: Critical


I am putting a bunch of (CQL) rows into Datastax DSE 4.5.1-1. Each upsert is 
for a single partition key with up to 6000 clustering keys. I understand that 
too-large upserts aren't recommended, but I wouldn't expect to get the 
following exception anyway:

{noformat}
ERROR [Native-Transport-Requests:4205136] 2014-10-16 12:00:38,668 ErrorMessage.java (line 222) Unexpected exception during request
java.lang.IndexOutOfBoundsException: Index: 1749, Size: 1749
at java.util.ArrayList.rangeCheck(ArrayList.java:635)
at java.util.ArrayList.get(ArrayList.java:411)
at org.apache.cassandra.cql3.Constants$Marker.bindAndGet(Constants.java:278)
at org.apache.cassandra.cql3.Constants$Setter.execute(Constants.java:307)
at org.apache.cassandra.cql3.statements.UpdateStatement.addUpdateForKey(UpdateStatement.java:99)
at org.apache.cassandra.cql3.statements.BatchStatement.addStatementMutations(BatchStatement.java:200)
at org.apache.cassandra.cql3.statements.BatchStatement.getMutations(BatchStatement.java:145)
at org.apache.cassandra.cql3.statements.BatchStatement.execute(BatchStatement.java:251)
at org.apache.cassandra.cql3.statements.BatchStatement.execute(BatchStatement.java:232)
at org.apache.cassandra.cql3.QueryProcessor.processStatement(QueryProcessor.java:158)
at com.datastax.bdp.cassandra.cql3.DseQueryHandler.statementExecution(DseQueryHandler.java:207)
at com.datastax.bdp.cassandra.cql3.DseQueryHandler.process(DseQueryHandler.java:86)
at org.apache.cassandra.transport.messages.QueryMessage.execute(QueryMessage.java:119)
at org.apache.cassandra.transport.Message$Dispatcher.messageReceived(Message.java:304)
at org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)
at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
at org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791)
at org.jboss.netty.handler.execution.ChannelUpstreamEventRunnable.doRun(ChannelUpstreamEventRunnable.java:43)
at org.jboss.netty.handler.execution.ChannelEventRunnable.run(ChannelEventRunnable.java:67)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
{noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (CASSANDRA-8127) Support vertical listing in cqlsh

2014-10-16 Thread Jens Rantil (JIRA)
Jens Rantil created CASSANDRA-8127:
--

 Summary: Support vertical listing in cqlsh
 Key: CASSANDRA-8127
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8127
 Project: Cassandra
  Issue Type: Wish
  Components: Tools
Reporter: Jens Rantil
Priority: Minor


MySQL's CLI has a neat feature: you can end queries with `\G` and it will print 
each result row vertically. For tables with many columns, or for users with a 
vertical screen orientation or a smaller resolution, this is highly useful. 
Every time I start `cqlsh` I feel this feature would help with some of the 
tables that have many columns. See the example below:

{noformat}
mysql> SELECT * FROM testtable;
+---+---+---+
| a | b | c |
+---+---+---+
| 1 | 2 | 3 |
| 4 | 5 | 6 |
| 6 | 7 | 8 |
+---+---+---+
3 rows in set (0.00 sec)

mysql> SELECT * FROM testtable\G
*** 1. row ***
a: 1
b: 2
c: 3
*** 2. row ***
a: 4
b: 5
c: 6
*** 3. row ***
a: 6
b: 7
c: 8
3 rows in set (0.00 sec)
{noformat}
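
Until something like `\G` exists in cqlsh, a rough approximation of vertical 
output with the DataStax Python driver (contact point and keyspace are 
assumptions; `testtable` is the table above):

{noformat}
from cassandra.cluster import Cluster

cluster = Cluster(['127.0.0.1'])   # contact point is an assumption
session = cluster.connect('my_keyspace')

rows = session.execute("SELECT * FROM testtable")
for i, row in enumerate(rows, 1):
    print("*** %d. row ***" % i)
    # rows are namedtuples by default, so _fields lists the column names
    for name in row._fields:
        print("%s: %s" % (name, getattr(row, name)))
{noformat}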



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-6377) ALLOW FILTERING should allow seq scan filtering

2014-08-21 Thread Jens Rantil (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6377?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14105313#comment-14105313
 ] 

Jens Rantil commented on CASSANDRA-6377:


> Was there part of either of those two statements that is maybe worded too 
> vaguely, or is the issue how you would have found those statements more 
> easily? Improving doc usability is a priority.

Sorry for the late answer: looking at those passages, this was partially a user 
error, I guess, since I missed those. That said, a sentence in the "ALLOW 
FILTERING" section about what WHERE can filter on would have helped me, since 
that's the documentation part I looked up.

> ALLOW FILTERING should allow seq scan filtering
> ---
>
> Key: CASSANDRA-6377
> URL: https://issues.apache.org/jira/browse/CASSANDRA-6377
> Project: Cassandra
>  Issue Type: Bug
>  Components: API
>Reporter: Jonathan Ellis
>Assignee: Sylvain Lebresne
>  Labels: cql
> Fix For: 3.0
>
>
> CREATE TABLE emp_table2 (
> empID int PRIMARY KEY,
> firstname text,
> lastname text,
> b_mon text,
> b_day text,
> b_yr text
> );
> INSERT INTO emp_table2 (empID,firstname,lastname,b_mon,b_day,b_yr) 
>VALUES (100,'jane','doe','oct','31','1980');
> INSERT INTO emp_table2 (empID,firstname,lastname,b_mon,b_day,b_yr) 
>VALUES (101,'john','smith','jan','01','1981');
> INSERT INTO emp_table2 (empID,firstname,lastname,b_mon,b_day,b_yr) 
>VALUES (102,'mary','jones','apr','15','1982');
> INSERT INTO emp_table2 (empID,firstname,lastname,b_mon,b_day,b_yr) 
>VALUES (103,'tim','best','oct','25','1982');
>
> SELECT b_mon,b_day,b_yr,firstname,lastname FROM emp_table2 
> WHERE b_mon='oct' ALLOW FILTERING;
> Bad Request: No indexed columns present in by-columns clause with Equal 
> operator
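
For completeness: the equality query above becomes legal without ALLOW 
FILTERING once the filtered column has a secondary index. A minimal sketch 
with the DataStax Python driver (contact point and keyspace are assumptions):

{noformat}
from cassandra.cluster import Cluster

cluster = Cluster(['127.0.0.1'])   # contact point is an assumption
session = cluster.connect('my_keyspace')

# A secondary index on b_mon makes the equality predicate queryable
# (the index is built asynchronously after this statement returns).
session.execute("CREATE INDEX ON emp_table2 (b_mon)")

rows = session.execute(
    "SELECT b_mon, b_day, b_yr, firstname, lastname "
    "FROM emp_table2 WHERE b_mon = 'oct'")
for row in rows:
    print(row)
{noformat}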



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (CASSANDRA-6377) ALLOW FILTERING should allow seq scan filtering

2014-08-06 Thread Jens Rantil (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6377?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14087441#comment-14087441
 ] 

Jens Rantil commented on CASSANDRA-6377:


I just asked a question about this on the user mailing list 
(http://bit.ly/V0YaAg). I was expecting this to work. Just wanted to chime in 
that it's not entirely clear from the CQL documentation that this is currently 
not supported.

> ALLOW FILTERING should allow seq scan filtering
> ---
>
> Key: CASSANDRA-6377
> URL: https://issues.apache.org/jira/browse/CASSANDRA-6377
> Project: Cassandra
>  Issue Type: Bug
>  Components: API
>Reporter: Jonathan Ellis
>Assignee: Sylvain Lebresne
>  Labels: cql
> Fix For: 3.0
>
>
> CREATE TABLE emp_table2 (
> empID int PRIMARY KEY,
> firstname text,
> lastname text,
> b_mon text,
> b_day text,
> b_yr text
> );
> INSERT INTO emp_table2 (empID,firstname,lastname,b_mon,b_day,b_yr) 
>VALUES (100,'jane','doe','oct','31','1980');
> INSERT INTO emp_table2 (empID,firstname,lastname,b_mon,b_day,b_yr) 
>VALUES (101,'john','smith','jan','01','1981');
> INSERT INTO emp_table2 (empID,firstname,lastname,b_mon,b_day,b_yr) 
>VALUES (102,'mary','jones','apr','15','1982');
> INSERT INTO emp_table2 (empID,firstname,lastname,b_mon,b_day,b_yr) 
>VALUES (103,'tim','best','oct','25','1982');
>
> SELECT b_mon,b_day,b_yr,firstname,lastname FROM emp_table2 
> WHERE b_mon='oct' ALLOW FILTERING;
> Bad Request: No indexed columns present in by-columns clause with Equal 
> operator



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (CASSANDRA-7683) Always allow CREATE TABLE IF NOT EXISTS if it exists

2014-08-05 Thread Jens Rantil (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7683?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14086167#comment-14086167
 ] 

Jens Rantil commented on CASSANDRA-7683:


[~jkrupan] "CREATE TABLE IF NOT EXISTS", yes that's what I meant. Sorry for 
that.

Thanks, y'all, for at least considering this. I think you understood the issue 
well. I'll keep my workaround for now.

> Always allow CREATE TABLE IF NOT EXISTS if it exists
> 
>
> Key: CASSANDRA-7683
> URL: https://issues.apache.org/jira/browse/CASSANDRA-7683
> Project: Cassandra
>  Issue Type: Wish
>  Components: Core
>Reporter: Jens Rantil
>Priority: Minor
>
> Background: I have a table that I'd like to make sure exists when I boot up 
> my application. To make life easier for our developers I execute an 
> `ALTER TABLE IF EXISTS`.
> In production I am using user-based authorization, and for security reasons 
> regular production users are not allowed to CREATE TABLEs.
> Problem: When a user without CREATE permission executes `ALTER TABLE IF 
> EXISTS` for a table that already exists, the command fails, telling me the 
> user is not allowed to execute `CREATE TABLE`. It feels kinda ridiculous that 
> this fails when I'm not actually creating the table.
> Proposal: The permission check should only be done if the table is actually 
> going to be created.
> Workaround: Right now, I have a boolean that checks whether we are in 
> production and in that case doesn't try to create the table. Another approach 
> would be to manually check if the table exists.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (CASSANDRA-7683) Always allow CREATE TABLE IF NOT EXISTS if it exists

2014-08-04 Thread Jens Rantil (JIRA)
Jens Rantil created CASSANDRA-7683:
--

 Summary: Always allow CREATE TABLE IF NOT EXISTS if it exists
 Key: CASSANDRA-7683
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7683
 Project: Cassandra
  Issue Type: Wish
  Components: Core
Reporter: Jens Rantil
Priority: Minor


Background: I have a table that I'd like to make sure exists when I boot up my 
application. To make life easier for our developers I execute an `ALTER 
TABLE IF EXISTS`.

In production I am using user-based authorization, and for security reasons 
regular production users are not allowed to CREATE TABLEs.

Problem: When a user without CREATE permission executes `ALTER TABLE IF EXISTS` 
for a table that already exists, the command fails, telling me the user is not 
allowed to execute `CREATE TABLE`. It feels kinda ridiculous that this fails 
when I'm not actually creating the table.

Proposal: The permission check should only be done if the table is actually 
going to be created.

Workaround: Right now, I have a boolean that checks whether we are in 
production and in that case doesn't try to create the table. Another approach 
would be to manually check if the table exists.
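
A sketch of that "manually check" approach via driver schema metadata, using 
the DataStax Python driver (contact point, keyspace, and table names are 
hypothetical):

{noformat}
from cassandra.cluster import Cluster

cluster = Cluster(['127.0.0.1'])   # contact point is an assumption
session = cluster.connect()

# Schema metadata lets us test for the table without any DDL permission.
ks = cluster.metadata.keyspaces.get('my_keyspace')
if ks is None or 'my_table' not in ks.tables:
    # Only issue CREATE TABLE when the table is really missing, so regular
    # production users never exercise their (absent) CREATE permission.
    session.execute(
        "CREATE TABLE my_keyspace.my_table (id int PRIMARY KEY, v text)")
{noformat}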



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (CASSANDRA-7655) Slow repair on empty cluster

2014-07-31 Thread Jens Rantil (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7655?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14080772#comment-14080772
 ] 

Jens Rantil commented on CASSANDRA-7655:


Oh, yeah:
{noformat}
cqlsh> DESC KEYSPACES;

system  my-keyspace  dse_system  system_auth  system_traces
{noformat}

> Slow repair on empty cluster
> 
>
> Key: CASSANDRA-7655
> URL: https://issues.apache.org/jira/browse/CASSANDRA-7655
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
>Reporter: Jens Rantil
>Priority: Minor
>  Labels: repair
>
> Background: I have done:
>  * I've set up three (Datastax DSE) nodes with replication factor of 3. Each 
> node has 256 vnodes. Each is running Cassandra 2.0.8.39, according to `cqlsh`.
>  * [enabled 
> authorization|http://www.datastax.com/documentation/cassandra/1.2/cassandra/security/secure_config_native_authorize_t.html]
>  * [enabled 
> authentication|http://www.datastax.com/documentation/cassandra/1.2/cassandra/security/secure_about_native_authenticate_c.html]
>  * created a custom keyspace with that replication factor and a small table 
> without putting any data into it.
> For fun I executed a `nodetool repair` in my terminal; this took _23 minutes_. 
> This feels a bit slow to me for not having put _any_ data into my cluster. Is 
> this expected? Or a bug?



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (CASSANDRA-7655) Slow repair on empty cluster

2014-07-31 Thread Jens Rantil (JIRA)
Jens Rantil created CASSANDRA-7655:
--

 Summary: Slow repair on empty cluster
 Key: CASSANDRA-7655
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7655
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Reporter: Jens Rantil
Priority: Minor


Background: I have done:
 * I've set up three (Datastax DSE) nodes with replication factor of 3. Each 
node has 256 vnodes. Each is running Cassandra 2.0.8.39, according to `cqlsh`.
 * [enabled 
authorization|http://www.datastax.com/documentation/cassandra/1.2/cassandra/security/secure_config_native_authorize_t.html]
 * [enabled 
authentication|http://www.datastax.com/documentation/cassandra/1.2/cassandra/security/secure_about_native_authenticate_c.html]
 * created a custom keyspace with that replication factor and a small table 
without putting any data into it.

For fun I executed a `nodetool repair` in my terminal; this took _23 minutes_. 
This feels a bit slow to me for not having put _any_ data into my cluster. Is 
this expected? Or a bug?
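
One back-of-the-envelope explanation (an assumption, not a diagnosis): with 
vnodes, a full repair coordinates work per token range, and 3 nodes x 256 
vnodes = 768 ranges. Even a modest ~2 seconds of per-range coordination 
overhead on an empty cluster gives 768 x 2 s ~= 26 minutes, which matches the 
order of magnitude observed here.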



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (CASSANDRA-7553) Issue parsing release candidate version in cqlsh

2014-07-16 Thread Jens Rantil (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-7553?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jens Rantil updated CASSANDRA-7553:
---

Environment: 
{noformat}
$ cat /etc/lsb-release
DISTRIB_ID=Ubuntu
DISTRIB_RELEASE=12.10
DISTRIB_CODENAME=quantal
DISTRIB_DESCRIPTION="Ubuntu 12.10"
$ dpkg -l|grep cassandra
ii  cassandra   2.1.0~rc3   all   distributed storage system for structured data
{noformat}

  was:
$ cat /etc/lsb-release
DISTRIB_ID=Ubuntu
DISTRIB_RELEASE=12.10
DISTRIB_CODENAME=quantal
DISTRIB_DESCRIPTION="Ubuntu 12.10"
dpkg -l|grep cassandra
ii  cassandra   2.1.0~rc3   all   distributed storage system for structured data


> Issue parsing release candidate version in cqlsh
> 
>
> Key: CASSANDRA-7553
> URL: https://issues.apache.org/jira/browse/CASSANDRA-7553
> Project: Cassandra
>  Issue Type: Bug
>  Components: Tools
> Environment: {noformat}
> $ cat /etc/lsb-release
> DISTRIB_ID=Ubuntu
> DISTRIB_RELEASE=12.10
> DISTRIB_CODENAME=quantal
> DISTRIB_DESCRIPTION="Ubuntu 12.10"
> $ dpkg -l|grep cassandra
> ii  cassandra   2.1.0~rc3   all   distributed storage system for structured data
> {noformat}
>Reporter: Jens Rantil
>  Labels: cqlsh
> Fix For: 2.1 rc3
>
>
> I just did a fresh install of 2.1.0~rc3, made sure Cassandra was running 
> (which it was) and executed `cqlsh` in the shell:
> {noformat}
> $ cqlsh
> Traceback (most recent call last):
>   File "/usr/bin/cqlsh", line 1894, in 
> main(*read_options(sys.argv[1:], os.environ))
>   File "/usr/bin/cqlsh", line 1877, in main
> single_statement=options.execute)
>   File "/usr/bin/cqlsh", line 496, in __init__
> self.get_connection_versions()
>   File "/usr/bin/cqlsh", line 595, in get_connection_versions
> self.cass_ver_tuple = tuple(map(int, vers['build'].split('-', 
> 1)[0].split('.')[:3]))
> ValueError: invalid literal for int() with base 10: '0~rc3'
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (CASSANDRA-7553) Issue parsing release candidate version in cqlsh

2014-07-16 Thread Jens Rantil (JIRA)
Jens Rantil created CASSANDRA-7553:
--

 Summary: Issue parsing release candidate version in cqlsh
 Key: CASSANDRA-7553
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7553
 Project: Cassandra
  Issue Type: Bug
  Components: Tools
 Environment: $ cat /etc/lsb-release
DISTRIB_ID=Ubuntu
DISTRIB_RELEASE=12.10
DISTRIB_CODENAME=quantal
DISTRIB_DESCRIPTION="Ubuntu 12.10"
dpkg -l|grep cassandra
ii  cassandra   2.1.0~rc3   all   distributed storage system for structured data
Reporter: Jens Rantil
 Fix For: 2.1 rc3


I just did a fresh install of 2.1.0~rc3, made sure Cassandra was running (which 
it was) and executed `cqlsh` in the shell:

{noformat}
$ cqlsh
Traceback (most recent call last):
  File "/usr/bin/cqlsh", line 1894, in 
main(*read_options(sys.argv[1:], os.environ))
  File "/usr/bin/cqlsh", line 1877, in main
single_statement=options.execute)
  File "/usr/bin/cqlsh", line 496, in __init__
self.get_connection_versions()
  File "/usr/bin/cqlsh", line 595, in get_connection_versions
self.cass_ver_tuple = tuple(map(int, vers['build'].split('-', 
1)[0].split('.')[:3]))
ValueError: invalid literal for int() with base 10: '0~rc3'
{noformat}
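
The parse chokes because the pre-release component '0~rc3' is handed straight 
to int(). A minimal sketch of a more tolerant parse (one possible fix, not 
necessarily what was committed):

{noformat}
import re

def parse_build_version(build):
    # Keep only the leading digits of each dotted component so that
    # pre-release suffixes such as '0~rc3' still yield a usable tuple.
    parts = build.split('-', 1)[0].split('.')[:3]
    return tuple(int(re.match(r'\d+', p).group()) for p in parts)

print(parse_build_version('2.1.0~rc3'))  # -> (2, 1, 0)
{noformat}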



--
This message was sent by Atlassian JIRA
(v6.2#6252)