[jira] [Updated] (CASSANDRA-7464) Replace sstable2json and json2sstable

2015-12-21 Thread Joshua McKenzie (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-7464?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joshua McKenzie updated CASSANDRA-7464:
---
Reviewer: Yuki Morishita

> Replace sstable2json and json2sstable
> -
>
> Key: CASSANDRA-7464
> URL: https://issues.apache.org/jira/browse/CASSANDRA-7464
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Sylvain Lebresne
>Assignee: Chris Lohfink
>Priority: Minor
> Fix For: 3.x
>
>
> Both tools are pretty awful. They are primarily meant for debugging (there are 
> much more efficient and convenient ways to import/export data), but their 
> output manages to be hard to handle both for humans and for tools (especially 
> as soon as you have modern stuff like composites).
> There is value in having tools that export sstable contents into a format that 
> is easy for humans and tools to manipulate, for debugging, small hacks and 
> general tinkering, but sstable2json and json2sstable are not that.
> So I propose that we deprecate those tools and consider writing better 
> replacements. It shouldn't be too hard to come up with an output format that 
> is more aware of modern concepts like composites, UDTs, 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-7464) Replace sstable2json and json2sstable

2015-12-21 Thread Joshua McKenzie (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-7464?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joshua McKenzie updated CASSANDRA-7464:
---
Assignee: Chris Lohfink

> Replace sstable2json and json2sstable
> -
>
> Key: CASSANDRA-7464
> URL: https://issues.apache.org/jira/browse/CASSANDRA-7464
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Sylvain Lebresne
>Assignee: Chris Lohfink
>Priority: Minor
> Fix For: 3.x
>
>
> Both tools are pretty awful. They are primarily meant for debugging (there are 
> much more efficient and convenient ways to import/export data), but their 
> output manages to be hard to handle both for humans and for tools (especially 
> as soon as you have modern stuff like composites).
> There is value in having tools that export sstable contents into a format that 
> is easy for humans and tools to manipulate, for debugging, small hacks and 
> general tinkering, but sstable2json and json2sstable are not that.
> So I propose that we deprecate those tools and consider writing better 
> replacements. It shouldn't be too hard to come up with an output format that 
> is more aware of modern concepts like composites, UDTs, 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-10111) reconnecting snitch can bypass cluster name check

2015-12-21 Thread Paulo Motta (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10111?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15067055#comment-15067055
 ] 

Paulo Motta commented on CASSANDRA-10111:
-

The code and approach look good, but it seems we can only bump the messaging 
service version in the next major release, 4.0.

Alternatives are to wait until then, or to do some workaround before that, like 
adding a new gossip field {{CLUSTER_ID}} to {{ApplicationState}} and ignoring any 
{{GossipDigestAck}} or {{GossipDigestAck2}} messages containing states from 
other cluster ids.

Since this is quite an unlikely (and unfortunate) situation, I'd be more in 
favor of waiting for the messaging bump (since we've waited this long) instead 
of polluting gossip with more fields.
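For illustration only, a minimal sketch of the second idea, with the gossip states 
from an ack modelled as a plain per-endpoint string map. The {{CLUSTER_ID}} key and 
the class/method names are hypothetical, not Cassandra's actual {{ApplicationState}} 
API:

{code}
import java.net.InetAddress;
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch: drop gossip states that advertise a CLUSTER_ID
// different from our own before they are applied locally.
public final class ClusterIdFilter
{
    private static final String CLUSTER_ID = "CLUSTER_ID"; // hypothetical gossip field

    public static Map<InetAddress, Map<String, String>> filterForeignClusterStates(
            Map<InetAddress, Map<String, String>> statesFromAck, String localClusterId)
    {
        Map<InetAddress, Map<String, String>> accepted = new HashMap<>();
        for (Map.Entry<InetAddress, Map<String, String>> e : statesFromAck.entrySet())
        {
            String remoteClusterId = e.getValue().get(CLUSTER_ID);
            // Ignore endpoints gossiping a different cluster id; endpoints that do not
            // advertise the field at all (older versions) are accepted for compatibility.
            if (remoteClusterId == null || remoteClusterId.equals(localClusterId))
                accepted.put(e.getKey(), e.getValue());
        }
        return accepted;
    }
}
{code}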

> reconnecting snitch can bypass cluster name check
> -
>
> Key: CASSANDRA-10111
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10111
> Project: Cassandra
>  Issue Type: Bug
>  Components: Distributed Metadata
>Reporter: Chris Burroughs
>Assignee: Joel Knighton
>  Labels: gossip
> Fix For: 3.x
>
>
> Setup:
>  * Two clusters: A & B
>  * Both are two-DC clusters
>  * Both use GossipingPropertyFileSnitch with different 
> listen_address/broadcast_address
> A new node was added to cluster A with the broadcast_address of an existing 
> node in cluster B (due to an out-of-date DNS entry). Cluster B added all of 
> the nodes from cluster A, somehow bypassing the cluster name mismatch check 
> for these nodes. The first reference to cluster A nodes in cluster B's logs is 
> when they were added:
> {noformat}
>  INFO [GossipStage:1] 2015-08-17 15:08:33,858 Gossiper.java (line 983) Node 
> /8.37.70.168 is now part of the cluster
> {noformat}
> Cluster B nodes then tried to gossip to cluster A nodes, but cluster A kept 
> them out with 'ClusterName mismatch'. Cluster B nevertheless tried to send 
> reads/writes to cluster A and general mayhem ensued.
> Obviously this is a Bad (TM) config that Should Not Be Done. However, since 
> the consequences of crazily merged clusters are really bad (the reason the 
> name mismatch check exists in the first place), I think the hole is reasonable 
> to plug. I'm not sure exactly what the code path is that skips the check in 
> GossipDigestSynVerbHandler.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-10111) reconnecting snitch can bypass cluster name check

2015-12-21 Thread Joel Knighton (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10111?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15067072#comment-15067072
 ] 

Joel Knighton commented on CASSANDRA-10111:
---

Sounds good - my original understanding was that this would be okay, but it 
sounds like the messaging service version change strategy is still unclear.

I think the best option is to wait until the next messaging service change. As 
you mentioned, this is an unlikely situation that has a solution in the form of 
forcing removal of the entries from gossip using nodetool.

> reconnecting snitch can bypass cluster name check
> -
>
> Key: CASSANDRA-10111
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10111
> Project: Cassandra
>  Issue Type: Bug
>  Components: Distributed Metadata
>Reporter: Chris Burroughs
>Assignee: Joel Knighton
>  Labels: gossip, messaging-service-bump-required
> Fix For: 3.x
>
>
> Setup:
>  * Two clusters: A & B
>  * Both are two-DC clusters
>  * Both use GossipingPropertyFileSnitch with different 
> listen_address/broadcast_address
> A new node was added to cluster A with the broadcast_address of an existing 
> node in cluster B (due to an out-of-date DNS entry). Cluster B added all of 
> the nodes from cluster A, somehow bypassing the cluster name mismatch check 
> for these nodes. The first reference to cluster A nodes in cluster B's logs is 
> when they were added:
> {noformat}
>  INFO [GossipStage:1] 2015-08-17 15:08:33,858 Gossiper.java (line 983) Node 
> /8.37.70.168 is now part of the cluster
> {noformat}
> Cluster B nodes then tried to gossip to cluster A nodes, but cluster A kept 
> them out with 'ClusterName mismatch'. Cluster B nevertheless tried to send 
> reads/writes to cluster A and general mayhem ensued.
> Obviously this is a Bad (TM) config that Should Not Be Done. However, since 
> the consequences of crazily merged clusters are really bad (the reason the 
> name mismatch check exists in the first place), I think the hole is reasonable 
> to plug. I'm not sure exactly what the code path is that skips the check in 
> GossipDigestSynVerbHandler.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-10912) resumable_bootstrap_test dtest flaps

2015-12-21 Thread Jim Witschey (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10912?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15067089#comment-15067089
 ] 

Jim Witschey commented on CASSANDRA-10912:
--

[~yukim] Could you have a look at this?

> resumable_bootstrap_test dtest flaps
> 
>
> Key: CASSANDRA-10912
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10912
> Project: Cassandra
>  Issue Type: Sub-task
>Reporter: Jim Witschey
> Fix For: 3.0.x
>
>
> {{bootstrap_test.py:TestBootstrap.resumable_bootstrap_test}} test flaps when 
> a node fails to start listening for connections via CQL:
> {code}
> 21 Dec 2015 10:07:48 [node3] Missing: ['Starting listening for CQL clients']:
> {code}
> I've seen it on 2.2 HEAD:
> http://cassci.datastax.com/job/cassandra-2.2_dtest/449/testReport/bootstrap_test/TestBootstrap/resumable_bootstrap_test/history/
> and 3.0 HEAD:
> http://cassci.datastax.com/job/cassandra-3.0_dtest/444/testReport/junit/bootstrap_test/TestBootstrap/resumable_bootstrap_test/
> and trunk:
> http://cassci.datastax.com/job/trunk_dtest/838/testReport/junit/bootstrap_test/TestBootstrap/resumable_bootstrap_test/



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-6696) Partition sstables by token range

2015-12-21 Thread Yuki Morishita (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6696?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15066989#comment-15066989
 ] 

Yuki Morishita commented on CASSANDRA-6696:
---

[~krummas] As you pointed out, the progress display will be messed up.
Since the total bytes received for each boundary cannot be determined beforehand 
right now, displaying a constant name is the way to go. For that, keyspace and 
table names are enough, IMO.
Of course, if we only have one disk, then we can do what we do now (show 
the whole path).

Other than that, the streaming part looks good to me.

> Partition sstables by token range
> -
>
> Key: CASSANDRA-6696
> URL: https://issues.apache.org/jira/browse/CASSANDRA-6696
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: sankalp kohli
>Assignee: Marcus Eriksson
>  Labels: compaction, correctness, dense-storage, 
> jbod-aware-compaction, performance
> Fix For: 3.2
>
>
> In JBOD, when someone gets a bad drive, the bad drive is replaced with a new 
> empty one and repair is run. 
> This can cause deleted data to come back in some cases. The same is true for 
> corrupt sstables, where we delete the corrupt sstable and run repair. 
> Here is an example:
> Say we have 3 nodes A, B and C, RF=3 and GC grace = 10 days. 
> row=sankalp col=sankalp was written 20 days ago and successfully went to all 
> three nodes. 
> Then a delete/tombstone was written successfully for the same row column 15 
> days ago. 
> Since this tombstone is older than gc grace, it got compacted away on nodes A 
> and B together with the actual data. So there is no trace of this row 
> column on nodes A and B.
> Now on node C, say the original data is on drive1 and the tombstone is on drive2. 
> Compaction has not yet reclaimed the data and tombstone. 
> Drive2 becomes corrupt and is replaced with a new empty drive. 
> Due to the replacement, the tombstone is now gone and row=sankalp col=sankalp 
> has come back to life. 
> Now, after replacing the drive, we run repair. This data will be propagated to 
> all nodes. 
> Note: This is still a problem even if we run repair every gc grace. 
>  



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-6246) EPaxos

2015-12-21 Thread Robert Stupp (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6246?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Stupp updated CASSANDRA-6246:

Labels: messaging-service-bump-required  (was: )

> EPaxos
> --
>
> Key: CASSANDRA-6246
> URL: https://issues.apache.org/jira/browse/CASSANDRA-6246
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Jonathan Ellis
>Assignee: Blake Eggleston
>  Labels: messaging-service-bump-required
> Fix For: 3.x
>
>
> One reason we haven't optimized our Paxos implementation with Multi-Paxos is 
> that Multi-Paxos requires leader election and hence a period of 
> unavailability when the leader dies.
> EPaxos is a Paxos variant that (1) requires fewer messages than Multi-Paxos, 
> (2) is particularly useful across multiple datacenters, and (3) allows any 
> node to act as coordinator: 
> http://sigops.org/sosp/sosp13/papers/p358-moraru.pdf
> However, there is substantial additional complexity involved if we choose to 
> implement it.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-10520) Compressed writer and reader should support non-compressed data.

2015-12-21 Thread Robert Stupp (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10520?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Stupp updated CASSANDRA-10520:
-
Labels: messaging-service-bump-required  (was: )

> Compressed writer and reader should support non-compressed data.
> 
>
> Key: CASSANDRA-10520
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10520
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Local Write-Read Paths
>Reporter: Branimir Lambov
>Assignee: Branimir Lambov
>  Labels: messaging-service-bump-required
> Fix For: 3.0.x
>
>
> Compressing uncompressible data, as done, for instance, to write SSTables 
> during stress tests, results in chunks larger than 64k, which are a problem 
> for the buffer pooling mechanisms employed by the 
> {{CompressedRandomAccessReader}}. This results in non-negligible performance 
> issues due to excessive memory allocation.
> To solve this problem and avoid decompression delays in the cases where it 
> does not provide benefits, I think we should allow compressed files to store 
> uncompressed chunks as an alternative to compressed data. Such a chunk could be 
> written after compression returns a buffer larger than, for example, 90% of 
> the input, and would not result in additional delays in writing. On reads it 
> could be recognized by size (using a single global threshold constant in the 
> compression metadata) and data could be directly transferred into the 
> decompressed buffer, skipping the decompression step and ensuring a 64k 
> buffer for compressed data always suffices.
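As a rough sketch of the proposed write-side behaviour (not the actual compressed 
writer code; the {{Compressor}} interface and the 90% threshold below are assumptions 
taken from the description above):

{code}
import java.nio.ByteBuffer;

// Sketch: fall back to storing a chunk uncompressed when compression does not
// shrink it below a configured fraction of the input size.
final class ChunkWriterSketch
{
    interface Compressor
    {
        ByteBuffer compress(ByteBuffer input); // assumed interface for this sketch
    }

    static final double MAX_COMPRESSED_RATIO = 0.9; // the 90% example from above

    static ByteBuffer chunkToWrite(ByteBuffer input, Compressor compressor)
    {
        ByteBuffer compressed = compressor.compress(input.duplicate());
        // If compression saved less than ~10%, write the original bytes instead.
        // On read, such a chunk is recognized by its size and copied straight into
        // the target buffer, skipping decompression entirely.
        if (compressed.remaining() > input.remaining() * MAX_COMPRESSED_RATIO)
            return input;
        return compressed;
    }
}
{code}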



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (CASSANDRA-10912) resumable_bootstrap_test dtest flaps

2015-12-21 Thread Jim Witschey (JIRA)
Jim Witschey created CASSANDRA-10912:


 Summary: resumable_bootstrap_test dtest flaps
 Key: CASSANDRA-10912
 URL: https://issues.apache.org/jira/browse/CASSANDRA-10912
 Project: Cassandra
  Issue Type: Sub-task
Reporter: Jim Witschey


{{bootstrap_test.py:TestBootstrap.resumable_bootstrap_test}} test flaps when a 
node fails to start listening for connections via CQL:

{code}
21 Dec 2015 10:07:48 [node3] Missing: ['Starting listening for CQL clients']:
{code}

I've seen it on 2.2 HEAD:

http://cassci.datastax.com/job/cassandra-2.2_dtest/449/testReport/bootstrap_test/TestBootstrap/resumable_bootstrap_test/history/

and 3.0 HEAD:

http://cassci.datastax.com/job/cassandra-3.0_dtest/444/testReport/junit/bootstrap_test/TestBootstrap/resumable_bootstrap_test/

and trunk:

http://cassci.datastax.com/job/trunk_dtest/838/testReport/junit/bootstrap_test/TestBootstrap/resumable_bootstrap_test/



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-10854) cqlsh COPY FROM csv having line with more than one consecutive ',' delimiter is throwing 'list index out of range'

2015-12-21 Thread Jim Witschey (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10854?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15066996#comment-15066996
 ] 

Jim Witschey commented on CASSANDRA-10854:
--

[~Stefania] I believe you may be the person to have a look at this. Importing 
this file on {{cassandra-2.2}} {{HEAD}} prints a different, equally unhelpful error message:

{code}
cqlsh> COPY music.tracks_by_album (album_title, album_year, performer, 
album_genre, track_number, track_title) FROM './tracks_by_album.csv' WITH 
HEADER = 'true';

Starting copy of music.tracks_by_album with columns ['album_title', 
'album_year', 'performer', 'album_genre', 'track_number', 'track_title'].
Failed to import 1 rows: TypeError - exceptions.Exception does not take keyword 
arguments -  given up after 1 attempts
Failed to process 1 batches
Processed: 0 rows; Rate:   0 rows/s; Avg. rate:   0 rows/s
0 rows imported in 0.124 seconds.
{code}

[~puspendu.baner...@gmail.com] I believe the CSV you shared is not supposed to 
be accepted by {{COPY FROM}}, but there's definitely room to improve the error 
message.

> cqlsh COPY FROM csv having line with more than one consecutive  ',' delimiter 
>  is throwing 'list index out of range'
> 
>
> Key: CASSANDRA-10854
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10854
> Project: Cassandra
>  Issue Type: Bug
>  Components: Tools
> Environment: cqlsh 5.0.1 | Cassandra 2.1.11.969 | DSE 4.8.3 | CQL 
> spec 3.2.1 
>Reporter: Puspendu Banerjee
>Priority: Minor
>
> cqlsh COPY FROM of a csv with a line containing more than one consecutive ',' 
> delimiter throws 'list index out of range'.
> Steps to reproduce:
> {code}
> CREATE TABLE tracks_by_album (
>   album_title TEXT,
>   album_year INT,
>   performer TEXT STATIC,
>   album_genre TEXT STATIC,
>   track_number INT,
>   track_title TEXT,
>   PRIMARY KEY ((album_title, album_year), track_number)
> );
> {code}
> Create a file tracks_by_album.csv with the following 2 lines:
> {code}
> album,year,performer,genre,number,title
> a,2015,b c d,e f g,,
> {code}
> {code}
> cqlsh> COPY music.tracks_by_album
>  (album_title, album_year, performer, album_genre, track_number, 
> track_title)
> FROM '~/tracks_by_album.csv'
> WITH HEADER = 'true';
> Error :
> Starting copy of music.tracks_by_album with columns ['album_title', 
> 'album_year', 'performer', 'album_genre', 'track_number', 'track_title'].
> list index out of range
> Aborting import at record #1. Previously inserted records are still present, 
> and some records after that may be present as well.
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-10383) Disable auto snapshot on selected tables.

2015-12-21 Thread Paulo Motta (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10383?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Paulo Motta updated CASSANDRA-10383:

Labels: doc-impacting messaging-service-bump-required  (was: doc-impacting)

> Disable auto snapshot on selected tables.
> -
>
> Key: CASSANDRA-10383
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10383
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Tommy Stendahl
>Assignee: Tommy Stendahl
>  Labels: doc-impacting, messaging-service-bump-required
> Attachments: 10383.txt
>
>
> I have a use case where I would like to turn off auto snapshot for selected 
> tables; I don't want to turn it off completely since it's a good feature. 
> Looking at the code, I think it would be relatively easy to fix.
> My plan is to create a new table property named something like 
> "disable_auto_snapshot". If set to false it will prevent auto snapshot on the 
> table; if set to true, auto snapshot will be controlled by the "auto_snapshot" 
> property in cassandra.yaml. The default would be true.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-10111) reconnecting snitch can bypass cluster name check

2015-12-21 Thread Paulo Motta (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10111?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Paulo Motta updated CASSANDRA-10111:

Labels: gossip messaging-service-bump-required  (was: gossip)

> reconnecting snitch can bypass cluster name check
> -
>
> Key: CASSANDRA-10111
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10111
> Project: Cassandra
>  Issue Type: Bug
>  Components: Distributed Metadata
>Reporter: Chris Burroughs
>Assignee: Joel Knighton
>  Labels: gossip, messaging-service-bump-required
> Fix For: 3.x
>
>
> Setup:
>  * Two clusters: A & B
>  * Both are two-DC clusters
>  * Both use GossipingPropertyFileSnitch with different 
> listen_address/broadcast_address
> A new node was added to cluster A with the broadcast_address of an existing 
> node in cluster B (due to an out-of-date DNS entry). Cluster B added all of 
> the nodes from cluster A, somehow bypassing the cluster name mismatch check 
> for these nodes. The first reference to cluster A nodes in cluster B's logs is 
> when they were added:
> {noformat}
>  INFO [GossipStage:1] 2015-08-17 15:08:33,858 Gossiper.java (line 983) Node 
> /8.37.70.168 is now part of the cluster
> {noformat}
> Cluster B nodes then tried to gossip to cluster A nodes, but cluster A kept 
> them out with 'ClusterName mismatch'. Cluster B nevertheless tried to send 
> reads/writes to cluster A and general mayhem ensued.
> Obviously this is a Bad (TM) config that Should Not Be Done. However, since 
> the consequences of crazily merged clusters are really bad (the reason the 
> name mismatch check exists in the first place), I think the hole is reasonable 
> to plug. I'm not sure exactly what the code path is that skips the check in 
> GossipDigestSynVerbHandler.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-10839) cqlsh failed to format value bytearray

2015-12-21 Thread Jim Witschey (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10839?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15067013#comment-15067013
 ] 

Jim Witschey commented on CASSANDRA-10839:
--

[~Stefania] Could you have a look at this? As far as I can tell, Python 2.7+ 
isn't a documented dependency for {{cqlsh}} on 2.1, but there seems to be an 
incompatibility that isn't documented anywhere I can find. In fact, {{cqlsh}}'s 
{{python}} executable-discovery code prefers 2.6:

https://github.com/apache/cassandra/blob/cassandra-2.1/bin/cqlsh#L25

> cqlsh failed to format value bytearray
> --
>
> Key: CASSANDRA-10839
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10839
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Severin Leonhardt
>Priority: Minor
> Fix For: 2.1.x
>
>
> Execute the following in cqlsh (5.0.1):
> {noformat}
> > create table test(column blob, primary key(column));
> > insert into test (column) VALUES(0x00);
> > select * from test;
>  column
> 
>  bytearray(b'\x00')
> (1 rows)
> Failed to format value bytearray(b'\x00') : b2a_hex() argument 1 must be 
> string or read-only buffer, not bytearray
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (CASSANDRA-10912) resumable_bootstrap_test dtest flaps

2015-12-21 Thread Jim Witschey (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10912?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15067089#comment-15067089
 ] 

Jim Witschey edited comment on CASSANDRA-10912 at 12/21/15 9:30 PM:


[~yukim] Could you have a look at this?

EDIT: this could be part of an environmental failure, but I'd appreciate a 
quick look to double check that I'm not missing something obvious.


was (Author: mambocab):
[~yukim] Could you have a look at this?

> resumable_bootstrap_test dtest flaps
> 
>
> Key: CASSANDRA-10912
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10912
> Project: Cassandra
>  Issue Type: Sub-task
>Reporter: Jim Witschey
> Fix For: 3.0.x
>
>
> {{bootstrap_test.py:TestBootstrap.resumable_bootstrap_test}} test flaps when 
> a node fails to start listening for connections via CQL:
> {code}
> 21 Dec 2015 10:07:48 [node3] Missing: ['Starting listening for CQL clients']:
> {code}
> I've seen it on 2.2 HEAD:
> http://cassci.datastax.com/job/cassandra-2.2_dtest/449/testReport/bootstrap_test/TestBootstrap/resumable_bootstrap_test/history/
> and 3.0 HEAD:
> http://cassci.datastax.com/job/cassandra-3.0_dtest/444/testReport/junit/bootstrap_test/TestBootstrap/resumable_bootstrap_test/
> and trunk:
> http://cassci.datastax.com/job/trunk_dtest/838/testReport/junit/bootstrap_test/TestBootstrap/resumable_bootstrap_test/



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-9294) Streaming errors should log the root cause

2015-12-21 Thread Paulo Motta (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9294?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15067075#comment-15067075
 ] 

Paulo Motta commented on CASSANDRA-9294:


ping [~yukim]

> Streaming errors should log the root cause
> --
>
> Key: CASSANDRA-9294
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9294
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Streaming and Messaging
>Reporter: Brandon Williams
>Assignee: Paulo Motta
> Fix For: 3.2, 2.1.x, 2.2.x, 3.0.x
>
>
> Currently, when a streaming error occurs all you get is something like:
> {noformat}
> java.util.concurrent.ExecutionException: 
> org.apache.cassandra.streaming.StreamException: Stream failed
> {noformat}
> Instead, we should log the root cause. Was the connection reset by peer, did 
> it time out, etc.?
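A minimal sketch of the kind of change being asked for, using plain JDK exception 
chaining rather than the actual streaming code:

{code}
public final class RootCause
{
    // Walk the cause chain so the log line shows the underlying socket or timeout
    // error rather than only the wrapping StreamException/ExecutionException.
    public static Throwable rootCause(Throwable t)
    {
        Throwable cause = t;
        while (cause.getCause() != null && cause.getCause() != cause)
            cause = cause.getCause();
        return cause;
    }

    public static void main(String[] args)
    {
        Exception root = new java.net.SocketTimeoutException("read timed out");
        Exception wrapped = new RuntimeException("Stream failed", root);
        System.out.println(rootCause(wrapped)); // prints the timeout, not the wrapper
    }
}
{code}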



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-10907) Nodetool snapshot should provide an option to skip flushing

2015-12-21 Thread Nick Bailey (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10907?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15066826#comment-15066826
 ] 

Nick Bailey commented on CASSANDRA-10907:
-

My only objection is that the behavior of what information is actually backed 
up is basically undefined. It's possible it's useful in some very specific use 
cases, but it also introduces potential traps when used incorrectly.

It sounds to me like you should be using incremental backups. When that is 
enabled, a hardlink is created every time a memtable is flushed or an sstable 
is streamed. You can then just watch that directory and ship the sstables off 
the node on demand as they are created.
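For illustration, a minimal sketch of the "watch the backups directory and ship new 
files" idea using the JDK {{WatchService}}; the path argument and the "ship" step are 
placeholders, not an existing tool:

{code}
import java.nio.file.*;

public final class BackupShipper
{
    public static void main(String[] args) throws Exception
    {
        // With incremental_backups enabled, flushed/streamed sstables are hardlinked
        // into <data_dir>/<keyspace>/<table>/backups/; watch that directory and ship
        // each new file off the node as it appears.
        Path backups = Paths.get(args[0]);
        try (WatchService watcher = FileSystems.getDefault().newWatchService())
        {
            backups.register(watcher, StandardWatchEventKinds.ENTRY_CREATE);
            while (true)
            {
                WatchKey key = watcher.take();
                for (WatchEvent<?> event : key.pollEvents())
                {
                    Path created = backups.resolve((Path) event.context());
                    System.out.println("ship off-node: " + created); // placeholder upload
                }
                key.reset();
            }
        }
    }
}
{code}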

> Nodetool snapshot should provide an option to skip flushing
> ---
>
> Key: CASSANDRA-10907
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10907
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Configuration
> Environment: PROD
>Reporter: Anubhav Kale
>Priority: Minor
>  Labels: lhf
>
> For some practical scenarios, it doesn't matter if the data is flushed to 
> disk before taking a snapshot. However, it's better to save the flushing 
> time and make the snapshot process quick.
> As such, it would be a good idea to provide this option to the snapshot command. 
> The wiring from nodetool to the MBean to the VerbHandler should be easy. 
> I can provide a patch if this makes sense.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (CASSANDRA-8844) Change Data Capture (CDC)

2015-12-21 Thread Ariel Weisberg (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8844?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15049151#comment-15049151
 ] 

Ariel Weisberg edited comment on CASSANDRA-8844 at 12/21/15 6:38 PM:
-

I don't want to scope creep this ticket. I think that this is heading in the right 
direction in terms of deferring most of the functionality around consumption of 
CDC data and getting a good initial implementation of buffering and writing the 
data.

I do want to write down my thoughts on the consumption side somewhere. VoltDB had a 
CDC feature that went through several iterations over the years as we learned 
what did and didn't work.

The original implementation was a wire protocol that clients could connect to. 
The protocol was a pain, and the client had to be a distributed system with 
consensus in order to load balance and fail over across multiple client 
instances. The implementation we maintained for people to plug into was also a 
pain because we had to connect to all the nodes to acknowledge consumed CDC 
data at replicas. And all of this was without the benefit of already being a 
cluster member with access to failure information. The clients also had to know 
way too much about cluster internals and topology to do it well.

For the rewrite I ended up hosting CDC data processors inside the server. In 
practice this is not as scary as it may sound to some. Most of the processors 
were written by us, and there wasn't a ton they could do to misbehave without 
trying really hard; if they did that, it was on them. It didn't end up being 
a support or maintenance headache, and I don't think we had instances of the 
CDC processing destabilizing things.

You could make the data available over a socket as one of these processors; 
there was a JDBC processor to insert into a database via JDBC, a Kafka 
processor to load data into Kafka, one to load the data into another VoltDB 
instance, and a processor that wrote the data to local disk as CSV, etc.

The processor implemented by users didn't have to do anything to deal with 
failover and load balancing of consuming data. The database hosting the processor 
would only pass data for a given range on the hash ring to one processor at a 
time. When a processor acknowledged data as committed downstream, the database 
transparently sent the acknowledgement to all replicas, allowing them to 
release persisted CDC data. VoltDB runs ZooKeeper on top of VoltDB internally, 
so this was pretty easy to implement inside VoltDB, but outside it would have 
been a pain.

The goal was that CDC data would never hit the filesystem, and that if it hit 
the filesystem it wouldn't hit disk if possible. Heap promotion and survivor 
copying had to be non-existent to avoid having an impact on GC pause time. With 
TPC and buffering mutations before passing them to the processors, we had no 
problem getting data out at disk or line rate. Reclaiming space ended up being 
file deletion, so that was cheap as well.
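To make the shape of that model concrete, a hypothetical processor contract (the names 
are invented for illustration; this is neither VoltDB's nor Cassandra's API). The host 
passes each processor only the data for token ranges it currently owns, and the ack 
callback is what lets replicas release persisted CDC data:

{code}
import java.nio.ByteBuffer;

// Hypothetical in-server CDC processor contract, sketched from the description above.
interface CdcProcessor
{
    // Called with a batch of CDC data for a token range this processor currently owns.
    // 'ack' must be invoked once the batch is durably committed downstream, so the
    // host can propagate the acknowledgement to all replicas and let them release
    // the persisted data.
    void consume(long tokenRangeId, ByteBuffer batch, Runnable ack);
}
{code}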


was (Author: aweisberg):
I don't want to scope creep this ticket. I think that this is heading the write 
direction in terms of deferring most of the functionality around consumption of 
CDC data and getting a good initial implementation of buffering and writing the 
data.

I do want to splat somewhere my thoughts on the consumption side. VoltDB had a 
CDC feature that went through several iterations over the years as we learned 
what did and didn't work.

The original implementation was a wire protocol that clients could connect to. 
The protocol was a pain and the client had to be a distributed system with 
consensus in order to load balance and fail over across multiple client 
instances and the implementation we maintained for people to plug into was a 
pain because we had to connect to all the nodes to acknowledge consumed CDC 
data at replicas. And all of this was without the benefit of already being a 
cluster member with access to failure information. The clients also had to know 
way too much about cluster internals and topology to do it well.

For the rewrite I ended up hosting CDC data processors inside the server. In 
practice this is not as scary as it may sound to some. Most of the processors 
were written by us, and there wasn't a ton they could do to misbehave without 
trying really hard and if they did that it was on them. It didn't end up being 
a support or maintenance headache, and I don't think we had instances of the 
CDC processing destabilizing things.

You could make the data available over a socket as one of these processors, 
there was a JDBC processor to insert into a database via JDBC, there was a 
Kafka processor to load data into Kafka, one to load the data into another 
VoltDB instance, and a processor that wrote the data to local disk as a CSV etc.

The processor implemented by users didn't have to do anything to deal with fail 
over and 

[jira] [Commented] (CASSANDRA-10907) Nodetool snapshot should provide an option to skip flushing

2015-12-21 Thread Anubhav Kale (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10907?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15066956#comment-15066956
 ] 

Anubhav Kale commented on CASSANDRA-10907:
--

I agree that what is backed up will be undefined. In my opinion, the trap is 
very clear here, so I don't think it can be misused. IMHO, the other nodetool 
commands have such traps as well, so this is no different (e.g. why does scrub 
have an option to not snapshot?).

That said, if you feel strongly against this, I understand and we can kill this 
(I can always make a local patch).

BTW, I can't use incremental backups, because I do not want to ship SSTable 
files that would have been removed as part of compaction. When compaction kicks 
in and deletes some files, it won't remove them from backups (which makes sense, 
else it wouldn't be incremental). So, at the time of recovery, we are moving too 
many files back, thus increasing the downtime of apps. If I am not understanding 
something correctly here, please let me know!

> Nodetool snapshot should provide an option to skip flushing
> ---
>
> Key: CASSANDRA-10907
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10907
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Configuration
> Environment: PROD
>Reporter: Anubhav Kale
>Priority: Minor
>  Labels: lhf
>
> For some practical scenarios, it doesn't matter if the data is flushed to 
> disk before taking a snapshot. However, it's better to save the flushing 
> time and make the snapshot process quick.
> As such, it would be a good idea to provide this option to the snapshot command. 
> The wiring from nodetool to the MBean to the VerbHandler should be easy. 
> I can provide a patch if this makes sense.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (CASSANDRA-8844) Change Data Capture (CDC)

2015-12-21 Thread Ariel Weisberg (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8844?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15049151#comment-15049151
 ] 

Ariel Weisberg edited comment on CASSANDRA-8844 at 12/21/15 6:35 PM:
-

I don't want to scope creep this ticket. I think that this is heading the write 
direction in terms of deferring most of the functionality around consumption of 
CDC data and getting a good initial implementation of buffering and writing the 
data.

I do want to splat somewhere my thoughts on the consumption side. VoltDB had a 
CDC feature that went through several iterations over the years as we learned 
what did and didn't work.

The original implementation was a wire protocol that clients could connect to. 
The protocol was a pain and the client had to be a distributed system with 
consensus in order to load balance and fail over across multiple client 
instances and the implementation we maintained for people to plug into was a 
pain because we had to connect to all the nodes to acknowledge consumed CDC 
data at replicas. And all of this was without the benefit of already being a 
cluster member with access to failure information. The clients also had to know 
way too much about cluster internals and topology to do it well.

For the rewrite I ended up hosting CDC data processors inside the server. In 
practice this is not as scary as it may sound to some. Most of the processors 
were written by us, and there wasn't a ton they could do to misbehave without 
trying really hard and if they did that it was on them. It didn't end up being 
a support or maintenance headache, and I don't think we had instances of the 
CDC processing destabilizing things.

You could make the data available over a socket as one of these processors, 
there was a JDBC processor to insert into a database via JDBC, there was a 
Kafka processor to load data into Kafka, one to load the data into another 
VoltDB instance, and a processor that wrote the data to local disk as a CSV etc.

The processor implemented by users didn't have to do anything to deal with fail 
over and load balancing of consuming data. The database hosting the processor 
would only pass data for a given range on the hash ring to one processor at a 
time. When a processor acknowledged data as committed downstream the database 
transparently sends the acknowledgement to all replicas allowing them to 
release persisted CDC data. VoltDB runs ZooKeeper on top of VoltDB internally 
so this was pretty easy to implement inside VoltDB, but outside it would have 
been a pain.

The goal was that CDC data would never hit the filesystem, and that if it hit 
the filesystem it wouldn't hit disk if possible. Heap promotion and survivor 
copying had to be non-existent to avoid having an impact on GC pause time. With 
TPC and buffering mutations before passing them to the processors we had no 
problem getting data out at disk or line rate. Reclaiming spaced ended up being 
file deletion so that was cheap as well.


was (Author: aweisberg):
I don't want to scope creep this ticket. I think that this is heading the write 
direction in terms of deferring most of the functionality around consumption of 
CDC data and getting a good initial implementation of buffering and writing the 
data.

I do want to splat somewhere my thoughts on the consumption side. VoltDB had a 
CDC feature that went through several iterations over the years as we learned 
what did and didn't work.

The original implementation was a wire protocol that clients could connect to. 
The protocol was a pain and the client had to be a distributed system with 
consensus in order to load balance and fail over across multiple client 
instances and the implementation we maintained for people to plug into was a 
pain because we had to connect to all the nodes to acknowledge consumed CDC 
data at replicas. And all of this was without the benefit of already being a 
cluster member with access to failure information. The clients also had to know 
way too much about cluster internals and topology to do it well.

For the rewrite I ended up hosting CDC data processors inside the server. In 
practice this is not as scary as it may sound to some. Most of the processors 
were written by us, and there wasn't a ton they could do to misbehave without 
trying really hard and if they did that it was on them. It didn't end up being 
a support or maintenance headache, and I don't think we didn't have instances 
of the CDC processing destabilizing things.

You could make the data available over a socket as one of these processors, 
there was a JDBC processor to insert into a database via JDBC, there was a 
Kafka processor to load data into Kafka, one to load the data into another 
VoltDB instance, and a processor that wrote the data to local disk as a CSV etc.

The processor implemented by users didn't have to do anything to deal with fail 

[jira] [Commented] (CASSANDRA-7464) Replace sstable2json and json2sstable

2015-12-21 Thread Andy Tolbert (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7464?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15066841#comment-15066841
 ] 

Andy Tolbert commented on CASSANDRA-7464:
-

[~JoshuaMcKenzie], we'd definitely both be interested and willing :). I don't 
think it would be too big an effort to get it working with C*. The only 
non-CLI/logging dependency is Jackson, which C* already depends on (albeit an 
older version), so it shouldn't take much work.

We made a best effort at coming up with an output format that we thought would 
be human readable and familiar to those who previously used sstable2json, but 
we would definitely welcome feedback.
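For readers unfamiliar with the dependency being discussed: Jackson's databind API is 
all that's needed to turn an in-memory row representation into JSON. A trivial, hedged 
sketch (the row shape is made up, and the Jackson 2.x coordinates are an assumption; 
C* at the time shipped an older Jackson):

{code}
import com.fasterxml.jackson.databind.ObjectMapper;

import java.util.LinkedHashMap;
import java.util.Map;

public final class JsonExportSketch
{
    public static void main(String[] args) throws Exception
    {
        // Made-up row shape purely to demonstrate the Jackson call; the real tool's
        // output (partitions, clustering columns, cell metadata, UDTs) is richer.
        Map<String, Object> cells = new LinkedHashMap<>();
        cells.put("track_title", "a");
        Map<String, Object> row = new LinkedHashMap<>();
        row.put("partition_key", new Object[]{ "a", 2015 });
        row.put("clustering", new int[]{ 1 });
        row.put("cells", cells);
        System.out.println(new ObjectMapper().writerWithDefaultPrettyPrinter()
                                             .writeValueAsString(row));
    }
}
{code}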

> Replace sstable2json and json2sstable
> -
>
> Key: CASSANDRA-7464
> URL: https://issues.apache.org/jira/browse/CASSANDRA-7464
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Sylvain Lebresne
>Priority: Minor
> Fix For: 3.x
>
>
> Both tools are pretty awful. They are primarily meant for debugging (there are 
> much more efficient and convenient ways to import/export data), but their 
> output manages to be hard to handle both for humans and for tools (especially 
> as soon as you have modern stuff like composites).
> There is value in having tools that export sstable contents into a format that 
> is easy for humans and tools to manipulate, for debugging, small hacks and 
> general tinkering, but sstable2json and json2sstable are not that.
> So I propose that we deprecate those tools and consider writing better 
> replacements. It shouldn't be too hard to come up with an output format that 
> is more aware of modern concepts like composites, UDTs, 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-10839) cqlsh failed to format value bytearray

2015-12-21 Thread Jim Witschey (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10839?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jim Witschey updated CASSANDRA-10839:
-
Fix Version/s: 2.1.x

> cqlsh failed to format value bytearray
> --
>
> Key: CASSANDRA-10839
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10839
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Severin Leonhardt
>Priority: Minor
> Fix For: 2.1.x
>
>
> Execute the following in cqlsh (5.0.1):
> {noformat}
> > create table test(column blob, primary key(column));
> > insert into test (column) VALUES(0x00);
> > select * from test;
>  column
> 
>  bytearray(b'\x00')
> (1 rows)
> Failed to format value bytearray(b'\x00') : b2a_hex() argument 1 must be 
> string or read-only buffer, not bytearray
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-10866) Column Family should expose count metrics for dropped mutations.

2015-12-21 Thread Paulo Motta (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10866?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Paulo Motta updated CASSANDRA-10866:

Assignee: Anubhav Kale

> Column Family should expose count metrics for dropped mutations.
> 
>
> Key: CASSANDRA-10866
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10866
> Project: Cassandra
>  Issue Type: Improvement
> Environment: PROD
>Reporter: Anubhav Kale
>Assignee: Anubhav Kale
>Priority: Minor
> Attachments: 0001-CFCount.patch
>
>
> Please take a look at the discussion in CASSANDRA-10580. This is opened so 
> that the latency on dropped mutations is exposed as a metric on column 
> families.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (CASSANDRA-10916) TestGlobalRowKeyCache.functional_test fails on Windows

2015-12-21 Thread Jim Witschey (JIRA)
Jim Witschey created CASSANDRA-10916:


 Summary: TestGlobalRowKeyCache.functional_test fails on Windows
 Key: CASSANDRA-10916
 URL: https://issues.apache.org/jira/browse/CASSANDRA-10916
 Project: Cassandra
  Issue Type: Sub-task
Reporter: Jim Witschey


{{global_row_key_cache_test.py:TestGlobalRowKeyCache.functional_test}} fails 
hard on Windows when a node fails to start:

http://cassci.datastax.com/job/cassandra-2.2_dtest_win32/156/testReport/global_row_key_cache_test/TestGlobalRowKeyCache/functional_test/

http://cassci.datastax.com/view/win32/job/cassandra-3.0_dtest_win32/140/testReport/global_row_key_cache_test/TestGlobalRowKeyCache/functional_test_2/

I have not dug much into the failure history, so I don't know how closely the 
failures are related.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (CASSANDRA-10913) netstats_test dtest flaps

2015-12-21 Thread Jim Witschey (JIRA)
Jim Witschey created CASSANDRA-10913:


 Summary: netstats_test dtest flaps
 Key: CASSANDRA-10913
 URL: https://issues.apache.org/jira/browse/CASSANDRA-10913
 Project: Cassandra
  Issue Type: Sub-task
Reporter: Jim Witschey


{{jmx_test.py:TestJMX.netstats_test}} flaps on 2.2:

http://cassci.datastax.com/job/cassandra-2.2_dtest/lastSuccessfulBuild/testReport/jmx_test/TestJMX/netstats_test/history/

3.0:

http://cassci.datastax.com/job/cassandra-3.0_dtest/lastSuccessfulBuild/testReport/jmx_test/TestJMX/netstats_test/history/

and trunk:

http://cassci.datastax.com/job/trunk_dtest/lastSuccessfulBuild/testReport/jmx_test/TestJMX/netstats_test/history/

The connection over JMX times out after 30 seconds. We may be increasing the 
size of the instances we run on CassCI, in which case these timeouts may go 
away, so I don't think there's anything we should do just yet; we should just 
keep an eye on this going forward.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[2/2] cassandra git commit: Merge branch 'cassandra-2.2' into cassandra-3.0

2015-12-21 Thread dbrosius
Merge branch 'cassandra-2.2' into cassandra-3.0

Conflicts:
src/java/org/apache/cassandra/serializers/TimestampSerializer.java


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/8e35f84e
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/8e35f84e
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/8e35f84e

Branch: refs/heads/cassandra-3.0
Commit: 8e35f84e93e96be6c8d893a7d396c9ef6d4919fd
Parents: adc9a24 ebbd516
Author: Dave Brosius 
Authored: Mon Dec 21 19:26:12 2015 -0500
Committer: Dave Brosius 
Committed: Mon Dec 21 19:26:12 2015 -0500

--
 .../org/apache/cassandra/db/marshal/DateType.java|  2 +-
 .../apache/cassandra/db/marshal/TimestampType.java   |  2 +-
 .../cassandra/serializers/TimestampSerializer.java   | 15 ++-
 3 files changed, 16 insertions(+), 3 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/8e35f84e/src/java/org/apache/cassandra/db/marshal/DateType.java
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/8e35f84e/src/java/org/apache/cassandra/db/marshal/TimestampType.java
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/8e35f84e/src/java/org/apache/cassandra/serializers/TimestampSerializer.java
--
diff --cc src/java/org/apache/cassandra/serializers/TimestampSerializer.java
index 01a85e0,78ee7e7..ad56cd5
--- a/src/java/org/apache/cassandra/serializers/TimestampSerializer.java
+++ b/src/java/org/apache/cassandra/serializers/TimestampSerializer.java
@@@ -97,19 -96,14 +97,27 @@@ public class TimestampSerializer implem
  }
  };
  
 +private static final String UTC_FORMAT = dateStringPatterns[40];
 +private static final ThreadLocal<SimpleDateFormat> FORMATTER_UTC = new 
ThreadLocal<SimpleDateFormat>()
 +{
 +protected SimpleDateFormat initialValue()
 +{
 +SimpleDateFormat sdf = new SimpleDateFormat(UTC_FORMAT);
 +sdf.setTimeZone(TimeZone.getTimeZone("UTC"));
 +return sdf;
 +}
 +};
++
+ private static final ThreadLocal<SimpleDateFormat> FORMATTER_TO_JSON = 
new ThreadLocal<SimpleDateFormat>()
+ {
+ protected SimpleDateFormat initialValue()
+ {
+ return new SimpleDateFormat(dateStringPatterns[15]);
+ }
+ };
 +
- public static final SimpleDateFormat TO_JSON_FORMAT = new 
SimpleDateFormat(dateStringPatterns[15]);
 +
+ 
  public static final TimestampSerializer instance = new 
TimestampSerializer();
  
  public Date deserialize(ByteBuffer bytes)



[3/3] cassandra git commit: Merge branch 'cassandra-3.0' into trunk

2015-12-21 Thread dbrosius
Merge branch 'cassandra-3.0' into trunk


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/53538cb4
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/53538cb4
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/53538cb4

Branch: refs/heads/trunk
Commit: 53538cb4d64509f662967febb7af153d188232df
Parents: 43f8f8b 8e35f84
Author: Dave Brosius 
Authored: Mon Dec 21 19:26:52 2015 -0500
Committer: Dave Brosius 
Committed: Mon Dec 21 19:26:52 2015 -0500

--
 .../org/apache/cassandra/db/marshal/DateType.java|  2 +-
 .../apache/cassandra/db/marshal/TimestampType.java   |  2 +-
 .../cassandra/serializers/TimestampSerializer.java   | 15 ++-
 3 files changed, 16 insertions(+), 3 deletions(-)
--




[1/2] cassandra git commit: make json date formatter thread safe patch by dbrosius reviewed by thobbs for CASSANDRA-10814

2015-12-21 Thread dbrosius
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-3.0 adc9a241e -> 8e35f84e9


make json date formatter thread safe
patch by dbrosius reviewed by thobbs for CASSANDRA-10814


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/ebbd5169
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/ebbd5169
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/ebbd5169

Branch: refs/heads/cassandra-3.0
Commit: ebbd516985bc3e2859ae00e63a024b837cb4b429
Parents: 8565ca8
Author: Dave Brosius 
Authored: Mon Dec 21 19:20:49 2015 -0500
Committer: Dave Brosius 
Committed: Mon Dec 21 19:20:49 2015 -0500

--
 .../org/apache/cassandra/db/marshal/DateType.java|  2 +-
 .../apache/cassandra/db/marshal/TimestampType.java   |  2 +-
 .../cassandra/serializers/TimestampSerializer.java   | 15 +--
 3 files changed, 15 insertions(+), 4 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/ebbd5169/src/java/org/apache/cassandra/db/marshal/DateType.java
--
diff --git a/src/java/org/apache/cassandra/db/marshal/DateType.java 
b/src/java/org/apache/cassandra/db/marshal/DateType.java
index 359ce52..82ed876 100644
--- a/src/java/org/apache/cassandra/db/marshal/DateType.java
+++ b/src/java/org/apache/cassandra/db/marshal/DateType.java
@@ -82,7 +82,7 @@ public class DateType extends AbstractType<Date>
 @Override
 public String toJSONString(ByteBuffer buffer, int protocolVersion)
 {
-return '"' + 
TimestampSerializer.TO_JSON_FORMAT.format(TimestampSerializer.instance.deserialize(buffer))
 + '"';
+return '"' + 
TimestampSerializer.getJsonDateFormatter().format(TimestampSerializer.instance.deserialize(buffer))
 + '"';
 }
 
 @Override

http://git-wip-us.apache.org/repos/asf/cassandra/blob/ebbd5169/src/java/org/apache/cassandra/db/marshal/TimestampType.java
--
diff --git a/src/java/org/apache/cassandra/db/marshal/TimestampType.java 
b/src/java/org/apache/cassandra/db/marshal/TimestampType.java
index b01651d..1704362 100644
--- a/src/java/org/apache/cassandra/db/marshal/TimestampType.java
+++ b/src/java/org/apache/cassandra/db/marshal/TimestampType.java
@@ -90,7 +90,7 @@ public class TimestampType extends AbstractType<Date>
 @Override
 public String toJSONString(ByteBuffer buffer, int protocolVersion)
 {
-return '"' + 
TimestampSerializer.TO_JSON_FORMAT.format(TimestampSerializer.instance.deserialize(buffer))
 + '"';
+return '"' + 
TimestampSerializer.getJsonDateFormatter().format(TimestampSerializer.instance.deserialize(buffer))
 + '"';
 }
 
 @Override

http://git-wip-us.apache.org/repos/asf/cassandra/blob/ebbd5169/src/java/org/apache/cassandra/serializers/TimestampSerializer.java
--
diff --git a/src/java/org/apache/cassandra/serializers/TimestampSerializer.java 
b/src/java/org/apache/cassandra/serializers/TimestampSerializer.java
index ab81fcc..78ee7e7 100644
--- a/src/java/org/apache/cassandra/serializers/TimestampSerializer.java
+++ b/src/java/org/apache/cassandra/serializers/TimestampSerializer.java
@@ -96,8 +96,14 @@ public class TimestampSerializer implements TypeSerializer<Date>
 }
 };
 
-public static final SimpleDateFormat TO_JSON_FORMAT = new 
SimpleDateFormat(dateStringPatterns[15]);
-
+private static final ThreadLocal<SimpleDateFormat> FORMATTER_TO_JSON = new 
ThreadLocal<SimpleDateFormat>()
+{
+protected SimpleDateFormat initialValue()
+{
+return new SimpleDateFormat(dateStringPatterns[15]);
+}
+};
+
 public static final TimestampSerializer instance = new 
TimestampSerializer();
 
 public Date deserialize(ByteBuffer bytes)
@@ -138,6 +144,11 @@ public class TimestampSerializer implements TypeSerializer<Date>
 throw new MarshalException(String.format("Unable to coerce '%s' to 
a formatted date (long)", source), e1);
 }
 }
+
+public static SimpleDateFormat getJsonDateFormatter() 
+{
+   return FORMATTER_TO_JSON.get();
+}
 
 public void validate(ByteBuffer bytes) throws MarshalException
 {
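For context on why the patch above works: {{SimpleDateFormat}} keeps mutable state, so 
a shared static instance can produce corrupted output under concurrent use, and a 
{{ThreadLocal}} gives each thread its own copy. A standalone illustration of the same 
pattern (not the Cassandra code itself):

{code}
import java.text.SimpleDateFormat;
import java.util.Date;

public final class PerThreadFormatter
{
    // One formatter per thread; SimpleDateFormat itself is not thread-safe.
    private static final ThreadLocal<SimpleDateFormat> FORMAT =
            ThreadLocal.withInitial(() -> new SimpleDateFormat("yyyy-MM-dd HH:mm:ss"));

    public static String format(Date date)
    {
        return FORMAT.get().format(date);
    }

    public static void main(String[] args) throws InterruptedException
    {
        Runnable task = () ->
                System.out.println(Thread.currentThread().getName() + " " + format(new Date()));
        Thread a = new Thread(task), b = new Thread(task);
        a.start(); b.start();
        a.join(); b.join();
    }
}
{code}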



cassandra git commit: make json date formatter thread safe patch by dbrosius reviewed by thobbs for CASSANDRA-10814

2015-12-21 Thread dbrosius
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-2.2 8565ca89a -> ebbd51698


make json date formatter thread safe
patch by dbrosius reviewed by thobbs for CASSANDRA-10814


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/ebbd5169
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/ebbd5169
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/ebbd5169

Branch: refs/heads/cassandra-2.2
Commit: ebbd516985bc3e2859ae00e63a024b837cb4b429
Parents: 8565ca8
Author: Dave Brosius 
Authored: Mon Dec 21 19:20:49 2015 -0500
Committer: Dave Brosius 
Committed: Mon Dec 21 19:20:49 2015 -0500

--
 .../org/apache/cassandra/db/marshal/DateType.java|  2 +-
 .../apache/cassandra/db/marshal/TimestampType.java   |  2 +-
 .../cassandra/serializers/TimestampSerializer.java   | 15 +--
 3 files changed, 15 insertions(+), 4 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/ebbd5169/src/java/org/apache/cassandra/db/marshal/DateType.java
--
diff --git a/src/java/org/apache/cassandra/db/marshal/DateType.java 
b/src/java/org/apache/cassandra/db/marshal/DateType.java
index 359ce52..82ed876 100644
--- a/src/java/org/apache/cassandra/db/marshal/DateType.java
+++ b/src/java/org/apache/cassandra/db/marshal/DateType.java
@@ -82,7 +82,7 @@ public class DateType extends AbstractType<Date>
 @Override
 public String toJSONString(ByteBuffer buffer, int protocolVersion)
 {
-return '"' + 
TimestampSerializer.TO_JSON_FORMAT.format(TimestampSerializer.instance.deserialize(buffer))
 + '"';
+return '"' + 
TimestampSerializer.getJsonDateFormatter().format(TimestampSerializer.instance.deserialize(buffer))
 + '"';
 }
 
 @Override

http://git-wip-us.apache.org/repos/asf/cassandra/blob/ebbd5169/src/java/org/apache/cassandra/db/marshal/TimestampType.java
--
diff --git a/src/java/org/apache/cassandra/db/marshal/TimestampType.java 
b/src/java/org/apache/cassandra/db/marshal/TimestampType.java
index b01651d..1704362 100644
--- a/src/java/org/apache/cassandra/db/marshal/TimestampType.java
+++ b/src/java/org/apache/cassandra/db/marshal/TimestampType.java
@@ -90,7 +90,7 @@ public class TimestampType extends AbstractType<Date>
 @Override
 public String toJSONString(ByteBuffer buffer, int protocolVersion)
 {
-return '"' + 
TimestampSerializer.TO_JSON_FORMAT.format(TimestampSerializer.instance.deserialize(buffer))
 + '"';
+return '"' + 
TimestampSerializer.getJsonDateFormatter().format(TimestampSerializer.instance.deserialize(buffer))
 + '"';
 }
 
 @Override

http://git-wip-us.apache.org/repos/asf/cassandra/blob/ebbd5169/src/java/org/apache/cassandra/serializers/TimestampSerializer.java
--
diff --git a/src/java/org/apache/cassandra/serializers/TimestampSerializer.java 
b/src/java/org/apache/cassandra/serializers/TimestampSerializer.java
index ab81fcc..78ee7e7 100644
--- a/src/java/org/apache/cassandra/serializers/TimestampSerializer.java
+++ b/src/java/org/apache/cassandra/serializers/TimestampSerializer.java
@@ -96,8 +96,14 @@ public class TimestampSerializer implements TypeSerializer<Date>
 }
 };
 
-public static final SimpleDateFormat TO_JSON_FORMAT = new 
SimpleDateFormat(dateStringPatterns[15]);
-
+private static final ThreadLocal<SimpleDateFormat> FORMATTER_TO_JSON = new 
ThreadLocal<SimpleDateFormat>()
+{
+protected SimpleDateFormat initialValue()
+{
+return new SimpleDateFormat(dateStringPatterns[15]);
+}
+};
+
 public static final TimestampSerializer instance = new 
TimestampSerializer();
 
 public Date deserialize(ByteBuffer bytes)
@@ -138,6 +144,11 @@ public class TimestampSerializer implements TypeSerializer<Date>
 throw new MarshalException(String.format("Unable to coerce '%s' to 
a formatted date (long)", source), e1);
 }
 }
+
+public static SimpleDateFormat getJsonDateFormatter() 
+{
+   return FORMATTER_TO_JSON.get();
+}
 
 public void validate(ByteBuffer bytes) throws MarshalException
 {



[jira] [Commented] (CASSANDRA-10831) Fix the way we replace sstables after anticompaction

2015-12-21 Thread Yuki Morishita (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10831?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15067238#comment-15067238
 ] 

Yuki Morishita commented on CASSANDRA-10831:


+1

> Fix the way we replace sstables after anticompaction
> 
>
> Key: CASSANDRA-10831
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10831
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Marcus Eriksson
>Assignee: Marcus Eriksson
> Fix For: 2.1.x
>
>
> We have a bug when we replace sstables after anticompaction: we keep adding 
> duplicates, which causes leveled compaction to fail afterwards. The reason is 
> that LCS does not keep its sstables in a {{Set}}, so after the first compaction 
> we will keep removed sstables around in the leveled manifest, and that will put 
> LCS in an infinite loop as it tries to mark non-existing sstables as 
> compacting.
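A tiny standalone illustration of the data-structure point (generic Java, not the 
actual {{LeveledManifest}} code): re-adding the same sstable reference to a {{List}} 
silently creates a duplicate, while a {{Set}} would collapse it.

{code}
import java.util.*;

public final class DuplicateDemo
{
    public static void main(String[] args)
    {
        String sstable = "ma-42-big-Data.db"; // stand-in for an SSTableReader reference

        List<String> manifestLevel = new ArrayList<>();
        manifestLevel.add(sstable);
        manifestLevel.add(sstable);                // replacement bug: same entry added twice
        System.out.println(manifestLevel.size());  // 2 -> later marking/lookup gets confused

        Set<String> asSet = new HashSet<>(manifestLevel);
        System.out.println(asSet.size());          // 1 -> duplicates collapse
    }
}
{code}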



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-10866) Column Family should expose count metrics for dropped mutations.

2015-12-21 Thread Anubhav Kale (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10866?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15067289#comment-15067289
 ] 

Anubhav Kale commented on CASSANDRA-10866:
--

Thanks. I included the Collection check because I did not realize that the 
SCHEMA_* verbs aren't part of DROPPABLE_VERBS. Good point.

I'll submit a rebased patch shortly.

> Column Family should expose count metrics for dropped mutations.
> 
>
> Key: CASSANDRA-10866
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10866
> Project: Cassandra
>  Issue Type: Improvement
> Environment: PROD
>Reporter: Anubhav Kale
>Assignee: Anubhav Kale
>Priority: Minor
> Attachments: 0001-CFCount.patch
>
>
> Please take a look at the discussion in CASSANDRA-10580. This is opened so 
> that the latency on dropped mutations is exposed as a metric on column 
> families.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[2/3] cassandra git commit: Merge branch 'cassandra-2.2' into cassandra-3.0

2015-12-21 Thread dbrosius
Merge branch 'cassandra-2.2' into cassandra-3.0

Conflicts:
src/java/org/apache/cassandra/serializers/TimestampSerializer.java


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/8e35f84e
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/8e35f84e
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/8e35f84e

Branch: refs/heads/trunk
Commit: 8e35f84e93e96be6c8d893a7d396c9ef6d4919fd
Parents: adc9a24 ebbd516
Author: Dave Brosius 
Authored: Mon Dec 21 19:26:12 2015 -0500
Committer: Dave Brosius 
Committed: Mon Dec 21 19:26:12 2015 -0500

--
 .../org/apache/cassandra/db/marshal/DateType.java|  2 +-
 .../apache/cassandra/db/marshal/TimestampType.java   |  2 +-
 .../cassandra/serializers/TimestampSerializer.java   | 15 ++-
 3 files changed, 16 insertions(+), 3 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/8e35f84e/src/java/org/apache/cassandra/db/marshal/DateType.java
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/8e35f84e/src/java/org/apache/cassandra/db/marshal/TimestampType.java
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/8e35f84e/src/java/org/apache/cassandra/serializers/TimestampSerializer.java
--
diff --cc src/java/org/apache/cassandra/serializers/TimestampSerializer.java
index 01a85e0,78ee7e7..ad56cd5
--- a/src/java/org/apache/cassandra/serializers/TimestampSerializer.java
+++ b/src/java/org/apache/cassandra/serializers/TimestampSerializer.java
@@@ -97,19 -96,14 +97,27 @@@ public class TimestampSerializer implements TypeSerializer<Date>
          }
      };
  
 +    private static final String UTC_FORMAT = dateStringPatterns[40];
 +    private static final ThreadLocal<SimpleDateFormat> FORMATTER_UTC = new ThreadLocal<SimpleDateFormat>()
 +    {
 +        protected SimpleDateFormat initialValue()
 +        {
 +            SimpleDateFormat sdf = new SimpleDateFormat(UTC_FORMAT);
 +            sdf.setTimeZone(TimeZone.getTimeZone("UTC"));
 +            return sdf;
 +        }
 +    };
++
+     private static final ThreadLocal<SimpleDateFormat> FORMATTER_TO_JSON = new ThreadLocal<SimpleDateFormat>()
+     {
+         protected SimpleDateFormat initialValue()
+         {
+             return new SimpleDateFormat(dateStringPatterns[15]);
+         }
+     };
 +
-     public static final SimpleDateFormat TO_JSON_FORMAT = new SimpleDateFormat(dateStringPatterns[15]);
 +
+ 
      public static final TimestampSerializer instance = new TimestampSerializer();
  
      public Date deserialize(ByteBuffer bytes)



[1/3] cassandra git commit: make json date formatter thread safe patch by dbrosius reviewed by thobbs for CASSANDRA-10814

2015-12-21 Thread dbrosius
Repository: cassandra
Updated Branches:
  refs/heads/trunk 43f8f8bb3 -> 53538cb4d


make json date formatter thread safe
patch by dbrosius reviewed by thobbs for CASSANDRA-10814


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/ebbd5169
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/ebbd5169
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/ebbd5169

Branch: refs/heads/trunk
Commit: ebbd516985bc3e2859ae00e63a024b837cb4b429
Parents: 8565ca8
Author: Dave Brosius 
Authored: Mon Dec 21 19:20:49 2015 -0500
Committer: Dave Brosius 
Committed: Mon Dec 21 19:20:49 2015 -0500

--
 .../org/apache/cassandra/db/marshal/DateType.java|  2 +-
 .../apache/cassandra/db/marshal/TimestampType.java   |  2 +-
 .../cassandra/serializers/TimestampSerializer.java   | 15 +--
 3 files changed, 15 insertions(+), 4 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/ebbd5169/src/java/org/apache/cassandra/db/marshal/DateType.java
--
diff --git a/src/java/org/apache/cassandra/db/marshal/DateType.java 
b/src/java/org/apache/cassandra/db/marshal/DateType.java
index 359ce52..82ed876 100644
--- a/src/java/org/apache/cassandra/db/marshal/DateType.java
+++ b/src/java/org/apache/cassandra/db/marshal/DateType.java
@@ -82,7 +82,7 @@ public class DateType extends AbstractType<Date>
     @Override
     public String toJSONString(ByteBuffer buffer, int protocolVersion)
     {
-        return '"' + TimestampSerializer.TO_JSON_FORMAT.format(TimestampSerializer.instance.deserialize(buffer)) + '"';
+        return '"' + TimestampSerializer.getJsonDateFormatter().format(TimestampSerializer.instance.deserialize(buffer)) + '"';
     }
 
     @Override

http://git-wip-us.apache.org/repos/asf/cassandra/blob/ebbd5169/src/java/org/apache/cassandra/db/marshal/TimestampType.java
--
diff --git a/src/java/org/apache/cassandra/db/marshal/TimestampType.java 
b/src/java/org/apache/cassandra/db/marshal/TimestampType.java
index b01651d..1704362 100644
--- a/src/java/org/apache/cassandra/db/marshal/TimestampType.java
+++ b/src/java/org/apache/cassandra/db/marshal/TimestampType.java
@@ -90,7 +90,7 @@ public class TimestampType extends AbstractType<Date>
     @Override
     public String toJSONString(ByteBuffer buffer, int protocolVersion)
     {
-        return '"' + TimestampSerializer.TO_JSON_FORMAT.format(TimestampSerializer.instance.deserialize(buffer)) + '"';
+        return '"' + TimestampSerializer.getJsonDateFormatter().format(TimestampSerializer.instance.deserialize(buffer)) + '"';
     }
 
     @Override

http://git-wip-us.apache.org/repos/asf/cassandra/blob/ebbd5169/src/java/org/apache/cassandra/serializers/TimestampSerializer.java
--
diff --git a/src/java/org/apache/cassandra/serializers/TimestampSerializer.java 
b/src/java/org/apache/cassandra/serializers/TimestampSerializer.java
index ab81fcc..78ee7e7 100644
--- a/src/java/org/apache/cassandra/serializers/TimestampSerializer.java
+++ b/src/java/org/apache/cassandra/serializers/TimestampSerializer.java
@@ -96,8 +96,14 @@ public class TimestampSerializer implements TypeSerializer<Date>
         }
     };
 
-    public static final SimpleDateFormat TO_JSON_FORMAT = new SimpleDateFormat(dateStringPatterns[15]);
-
+    private static final ThreadLocal<SimpleDateFormat> FORMATTER_TO_JSON = new ThreadLocal<SimpleDateFormat>()
+    {
+        protected SimpleDateFormat initialValue()
+        {
+            return new SimpleDateFormat(dateStringPatterns[15]);
+        }
+    };
+
     public static final TimestampSerializer instance = new TimestampSerializer();
 
     public Date deserialize(ByteBuffer bytes)
@@ -138,6 +144,11 @@ public class TimestampSerializer implements TypeSerializer<Date>
             throw new MarshalException(String.format("Unable to coerce '%s' to a formatted date (long)", source), e1);
         }
     }
+
+    public static SimpleDateFormat getJsonDateFormatter()
+    {
+        return FORMATTER_TO_JSON.get();
+    }
 
     public void validate(ByteBuffer bytes) throws MarshalException
     {
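
The reason for the change above: java.util.SimpleDateFormat keeps mutable
internal state and is not thread safe, so sharing one static instance across
request threads can produce corrupted output. A minimal standalone sketch of
the per-thread-formatter pattern used here (class name and date pattern are
illustrative, not the Cassandra ones):

{code}
import java.text.SimpleDateFormat;
import java.util.Date;

public class PerThreadDateFormat
{
    // One formatter per thread; SimpleDateFormat must not be shared between
    // threads without external synchronization.
    private static final ThreadLocal<SimpleDateFormat> FORMATTER = new ThreadLocal<SimpleDateFormat>()
    {
        @Override
        protected SimpleDateFormat initialValue()
        {
            return new SimpleDateFormat("yyyy-MM-dd HH:mm:ss.SSSZ");
        }
    };

    public static String format(Date date)
    {
        // get() lazily creates the formatter the first time each thread uses it
        return FORMATTER.get().format(date);
    }
}
{code}

Each thread lazily builds its own formatter on first use, which removes the
data race without paying for a new SimpleDateFormat on every call.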



[jira] [Commented] (CASSANDRA-10877) Unable to read obsolete message version 1; The earliest version supported is 2.0.0

2015-12-21 Thread esala wona (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10877?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15067386#comment-15067386
 ] 

esala wona commented on CASSANDRA-10877:


I just want to know why Cassandra could not work after I installed 
“xt_TCPOPTSTRIP”, but worked again once I restarted it.

> Unable to read obsolete message version 1; The earliest version supported is 
> 2.0.0
> --
>
> Key: CASSANDRA-10877
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10877
> Project: Cassandra
>  Issue Type: Bug
>  Components: Streaming and Messaging
>Reporter: esala wona
>
> I am using Cassandra version 2.1.2, and I get the following error:
> {code}
>  error message 
> ERROR [Thread-83674153] 2015-12-15 10:54:42,980 CassandraDaemon.java:153 - 
> Exception in thread Thread[Thread-83674153,5,main]
> java.lang.UnsupportedOperationException: Unable to read obsolete message 
> version 1; The earliest version supported is 2.0.0
> at 
> org.apache.cassandra.net.IncomingTcpConnection.run(IncomingTcpConnection.java:78)
>  ~[apache-cassandra-2.1.2.jar:2.1.2]
> = end 
> {code}
> {code}
> == nodetool information 
> ces@ICESSuse3631:/opt/ces/cassandra/bin> ./nodetool gossipinfo
> /192.168.0.1
> generation:1450148624
> heartbeat:299069
> SCHEMA:194168a3-f66e-3ab9-b811-4f7a7c3b89ca
> STATUS:NORMAL,-111061256928956495
> RELEASE_VERSION:2.1.2
> RACK:rack1
> DC:datacenter1
> RPC_ADDRESS:192.144.36.32
> HOST_ID:11f793f0-999b-4ba8-8bdd-0f0c73ae2e23
> NET_VERSION:8
> SEVERITY:0.0
> LOAD:1.3757700946E10
> /192.168.0.2
> generation:1450149068
> heartbeat:297714
> SCHEMA:194168a3-f66e-3ab9-b811-4f7a7c3b89ca
> RELEASE_VERSION:2.1.2
> STATUS:NORMAL,-1108435478195556849
> RACK:rack1
> DC:datacenter1
> RPC_ADDRESS:192.144.36.33
> HOST_ID:0f1a2dab-1d39-4419-bb68-03386c1a79df
> NET_VERSION:8
> SEVERITY:7.611548900604248
> LOAD:8.295301191E9
> end=
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-10866) Column Family should expose count metrics for dropped mutations.

2015-12-21 Thread Paulo Motta (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10866?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15067144#comment-15067144
 ] 

Paulo Motta commented on CASSANDRA-10866:
-

Thanks for the patch. Some comments below:
- Please rebase to latest trunk.
- In {{MessagingService.updateDroppedMutationCount}}, use 
{{Keyspace.open(mutation.getKeyspaceName()).getColumnFamilyStore(UUID)}} to 
fetch the CFS instead of iterating over {{ColumnFamilyStore.all()}}, and also 
handle the case where the CFS is null (if the table was dropped, for example); 
see the sketch after this list.
- In {{updateDroppedMutationCount(MessageIn message)}}, there is no need to 
check whether {{message.payload instanceof Collection}}, since there are no 
{{DROPPABLE_VERBS}} that operate on a collection of mutations.
- In {{StorageProxy.performLocally}}, add an {{Optional<IMutation>}} argument 
that receives {{Optional.absent()}} if it's not a mutation. Similarly, 
{{LocalMutationRunnable}} should receive an {{Optional<IMutation>}} and only 
count if {{!mutationOpt.isEmpty()}}.
- In {{TableStats}} you removed the {{Maximum tombstones per slice}} metric by 
mistake.
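
To make the CFS lookup in the first point concrete, here is a rough,
hypothetical sketch (the helper name and the per-table {{droppedMutations}}
counter are illustrative only, and it assumes the lookup returns null for a
dropped table; if the accessor used ends up throwing instead, the guard would
need to catch that):

{code}
import java.util.UUID;

import org.apache.cassandra.db.ColumnFamilyStore;
import org.apache.cassandra.db.Keyspace;
import org.apache.cassandra.db.Mutation;

public final class DroppedMutationCounter
{
    // Resolve each CFS by keyspace + table id instead of scanning
    // ColumnFamilyStore.all(), and skip tables that no longer exist.
    public static void countDropped(Mutation mutation)
    {
        Keyspace keyspace = Keyspace.open(mutation.getKeyspaceName());
        for (UUID cfId : mutation.getColumnFamilyIds())
        {
            ColumnFamilyStore cfs = keyspace.getColumnFamilyStore(cfId);
            if (cfs == null)
                continue; // assumed null for a concurrently dropped table
            // 'droppedMutations' stands in for the new per-table counter the patch adds
            cfs.metric.droppedMutations.inc();
        }
    }
}
{code}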

> Column Family should expose count metrics for dropped mutations.
> 
>
> Key: CASSANDRA-10866
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10866
> Project: Cassandra
>  Issue Type: Improvement
> Environment: PROD
>Reporter: Anubhav Kale
>Priority: Minor
> Attachments: 0001-CFCount.patch
>
>
> Please take a look at the discussion in CASSANDRA-10580. This is opened so 
> that the latency on dropped mutations is exposed as a metric on column 
> families.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-10866) Column Family should expose count metrics for dropped mutations.

2015-12-21 Thread Paulo Motta (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10866?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Paulo Motta updated CASSANDRA-10866:

Reviewer: Paulo Motta

> Column Family should expose count metrics for dropped mutations.
> 
>
> Key: CASSANDRA-10866
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10866
> Project: Cassandra
>  Issue Type: Improvement
> Environment: PROD
>Reporter: Anubhav Kale
>Priority: Minor
> Attachments: 0001-CFCount.patch
>
>
> Please take a look at the discussion in CASSANDRA-10580. This is opened so 
> that the latency on dropped mutations is exposed as a metric on column 
> families.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (CASSANDRA-10914) sstable_deletion dtest flaps on 2.2

2015-12-21 Thread Jim Witschey (JIRA)
Jim Witschey created CASSANDRA-10914:


 Summary: sstable_deletion dtest flaps on 2.2
 Key: CASSANDRA-10914
 URL: https://issues.apache.org/jira/browse/CASSANDRA-10914
 Project: Cassandra
  Issue Type: Sub-task
Reporter: Jim Witschey


The following tests:

{code}
compaction_test.py:TestCompaction_with_DateTieredCompactionStrategy.sstable_deletion_test
compaction_test.py:TestCompaction_with_SizeTieredCompactionStrategy.sstable_deletion_test
{code}

flap on HEAD on 2.2 running under JDK8:

http://cassci.datastax.com/job/cassandra-2.2_dtest_jdk8/160/testReport/compaction_test/TestCompaction_with_DateTieredCompactionStrategy/sstable_deletion_test/history/

http://cassci.datastax.com/job/cassandra-2.2_dtest_jdk8/160/testReport/compaction_test/TestCompaction_with_SizeTieredCompactionStrategy/sstable_deletion_test/history/


http://cassci.datastax.com/job/cassandra-2.2_dtest_jdk8/143/testReport/junit/compaction_test/TestCompaction_with_SizeTieredCompactionStrategy/sstable_deletion_test/

I have not seen this failure on other versions or in other environments.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (CASSANDRA-10915) netstats_test dtest fails on Windows

2015-12-21 Thread Jim Witschey (JIRA)
Jim Witschey created CASSANDRA-10915:


 Summary: netstats_test dtest fails on Windows
 Key: CASSANDRA-10915
 URL: https://issues.apache.org/jira/browse/CASSANDRA-10915
 Project: Cassandra
  Issue Type: Sub-task
Reporter: Jim Witschey


jmx_test.py:TestJMX.netstats_test started failing hard on Windows about a month 
ago:

http://cassci.datastax.com/job/cassandra-3.0_dtest_win32/140/testReport/junit/jmx_test/TestJMX/netstats_test/history/?start=25

http://cassci.datastax.com/job/cassandra-2.2_dtest_win32/156/testReport/jmx_test/TestJMX/netstats_test/history/

It fails when it is unable to connect to a node via JMX. I don't know if this 
problem has any relationship to CASSANDRA-10913.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


cassandra git commit: Fix misplaced 2.2 upgrading section

2015-12-21 Thread slebresne
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-2.2 c26b39716 -> 7afbaf714


Fix misplaced 2.2 upgrading section


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/7afbaf71
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/7afbaf71
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/7afbaf71

Branch: refs/heads/cassandra-2.2
Commit: 7afbaf714f93e4fdcba7b166bd0b07fa788b77ae
Parents: c26b397
Author: Sylvain Lebresne 
Authored: Mon Dec 21 09:28:11 2015 +0100
Committer: Sylvain Lebresne 
Committed: Mon Dec 21 09:28:20 2015 +0100

--
 NEWS.txt | 69 ++-
 1 file changed, 35 insertions(+), 34 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/7afbaf71/NEWS.txt
--
diff --git a/NEWS.txt b/NEWS.txt
index e220357..7c6af4c 100644
--- a/NEWS.txt
+++ b/NEWS.txt
@@ -197,40 +197,6 @@ New features
      to stop working: jps, jstack, jinfo, jmc, jcmd as well as 3rd party tools like Jolokia.
      If you wish to use these tools you can comment this flag out in cassandra-env.{sh,ps1}
 
-
-2.1.10
-=
-
-New features
-
-   - The syntax TRUNCATE TABLE X is now accepted as an alias for TRUNCATE X
-
-
-2.1.9
-=
-
-Upgrading
--
-- cqlsh will now display timestamps with a UTC timezone. Previously,
-  timestamps were displayed with the local timezone.
-- Commit log files are no longer recycled by default, due to negative
-  performance implications. This can be enabled again with the 
-  commitlog_segment_recycling option in your cassandra.yaml 
-- JMX methods set/getCompactionStrategyClass have been deprecated, use
-  set/getCompactionParameters/set/getCompactionParametersJson instead
-
-2.1.8
-=
-
-Upgrading
--
-- Nothing specific to this release, but please see 2.1 if you are upgrading
-  from a previous version.
-
-
-2.1.7
-=
-
 Upgrading
 -
- Thrift rpc is no longer being started by default.
@@ -278,6 +244,41 @@ Upgrading
      to exclude data centers when the global status is enabled, see CASSANDRA-9035 for details.
 
 
+2.1.10
+=
+
+New features
+
+   - The syntax TRUNCATE TABLE X is now accepted as an alias for TRUNCATE X
+
+
+2.1.9
+=
+
+Upgrading
+-
+- cqlsh will now display timestamps with a UTC timezone. Previously,
+  timestamps were displayed with the local timezone.
+- Commit log files are no longer recycled by default, due to negative
+  performance implications. This can be enabled again with the 
+  commitlog_segment_recycling option in your cassandra.yaml 
+- JMX methods set/getCompactionStrategyClass have been deprecated, use
+  set/getCompactionParameters/set/getCompactionParametersJson instead
+
+2.1.8
+=
+
+Upgrading
+-
+- Nothing specific to this release, but please see 2.1 if you are upgrading
+  from a previous version.
+
+
+2.1.7
+=
+
+
+
 2.1.6
 =
 



[2/2] cassandra git commit: Merge branch 'cassandra-2.2' into cassandra-3.0

2015-12-21 Thread slebresne
Merge branch 'cassandra-2.2' into cassandra-3.0


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/11165f47
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/11165f47
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/11165f47

Branch: refs/heads/cassandra-3.0
Commit: 11165f4733a5f2831f040aa08e881d60f7480922
Parents: 0e63000 7afbaf71
Author: Sylvain Lebresne 
Authored: Mon Dec 21 09:29:30 2015 +0100
Committer: Sylvain Lebresne 
Committed: Mon Dec 21 09:29:30 2015 +0100

--
 NEWS.txt | 70 +--
 1 file changed, 35 insertions(+), 35 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/11165f47/NEWS.txt
--



[jira] [Updated] (CASSANDRA-10856) Upgrading section in NEWS for 2.2 is misplaced

2015-12-21 Thread Sylvain Lebresne (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10856?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sylvain Lebresne updated CASSANDRA-10856:
-
Summary: Upgrading section in NEWS for 2.2 is misplaced  (was: CHANGES and 
NEWS updated incorrectly when "thrift by default" was deprecated)

> Upgrading section in NEWS for 2.2 is misplaced
> --
>
> Key: CASSANDRA-10856
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10856
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Jim Witschey
>Assignee: Sylvain Lebresne
> Fix For: 2.2.5
>
>
> Thrift was no longer started by default in CASSANDRA-9319. This change 
> affects 2.2+, but the CHANGES and NEWS document the change as affecting 3.0+:
> https://github.com/apache/cassandra/commit/fa4a020ac922fcdc0f3c2bebe35777cfa2e223c1
> I think this is incorrect; [~iamaleksey] can you confirm?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (CASSANDRA-10856) CHANGES and NEWS updated incorrectly when "thrift by default" was deprecated

2015-12-21 Thread Sylvain Lebresne (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10856?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sylvain Lebresne resolved CASSANDRA-10856.
--
   Resolution: Fixed
Fix Version/s: 2.2.5

Don't trust the commit to be the current state of the code: that commit pre-dates 
the 2.2 renaming, and thus both the CHANGES and NEWS files correctly had the 
CASSANDRA-9319 changes documented.

That said, there _was_ something wrong with the NEWS file in that the full 
"upgrading" section for 2.2 had been mistakenly placed under 2.1.7. Not sure why 
that happened; I've moved it back where it belongs and will rename the ticket 
to account for that change.

> CHANGES and NEWS updated incorrectly when "thrift by default" was deprecated
> 
>
> Key: CASSANDRA-10856
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10856
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Jim Witschey
>Assignee: Sylvain Lebresne
> Fix For: 2.2.5
>
>
> Thrift was no longer started by default in CASSANDRA-9319. This change 
> affects 2.2+, but the CHANGES and NEWS document the change as affecting 3.0+:
> https://github.com/apache/cassandra/commit/fa4a020ac922fcdc0f3c2bebe35777cfa2e223c1
> I think this is incorrect; [~iamaleksey] can you confirm?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-7653) Add role based access control to Cassandra

2015-12-21 Thread Paulo Motta (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7653?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15066441#comment-15066441
 ] 

Paulo Motta commented on CASSANDRA-7653:


bq. Once all nodes are upgraded, an operator with superuser privileges should 
drop the legacy tables, which will prompt PA, CRM and CA to switch over to the 
new tables without requiring a further rolling restart.

Is there any reason why we don't do this automatically? I find it a bit strange 
to ask users to drop system tables (a potentially dangerous operation) to 
complete an upgrade. It would be nice in the future to provide a {{nodetool 
upgradesystemtables}} command to perform these types of post-upgrade tasks.

> Add role based access control to Cassandra
> --
>
> Key: CASSANDRA-7653
> URL: https://issues.apache.org/jira/browse/CASSANDRA-7653
> Project: Cassandra
>  Issue Type: Sub-task
>  Components: CQL, Distributed Metadata
>Reporter: Mike Adamson
>Assignee: Sam Tunnicliffe
>  Labels: docs-impacting, security
> Fix For: 2.2.0 beta 1
>
> Attachments: 7653.patch, CQLSmokeTest.java, cql_smoke_test.py
>
>
> The current authentication model supports granting permissions to individual 
> users. While this is OK for small or medium organizations wanting to 
> implement authorization, it does not work well in large organizations because 
> of the overhead of having to maintain the permissions for each user.
> Introducing roles into the authentication model would allow sets of 
> permissions to be controlled in one place as a role and then the role granted 
> to users. Roles should also be able to be granted to other roles to allow 
> hierarchical sets of permissions to be built up.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


svn commit: r1721164 - in /cassandra/site: publish/download/index.html publish/index.html src/settings.py

2015-12-21 Thread jake
Author: jake
Date: Mon Dec 21 14:14:28 2015
New Revision: 1721164

URL: http://svn.apache.org/viewvc?rev=1721164=rev
Log:
3.1.1. and 3.0.2

Modified:
cassandra/site/publish/download/index.html
cassandra/site/publish/index.html
cassandra/site/src/settings.py

Modified: cassandra/site/publish/download/index.html
URL: 
http://svn.apache.org/viewvc/cassandra/site/publish/download/index.html?rev=1721164=1721163=1721164=diff
==
--- cassandra/site/publish/download/index.html (original)
+++ cassandra/site/publish/download/index.html Mon Dec 21 14:14:28 2015
@@ -49,16 +49,16 @@
 
 Cassandra is moving to a new release process called http://www.planetcassandra.org/blog/cassandra-2-2-3-0-and-beyond/;>Tick-Tock.
 
-The latest tick-tock release is 3.1 (released on
-2015-12-08).
+The latest tick-tock release is 3.1.1 (released on
+2015-12-21).
 
 
 
   
-  http://www.apache.org/dyn/closer.lua/cassandra/3.1/apache-cassandra-3.1-bin.tar.gz;>apache-cassandra-3.1-bin.tar.gz
-  [http://www.apache.org/dist/cassandra/3.1/apache-cassandra-3.1-bin.tar.gz.asc;>PGP]
-  [http://www.apache.org/dist/cassandra/3.1/apache-cassandra-3.1-bin.tar.gz.md5;>MD5]
-  [http://www.apache.org/dist/cassandra/3.1/apache-cassandra-3.1-bin.tar.gz.sha1;>SHA1]
+  http://www.apache.org/dyn/closer.lua/cassandra/3.1.1/apache-cassandra-3.1.1-bin.tar.gz;>apache-cassandra-3.1.1-bin.tar.gz
+  [http://www.apache.org/dist/cassandra/3.1.1/apache-cassandra-3.1.1-bin.tar.gz.asc;>PGP]
+  [http://www.apache.org/dist/cassandra/3.1.1/apache-cassandra-3.1.1-bin.tar.gz.md5;>MD5]
+  [http://www.apache.org/dist/cassandra/3.1.1/apache-cassandra-3.1.1-bin.tar.gz.sha1;>SHA1]
   
 
 
@@ -74,21 +74,21 @@
There are currently two active releases available:


-  The latest release of Apache Cassandra is 3.0.1
-  (released on 2015-12-08).  If you're just
+  The latest release of Apache Cassandra is 3.0.2
+  (released on 2015-12-21).  If you're just
   starting out and not yet in production, download this one.
 
 

  
http://www.apache.org/dyn/closer.lua/cassandra/3.0.1/apache-cassandra-3.0.1-bin.tar.gz;
+ 
href="http://www.apache.org/dyn/closer.lua/cassandra/3.0.2/apache-cassandra-3.0.2-bin.tar.gz;
  onclick="javascript: 
pageTracker._trackPageview('/clicks/binary_download');">
-apache-cassandra-3.0.1-bin.tar.gz
+apache-cassandra-3.0.2-bin.tar.gz

-   [http://www.apache.org/dist/cassandra/3.0.1/apache-cassandra-3.0.1-bin.tar.gz.asc;>PGP]
-   [http://www.apache.org/dist/cassandra/3.0.1/apache-cassandra-3.0.1-bin.tar.gz.md5;>MD5]
-   [http://www.apache.org/dist/cassandra/3.0.1/apache-cassandra-3.0.1-bin.tar.gz.sha1;>SHA1]
+   [http://www.apache.org/dist/cassandra/3.0.2/apache-cassandra-3.0.2-bin.tar.gz.asc;>PGP]
+   [http://www.apache.org/dist/cassandra/3.0.2/apache-cassandra-3.0.2-bin.tar.gz.md5;>MD5]
+   [http://www.apache.org/dist/cassandra/3.0.2/apache-cassandra-3.0.2-bin.tar.gz.sha1;>SHA1]
  
  
http://wiki.apache.org/cassandra/DebianPackaging;>Debian 
installation instructions
@@ -173,13 +173,13 @@
   
 
 http://www.apache.org/dyn/closer.lua/cassandra/3.0.1/apache-cassandra-3.0.1-src.tar.gz;
+   
href="http://www.apache.org/dyn/closer.lua/cassandra/3.0.2/apache-cassandra-3.0.2-src.tar.gz;
onclick="javascript: 
pageTracker._trackPageview('/clicks/source_download');">
-  apache-cassandra-3.0.1-src.tar.gz
+  apache-cassandra-3.0.2-src.tar.gz
 
-[http://www.apache.org/dist/cassandra/3.0.1/apache-cassandra-3.0.1-src.tar.gz.asc;>PGP]
-[http://www.apache.org/dist/cassandra/3.0.1/apache-cassandra-3.0.1-src.tar.gz.md5;>MD5]
-[http://www.apache.org/dist/cassandra/3.0.1/apache-cassandra-3.0.1-src.tar.gz.sha1;>SHA1]
+[http://www.apache.org/dist/cassandra/3.0.2/apache-cassandra-3.0.2-src.tar.gz.asc;>PGP]
+[http://www.apache.org/dist/cassandra/3.0.2/apache-cassandra-3.0.2-src.tar.gz.md5;>MD5]
+[http://www.apache.org/dist/cassandra/3.0.2/apache-cassandra-3.0.2-src.tar.gz.sha1;>SHA1]
 
   
 

Modified: cassandra/site/publish/index.html
URL: 
http://svn.apache.org/viewvc/cassandra/site/publish/index.html?rev=1721164=1721163=1721164=diff
==
--- cassandra/site/publish/index.html (original)
+++ cassandra/site/publish/index.html Mon Dec 21 14:14:28 2015
@@ -77,11 +77,11 @@
   
   
   
-  http://www.planetcassandra.org/blog/cassandra-2-2-3-0-and-beyond/;>Tick-Tock
 release 3.1 (http://git-wip-us.apache.org/repos/asf?p=cassandra.git;a=blob_plain;f=CHANGES.txt;hb=refs/tags/cassandra-3.1;>Changes)
+  http://www.planetcassandra.org/blog/cassandra-2-2-3-0-and-beyond/;>Tick-Tock
 release 3.1.1 

[jira] [Updated] (CASSANDRA-10799) 2 cqlshlib tests still failing with cythonized driver installation

2015-12-21 Thread T Jake Luciani (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10799?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

T Jake Luciani updated CASSANDRA-10799:
---
Fix Version/s: (was: 3.0.2)
   3.0.3

> 2 cqlshlib tests still failing with cythonized driver installation
> --
>
> Key: CASSANDRA-10799
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10799
> Project: Cassandra
>  Issue Type: Test
>  Components: Testing
>Reporter: Stefania
>Assignee: Stefania
> Fix For: 2.1.13, 2.2.5, 3.2, 3.0.3
>
>
> We still have 2 cqlshlib tests failing on Jenkins:
> http://cassci.datastax.com/job/cassandra-3.0_cqlshlib/lastCompletedBuild/testReport/
> Locally, these tests only fail with a cythonized driver installation. If the 
> driver is not cythonized (installed with {{--no_extensions}}) then the tests 
> are fine.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-10816) Explicitly handle SSL handshake errors during connect()

2015-12-21 Thread T Jake Luciani (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10816?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

T Jake Luciani updated CASSANDRA-10816:
---
Fix Version/s: (was: 3.0.2)
   3.0.3

> Explicitly handle SSL handshake errors during connect()
> ---
>
> Key: CASSANDRA-10816
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10816
> Project: Cassandra
>  Issue Type: Bug
>  Components: Streaming and Messaging
>Reporter: Stefan Podkowinski
>Assignee: Stefan Podkowinski
> Fix For: 2.2.5, 3.2, 3.0.3
>
>
> Diagnosing internode SSL issues can be a problem as any messaging 
> {{IOException}} before this patch is just logged to debug, which is likely 
> not enabled in most cases. Logs will just show nothing in case of any 
> problems with SSL certs. 
> Also the implemented retry semantics in {{OutboundTcpConnection}} will not 
> make much sense for SSL handshake errors and will cause unnecessary load for 
> both peers.
> The proposed patch will explicitly catch {{SSLHandshakeException}}, log them 
> to error and abort connect().



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-10835) CqlInputFormat creates too small splits for map Hadoop tasks

2015-12-21 Thread T Jake Luciani (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10835?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

T Jake Luciani updated CASSANDRA-10835:
---
Fix Version/s: (was: 3.0.2)
   3.0.3

> CqlInputFormat  creates too small splits for map Hadoop tasks
> -
>
> Key: CASSANDRA-10835
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10835
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Artem Aliev
> Fix For: 2.2.5, 3.2, 3.0.3
>
> Attachments: cassandra-2.2-10835-2.txt, cassandra-3.0.1-10835-2.txt, 
> cassandra-3.0.1-10835.txt
>
>
> CqlInputFormat used the number of rows to define the split size in C* versions < 2.2.
> The default split size was 64K rows.
> {code}
> private static final int DEFAULT_SPLIT_SIZE = 64 * 1024;
> {code}
> The doc:
> {code}
> * You can also configure the number of rows per InputSplit with
>  *   ConfigHelper.setInputSplitSize. The default split size is 64k rows.
>  {code}
> The new split algorithm assumes that the split size is in bytes, so it creates 
> really small Hadoop map tasks by default (or with old configs).
> There are two ways to fix it:
> 1. Update the doc and increase the default value to something like 16MB
> 2. Make C* compatible with the older versions.
> I like the second option, as it will not surprise people who upgrade from 
> old versions. I do not expect a lot of new users to start using Hadoop.
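
For anyone hitting the too-small-splits behaviour described above, the split
size can be set explicitly through {{ConfigHelper.setInputSplitSize}} (the same
setter quoted in the class doc). A hedged sketch of a job setup; the keyspace,
table and the 16MB value are illustrative only, not recommendations:

{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.mapreduce.Job;

import org.apache.cassandra.hadoop.ConfigHelper;
import org.apache.cassandra.hadoop.cql3.CqlInputFormat;

public class SplitSizeExample
{
    public static Job buildJob(Configuration conf) throws Exception
    {
        Job job = Job.getInstance(conf, "cassandra-read");
        job.setInputFormatClass(CqlInputFormat.class);

        ConfigHelper.setInputColumnFamily(job.getConfiguration(), "my_keyspace", "my_table");
        // Pre-2.2 this value meant "rows per split"; with the new algorithm it is
        // treated as a byte count, so size it accordingly (illustrative value).
        ConfigHelper.setInputSplitSize(job.getConfiguration(), 16 * 1024 * 1024);
        return job;
    }
}
{code}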



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-10806) sstableloader can't handle upper case keyspace

2015-12-21 Thread T Jake Luciani (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10806?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

T Jake Luciani updated CASSANDRA-10806:
---
Fix Version/s: (was: 3.0.2)
   3.0.3

> sstableloader can't handle upper case keyspace
> --
>
> Key: CASSANDRA-10806
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10806
> Project: Cassandra
>  Issue Type: Bug
>  Components: Tools
>Reporter: Alex Liu
>Assignee: Alex Liu
>Priority: Minor
> Fix For: 3.2, 3.0.3
>
> Attachments: CASSANDRA-10806-3.0-branch.txt
>
>
> sstableloader can't handle an upper-case keyspace. The following output shows 
> that the endpoint is missing:
> {code}
> cassandra/bin/sstableloader 
> /var/folders/zz/zyxvpxvq6csfxvn_n0/T/bulk-write-to-Test1-Words-a9343a5f-62f3-4901-a9c8-ab7dc42a458e/Test1/Words-5
>   -d 127.0.0.1
> objc[7818]: Class JavaLaunchHelper is implemented in both 
> /Library/Java/JavaVirtualMachines/jdk1.8.0_66.jdk/Contents/Home/bin/java and 
> /Library/Java/JavaVirtualMachines/jdk1.8.0_66.jdk/Contents/Home/jre/lib/libinstrument.dylib.
>  One of the two will be used. Which one is undefined.
> Established connection to initial hosts
> Opening sstables and calculating sections to stream
> Streaming relevant part of 
> /var/folders/zz/zyxvpxvq6csfxvn_n0/T/bulk-write-to-Test1-Words-a9343a5f-62f3-4901-a9c8-ab7dc42a458e/Test1/Words-5/ma-1-big-Data.db
>  to []
> Summary statistics: 
>   Connections per host:: 1
>   Total files transferred:  : 0
>   Total bytes transferred:  : 0
>   Total duration (ms):  : 923  
>   Average transfer rate (MB/s): : 0
>   Peak transfer rate (MB/s):: 0 
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-9179) Unable to "point in time" restore if table/cf has been recreated

2015-12-21 Thread T Jake Luciani (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-9179?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

T Jake Luciani updated CASSANDRA-9179:
--
Fix Version/s: (was: 3.0.2)
   3.0.3

> Unable to "point in time" restore if table/cf has been recreated
> 
>
> Key: CASSANDRA-9179
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9179
> Project: Cassandra
>  Issue Type: Bug
>  Components: CQL, Distributed Metadata
>Reporter: Jon Moses
>Assignee: Branimir Lambov
>  Labels: doc-impacting
> Fix For: 2.1.13, 2.2.5, 3.2, 3.0.3
>
>
> With Cassandra 2.1, and the addition of the CF UUID, the ability to do a 
> "point in time" restore by restoring a snapshot and replaying commitlogs is 
> lost if the table has been dropped and recreated.
> When the table is recreated, the cf_id changes, and the commitlog replay 
> mechanism skips the desired mutations as the cf_id no longer matches what's 
> present in the schema.
> There should exist a way to inform the replay that you want the mutations 
> replayed even if the cf_id doesn't match.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-9748) Can't see other nodes when using multiple network interfaces

2015-12-21 Thread T Jake Luciani (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-9748?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

T Jake Luciani updated CASSANDRA-9748:
--
Fix Version/s: (was: 3.0.2)
   3.0.3

> Can't see other nodes when using multiple network interfaces
> 
>
> Key: CASSANDRA-9748
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9748
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Streaming and Messaging
> Environment: Cassandra 2.0.16; multi-DC configuration
>Reporter: Roman Bielik
>Assignee: Paulo Motta
>Priority: Minor
>  Labels: docs-impacting
> Fix For: 2.2.5, 3.2, 3.0.3
>
> Attachments: system_node1.log, system_node2.log
>
>
> The idea is to set up a multi-DC environment across 2 different networks based 
> on the following configuration recommendations:
> http://docs.datastax.com/en/cassandra/2.0/cassandra/configuration/configMultiNetworks.html
> Each node has 2 network interfaces. One used as a private network (DC1: 
> 10.0.1.x and DC2: 10.0.2.x). The second one a "public" network where all 
> nodes can see each other (this one has a higher latency). 
> Using the following settings in cassandra.yaml:
> *seeds:* public IP (same as used in broadcast_address)
> *listen_address:* private IP
> *broadcast_address:* public IP
> *rpc_address:* 0.0.0.0
> *endpoint_snitch:* GossipingPropertyFileSnitch
> _(tried different combinations with no luck)_
> No firewall and no SSL/encryption used.
> The problem is that nodes do not see each other (a gossip problem I guess). 
> The nodetool ring/status shows only the local node but not the other ones 
> (even from the same DC).
> When I set listen_address to public IP, then everything works fine, but that 
> is not the required configuration.
> _Note: Not using EC2 cloud!_
> netstat -anp | grep -E "(7199|9160|9042|7000)"
> tcp0  0 0.0.0.0:71990.0.0.0:*   
> LISTEN  3587/java   
> tcp0  0 10.0.1.1:9160   0.0.0.0:*   
> LISTEN  3587/java   
> tcp0  0 10.0.1.1:9042   0.0.0.0:*   
> LISTEN  3587/java   
> tcp0  0 10.0.1.1:7000   0.0.0.0:*   
> LISTEN  3587/java   
> tcp0  0 127.0.0.1:7199  127.0.0.1:52874 
> ESTABLISHED 3587/java   
> tcp0  0 10.0.1.1:7199   10.0.1.1:39650  
> ESTABLISHED 3587/java 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-8805) runWithCompactionsDisabled only cancels compactions, which is not the only source of markCompacted

2015-12-21 Thread T Jake Luciani (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8805?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

T Jake Luciani updated CASSANDRA-8805:
--
Fix Version/s: (was: 3.0.2)
   3.0.3

> runWithCompactionsDisabled only cancels compactions, which is not the only 
> source of markCompacted
> --
>
> Key: CASSANDRA-8805
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8805
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Benedict
>Assignee: Carl Yeksigian
> Fix For: 2.1.13, 2.2.5, 3.2, 3.0.3
>
> Attachments: 8805-2.1.txt
>
>
> Operations like repair that may operate over all sstables cancel compactions 
> before beginning, and fail if there are any files marked compacting after 
> doing so. Redistribution of index summaries is not a compaction, so is not 
> cancelled by this action, but does mark sstables as compacting, so such an 
> action will fail to initiate if there is an index summary redistribution in 
> progress. It seems that IndexSummaryManager needs to register itself as 
> interruptible along with compactions (AFAICT no other actions that may 
> markCompacting are not themselves compactions).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (CASSANDRA-10911) Unit tests for AbstractSSTableIterator and subclasses

2015-12-21 Thread Sylvain Lebresne (JIRA)
Sylvain Lebresne created CASSANDRA-10911:


 Summary: Unit tests for AbstractSSTableIterator and subclasses
 Key: CASSANDRA-10911
 URL: https://issues.apache.org/jira/browse/CASSANDRA-10911
 Project: Cassandra
  Issue Type: Test
Reporter: Sylvain Lebresne
Assignee: Sylvain Lebresne


Many classes lack unit tests, but {{AbstractSSTableIterator}} and its sub-classes 
are particularly essential, so they are good ones to prioritize. Testing them in 
isolation is particularly useful for indexed readers, as it's hard to guarantee 
that we cover all cases from a higher-level CQL test (we don't really know 
where the index bounds are), and this could have avoided CASSANDRA-10903 in 
particular.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-10686) cqlsh schema refresh on timeout dtest is flaky

2015-12-21 Thread Benjamin Lerer (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10686?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15066440#comment-15066440
 ] 

Benjamin Lerer commented on CASSANDRA-10686:


Nice explanation :-)

In my opinion, we should wait for the fix for 
[PYTHON-458|https://datastax-oss.atlassian.net/browse/PYTHON-458] instead of 
trying to fix it on our side.

In the meantime we could set {{max_schema_agreement_wait}} to a greater value 
than {{client_timeout}} in the DTests to prevent the flapping.

[~jkni] will you be fine with that?

> cqlsh schema refresh on timeout dtest is flaky 
> ---
>
> Key: CASSANDRA-10686
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10686
> Project: Cassandra
>  Issue Type: Bug
>  Components: Testing, Tools
>Reporter: Joel Knighton
>Assignee: Paulo Motta
>Priority: Minor
>
> [flaky 3.0 
> runs|http://cassci.datastax.com/view/cassandra-3.0/job/cassandra-3.0_dtest/lastCompletedBuild/testReport/cqlsh_tests.cqlsh_tests/TestCqlsh/test_refresh_schema_on_timeout_error/history/]
> [flaky 2.2 
> runs|http://cassci.datastax.com/job/cassandra-2.2_dtest/381/testReport/cqlsh_tests.cqlsh_tests/TestCqlsh/test_refresh_schema_on_timeout_error/history/]
> [flaky 2.1 
> runs|http://cassci.datastax.com/job/cassandra-2.1_dtest/324/testReport/cqlsh_tests.cqlsh_tests/TestCqlsh/test_refresh_schema_on_timeout_error/history/]
> As far as I can tell, the issue could be with the test or the original issue. 
> Pinging [~pauloricardomg] since he knows this best.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-10801) Unexplained inconsistent data with Cassandra 2.1

2015-12-21 Thread Maor Cohen (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10801?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15066461#comment-15066461
 ] 

Maor Cohen commented on CASSANDRA-10801:


We started with write ONE and read ONE.
Then we tried write ONE with read ALL, and write ALL with read ONE, but records 
were still missing.
Currently the setting is read and write at QUORUM consistency.

Reloading into a table with a different name is an option, but we want to try 
other things before going in that direction.
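
For reference on why QUORUM on both sides is the usual recommendation here:
with RF=3 a quorum is 2 replicas, so any read quorum overlaps any write quorum
(2 + 2 > 3), whereas ONE/ONE (1 + 1 = 2) gives no such guarantee. A tiny sketch
of the arithmetic, not Cassandra code:

{code}
public class QuorumOverlap
{
    public static void main(String[] args)
    {
        int rf = 3;              // replication factor
        int quorum = rf / 2 + 1; // 2 when rf = 3
        int r = quorum;          // replicas consulted on read
        int w = quorum;          // replicas acknowledging a write
        // A read is guaranteed to see at least one replica that acknowledged
        // the write whenever R + W > RF.
        System.out.println("R + W > RF ? " + (r + w > rf));
    }
}
{code}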


> Unexplained inconsistent data with Cassandra 2.1
> 
>
> Key: CASSANDRA-10801
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10801
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Imri Zvik
> Fix For: 2.1.x
>
> Attachments: trace2.log, tracing.log
>
>
> We are experiencing weird behavior which we cannot explain.
> We have a CF, with RF=3, and we are writing and reading data to it with 
> consistency level of ONE.
> For some reason, we see inconsistent results when querying for data.
> Even for rows that were written a day ago, we're seeing inconsistent results 
> (1 replca has the data, the two others don't).
> Now, I would expect to see timeouts/dropped mutations, but all relevant 
> counters are not advancing, and I would also expect hints to fix this 
> inconsistency within minutes, but yet it doesn't.
> {code}
> cqlsh:testing> SELECT WRITETIME(last_update),site_id, tree_id, individual_id, 
> last_update FROM testcf WHERE site_id = 229673621 AND tree_id = 9 AND 
> individual_id = 9032483;
>  writetime(last_update) | site_id   | tree_id | individual_id | last_update
> +---+-+---+-
>1448988343028000 | 229673621 |   9 |   9032483 |  1380912397
> (1 rows)
> cqlsh:testing> SELECT WRITETIME(last_update),site_id, tree_id, individual_id, 
> last_update FROM testcf WHERE site_id = 229673621 AND tree_id = 9 AND 
> individual_id = 9032483;
> site_id   | tree_id | individual_id | last_update
> ---+-+---+-
> (0 rows)
> cqlsh:testing> SELECT dateof(now()) FROM system.local ;
>  dateof(now())
> --
>  2015-12-02 14:48:44+
> (1 rows)
> {code}
> We are running with Cassandra 2.1.11 with Oracle Java 1.8.0_65-b17



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-10873) Allow sstableloader to work with 3rd party authentication providers

2015-12-21 Thread T Jake Luciani (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10873?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

T Jake Luciani updated CASSANDRA-10873:
---
Fix Version/s: (was: 3.0.2)
   3.0.3

> Allow sstableloader to work with 3rd party authentication providers
> ---
>
> Key: CASSANDRA-10873
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10873
> Project: Cassandra
>  Issue Type: Bug
>  Components: Tools
>Reporter: Mike Adamson
>Assignee: Mike Adamson
> Fix For: 3.2, 3.0.3
>
>
> When sstableloader was changed to use the native protocol instead of thrift, 
> there was a regression: sstableloader (BulkLoader) now only takes {{username}} 
> and {{password}} as credentials, so it only works with the 
> {{PlainTextAuthProvider}} provided by the java driver.
> Previously it allowed 3rd party auth providers to be used, we need to add 
> back that ability by allowing the full classname of the auth provider to be 
> passed as a parameter.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-10797) Bootstrap new node fails with OOM when streaming nodes contains thousands of sstables

2015-12-21 Thread T Jake Luciani (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10797?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

T Jake Luciani updated CASSANDRA-10797:
---
Fix Version/s: (was: 3.0.2)
   3.0.3

> Bootstrap new node fails with OOM when streaming nodes contains thousands of 
> sstables
> -
>
> Key: CASSANDRA-10797
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10797
> Project: Cassandra
>  Issue Type: Bug
>  Components: Streaming and Messaging
> Environment: Cassandra 2.1.8.621 w/G1GC
>Reporter: Jose Martinez Poblete
>Assignee: Paulo Motta
> Fix For: 3.2, 3.0.3
>
> Attachments: 10797-nonpatched.png, 10797-patched.png, 
> 10798-nonpatched-500M.png, 10798-patched-500M.png, 112415_system.log, 
> Heapdump_OOM.zip, Screen Shot 2015-12-01 at 7.34.40 PM.png, dtest.tar.gz
>
>
> When adding a new node to an existing DC, it runs OOM after 25-45 minutes.
> Upon reviewing the heap dump, it was found that the sending nodes are streaming 
> thousands of sstables, which in turn blows up the bootstrapping node's heap.
> {noformat}
> ERROR [RMI Scheduler(0)] 2015-11-24 10:10:44,585 
> JVMStabilityInspector.java:94 - JVM state determined to be unstable.  Exiting 
> forcefully due to:
> java.lang.OutOfMemoryError: Java heap space
> ERROR [STREAM-IN-/173.36.28.148] 2015-11-24 10:10:44,585 
> StreamSession.java:502 - [Stream #0bb13f50-92cb-11e5-bc8d-f53b7528ffb4] 
> Streaming error occurred
> java.lang.IllegalStateException: Shutdown in progress
> at 
> java.lang.ApplicationShutdownHooks.remove(ApplicationShutdownHooks.java:82) 
> ~[na:1.8.0_65]
> at java.lang.Runtime.removeShutdownHook(Runtime.java:239) 
> ~[na:1.8.0_65]
> at 
> org.apache.cassandra.service.StorageService.removeShutdownHook(StorageService.java:747)
>  ~[cassandra-all-2.1.8.621.jar:2.1.8.621]
> at 
> org.apache.cassandra.utils.JVMStabilityInspector$Killer.killCurrentJVM(JVMStabilityInspector.java:95)
>  ~[cassandra-all-2.1.8.621.jar:2.1.8.621]
> at 
> org.apache.cassandra.utils.JVMStabilityInspector.inspectThrowable(JVMStabilityInspector.java:64)
>  ~[cassandra-all-2.1.8.621.jar:2.1.8.621]
> at 
> org.apache.cassandra.streaming.messages.IncomingFileMessage$1.deserialize(IncomingFileMessage.java:66)
>  ~[cassandra-all-2.1.8.621.jar:2.1.8.621]
> at 
> org.apache.cassandra.streaming.messages.IncomingFileMessage$1.deserialize(IncomingFileMessage.java:38)
>  ~[cassandra-all-2.1.8.621.jar:2.1.8.621]
> at 
> org.apache.cassandra.streaming.messages.StreamMessage.deserialize(StreamMessage.java:55)
>  ~[cassandra-all-2.1.8.621.jar:2.1.8.621]
> at 
> org.apache.cassandra.streaming.ConnectionHandler$IncomingMessageHandler.run(ConnectionHandler.java:250)
>  ~[cassandra-all-2.1.8.621.jar:2.1.8.621]
> at java.lang.Thread.run(Thread.java:745) [na:1.8.0_65]
> ERROR [RMI TCP Connection(idle)] 2015-11-24 10:10:44,585 
> JVMStabilityInspector.java:94 - JVM state determined to be unstable.  Exiting 
> forcefully due to:
> java.lang.OutOfMemoryError: Java heap space
> ERROR [OptionalTasks:1] 2015-11-24 10:10:44,585 CassandraDaemon.java:223 - 
> Exception in thread Thread[OptionalTasks:1,5,main]
> java.lang.IllegalStateException: Shutdown in progress
> {noformat}
> Attached is the Eclipse MAT report as a zipped web page



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-10850) v4 spec has tons of grammatical mistakes

2015-12-21 Thread T Jake Luciani (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10850?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

T Jake Luciani updated CASSANDRA-10850:
---
Fix Version/s: (was: 3.0.2)
   3.0.3

> v4 spec has tons of grammatical mistakes
> 
>
> Key: CASSANDRA-10850
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10850
> Project: Cassandra
>  Issue Type: Bug
>  Components: Documentation and Website
>Reporter: Sandeep Tamhankar
> Fix For: 3.0.3
>
> Attachments: v4-protocol.patch
>
>
> https://github.com/apache/cassandra/blob/cassandra-3.0/doc/native_protocol_v4.spec
> I noticed the following in the first section of the spec and then gave up:
> "The list of allowed opcode is defined Section 2.3" => "The list of allowed 
> opcode*s* is defined in Section 2.3"
> "the details of each corresponding message is described Section 4" => "the 
> details of each corresponding message are described in Section 4" since the 
> subject is details, not message.
> "Requests are those frame sent by" => "Requests are those frame*s* sent by"
> I think someone should go through the whole spec and fix all the mistakes 
> rather than me pointing out the ones I notice piece-meal. I found the grammar 
> errors to be rather distracting.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (CASSANDRA-10632) sstableutil tests failing

2015-12-21 Thread Jim Witschey (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10632?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jim Witschey resolved CASSANDRA-10632.
--
Resolution: Fixed

Closed [with this PR|https://github.com/riptano/cassandra-dtest/pull/673].

> sstableutil tests failing
> -
>
> Key: CASSANDRA-10632
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10632
> Project: Cassandra
>  Issue Type: Sub-task
>Reporter: Jim Witschey
>Assignee: Jim Witschey
> Fix For: 3.0.x
>
>
> {{sstableutil_test.py:SSTableUtilTest.abortedcompaction_test}} and 
> {{sstableutil_test.py:SSTableUtilTest.compaction_test}} fail on Windows:
> http://cassci.datastax.com/view/win32/job/cassandra-3.0_dtest_win32/100/testReport/sstableutil_test/SSTableUtilTest/abortedcompaction_test/
> http://cassci.datastax.com/view/win32/job/cassandra-3.0_dtest_win32/100/testReport/sstableutil_test/SSTableUtilTest/compaction_test/
> This is a pretty simple failure -- looks like the underlying behavior is ok, 
> but string comparison fails when the leading {{d}} in the filename is 
> lowercase as returned by {{sstableutil}} (see the [{{_invoke_sstableutil}} 
> test 
> function|https://github.com/riptano/cassandra-dtest/blob/master/sstableutil_test.py#L128]),
>  but uppercase as returned by {{glob.glob}} (see the [{{_get_sstable_files}} 
> test 
> function|https://github.com/riptano/cassandra-dtest/blob/master/sstableutil_test.py#L160]).
> Do I understand correctly that Windows filenames are case-insensitive, 
> including the drive portion? If that's the case, then we can just lowercase 
> the file names in the test helper functions above when the tests are run on 
> Windows. [~JoshuaMcKenzie] can you confirm? I'll fix this in the tests if so. 
> If I'm wrong, and something in {{sstableutil}} needs to be fixed, could you 
> find an assignee?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-10661) Integrate SASI to Cassandra

2015-12-21 Thread Pavel Yaskevich (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10661?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15067415#comment-15067415
 ] 

Pavel Yaskevich commented on CASSANDRA-10661:
-

[~beobal] Here is the latest status: I've attempted to integrate OR/parentheses 
into CQL3 and SelectStatement, which, as I've figured out, would actually still 
require CASSANDRA-10765 to be implemented, since all of the restrictions have to 
be constructed/checked per logical operation (in other words, per CQL3 
statement we'll have to build an operation graph instead of the current list 
approach), which would require substantial changes in SelectStatement, 
StatementRestrictions and other query processing classes. Maybe an alternative, 
more granular approach would be more appropriate in this case:

phase #1 - SASI goes into trunk supporting AND only (in other words, having 
QueryPlan internalized, no changes to CQL3);
phase #2 - implement CASSANDRA-10765 with AND support only, which would 
supersede restriction support (via StatementRestrictions) in CQL3;
phase #3 - add OR support to, by that time, already global QueryPlan.

WDYT?

> Integrate SASI to Cassandra
> ---
>
> Key: CASSANDRA-10661
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10661
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Local Write-Read Paths
>Reporter: Pavel Yaskevich
>Assignee: Pavel Yaskevich
>  Labels: sasi
> Fix For: 3.x
>
>
> We have recently released a new secondary index engine 
> (https://github.com/xedin/sasi) built using the SecondaryIndex API; there are 
> still a couple of things to work out regarding 3.x since it's currently 
> targeted at the 2.0 release. I want to make this an umbrella issue for all of 
> the things related to the integration of SASI into mainline Cassandra 3.x, 
> which are also tracked in 
> [sasi_issues|https://github.com/xedin/sasi/issues].



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-10917) better validator randomness

2015-12-21 Thread Dave Brosius (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10917?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dave Brosius updated CASSANDRA-10917:
-
Attachment: 10917.txt

> better validator randomness
> ---
>
> Key: CASSANDRA-10917
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10917
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Local Write-Read Paths
>Reporter: Dave Brosius
>Priority: Trivial
> Fix For: 3.x
>
> Attachments: 10917.txt
>
>
> Get better randomness by reusing a Random object rather than recreating it.
> Also reuse the keys list to avoid reallocations.
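
A minimal illustration of the pattern described above (the class here is
illustrative, not the actual validator code): constructing a new
java.util.Random and a new list on every invocation mostly costs allocations,
while holding one instance of each gives a single continuous pseudo-random
stream and a reusable buffer.

{code}
import java.util.ArrayList;
import java.util.List;
import java.util.Random;

public class SampleCollector
{
    // Reuse one Random and one buffer instead of recreating them per call.
    private final Random random = new Random();
    private final List<Integer> keys = new ArrayList<>();

    public List<Integer> sample(int count, int bound)
    {
        keys.clear(); // reuses the backing array, no reallocation
        for (int i = 0; i < count; i++)
            keys.add(random.nextInt(bound));
        return keys;
    }
}
{code}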



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


cassandra git commit: add missing logger parm marker

2015-12-21 Thread dbrosius
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-3.0 8e35f84e9 -> 21103bea2


add missing logger parm marker


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/21103bea
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/21103bea
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/21103bea

Branch: refs/heads/cassandra-3.0
Commit: 21103bea23fa07ab4e38092e788a9a37b5707334
Parents: 8e35f84
Author: Dave Brosius 
Authored: Mon Dec 21 20:28:24 2015 -0500
Committer: Dave Brosius 
Committed: Mon Dec 21 20:28:24 2015 -0500

--
 src/java/org/apache/cassandra/db/ColumnFamilyStore.java | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/21103bea/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
--
diff --git a/src/java/org/apache/cassandra/db/ColumnFamilyStore.java 
b/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
index f0adf39..e200e8e 100644
--- a/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
+++ b/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
@@ -571,7 +571,7 @@ public class ColumnFamilyStore implements ColumnFamilyStoreMBean
         // clear ephemeral snapshots that were not properly cleared last session (CASSANDRA-7357)
         clearEphemeralSnapshots(directories);
 
-        logger.trace("Removing temporary or obsoleted files from unfinished operations for table", metadata.cfName);
+        logger.trace("Removing temporary or obsoleted files from unfinished operations for table {}", metadata.cfName);
         LifecycleTransaction.removeUnfinishedLeftovers(metadata);
 
         logger.trace("Further extra check for orphan sstable files for {}", metadata.cfName);
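
For readers unfamiliar with the small fix above: SLF4J only substitutes
arguments where the format string contains a {} placeholder, so the extra
argument in the old call was silently ignored. A tiny standalone illustration
(logger name and messages are arbitrary):

{code}
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class PlaceholderDemo
{
    private static final Logger logger = LoggerFactory.getLogger(PlaceholderDemo.class);

    public static void main(String[] args)
    {
        String table = "users";
        // No placeholder: the argument is dropped, the message ends with "... for table"
        logger.trace("Removing temporary files for table", table);
        // With a placeholder: the argument is substituted into the message
        logger.trace("Removing temporary files for table {}", table);
    }
}
{code}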



[2/2] cassandra git commit: Merge branch 'cassandra-3.0' into trunk

2015-12-21 Thread dbrosius
Merge branch 'cassandra-3.0' into trunk


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/c4428c7d
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/c4428c7d
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/c4428c7d

Branch: refs/heads/trunk
Commit: c4428c7dd03b12204b00fd7043d582a6a00982b0
Parents: 53538cb 21103be
Author: Dave Brosius 
Authored: Mon Dec 21 20:29:14 2015 -0500
Committer: Dave Brosius 
Committed: Mon Dec 21 20:29:14 2015 -0500

--
 src/java/org/apache/cassandra/db/ColumnFamilyStore.java | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/c4428c7d/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
--



[jira] [Assigned] (CASSANDRA-10917) better validator randomness

2015-12-21 Thread Dave Brosius (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10917?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dave Brosius reassigned CASSANDRA-10917:


Assignee: Dave Brosius

> better validator randomness
> ---
>
> Key: CASSANDRA-10917
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10917
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Local Write-Read Paths
>Reporter: Dave Brosius
>Assignee: Dave Brosius
>Priority: Trivial
> Fix For: 3.x
>
> Attachments: 10917.txt
>
>
> get better randomness by reusing a Random object rather than recreating it.
> Also reuse keys list to avoid reallocations.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (CASSANDRA-10918) remove leftover code from refactor

2015-12-21 Thread Dave Brosius (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10918?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dave Brosius reassigned CASSANDRA-10918:


Assignee: Dave Brosius

> remove leftover code from refactor
> --
>
> Key: CASSANDRA-10918
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10918
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Local Write-Read Paths
>Reporter: Dave Brosius
>Assignee: Dave Brosius
>Priority: Trivial
> Fix For: 3.x
>
> Attachments: 10918.txt
>
>
> Code seems to have been left over from the refactor from 2.2 to 3.0; it has been removed.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-10918) remove leftover code from refactor

2015-12-21 Thread Dave Brosius (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10918?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dave Brosius updated CASSANDRA-10918:
-
Attachment: 10918.txt

> remove leftover code from refactor
> --
>
> Key: CASSANDRA-10918
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10918
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Local Write-Read Paths
>Reporter: Dave Brosius
>Priority: Trivial
> Fix For: 3.x
>
> Attachments: 10918.txt
>
>
> Code seems to have been left over from the refactor from 2.2 to 3.0; it has been removed.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-10877) Unable to read obsolete message version 1; The earliest version supported is 2.0.0

2015-12-21 Thread Jim Witschey (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10877?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15067392#comment-15067392
 ] 

Jim Witschey commented on CASSANDRA-10877:
--

For Cassandra support questions, you might ask in the IRC room or users mailing 
list as described at the bottom of the Cassandra webpage:

http://cassandra.apache.org/ 

Maybe someone there will be familiar with both Cassandra and {{xt_TCPOPTSTRIP}}.

Closing this ticket, since all that's left are support tasks.

> Unable to read obsolete message version 1; The earliest version supported is 
> 2.0.0
> --
>
> Key: CASSANDRA-10877
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10877
> Project: Cassandra
>  Issue Type: Bug
>  Components: Streaming and Messaging
>Reporter: esala wona
>
> I am using Cassandra version 2.1.2, but I am getting the following error:
> {code}
>  error message 
> ERROR [Thread-83674153] 2015-12-15 10:54:42,980 CassandraDaemon.java:153 - 
> Exception in thread Thread[Thread-83674153,5,main]
> java.lang.UnsupportedOperationException: Unable to read obsolete message 
> version 1; The earliest version supported is 2.0.0
> at 
> org.apache.cassandra.net.IncomingTcpConnection.run(IncomingTcpConnection.java:78)
>  ~[apache-cassandra-2.1.2.jar:2.1.2]
> = end 
> {code}
> {code}
> == nodetool information 
> ces@ICESSuse3631:/opt/ces/cassandra/bin> ./nodetool gossipinfo
> /192.168.0.1
> generation:1450148624
> heartbeat:299069
> SCHEMA:194168a3-f66e-3ab9-b811-4f7a7c3b89ca
> STATUS:NORMAL,-111061256928956495
> RELEASE_VERSION:2.1.2
> RACK:rack1
> DC:datacenter1
> RPC_ADDRESS:192.144.36.32
> HOST_ID:11f793f0-999b-4ba8-8bdd-0f0c73ae2e23
> NET_VERSION:8
> SEVERITY:0.0
> LOAD:1.3757700946E10
> /192.168.0.2
> generation:1450149068
> heartbeat:297714
> SCHEMA:194168a3-f66e-3ab9-b811-4f7a7c3b89ca
> RELEASE_VERSION:2.1.2
> STATUS:NORMAL,-1108435478195556849
> RACK:rack1
> DC:datacenter1
> RPC_ADDRESS:192.144.36.33
> HOST_ID:0f1a2dab-1d39-4419-bb68-03386c1a79df
> NET_VERSION:8
> SEVERITY:7.611548900604248
> LOAD:8.295301191E9
> end=
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-10711) NoSuchElementException when executing empty batch.

2015-12-21 Thread Jim Witschey (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10711?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15067479#comment-15067479
 ] 

Jim Witschey commented on CASSANDRA-10711:
--

Pinging [~slebresne], looks like these jobs have run.

> NoSuchElementException when executing empty batch.
> --
>
> Key: CASSANDRA-10711
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10711
> Project: Cassandra
>  Issue Type: Bug
>  Components: CQL
> Environment: Cassandra 3.0, OSS 42.1
>Reporter: Jaroslav Kamenik
>Assignee: ZhaoYang
>  Labels: triaged
> Fix For: 3.0.x
>
> Attachments: CASSANDRA-10711-trunk.patch
>
>
> After upgrading to C* 3.0, it fails when executing an empty batch:
> {code}
> java.util.NoSuchElementException: null
> at java.util.ArrayList$Itr.next(ArrayList.java:854) ~[na:1.8.0_60]
> at 
> org.apache.cassandra.service.StorageProxy.mutateWithTriggers(StorageProxy.java:737)
>  ~[apache-cassandra-3.0.0.jar:3.0.0]
> at 
> org.apache.cassandra.cql3.statements.BatchStatement.executeWithoutConditions(BatchStatement.java:356)
>  ~[apache-cassandra-3.0.0.jar:3.0.0]
> at 
> org.apache.cassandra.cql3.statements.BatchStatement.execute(BatchStatement.java:337)
>  ~[apache-cassandra-3.0.0.jar:3.0.0]
> at 
> org.apache.cassandra.cql3.statements.BatchStatement.execute(BatchStatement.java:323)
>  ~[apache-cassandra-3.0.0.jar:3.0.0]
> at 
> org.apache.cassandra.cql3.QueryProcessor.processBatch(QueryProcessor.java:490)
>  ~[apache-cassandra-3.0.0.jar:3.0.0]
> at 
> org.apache.cassandra.cql3.QueryProcessor.processBatch(QueryProcessor.java:480)
>  ~[apache-cassandra-3.0.0.jar:3.0.0]
> at 
> org.apache.cassandra.transport.messages.BatchMessage.execute(BatchMessage.java:217)
>  ~[apache-cassandra-3.0.0.jar:3.0.0]
> at 
> org.apache.cassandra.transport.Message$Dispatcher.channelRead0(Message.java:507)
>  [apache-cassandra-3.0.0.jar:3.0.0]
> at 
> org.apache.cassandra.transport.Message$Dispatcher.channelRead0(Message.java:401)
>  [apache-cassandra-3.0.0.jar:3.0.0]
> at 
> io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105)
>  [netty-all-4.0.23.Final.jar:4.0.23.Final]
> at 
> io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:333)
>  [netty-all-4.0.23.Final.jar:4.0.23.Final]
> at 
> io.netty.channel.AbstractChannelHandlerContext.access$700(AbstractChannelHandlerContext.java:32)
>  [netty-all-4.0.23.Final.jar:4.0.23.Final]
> at 
> io.netty.channel.AbstractChannelHandlerContext$8.run(AbstractChannelHandlerContext.java:324)
>  [netty-all-4.0.23.Final.jar:4.0.23.Final]
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
> [na:1.8.0_60]
> at 
> org.apache.cassandra.concurrent.AbstractTracingAwareExecutorService$FutureTask.run(AbstractTracingAwareExecutorService.java:164)
>  [apache-cassandra-3.0.0.jar:3.0.0]
> at org.apache.cassandra.concurrent.SEPWorker.run(SEPWorker.java:105) 
> [apache-cassandra-3.0.0.jar:3.0.0]
> at java.lang.Thread.run(Thread.java:745) [na:1.8.0_60]
> {code}
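
For reference, a hypothetical client-side sketch (using the DataStax Java
driver; the contact point and class name are placeholders) of the kind of call
that exercises this path, i.e. executing a batch with no statements in it:

{code}
import com.datastax.driver.core.BatchStatement;
import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.Session;

public class EmptyBatchExample
{
    public static void main(String[] args)
    {
        try (Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
             Session session = cluster.connect())
        {
            // A batch with nothing added to it; this is the "empty batch"
            // that triggers the server-side NoSuchElementException reported above.
            session.execute(new BatchStatement());
        }
    }
}
{code}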



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-9624) unable to bootstrap; streaming fails with NullPointerException

2015-12-21 Thread Kai Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9624?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15067557#comment-15067557
 ] 

Kai Wang commented on CASSANDRA-9624:
-

I am having this problem with 2.2.4

> unable to bootstrap; streaming fails with NullPointerException
> --
>
> Key: CASSANDRA-9624
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9624
> Project: Cassandra
>  Issue Type: Bug
>  Components: Streaming and Messaging
> Environment: Debian Jessie, 7u79-2.5.5-1~deb8u1, Cassandra 2.1.3
>Reporter: Eric Evans
>Assignee: Yuki Morishita
> Fix For: 2.1.x
>
> Attachments: joining_system.log.zip
>
>
> When attempting to bootstrap a new node into a 2.1.3 cluster, the stream 
> source fails with a {{NullPointerException}}:
> {noformat}
> ERROR [STREAM-IN-/10.xx.x.xxx] 2015-06-13 00:02:01,264 StreamSession.java:477 
> - [Stream #60e8c120-
> 115f-11e5-9fee-] Streaming error occurred
> java.lang.NullPointerException: null
> at 
> org.apache.cassandra.io.sstable.SSTableReader.getPositionsForRanges(SSTableReader.java:1277)
>  ~[apache-cassandra-2.1.3.jar:2.1.3]
> at 
> org.apache.cassandra.streaming.StreamSession.getSSTableSectionsForRanges(StreamSession.java:313)
>  ~[apache-cassandra-2.1.3.jar:2.1.3]
> at 
> org.apache.cassandra.streaming.StreamSession.addTransferRanges(StreamSession.java:266)
>  ~[apache-cassandra-2.1.3.jar:2.1.3]
> at 
> org.apache.cassandra.streaming.StreamSession.prepare(StreamSession.java:493) 
> ~[apache-cassandra-2.1.3.jar:2.1.3]
> at 
> org.apache.cassandra.streaming.StreamSession.messageReceived(StreamSession.java:425)
>  ~[apache-cassandra-2.1.3.jar:2.1.3]
> at 
> org.apache.cassandra.streaming.ConnectionHandler$IncomingMessageHandler.run(ConnectionHandler.java:251)
>  ~[apache-cassandra-2.1.3.jar:2.1.3]
> at java.lang.Thread.run(Thread.java:745) [na:1.7.0_79]
> INFO  [STREAM-IN-/10.xx.x.xxx] 2015-06-13 00:02:01,265 
> StreamResultFuture.java:180 - [Stream #60e8c120-115f-11e5-9fee-] 
> Session with /10.xx.x.xx1 is complete
> {noformat}
> _Update (2015-06-26):_
> I can also reproduce this on 2.1.7, though without the NPE on the stream-from 
> side.
> Stream source / existing node:
> {noformat}
> INFO  [STREAM-IN-/10.64.32.178] 2015-06-26 06:48:53,060 
> StreamResultFuture.java:180 - [Stream #8bdeb1b0-1ad2-11e5-abd8-3fcfb96209d9] 
> Session with /10.64.32.178 is complete
> INFO  [STREAM-IN-/10.64.32.178] 2015-06-26 06:48:53,064 
> StreamResultFuture.java:212 - [Stream #8bdeb1b0-1ad2-11e5-abd8-3fcfb96209d9] 
> All sessions completed
> {noformat}
> Stream sink / bootstrapping node:
> {noformat}
> INFO  [StreamReceiveTask:57] 2015-06-26 06:48:53,061 
> StreamResultFuture.java:180 - [Stream #8bdeb1b0-1ad2-11e5-abd8-3fcfb96209d9] 
> Session with /10.64.32.160 is complete
> WARN  [StreamReceiveTask:57] 2015-06-26 06:48:53,062 
> StreamResultFuture.java:207 - [Stream #8bdeb1b0-1ad2-11e5-abd8-3fcfb96209d9] 
> Stream failed
> INFO  [CompactionExecutor:2885] 2015-06-26 06:48:53,062 
> ColumnFamilyStore.java:906 - Enqueuing flush of compactions_in_progress: 428 
> (0%) on-heap, 379 (0%) off-heap
> INFO  [MemtableFlushWriter:959] 2015-06-26 06:48:53,063 Memtable.java:346 - 
> Writing Memtable-compactions_in_progress@1203013482(294 serialized bytes, 12 
> ops, 0%/0% of on/off-heap limit)
> ERROR [main] 2015-06-26 06:48:53,063 CassandraDaemon.java:541 - Exception 
> encountered during startup
> java.lang.RuntimeException: Error during boostrap: Stream failed
> at 
> org.apache.cassandra.dht.BootStrapper.bootstrap(BootStrapper.java:86) 
> ~[apache-cassandra-2.1.7.jar:2.1.7]
> at 
> org.apache.cassandra.service.StorageService.bootstrap(StorageService.java:1137)
>  ~[apache-cassandra-2.1.7.jar:2.1.7]
> at 
> org.apache.cassandra.service.StorageService.joinTokenRing(StorageService.java:927)
>  ~[apache-cassandra-2.1.7.jar:2.1.7]
> at 
> org.apache.cassandra.service.StorageService.initServer(StorageService.java:723)
>  ~[apache-cassandra-2.1.7.jar:2.1.7]
> at 
> org.apache.cassandra.service.StorageService.initServer(StorageService.java:605)
>  ~[apache-cassandra-2.1.7.jar:2.1.7]
> at 
> org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:378) 
> [apache-cassandra-2.1.7.jar:2.1.7]
> at 
> org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:524)
>  [apache-cassandra-2.1.7.jar:2.1.7]
> at 
> org.apache.cassandra.service.CassandraDaemon.main(CassandraDaemon.java:613) 
> [apache-cassandra-2.1.7.jar:2.1.7]
> Caused by: org.apache.cassandra.streaming.StreamException: Stream failed
> at 
> 

[jira] [Updated] (CASSANDRA-9624) unable to bootstrap; streaming fails with NullPointerException

2015-12-21 Thread Yuki Morishita (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-9624?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yuki Morishita updated CASSANDRA-9624:
--
Assignee: (was: Yuki Morishita)

> unable to bootstrap; streaming fails with NullPointerException
> --
>
> Key: CASSANDRA-9624
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9624
> Project: Cassandra
>  Issue Type: Bug
>  Components: Streaming and Messaging
> Environment: Debian Jessie, 7u79-2.5.5-1~deb8u1, Cassandra 2.1.3
>Reporter: Eric Evans
> Fix For: 2.1.x
>
> Attachments: joining_system.log.zip
>
>
> When attempting to bootstrap a new node into a 2.1.3 cluster, the stream 
> source fails with a {{NullPointerException}}:
> {noformat}
> ERROR [STREAM-IN-/10.xx.x.xxx] 2015-06-13 00:02:01,264 StreamSession.java:477 
> - [Stream #60e8c120-
> 115f-11e5-9fee-] Streaming error occurred
> java.lang.NullPointerException: null
> at 
> org.apache.cassandra.io.sstable.SSTableReader.getPositionsForRanges(SSTableReader.java:1277)
>  ~[apache-cassandra-2.1.3.jar:2.1.3]
> at 
> org.apache.cassandra.streaming.StreamSession.getSSTableSectionsForRanges(StreamSession.java:313)
>  ~[apache-cassandra-2.1.3.jar:2.1.3]
> at 
> org.apache.cassandra.streaming.StreamSession.addTransferRanges(StreamSession.java:266)
>  ~[apache-cassandra-2.1.3.jar:2.1.3]
> at 
> org.apache.cassandra.streaming.StreamSession.prepare(StreamSession.java:493) 
> ~[apache-cassandra-2.1.3.jar:2.1.3]
> at 
> org.apache.cassandra.streaming.StreamSession.messageReceived(StreamSession.java:425)
>  ~[apache-cassandra-2.1.3.jar:2.1.3]
> at 
> org.apache.cassandra.streaming.ConnectionHandler$IncomingMessageHandler.run(ConnectionHandler.java:251)
>  ~[apache-cassandra-2.1.3.jar:2.1.3]
> at java.lang.Thread.run(Thread.java:745) [na:1.7.0_79]
> INFO  [STREAM-IN-/10.xx.x.xxx] 2015-06-13 00:02:01,265 
> StreamResultFuture.java:180 - [Stream #60e8c120-115f-11e5-9fee-] 
> Session with /10.xx.x.xx1 is complete
> {noformat}
> _Update (2015-06-26):_
> I can also reproduce this on 2.1.7, though without the NPE on the stream-from 
> side.
> Stream source / existing node:
> {noformat}
> INFO  [STREAM-IN-/10.64.32.178] 2015-06-26 06:48:53,060 
> StreamResultFuture.java:180 - [Stream #8bdeb1b0-1ad2-11e5-abd8-3fcfb96209d9] 
> Session with /10.64.32.178 is complete
> INFO  [STREAM-IN-/10.64.32.178] 2015-06-26 06:48:53,064 
> StreamResultFuture.java:212 - [Stream #8bdeb1b0-1ad2-11e5-abd8-3fcfb96209d9] 
> All sessions completed
> {noformat}
> Stream sink / bootstrapping node:
> {noformat}
> INFO  [StreamReceiveTask:57] 2015-06-26 06:48:53,061 
> StreamResultFuture.java:180 - [Stream #8bdeb1b0-1ad2-11e5-abd8-3fcfb96209d9] 
> Session with /10.64.32.160 is complete
> WARN  [StreamReceiveTask:57] 2015-06-26 06:48:53,062 
> StreamResultFuture.java:207 - [Stream #8bdeb1b0-1ad2-11e5-abd8-3fcfb96209d9] 
> Stream failed
> INFO  [CompactionExecutor:2885] 2015-06-26 06:48:53,062 
> ColumnFamilyStore.java:906 - Enqueuing flush of compactions_in_progress: 428 
> (0%) on-heap, 379 (0%) off-heap
> INFO  [MemtableFlushWriter:959] 2015-06-26 06:48:53,063 Memtable.java:346 - 
> Writing Memtable-compactions_in_progress@1203013482(294 serialized bytes, 12 
> ops, 0%/0% of on/off-heap limit)
> ERROR [main] 2015-06-26 06:48:53,063 CassandraDaemon.java:541 - Exception 
> encountered during startup
> java.lang.RuntimeException: Error during boostrap: Stream failed
> at 
> org.apache.cassandra.dht.BootStrapper.bootstrap(BootStrapper.java:86) 
> ~[apache-cassandra-2.1.7.jar:2.1.7]
> at 
> org.apache.cassandra.service.StorageService.bootstrap(StorageService.java:1137)
>  ~[apache-cassandra-2.1.7.jar:2.1.7]
> at 
> org.apache.cassandra.service.StorageService.joinTokenRing(StorageService.java:927)
>  ~[apache-cassandra-2.1.7.jar:2.1.7]
> at 
> org.apache.cassandra.service.StorageService.initServer(StorageService.java:723)
>  ~[apache-cassandra-2.1.7.jar:2.1.7]
> at 
> org.apache.cassandra.service.StorageService.initServer(StorageService.java:605)
>  ~[apache-cassandra-2.1.7.jar:2.1.7]
> at 
> org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:378) 
> [apache-cassandra-2.1.7.jar:2.1.7]
> at 
> org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:524)
>  [apache-cassandra-2.1.7.jar:2.1.7]
> at 
> org.apache.cassandra.service.CassandraDaemon.main(CassandraDaemon.java:613) 
> [apache-cassandra-2.1.7.jar:2.1.7]
> Caused by: org.apache.cassandra.streaming.StreamException: Stream failed
> at 
> org.apache.cassandra.streaming.management.StreamEventJMXNotifier.onFailure(StreamEventJMXNotifier.java:85)
>  ~[apache-cassandra-2.1.7.jar:2.1.7]
>

[1/2] cassandra git commit: add missing logger parm marker

2015-12-21 Thread dbrosius
Repository: cassandra
Updated Branches:
  refs/heads/trunk 53538cb4d -> c4428c7dd


add missing logger parm marker


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/21103bea
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/21103bea
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/21103bea

Branch: refs/heads/trunk
Commit: 21103bea23fa07ab4e38092e788a9a37b5707334
Parents: 8e35f84
Author: Dave Brosius 
Authored: Mon Dec 21 20:28:24 2015 -0500
Committer: Dave Brosius 
Committed: Mon Dec 21 20:28:24 2015 -0500

--
 src/java/org/apache/cassandra/db/ColumnFamilyStore.java | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/21103bea/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
--
diff --git a/src/java/org/apache/cassandra/db/ColumnFamilyStore.java 
b/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
index f0adf39..e200e8e 100644
--- a/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
+++ b/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
@@ -571,7 +571,7 @@ public class ColumnFamilyStore implements 
ColumnFamilyStoreMBean
  // clear ephemeral snapshots that were not properly cleared last 
session (CASSANDRA-7357)
 clearEphemeralSnapshots(directories);
 
-logger.trace("Removing temporary or obsoleted files from unfinished 
operations for table", metadata.cfName);
+logger.trace("Removing temporary or obsoleted files from unfinished 
operations for table {}", metadata.cfName);
 LifecycleTransaction.removeUnfinishedLeftovers(metadata);
 
 logger.trace("Further extra check for orphan sstable files for {}", 
metadata.cfName);



[jira] [Created] (CASSANDRA-10918) remove leftover code from refactor

2015-12-21 Thread Dave Brosius (JIRA)
Dave Brosius created CASSANDRA-10918:


 Summary: remove leftover code from refactor
 Key: CASSANDRA-10918
 URL: https://issues.apache.org/jira/browse/CASSANDRA-10918
 Project: Cassandra
  Issue Type: Improvement
  Components: Local Write-Read Paths
Reporter: Dave Brosius
Priority: Trivial
 Fix For: 3.x


Code seems to have been left over from the refactor from 2.2 to 3.0; it has been removed.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (CASSANDRA-10919) sstableutil_test.py:SSTableUtilTest.abortedcompaction_test flapping on 3.0

2015-12-21 Thread Jim Witschey (JIRA)
Jim Witschey created CASSANDRA-10919:


 Summary: 
sstableutil_test.py:SSTableUtilTest.abortedcompaction_test flapping on 3.0
 Key: CASSANDRA-10919
 URL: https://issues.apache.org/jira/browse/CASSANDRA-10919
 Project: Cassandra
  Issue Type: Sub-task
Reporter: Jim Witschey


{{sstableutil_test.py:SSTableUtilTest.abortedcompaction_test}} flaps on 3.0:

http://cassci.datastax.com/job/cassandra-3.0_dtest/438/testReport/junit/sstableutil_test/SSTableUtilTest/abortedcompaction_test/

It also flaps on the CassCI job running without vnodes:

http://cassci.datastax.com/job/cassandra-3.0_novnode_dtest/110/testReport/junit/sstableutil_test/SSTableUtilTest/abortedcompaction_test/history/





--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-7396) Allow selecting Map key, List index

2015-12-21 Thread Jim Witschey (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7396?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15067487#comment-15067487
 ] 

Jim Witschey commented on CASSANDRA-7396:
-

Since 8099's been merged, it might be time to look at this and decide its fate. 
Are we moving forward with it? [~snazy]?

> Allow selecting Map key, List index
> ---
>
> Key: CASSANDRA-7396
> URL: https://issues.apache.org/jira/browse/CASSANDRA-7396
> Project: Cassandra
>  Issue Type: New Feature
>  Components: CQL
>Reporter: Jonathan Ellis
>Assignee: Robert Stupp
>  Labels: cql
> Fix For: 3.x
>
> Attachments: 7396_unit_tests.txt
>
>
> Allow "SELECT map['key]" and "SELECT list[index]."  (Selecting a UDT subfield 
> is already supported.)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-9303) Match cassandra-loader options in COPY FROM

2015-12-21 Thread Stefania (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9303?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15066163#comment-15066163
 ] 

Stefania commented on CASSANDRA-9303:
-

bq. I tested the new config options and the ingest rate is now working like a 
charm. 

Thanks. I made a slight modification to the ingest rate algorithm to give the
receive meter a better chance to show the statistics. The ingest rate should
still be pretty accurate.

bq. I was initially thinking that while \[copy\] is a global section, 
\[copy-from*\] and \[copy-to*\] are exclusive sections for these commands, so 
for example if you define INGESTRATE by mistake in the \[copy-to\] section it's 
not picked up by a copy-from execution.

That's correct: {{copy-to*}} sections are not read by {{COPY FROM}} executions,
and vice versa. I've added a check to explicitly skip invalid or wrong-direction
options from config files, along with more log messages, so that it is easier
to see when an option is not read or is ignored.

bq. Can you also add some examples to conf/cqlshrc.sample ? And maybe also 
update the cql protocol version there which is quite old.

Done.

bq.Also in the Reading options from /home/paulo/.cqlsh/cqlshrc message, maybe 
print which options are being read to improve clarity (don't worry if not 
straightforward)

Done.
   
bq. Cool! Since it's an edge-case I guess we can omit in the help and print a 
message instead in case it happens.

Done.

bq. Sounds good, it just seems the skipped columns is still being printed on 
the message Starting copy of keyspace1.standard1 with columns \['key', 'C0', 
'C1', 'C2', 'C3', 'C4'\]. (you fixed before, but it came back somehow).

It came back because of the changes to SKIPCOLS, it should be OK now.

bq. Move csv_dialect_defaults from cqlsh.py to copyutil.py

Done, I got rid of it.

bq. Move exclusive skip_columns field from CopyTask to ImportTask

Done, I've also moved it from ChildProcess to ImportProcess.

bq. csv_options are a bit misleading since they are not exclusive csv-related 
options, can we maybe rename the tuple CopyOptions(csv, dialect, unrecognized) 
to Options(copy, dialect, unrecognized)?

Done

--

I need to clarify with [~iamaleksey] which branch we need to commit this to,
since CASSANDRA-9494 was only committed to trunk. I will up-merge later today
once I know for sure.

> Match cassandra-loader options in COPY FROM
> ---
>
> Key: CASSANDRA-9303
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9303
> Project: Cassandra
>  Issue Type: New Feature
>  Components: Tools
>Reporter: Jonathan Ellis
>Assignee: Stefania
>Priority: Critical
> Fix For: 2.1.x
>
>
> https://github.com/brianmhess/cassandra-loader added a bunch of options to 
> handle real world requirements, we should match those.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[1/2] cassandra git commit: Fix misplaced 2.2 upgrading section

2015-12-21 Thread slebresne
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-3.0 0e63000c3 -> 11165f473


Fix misplaced 2.2 upgrading section


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/7afbaf71
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/7afbaf71
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/7afbaf71

Branch: refs/heads/cassandra-3.0
Commit: 7afbaf714f93e4fdcba7b166bd0b07fa788b77ae
Parents: c26b397
Author: Sylvain Lebresne 
Authored: Mon Dec 21 09:28:11 2015 +0100
Committer: Sylvain Lebresne 
Committed: Mon Dec 21 09:28:20 2015 +0100

--
 NEWS.txt | 69 ++-
 1 file changed, 35 insertions(+), 34 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/7afbaf71/NEWS.txt
--
diff --git a/NEWS.txt b/NEWS.txt
index e220357..7c6af4c 100644
--- a/NEWS.txt
+++ b/NEWS.txt
@@ -197,40 +197,6 @@ New features
  to stop working: jps, jstack, jinfo, jmc, jcmd as well as 3rd party tools 
like Jolokia.
  If you wish to use these tools you can comment this flag out in 
cassandra-env.{sh,ps1}
 
-
-2.1.10
-=
-
-New features
-
-   - The syntax TRUNCATE TABLE X is now accepted as an alias for TRUNCATE X
-
-
-2.1.9
-=
-
-Upgrading
--
-- cqlsh will now display timestamps with a UTC timezone. Previously,
-  timestamps were displayed with the local timezone.
-- Commit log files are no longer recycled by default, due to negative
-  performance implications. This can be enabled again with the 
-  commitlog_segment_recycling option in your cassandra.yaml 
-- JMX methods set/getCompactionStrategyClass have been deprecated, use
-  set/getCompactionParameters/set/getCompactionParametersJson instead
-
-2.1.8
-=
-
-Upgrading
--
-- Nothing specific to this release, but please see 2.1 if you are upgrading
-  from a previous version.
-
-
-2.1.7
-=
-
 Upgrading
 -
- Thrift rpc is no longer being started by default.
@@ -278,6 +244,41 @@ Upgrading
  to exclude data centers when the global status is enabled, see 
CASSANDRA-9035 for details.
 
 
+2.1.10
+=
+
+New features
+
+   - The syntax TRUNCATE TABLE X is now accepted as an alias for TRUNCATE X
+
+
+2.1.9
+=
+
+Upgrading
+-
+- cqlsh will now display timestamps with a UTC timezone. Previously,
+  timestamps were displayed with the local timezone.
+- Commit log files are no longer recycled by default, due to negative
+  performance implications. This can be enabled again with the 
+  commitlog_segment_recycling option in your cassandra.yaml 
+- JMX methods set/getCompactionStrategyClass have been deprecated, use
+  set/getCompactionParameters/set/getCompactionParametersJson instead
+
+2.1.8
+=
+
+Upgrading
+-
+- Nothing specific to this release, but please see 2.1 if you are upgrading
+  from a previous version.
+
+
+2.1.7
+=
+
+
+
 2.1.6
 =
 



[3/3] cassandra git commit: Merge branch 'cassandra-3.0' into trunk

2015-12-21 Thread slebresne
Merge branch 'cassandra-3.0' into trunk


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/e6724625
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/e6724625
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/e6724625

Branch: refs/heads/trunk
Commit: e67246255ceb29d20136eb2790fa653a683bce11
Parents: c0fd119 11165f4
Author: Sylvain Lebresne 
Authored: Mon Dec 21 09:29:48 2015 +0100
Committer: Sylvain Lebresne 
Committed: Mon Dec 21 09:29:48 2015 +0100

--
 NEWS.txt | 70 +--
 1 file changed, 35 insertions(+), 35 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/e6724625/NEWS.txt
--



[1/3] cassandra git commit: Fix misplaced 2.2 upgrading section

2015-12-21 Thread slebresne
Repository: cassandra
Updated Branches:
  refs/heads/trunk c0fd119ce -> e67246255


Fix misplaced 2.2 upgrading section


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/7afbaf71
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/7afbaf71
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/7afbaf71

Branch: refs/heads/trunk
Commit: 7afbaf714f93e4fdcba7b166bd0b07fa788b77ae
Parents: c26b397
Author: Sylvain Lebresne 
Authored: Mon Dec 21 09:28:11 2015 +0100
Committer: Sylvain Lebresne 
Committed: Mon Dec 21 09:28:20 2015 +0100

--
 NEWS.txt | 69 ++-
 1 file changed, 35 insertions(+), 34 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/7afbaf71/NEWS.txt
--
diff --git a/NEWS.txt b/NEWS.txt
index e220357..7c6af4c 100644
--- a/NEWS.txt
+++ b/NEWS.txt
@@ -197,40 +197,6 @@ New features
  to stop working: jps, jstack, jinfo, jmc, jcmd as well as 3rd party tools 
like Jolokia.
  If you wish to use these tools you can comment this flag out in 
cassandra-env.{sh,ps1}
 
-
-2.1.10
-=
-
-New features
-
-   - The syntax TRUNCATE TABLE X is now accepted as an alias for TRUNCATE X
-
-
-2.1.9
-=
-
-Upgrading
--
-- cqlsh will now display timestamps with a UTC timezone. Previously,
-  timestamps were displayed with the local timezone.
-- Commit log files are no longer recycled by default, due to negative
-  performance implications. This can be enabled again with the 
-  commitlog_segment_recycling option in your cassandra.yaml 
-- JMX methods set/getCompactionStrategyClass have been deprecated, use
-  set/getCompactionParameters/set/getCompactionParametersJson instead
-
-2.1.8
-=
-
-Upgrading
--
-- Nothing specific to this release, but please see 2.1 if you are upgrading
-  from a previous version.
-
-
-2.1.7
-=
-
 Upgrading
 -
- Thrift rpc is no longer being started by default.
@@ -278,6 +244,41 @@ Upgrading
  to exclude data centers when the global status is enabled, see 
CASSANDRA-9035 for details.
 
 
+2.1.10
+=
+
+New features
+
+   - The syntax TRUNCATE TABLE X is now accepted as an alias for TRUNCATE X
+
+
+2.1.9
+=
+
+Upgrading
+-
+- cqlsh will now display timestamps with a UTC timezone. Previously,
+  timestamps were displayed with the local timezone.
+- Commit log files are no longer recycled by default, due to negative
+  performance implications. This can be enabled again with the 
+  commitlog_segment_recycling option in your cassandra.yaml 
+- JMX methods set/getCompactionStrategyClass have been deprecated, use
+  set/getCompactionParameters/set/getCompactionParametersJson instead
+
+2.1.8
+=
+
+Upgrading
+-
+- Nothing specific to this release, but please see 2.1 if you are upgrading
+  from a previous version.
+
+
+2.1.7
+=
+
+
+
 2.1.6
 =
 



[2/3] cassandra git commit: Merge branch 'cassandra-2.2' into cassandra-3.0

2015-12-21 Thread slebresne
Merge branch 'cassandra-2.2' into cassandra-3.0


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/11165f47
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/11165f47
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/11165f47

Branch: refs/heads/trunk
Commit: 11165f4733a5f2831f040aa08e881d60f7480922
Parents: 0e63000 7afbaf71
Author: Sylvain Lebresne 
Authored: Mon Dec 21 09:29:30 2015 +0100
Committer: Sylvain Lebresne 
Committed: Mon Dec 21 09:29:30 2015 +0100

--
 NEWS.txt | 70 +--
 1 file changed, 35 insertions(+), 35 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/11165f47/NEWS.txt
--



[jira] [Commented] (CASSANDRA-10707) Add support for Group By to Select statement

2015-12-21 Thread Benjamin Lerer (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10707?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15066337#comment-15066337
 ] 

Benjamin Lerer commented on CASSANDRA-10707:


The main difficulty of this ticket is the paging. Between the client and the
coordinator node, pages are returned based on the grouping, but internally the
data is paged by number of rows.
For example, if a {{GROUP BY}} query is used with a page size of 5000, the
first page returned to the client must contain the aggregates for the first
5000 groups, or fewer if there are fewer than 5000 groups. As these groups can
be composed of a large number of rows, the coordinator node needs to request
pages of data from the other nodes until it has enough groups, in order to
avoid OOM errors. One of the problems is that it is only possible to be sure
that a group is complete once the next group is reached or the data is
exhausted.
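
A rough, illustrative sketch (plain Java, not actual Cassandra code; the names
are made up) of that boundary condition: while streaming rows, a group's
aggregate can only be emitted once a row from the next group is seen, and the
final group only once the input is exhausted.

{code}
import java.util.ArrayList;
import java.util.List;

public class GroupBoundarySketch
{
    static class Row
    {
        final String groupKey;
        final long value;
        Row(String groupKey, long value) { this.groupKey = groupKey; this.value = value; }
    }

    /** Emits one "max(value)" aggregate per group from rows ordered by group key. */
    static List<String> maxPerGroup(Iterable<Row> rows)
    {
        List<String> result = new ArrayList<>();
        String currentKey = null;
        long currentMax = Long.MIN_VALUE;

        for (Row row : rows)
        {
            if (currentKey != null && !currentKey.equals(row.groupKey))
            {
                // Only now do we know the previous group is complete.
                result.add(currentKey + " -> " + currentMax);
                currentMax = Long.MIN_VALUE;
            }
            currentKey = row.groupKey;
            currentMax = Math.max(currentMax, row.value);
        }

        // The last group is only known to be complete when the data is exhausted.
        if (currentKey != null)
            result.add(currentKey + " -> " + currentMax);

        return result;
    }
}
{code}

This is why the coordinator cannot simply translate a client page of N groups
into a fixed number of internally fetched rows.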

> Add support for Group By to Select statement
> 
>
> Key: CASSANDRA-10707
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10707
> Project: Cassandra
>  Issue Type: Improvement
>  Components: CQL
>Reporter: Benjamin Lerer
>Assignee: Benjamin Lerer
>
> Now that Cassandra support aggregate functions, it makes sense to support 
> {{GROUP BY}} on the {{SELECT}} statements.
> It should be possible to group either at the partition level or at the 
> clustering column level.
> {code}
> SELECT partitionKey, max(value) FROM myTable GROUP BY partitionKey;
> SELECT partitionKey, clustering0, clustering1, max(value) FROM myTable GROUP 
> BY partitionKey, clustering0, clustering1; 
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-10801) Unexplained inconsistent data with Cassandra 2.1

2015-12-21 Thread Maor Cohen (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10801?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15066261#comment-15066261
 ] 

Maor Cohen commented on CASSANDRA-10801:


One more important fact that might help in understanding the problem:
We dropped and recreated that table with the same name, and also truncated it
several times. The dropped table contained data, meaning it wasn't empty when
we recreated or truncated it.

I understand that there were bugs in earlier versions related to dropping and
recreating a table with the same name, but they should have been fixed in this
version.

> Unexplained inconsistent data with Cassandra 2.1
> 
>
> Key: CASSANDRA-10801
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10801
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Imri Zvik
> Fix For: 2.1.x
>
> Attachments: tracing.log
>
>
> We are experiencing weird behavior which we cannot explain.
> We have a CF, with RF=3, and we are writing and reading data to it with 
> consistency level of ONE.
> For some reason, we see inconsistent results when querying for data.
> Even for rows that were written a day ago, we're seeing inconsistent results 
> (1 replica has the data, the two others don't).
> Now, I would expect to see timeouts/dropped mutations, but all relevant
> counters are not advancing, and I would also expect hints to fix this
> inconsistency within minutes, yet they don't.
> {code}
> cqlsh:testing> SELECT WRITETIME(last_update),site_id, tree_id, individual_id, 
> last_update FROM testcf WHERE site_id = 229673621 AND tree_id = 9 AND 
> individual_id = 9032483;
>  writetime(last_update) | site_id   | tree_id | individual_id | last_update
> +---+-+---+-
>1448988343028000 | 229673621 |   9 |   9032483 |  1380912397
> (1 rows)
> cqlsh:testing> SELECT WRITETIME(last_update),site_id, tree_id, individual_id, 
> last_update FROM testcf WHERE site_id = 229673621 AND tree_id = 9 AND 
> individual_id = 9032483;
> site_id   | tree_id | individual_id | last_update
> ---+-+---+-
> (0 rows)
> cqlsh:testing> SELECT dateof(now()) FROM system.local ;
>  dateof(now())
> --
>  2015-12-02 14:48:44+
> (1 rows)
> {code}
> We are running with Cassandra 2.1.11 with Oracle Java 1.8.0_65-b17



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (CASSANDRA-10854) cqlsh COPY FROM csv having line with more than one consecutive ',' delimiter is throwing 'list index out of range'

2015-12-21 Thread Stefania (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10854?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stefania reassigned CASSANDRA-10854:


Assignee: Stefania

> cqlsh COPY FROM csv having line with more than one consecutive  ',' delimiter 
>  is throwing 'list index out of range'
> 
>
> Key: CASSANDRA-10854
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10854
> Project: Cassandra
>  Issue Type: Bug
>  Components: Tools
> Environment: cqlsh 5.0.1 | Cassandra 2.1.11.969 | DSE 4.8.3 | CQL 
> spec 3.2.1 
>Reporter: Puspendu Banerjee
>Assignee: Stefania
>Priority: Minor
>
> cqlsh COPY FROM csv having line with more than one consecutive  ',' delimiter 
>  is throwing 'list index out of range'
> Steps to reproduce:
> {code}
> CREATE TABLE tracks_by_album (
>   album_title TEXT,
>   album_year INT,
>   performer TEXT STATIC,
>   album_genre TEXT STATIC,
>   track_number INT,
>   track_title TEXT,
>   PRIMARY KEY ((album_title, album_year), track_number)
> );
> {code}
> Create a file: tracks_by_album.csv having following 2 lines :
> {code}
> album,year,performer,genre,number,title
> a,2015,b c d,e f g,,
> {code}
> {code}
> cqlsh> COPY music.tracks_by_album
>  (album_title, album_year, performer, album_genre, track_number, 
> track_title)
> FROM '~/tracks_by_album.csv'
> WITH HEADER = 'true';
> Error :
> Starting copy of music.tracks_by_album with columns ['album_title', 
> 'album_year', 'performer', 'album_genre', 'track_number', 'track_title'].
> list index out of range
> Aborting import at record #1. Previously inserted records are still present, 
> and some records after that may be present as well.
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (CASSANDRA-10917) better validator randomness

2015-12-21 Thread Dave Brosius (JIRA)
Dave Brosius created CASSANDRA-10917:


 Summary: better validator randomness
 Key: CASSANDRA-10917
 URL: https://issues.apache.org/jira/browse/CASSANDRA-10917
 Project: Cassandra
  Issue Type: Improvement
  Components: Local Write-Read Paths
Reporter: Dave Brosius
Priority: Trivial
 Fix For: 3.x


Get better randomness by reusing a Random object rather than recreating it.

Also reuse the keys list to avoid reallocations.
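
A minimal sketch of the proposed pattern (illustrative only, not the attached
patch; the class and method names are made up): hold a single Random in a field
and reuse it, instead of constructing a new instance on every call.

{code}
import java.util.Random;

public class ValidatorSampler
{
    // Reused across calls: avoids repeated allocation and re-seeding.
    private final Random random = new Random();

    public int pickIndex(int size)
    {
        return random.nextInt(size);
    }

    // The pattern the ticket replaces: a fresh Random per invocation.
    public int pickIndexRecreating(int size)
    {
        return new Random().nextInt(size);
    }
}
{code}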



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-10801) Unexplained inconsistent data with Cassandra 2.1

2015-12-21 Thread Paulo Motta (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10801?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15066391#comment-15066391
 ] 

Paulo Motta commented on CASSANDRA-10801:
-

bq. I don't understand why the first query didn't send request to other nodes 
while the last one did.

Reads are only sent to the number of nodes necessary to satisfy the
consistency level, so if you read at {{ONE}} the query is only sent to 1 node.
If read repair is triggered, the query is sent to more nodes.

Are you sure your writes were at {{ALL}} consistency? The first report by
another person says the write was at {{ONE}}. If you had dropped mutations,
then some of your {{ALL}} writes would have failed, and those might be causing
the inconsistencies.

Unfortunately it's not possible to identify what may be causing this with the
info provided, so you should try to provide more specific reproduction steps.
As a workaround, you may try to recreate the table with a different name and
see if the problem persists.

> Unexplained inconsistent data with Cassandra 2.1
> 
>
> Key: CASSANDRA-10801
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10801
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Imri Zvik
> Fix For: 2.1.x
>
> Attachments: trace2.log, tracing.log
>
>
> We are experiencing weird behavior which we cannot explain.
> We have a CF, with RF=3, and we are writing and reading data to it with 
> consistency level of ONE.
> For some reason, we see inconsistent results when querying for data.
> Even for rows that were written a day ago, we're seeing inconsistent results 
> (1 replica has the data, the two others don't).
> Now, I would expect to see timeouts/dropped mutations, but all relevant
> counters are not advancing, and I would also expect hints to fix this
> inconsistency within minutes, yet they don't.
> {code}
> cqlsh:testing> SELECT WRITETIME(last_update),site_id, tree_id, individual_id, 
> last_update FROM testcf WHERE site_id = 229673621 AND tree_id = 9 AND 
> individual_id = 9032483;
>  writetime(last_update) | site_id   | tree_id | individual_id | last_update
> +---+-+---+-
>1448988343028000 | 229673621 |   9 |   9032483 |  1380912397
> (1 rows)
> cqlsh:testing> SELECT WRITETIME(last_update),site_id, tree_id, individual_id, 
> last_update FROM testcf WHERE site_id = 229673621 AND tree_id = 9 AND 
> individual_id = 9032483;
> site_id   | tree_id | individual_id | last_update
> ---+-+---+-
> (0 rows)
> cqlsh:testing> SELECT dateof(now()) FROM system.local ;
>  dateof(now())
> --
>  2015-12-02 14:48:44+
> (1 rows)
> {code}
> We are running with Cassandra 2.1.11 with Oracle Java 1.8.0_65-b17



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-10829) cleanup + repair generates a lot of logs

2015-12-21 Thread Marcus Eriksson (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10829?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15066392#comment-15066392
 ] 

Marcus Eriksson commented on CASSANDRA-10829:
-

Good point. I pushed a new commit on top of the old one and triggered new
builds in cassci.


> cleanup + repair generates a lot of logs
> 
>
> Key: CASSANDRA-10829
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10829
> Project: Cassandra
>  Issue Type: Bug
> Environment: 5 nodes on Cassandra 2.1.11 (on Debian)
>Reporter: Fabien Rousseau
>Assignee: Marcus Eriksson
> Fix For: 2.1.x
>
>
> One of our nodes generates a lot of Cassandra logs (in the 10 MB/s range) and
> CPU usage has increased (by a factor of 2-3).
> This was most probably triggered by a "nodetool snapshot" while a cleanup was 
> already running on this node.
> An example of those logs:
> 2015-12-08 09:15:17,794 INFO  
> [ValidationExecutor:689]ColumnFamilyStore.java:1923 Spinning trying to 
> capture released readers [...]
> 2015-12-08 09:15:17,794 INFO  
> [ValidationExecutor:689]ColumnFamilyStore.java:1924 Spinning trying to 
> capture all readers [...]
> 2015-12-08 09:15:17,795 INFO  
> [ValidationExecutor:689]ColumnFamilyStore.java:1923 Spinning trying to 
> capture released readers [...]
> 2015-12-08 09:15:17,795 INFO  
> [ValidationExecutor:689]ColumnFamilyStore.java:1924 Spinning trying to 
> capture all readers [...]
> (I removed SSTableReader information because it's rather long... I can share 
> it privately if needed)
> Note that the date has not been changed (only 1ms between logs)
> It should not generate that gigantic amount of logs :)
> This is probably linked to: 
> https://issues.apache.org/jira/browse/CASSANDRA-9637



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-10897) Avoid building PartitionUpdate in toString()

2015-12-21 Thread Benjamin Lerer (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10897?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15066397#comment-15066397
 ] 

Benjamin Lerer commented on CASSANDRA-10897:


+1

> Avoid building PartitionUpdate in toString()
> 
>
> Key: CASSANDRA-10897
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10897
> Project: Cassandra
>  Issue Type: Bug
>  Components: Coordination
>Reporter: Tyler Hobbs
>Assignee: Tyler Hobbs
>Priority: Minor
> Fix For: 3.0.x, 3.x
>
>
> In {{AbstractBTreePartition.toString()}} (which {{PartitionUpdate}} extends), 
> we iterate over the rows in the partition.  This triggers {{maybeBuild()}} in 
> the {{PartitionUpdate}}.  If the {{PartitionUpdate}} gets updated after the 
> {{toString()}} call, it will result in an {{IllegalStateException}} with the 
> message "An update should not be written again once it has been read".
> As a result, logging or using a debugger can trigger spurious errors, which 
> makes debugging difficult or impossible.
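
A generic sketch of the underlying pattern (illustrative only, not Cassandra's
actual classes): a toString() that forces a one-shot build as a side effect, so
that any later write fails.

{code}
public class BuildOnceBuffer
{
    private final StringBuilder pending = new StringBuilder();
    private String built; // non-null once the contents have been "read"

    public void append(String s)
    {
        if (built != null)
            throw new IllegalStateException("An update should not be written again once it has been read");
        pending.append(s);
    }

    private void maybeBuild()
    {
        if (built == null)
            built = pending.toString();
    }

    @Override
    public String toString()
    {
        maybeBuild(); // a log statement or debugger call freezes the object as a side effect
        return built;
    }
}
{code}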



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (CASSANDRA-10801) Unexplained inconsistent data with Cassandra 2.1

2015-12-21 Thread Maor Cohen (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10801?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15066371#comment-15066371
 ] 

Maor Cohen edited comment on CASSANDRA-10801 at 12/21/15 11:57 AM:
---

Another example (trace2.log):

The rows were loaded on Dec 17 (4 days before the read attempt). The CQL was
executed with consistency level ONE:

1. The first query returned no rows. Logs show that it didn't send read
requests to other nodes.
2. The second query also didn't return rows. Read requests were sent to other
nodes. The inconsistency was found by read repair.
3. The last query returned the row.

I don't understand why the first query didn't send requests to other nodes
while the last one did. Can someone explain this?


was (Author: maor.cohen):
Another example:

Rows that was loaded on Dec 17 (4 days before the read attempt). Executed CQL 
with consistency level ONE:

1. First query returned no rows. Logs shows that it doesn't send read request 
to other nodes.
2. Second query also didn't return rows. Read request sent to other nodes. 
Inconsistency found by read repair.
3. Last query returned the row.

I don't understand why the first query didn't send request to other nodes while 
the last one did. Can someone explain this?

> Unexplained inconsistent data with Cassandra 2.1
> 
>
> Key: CASSANDRA-10801
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10801
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Imri Zvik
> Fix For: 2.1.x
>
> Attachments: trace2.log, tracing.log
>
>
> We are experiencing weird behavior which we cannot explain.
> We have a CF, with RF=3, and we are writing and reading data to it with 
> consistency level of ONE.
> For some reason, we see inconsistent results when querying for data.
> Even for rows that were written a day ago, we're seeing inconsistent results 
> (1 replica has the data, the two others don't).
> Now, I would expect to see timeouts/dropped mutations, but all relevant
> counters are not advancing, and I would also expect hints to fix this
> inconsistency within minutes, yet they don't.
> {code}
> cqlsh:testing> SELECT WRITETIME(last_update),site_id, tree_id, individual_id, 
> last_update FROM testcf WHERE site_id = 229673621 AND tree_id = 9 AND 
> individual_id = 9032483;
>  writetime(last_update) | site_id   | tree_id | individual_id | last_update
> +---+-+---+-
>1448988343028000 | 229673621 |   9 |   9032483 |  1380912397
> (1 rows)
> cqlsh:testing> SELECT WRITETIME(last_update),site_id, tree_id, individual_id, 
> last_update FROM testcf WHERE site_id = 229673621 AND tree_id = 9 AND 
> individual_id = 9032483;
> site_id   | tree_id | individual_id | last_update
> ---+-+---+-
> (0 rows)
> cqlsh:testing> SELECT dateof(now()) FROM system.local ;
>  dateof(now())
> --
>  2015-12-02 14:48:44+
> (1 rows)
> {code}
> We are running with Cassandra 2.1.11 with Oracle Java 1.8.0_65-b17



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-10801) Unexplained inconsistent data with Cassandra 2.1

2015-12-21 Thread Maor Cohen (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10801?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Maor Cohen updated CASSANDRA-10801:
---
Attachment: trace2.log

Another example:

The rows were loaded on Dec 17 (4 days before the read attempt). The CQL was
executed with consistency level ONE:

1. The first query returned no rows. Logs show that it didn't send read
requests to other nodes.
2. The second query also didn't return rows. Read requests were sent to other
nodes. The inconsistency was found by read repair.
3. The last query returned the row.

I don't understand why the first query didn't send requests to other nodes
while the last one did. Can someone explain this?

> Unexplained inconsistent data with Cassandra 2.1
> 
>
> Key: CASSANDRA-10801
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10801
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Imri Zvik
> Fix For: 2.1.x
>
> Attachments: trace2.log, tracing.log
>
>
> We are experiencing weird behavior which we cannot explain.
> We have a CF, with RF=3, and we are writing and reading data to it with 
> consistency level of ONE.
> For some reason, we see inconsistent results when querying for data.
> Even for rows that were written a day ago, we're seeing inconsistent results 
> (1 replica has the data, the two others don't).
> Now, I would expect to see timeouts/dropped mutations, but all relevant
> counters are not advancing, and I would also expect hints to fix this
> inconsistency within minutes, yet they don't.
> {code}
> cqlsh:testing> SELECT WRITETIME(last_update),site_id, tree_id, individual_id, 
> last_update FROM testcf WHERE site_id = 229673621 AND tree_id = 9 AND 
> individual_id = 9032483;
>  writetime(last_update) | site_id   | tree_id | individual_id | last_update
> +---+-+---+-
>1448988343028000 | 229673621 |   9 |   9032483 |  1380912397
> (1 rows)
> cqlsh:testing> SELECT WRITETIME(last_update),site_id, tree_id, individual_id, 
> last_update FROM testcf WHERE site_id = 229673621 AND tree_id = 9 AND 
> individual_id = 9032483;
> site_id   | tree_id | individual_id | last_update
> ---+-+---+-
> (0 rows)
> cqlsh:testing> SELECT dateof(now()) FROM system.local ;
>  dateof(now())
> --
>  2015-12-02 14:48:44+
> (1 rows)
> {code}
> We are running with Cassandra 2.1.11 with Oracle Java 1.8.0_65-b17



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[2/6] cassandra git commit: Add instructions for upgrade to 2.2 in NEWS.txt

2015-12-21 Thread samt
Add instructions for upgrade to 2.2 in NEWS.txt

To include details of the data conversion and upgrade process
for new system_auth tables. See CASSANDRA-10904


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/df49cec1
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/df49cec1
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/df49cec1

Branch: refs/heads/cassandra-3.0
Commit: df49cec1caeaa710f0e32516b635b60426da6cd9
Parents: 7afbaf71
Author: Sam Tunnicliffe 
Authored: Sat Dec 19 17:03:11 2015 +
Committer: Sam Tunnicliffe 
Committed: Mon Dec 21 11:56:06 2015 +

--
 NEWS.txt | 23 +++
 1 file changed, 23 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/df49cec1/NEWS.txt
--
diff --git a/NEWS.txt b/NEWS.txt
index 7c6af4c..3876c43 100644
--- a/NEWS.txt
+++ b/NEWS.txt
@@ -104,6 +104,29 @@ New features
 2.2
 ===
 
+Upgrading
+-
+   - The authentication & authorization subsystems have been redesigned to
+ support role based access control (RBAC), resulting in a change to the
+ schema of the system_auth keyspace. See below for more detail.
+ For systems already using the internal auth implementations, the process
+ for converting existing data during a rolling upgrade is straightforward.
+ As each node is restarted, it will attempt to convert any data in the
+ legacy tables into the new schema. Until enough nodes to satisfy the
+ replication strategy for the system_auth keyspace are upgraded and so have
+ the new schema, this conversion will fail with the failure being reported
+ in the system log.
+ During the upgrade, Cassandra's internal auth classes will continue to use
+ the legacy tables, so clients experience no disruption. Issuing DCL
+ statements during an upgrade is not supported.
+ Once all nodes are upgraded, an operator with superuser privileges should
+ drop the legacy tables, system_auth.users, system_auth.credentials and 
+ system_auth.permissions. Doing so will prompt Cassandra to switch over to 
+ the new tables without requiring any further intervention.
+ While the legacy tables are present a restarted node will re-run the data
+ conversion and report the outcome so that operators can verify that it is
+ safe to drop them.
+
 New features
 
- The LIMIT clause applies now only to the number of rows returned to the 
user,



[1/6] cassandra git commit: Add instructions for upgrade to 2.2 in NEWS.txt

2015-12-21 Thread samt
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-2.2 7afbaf714 -> df49cec1c
  refs/heads/cassandra-3.0 11165f473 -> 67fd42fd3
  refs/heads/trunk e67246255 -> 565799c28


Add instructions for upgrade to 2.2 in NEWS.txt

To include details of the data conversion and upgrade process
for new system_auth tables. See CASSANDRA-10904


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/df49cec1
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/df49cec1
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/df49cec1

Branch: refs/heads/cassandra-2.2
Commit: df49cec1caeaa710f0e32516b635b60426da6cd9
Parents: 7afbaf71
Author: Sam Tunnicliffe 
Authored: Sat Dec 19 17:03:11 2015 +
Committer: Sam Tunnicliffe 
Committed: Mon Dec 21 11:56:06 2015 +

--
 NEWS.txt | 23 +++
 1 file changed, 23 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/df49cec1/NEWS.txt
--
diff --git a/NEWS.txt b/NEWS.txt
index 7c6af4c..3876c43 100644
--- a/NEWS.txt
+++ b/NEWS.txt
@@ -104,6 +104,29 @@ New features
 2.2
 ===
 
+Upgrading
+-
+   - The authentication & authorization subsystems have been redesigned to
+ support role based access control (RBAC), resulting in a change to the
+ schema of the system_auth keyspace. See below for more detail.
+ For systems already using the internal auth implementations, the process
+ for converting existing data during a rolling upgrade is straightforward.
+ As each node is restarted, it will attempt to convert any data in the
+ legacy tables into the new schema. Until enough nodes to satisfy the
+ replication strategy for the system_auth keyspace are upgraded and so have
+ the new schema, this conversion will fail with the failure being reported
+ in the system log.
+ During the upgrade, Cassandra's internal auth classes will continue to use
+ the legacy tables, so clients experience no disruption. Issuing DCL
+ statements during an upgrade is not supported.
+ Once all nodes are upgraded, an operator with superuser privileges should
+ drop the legacy tables, system_auth.users, system_auth.credentials and 
+ system_auth.permissions. Doing so will prompt Cassandra to switch over to 
+ the new tables without requiring any further intervention.
+ While the legacy tables are present a restarted node will re-run the data
+ conversion and report the outcome so that operators can verify that it is
+ safe to drop them.
+
 New features
 
- The LIMIT clause applies now only to the number of rows returned to the 
user,



[5/6] cassandra git commit: Merge branch 'cassandra-2.2' into cassandra-3.0

2015-12-21 Thread samt
Merge branch 'cassandra-2.2' into cassandra-3.0


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/67fd42fd
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/67fd42fd
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/67fd42fd

Branch: refs/heads/cassandra-3.0
Commit: 67fd42fd3fb564585294c7b98d8cbca8a696cfdd
Parents: 11165f4 df49cec
Author: Sam Tunnicliffe 
Authored: Mon Dec 21 11:57:06 2015 +
Committer: Sam Tunnicliffe 
Committed: Mon Dec 21 11:57:06 2015 +

--
 NEWS.txt | 23 +++
 1 file changed, 23 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/67fd42fd/NEWS.txt
--



[6/6] cassandra git commit: Merge branch 'cassandra-3.0' into trunk

2015-12-21 Thread samt
Merge branch 'cassandra-3.0' into trunk


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/565799c2
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/565799c2
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/565799c2

Branch: refs/heads/trunk
Commit: 565799c28d67bfd7f6b705abc01b6f895f3f90c1
Parents: e672462 67fd42f
Author: Sam Tunnicliffe 
Authored: Mon Dec 21 11:57:53 2015 +
Committer: Sam Tunnicliffe 
Committed: Mon Dec 21 11:57:53 2015 +

--
 NEWS.txt | 23 +++
 1 file changed, 23 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/565799c2/NEWS.txt
--



[jira] [Updated] (CASSANDRA-10910) Materialized view remained rows

2015-12-21 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10910?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gábor Auth updated CASSANDRA-10910:
---
Description: 
I've created a table and a materialized view.
{code}
> CREATE TABLE test (id text PRIMARY KEY, key text, value int);
> CREATE MATERIALIZED VIEW test_view AS SELECT * FROM test WHERE key IS NOT 
> NULL PRIMARY KEY(key, id);
{code}

I've put a value into the table:
{code}
> update test set key='key', value=1 where id='id';
> select * from test; select * from test_view ;

 id | key | value
+-+---
 id | key | 1

(1 rows)

 key | id | value
-++---
 key | id | 1

(1 rows)
{code}

I've updated the value without specifying the key of the materialized view:
{code}
> update test set value=2 where id='id';
> select * from test; select * from test_view ;

 id | key | value
+-+---
 id | key | 2

(1 rows)

 key | id | value
-++---
 key | id | 2

(1 rows)
{code}
It works as I expected...

...but I've updated the key of the materialized view:
{code}
> update test set key='newKey' where id='id';
> select * from test; select * from test_view ;

 id | key| value
++---
 id | newKey | 2

(1 rows)

 key| id | value
++---
key | id | 2
 newKey | id | 2

(2 rows)
> update test set key='newKey', value=3 where id='id';
> select * from test; select * from test_view ;

 id | key| value
++---
 id | newKey | 3

(1 rows)

 key| id | value
++---
key | id | 2
 newKey | id | 3

(2 rows)
> delete from test where id='id';
> select * from test; select * from test_view ;

 id | key | value
+-+---

(0 rows)

 key | id | value
-++---
 key | id | 2

(1 rows)
{code}

Is it a bug?

  was:
I've created a table and a materialized view.
{code}
> CREATE TABLE test (id text PRIMARY KEY, key text, value int);
> CREATE MATERIALIZED VIEW test_view AS SELECT * FROM test WHERE key IS NOT 
> NULL PRIMARY KEY(key, id);
{code}

I've put a value into the table:
{code}
> update test set key='key', value=1 where id='id';
> select * from test; select * from test_view ;

 id | key | value
+-+---
 id | key | 1

(1 rows)

 key | id | value
-++---
 key | id | 1

(1 rows)
{code}

I've updated the value without specifying the key of the materialized view:
{code}
> update test set value=2 where id='id';
> select * from test; select * from test_view ;

 id | key | value
+-+---
 id | key | 2

(1 rows)

 key | id | value
-++---
 key | id | 2

(1 rows)
{code}
It works as I expected...

...but I've updated the key of the materialized view:
{code}
> update test set key='newKey' where id='id';
> select * from test; select * from test_view ;

 id | key| value
++---
 id | newKey | 2

(1 rows)

 key| id | value
++---
key | id | 2
 newKey | id | 2

(2 rows)
> delete from test where id='id';
> select * from test; select * from test_view ;

 id | key | value
+-+---

(0 rows)

 key | id | value
-++---
 key | id | 2

(1 rows)
{code}

Is it a bug?


> Materialized view remained rows
> ---
>
> Key: CASSANDRA-10910
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10910
> Project: Cassandra
>  Issue Type: Bug
> Environment: Cassandra 3.0.0
>Reporter: Gábor Auth
>
> I've created a table and a materialized view.
> {code}
> > CREATE TABLE test (id text PRIMARY KEY, key text, value int);
> > CREATE MATERIALIZED VIEW test_view AS SELECT * FROM test WHERE key IS NOT 
> > NULL PRIMARY KEY(key, id);
> {code}
> I've put a value into the table:
> {code}
> > update test set key='key', value=1 where id='id';
> > select * from test; select * from test_view ;
>  id | key | value
> +-+---
>  id | key | 1
> (1 rows)
>  key | id | value
> -++---
>  key | id | 1
> (1 rows)
> {code}
> I've updated the value without specifying the key of the materialized view:
> {code}
> > update test set value=2 where id='id';
> > select * from test; select * from test_view ;
>  id | key | value
> +-+---
>  id | key | 2
> (1 rows)
>  key | id | value
> -++---
>  key | id | 2
> (1 rows)
> {code}
> It works as I expected...
> ...but I've updated the key of the materialized view:
> {code}
> > update test set key='newKey' where id='id';
> > select * from test; select * from test_view ;
>  id | key| value
> ++---
>  id | newKey | 2
> (1 rows)
>  key| id | value
> ++---
> key | id | 2
>  newKey | id | 2
> (2 rows)
> > update test set key='newKey', value=3 where id='id';
> > select * from test; select * from test_view ;
>  id | key| value
> 

[jira] [Updated] (CASSANDRA-10907) Nodetool snapshot should provide an option to skip flushing

2015-12-21 Thread Paulo Motta (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10907?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Paulo Motta updated CASSANDRA-10907:

Labels: lhf  (was: )

> Nodetool snapshot should provide an option to skip flushing
> ---
>
> Key: CASSANDRA-10907
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10907
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Configuration
> Environment: PROD
>Reporter: Anubhav Kale
>Priority: Minor
>  Labels: lhf
>
> For some practical scenarios, it doesn't matter whether the data is flushed 
> to disk before taking a snapshot, and skipping the flush saves time and 
> makes the snapshot process quicker.
> As such, it would be a good idea to provide this option on the snapshot 
> command. The wiring from nodetool to the MBean to the VerbHandler should be 
> easy.
> I can provide a patch if this makes sense.
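
A minimal sketch of how the option might look from the operator's side, assuming 
it ends up exposed as a nodetool flag (the {{--skip-flush}} flag name and the 
keyspace name here are illustrative, not a committed interface):

{code}
# current behaviour: memtables are flushed to disk before the snapshot is taken
nodetool snapshot -t pre_upgrade my_keyspace

# proposed behaviour: snapshot only the sstables already on disk, skipping the
# flush; data still only in memtables/commitlog would not be in the snapshot
nodetool snapshot --skip-flush -t pre_upgrade my_keyspace
{code}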



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-10910) Materialized view remained rows

2015-12-21 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10910?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gábor Auth updated CASSANDRA-10910:
---
Description: 
I've created a table and a materialized view.
{code}
> CREATE TABLE test (id text PRIMARY KEY, key text, value int);
> CREATE MATERIALIZED VIEW test_view AS SELECT * FROM test WHERE key IS NOT 
> NULL PRIMARY KEY(key, id);
{code}

I've put a value into the table:
{code}
> update test set key='key', value=1 where id='id';
> select * from test; select * from test_view ;

 id | key | value
+-+---
 id | key | 1

(1 rows)

 key | id | value
-++---
 key | id | 1

(1 rows)
{code}

I've updated the value without specifying the key of the materialized view:
{code}
> update test set value=2 where id='id';
> select * from test; select * from test_view ;

 id | key | value
+-+---
 id | key | 2

(1 rows)

 key | id | value
-++---
 key | id | 2

(1 rows)
{code}
It works as I expected...

...but I've updated the key of the materialized view:
{code}
> update test set key='newKey' where id='id';
> select * from test; select * from test_view ;

 id | key| value
++---
 id | newKey | 2

(1 rows)

 key| id | value
++---
key | id | 2
 newKey | id | 2

(2 rows)
{code}

...I've updated the value of the row:
{code}
> update test set key='newKey', value=3 where id='id';
> select * from test; select * from test_view ;

 id | key| value
++---
 id | newKey | 3

(1 rows)

 key| id | value
++---
key | id | 2
 newKey | id | 3

(2 rows)
{code}

...I've deleted the row by the id key:
{code}
> delete from test where id='id';
> select * from test; select * from test_view ;

 id | key | value
+-+---

(0 rows)

 key | id | value
-++---
 key | id | 2

(1 rows)
{code}

Is it a bug?

  was:
I've created a table and a materialized view.
{code}
> CREATE TABLE test (id text PRIMARY KEY, key text, value int);
> CREATE MATERIALIZED VIEW test_view AS SELECT * FROM test WHERE key IS NOT 
> NULL PRIMARY KEY(key, id);
{code}

I've put a value into the table:
{code}
> update test set key='key', value=1 where id='id';
> select * from test; select * from test_view ;

 id | key | value
+-+---
 id | key | 1

(1 rows)

 key | id | value
-++---
 key | id | 1

(1 rows)
{code}

I've updated the value without specifying the key of the materialized view:
{code}
> update test set value=2 where id='id';
> select * from test; select * from test_view ;

 id | key | value
+-+---
 id | key | 2

(1 rows)

 key | id | value
-++---
 key | id | 2

(1 rows)
{code}
It works as I expected...

...but I've updated the key of the materialized view:
{code}
> update test set key='newKey' where id='id';
> select * from test; select * from test_view ;

 id | key| value
++---
 id | newKey | 2

(1 rows)

 key| id | value
++---
key | id | 2
 newKey | id | 2

(2 rows)
> update test set key='newKey', value=3 where id='id';
> select * from test; select * from test_view ;

 id | key| value
++---
 id | newKey | 3

(1 rows)

 key| id | value
++---
key | id | 2
 newKey | id | 3

(2 rows)
> delete from test where id='id';
> select * from test; select * from test_view ;

 id | key | value
+-+---

(0 rows)

 key | id | value
-++---
 key | id | 2

(1 rows)
{code}

Is it a bug?


> Materialized view remained rows
> ---
>
> Key: CASSANDRA-10910
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10910
> Project: Cassandra
>  Issue Type: Bug
> Environment: Cassandra 3.0.0
>Reporter: Gábor Auth
>
> I've created a table and a materialized view.
> {code}
> > CREATE TABLE test (id text PRIMARY KEY, key text, value int);
> > CREATE MATERIALIZED VIEW test_view AS SELECT * FROM test WHERE key IS NOT 
> > NULL PRIMARY KEY(key, id);
> {code}
> I've put a value into the table:
> {code}
> > update test set key='key', value=1 where id='id';
> > select * from test; select * from test_view ;
>  id | key | value
> +-+---
>  id | key | 1
> (1 rows)
>  key | id | value
> -++---
>  key | id | 1
> (1 rows)
> {code}
> I've updated the value without specifying the key of the materialized view:
> {code}
> > update test set value=2 where id='id';
> > select * from test; select * from test_view ;
>  id | key | value
> +-+---
>  id | key | 2
> (1 rows)
>  key | id | value
> -++---
>  key | id | 2
> (1 rows)
> {code}
> It works as I expected...
> ...but I've updated the key of the materialized view:
> {code}
> > update test set key='newKey' where id='id';

[jira] [Updated] (CASSANDRA-10910) Materialized view remained rows

2015-12-21 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10910?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gábor Auth updated CASSANDRA-10910:
---
Description: 
I've created a table and a materialized view.
{code}
> CREATE TABLE test (id text PRIMARY KEY, key text, value int);
> CREATE MATERIALIZED VIEW test_view AS SELECT * FROM test WHERE key IS NOT 
> NULL PRIMARY KEY(key, id);
{code}

I've put a value into the table:
{code}
> update test set key='key', value=1 where id='id';
> select * from test; select * from test_view ;

 id | key | value
+-+---
 id | key | 1

(1 rows)

 key | id | value
-++---
 key | id | 1

(1 rows)
{code}

I've updated the value without specifying the key of the materialized view:
{code}
> update test set value=2 where id='id';
> select * from test; select * from test_view ;

 id | key | value
+-+---
 id | key | 2

(1 rows)

 key | id | value
-++---
 key | id | 2

(1 rows)
{code}
It works as I expected...

...but I've updated the key of the materialized view:
{code}
> update test set key='newKey' where id='id';
> select * from test; select * from test_view ;

 id | key| value
++---
 id | newKey | 2

(1 rows)

 key| id | value
++---
key | id | 2
 newKey | id | 2

(2 rows)
{code}

...I've updated the value of the row:
{code}
> update test set key='newKey', value=3 where id='id';
> select * from test; select * from test_view ;

 id | key| value
++---
 id | newKey | 3

(1 rows)

 key| id | value
++---
key | id | 2
 newKey | id | 3

(2 rows)
{code}

...I've deleted the row by the id key:
{code}
> delete from test where id='id';
> select * from test; select * from test_view ;

 id | key | value
+-+---

(0 rows)

 key | id | value
-++---
 key | id | 2

(1 rows)
{code}

Is it a bug?

  was:
I've created a table and a materialized view.
{code}
> CREATE TABLE test (id text PRIMARY KEY, key text, value int);
> CREATE MATERIALIZED VIEW test_view AS SELECT * FROM test WHERE key IS NOT 
> NULL PRIMARY KEY(key, id);
{code}

I've put a value into the table:
{code}
> update test set key='key', value=1 where id='id';
> select * from test; select * from test_view ;

 id | key | value
+-+---
 id | key | 1

(1 rows)

 key | id | value
-++---
 key | id | 1

(1 rows)
{code}

I've updated the value without specifying the key of the materialized view:
{code}
> update test set value=2 where id='id';
> select * from test; select * from test_view ;

 id | key | value
+-+---
 id | key | 2

(1 rows)

 key | id | value
-++---
 key | id | 2

(1 rows)
{code}
It works as I expected...

...but I've updated the key of the materialized view:
{code}
> update test set key='newKey' where id='id';
> select * from test; select * from test_view ;

 id | key| value
++---
 id | newKey | 2

(1 rows)

 key| id | value
++---
key | id | 2
 newKey | id | 2

(2 rows)
{code}

...I've updated the value of the row:
{code}
> update test set key='newKey', value=3 where id='id';
> select * from test; select * from test_view ;

 id | key| value
++---
 id | newKey | 3

(1 rows)

 key| id | value
++---
key | id | 2
 newKey | id | 3

(2 rows)
{code}

...I've deleted the row by the id key:
{code}
> delete from test where id='id';
> select * from test; select * from test_view ;

 id | key | value
+-+---

(0 rows)

 key | id | value
-++---
 key | id | 2

(1 rows)
{code}

Is it a bug?


> Materialized view remained rows
> ---
>
> Key: CASSANDRA-10910
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10910
> Project: Cassandra
>  Issue Type: Bug
> Environment: Cassandra 3.0.0
>Reporter: Gábor Auth
>
> I've created a table and a materialized view.
> {code}
> > CREATE TABLE test (id text PRIMARY KEY, key text, value int);
> > CREATE MATERIALIZED VIEW test_view AS SELECT * FROM test WHERE key IS NOT 
> > NULL PRIMARY KEY(key, id);
> {code}
> I've put a value into the table:
> {code}
> > update test set key='key', value=1 where id='id';
> > select * from test; select * from test_view ;
>  id | key | value
> +-+---
>  id | key | 1
> (1 rows)
>  key | id | value
> -++---
>  key | id | 1
> (1 rows)
> {code}
> I've updated the value without specifying the key of the materialized view:
> {code}
> > update test set value=2 where id='id';
> > select * from test; select * from test_view ;
>  id | key | value
> +-+---
>  id | key | 2
> (1 rows)
>  key | id | value
> -++---
>  key | id | 2
> (1 rows)
> {code}
> It works as I expected...
> 

[4/6] cassandra git commit: Merge branch 'cassandra-2.2' into cassandra-3.0

2015-12-21 Thread samt
Merge branch 'cassandra-2.2' into cassandra-3.0


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/67fd42fd
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/67fd42fd
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/67fd42fd

Branch: refs/heads/trunk
Commit: 67fd42fd3fb564585294c7b98d8cbca8a696cfdd
Parents: 11165f4 df49cec
Author: Sam Tunnicliffe 
Authored: Mon Dec 21 11:57:06 2015 +
Committer: Sam Tunnicliffe 
Committed: Mon Dec 21 11:57:06 2015 +

--
 NEWS.txt | 23 +++
 1 file changed, 23 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/67fd42fd/NEWS.txt
--



[3/6] cassandra git commit: Add instructions for upgrade to 2.2 in NEWS.txt

2015-12-21 Thread samt
Add instructions for upgrade to 2.2 in NEWS.txt

To include details of the data conversion and upgrade process
for new system_auth tables. See CASSANDRA-10904


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/df49cec1
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/df49cec1
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/df49cec1

Branch: refs/heads/trunk
Commit: df49cec1caeaa710f0e32516b635b60426da6cd9
Parents: 7afbaf71
Author: Sam Tunnicliffe 
Authored: Sat Dec 19 17:03:11 2015 +
Committer: Sam Tunnicliffe 
Committed: Mon Dec 21 11:56:06 2015 +

--
 NEWS.txt | 23 +++
 1 file changed, 23 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/df49cec1/NEWS.txt
--
diff --git a/NEWS.txt b/NEWS.txt
index 7c6af4c..3876c43 100644
--- a/NEWS.txt
+++ b/NEWS.txt
@@ -104,6 +104,29 @@ New features
 2.2
 ===
 
+Upgrading
+-
+   - The authentication & authorization subsystems have been redesigned to
+ support role based access control (RBAC), resulting in a change to the
+ schema of the system_auth keyspace. See below for more detail.
+ For systems already using the internal auth implementations, the process
+ for converting existing data during a rolling upgrade is straightforward.
+ As each node is restarted, it will attempt to convert any data in the
+ legacy tables into the new schema. Until enough nodes to satisfy the
+ replication strategy for the system_auth keyspace are upgraded and so have
+ the new schema, this conversion will fail with the failure being reported
+ in the system log.
+ During the upgrade, Cassandra's internal auth classes will continue to use
+ the legacy tables, so clients experience no disruption. Issuing DCL
+ statements during an upgrade is not supported.
+ Once all nodes are upgraded, an operator with superuser privileges should
+ drop the legacy tables, system_auth.users, system_auth.credentials and 
+ system_auth.permissions. Doing so will prompt Cassandra to switch over to 
+ the new tables without requiring any further intervention.
+ While the legacy tables are present a restarted node will re-run the data
+ conversion and report the outcome so that operators can verify that it is
+ safe to drop them.
+
 New features
 
- The LIMIT clause applies now only to the number of rows returned to the 
user,



[jira] [Resolved] (CASSANDRA-10904) Add upgrade procedure related to new role based access control in NEWS.txt

2015-12-21 Thread Sam Tunnicliffe (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10904?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sam Tunnicliffe resolved CASSANDRA-10904.
-
   Resolution: Fixed
Fix Version/s: 3.0.3
   3.2
   2.2.5

I've added the upgrade instructions to NEWS.txt in 
{{df49cec1caeaa710f0e32516b635b60426da6cd9}}.

For reference: 
bq. what are the "legacy tables" that should be dropped.
From the {{system_auth}} keyspace, the tables to drop are {{users}}, 
{{credentials}} and {{permissions}}. Actually, it's only essential to drop 
{{credentials}} and {{permissions}}, but {{users}} is unused once the other 
two are dropped.
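
The cleanup itself is just a few DDL statements, to be run with superuser 
privileges and only once every node in the cluster is on the new version (a 
minimal sketch using the table names above):

{code}
-- essential: while these two exist, the internal auth classes keep reading
-- the legacy data
DROP TABLE system_auth.credentials;
DROP TABLE system_auth.permissions;

-- optional: unused once the two tables above are gone
DROP TABLE system_auth.users;
{code}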

bq. how to verify if the schema upgrade is successful. 
You could manually compare the data in the old tables with that in the new 
{{roles}} and {{role_permissions}} tables, but the schema is obviously 
different so it isn't just a simple 1:1 mapping. A more straightforward way 
would be to simply restart a node and monitor its {{system.log}}. As long as 
the legacy tables haven't been dropped, the node will re-run the data 
conversion at startup and report its outcome. You should look for the following 
lines in the log:

{noformat}
INFO  [OptionalTasks:1] CassandraRoleManager.java:410 - Converting legacy users
INFO  [OptionalTasks:1] CassandraRoleManager.java:420 - Completed conversion of 
legacy users
INFO  [OptionalTasks:1] CassandraRoleManager.java:425 - Migrating legacy 
credentials data to new system table
INFO  [OptionalTasks:1] CassandraRoleManager.java:438 - Completed conversion of 
legacy credentials
INFO  [OptionalTasks:1] CassandraAuthorizer.java:396 - Converting legacy 
permissions data
INFO  [OptionalTasks:1] CassandraAuthorizer.java:435 - Completed conversion of 
legacy permissions
{noformat}
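
For the manual spot check mentioned above, a minimal read-only sketch (the old 
and new schemas don't map 1:1, so this only confirms that the converted data is 
present):

{code}
-- legacy 2.1 tables, still present until explicitly dropped
SELECT * FROM system_auth.users;
SELECT * FROM system_auth.credentials;
SELECT * FROM system_auth.permissions;

-- new 2.2 tables: each legacy user should appear as a role, and its grants
-- should show up under role_permissions
SELECT * FROM system_auth.roles;
SELECT * FROM system_auth.role_permissions;
{code}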

> Add upgrade procedure related to new role based access control in NEWS.txt
> --
>
> Key: CASSANDRA-10904
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10904
> Project: Cassandra
>  Issue Type: Task
>  Components: Documentation and Website
>Reporter: Reynald Bourtembourg
>  Labels: documentation
> Fix For: 2.2.5, 3.2, 3.0.3
>
>
> The upgrade procedure related to new role based access control feature in 
> Cassandra 2.2 is not documented in NEWS.txt file.
> The upgrade procedure is described in this blog post:
> http://www.datastax.com/dev/blog/role-based-access-control-in-cassandra



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (CASSANDRA-10909) NPE in ActiveRepairService

2015-12-21 Thread Eduard Tudenhoefner (JIRA)
Eduard Tudenhoefner created CASSANDRA-10909:
---

 Summary: NPE in ActiveRepairService 
 Key: CASSANDRA-10909
 URL: https://issues.apache.org/jira/browse/CASSANDRA-10909
 Project: Cassandra
  Issue Type: Bug
 Environment: cassandra-3.0.1.777
Reporter: Eduard Tudenhoefner
Assignee: Marcus Eriksson


NPE after starting multiple incremental repairs

{code}
INFO  [Thread-62] 2015-12-21 11:40:53,742  RepairRunnable.java:125 - Starting 
repair command #1, repairing keyspace keyspace1 with repair options 
(parallelism: parallel, primary range: false, incremental: true, job threads: 
1, ColumnFamilies: [], dataCenters: [], hosts: [], # of ranges: 2)
INFO  [Thread-62] 2015-12-21 11:40:53,813  RepairSession.java:237 - [repair 
#b13e3740-a7d7-11e5-b568-f565b837eb0d] new session: will sync /10.200.177.32, 
/10.200.177.33 on range [(10,-9223372036854775808]] for keyspace1.[counter1, 
standard1]
INFO  [Repair#1:1] 2015-12-21 11:40:53,853  RepairJob.java:100 - [repair 
#b13e3740-a7d7-11e5-b568-f565b837eb0d] requesting merkle trees for counter1 (to 
[/10.200.177.33, /10.200.177.32])
INFO  [Repair#1:1] 2015-12-21 11:40:53,853  RepairJob.java:174 - [repair 
#b13e3740-a7d7-11e5-b568-f565b837eb0d] Requesting merkle trees for counter1 (to 
[/10.200.177.33, /10.200.177.32])
INFO  [Thread-62] 2015-12-21 11:40:53,854  RepairSession.java:237 - [repair 
#b1449fe0-a7d7-11e5-b568-f565b837eb0d] new session: will sync /10.200.177.32, 
/10.200.177.31 on range [(0,10]] for keyspace1.[counter1, standard1]
INFO  [AntiEntropyStage:1] 2015-12-21 11:40:53,896  RepairSession.java:181 - 
[repair #b13e3740-a7d7-11e5-b568-f565b837eb0d] Received merkle tree for 
counter1 from /10.200.177.32
INFO  [AntiEntropyStage:1] 2015-12-21 11:40:53,906  RepairSession.java:181 - 
[repair #b13e3740-a7d7-11e5-b568-f565b837eb0d] Received merkle tree for 
counter1 from /10.200.177.33
INFO  [Repair#1:1] 2015-12-21 11:40:53,906  RepairJob.java:100 - [repair 
#b13e3740-a7d7-11e5-b568-f565b837eb0d] requesting merkle trees for standard1 
(to [/10.200.177.33, /10.200.177.32])
INFO  [Repair#1:1] 2015-12-21 11:40:53,906  RepairJob.java:174 - [repair 
#b13e3740-a7d7-11e5-b568-f565b837eb0d] Requesting merkle trees for standard1 
(to [/10.200.177.33, /10.200.177.32])
INFO  [RepairJobTask:2] 2015-12-21 11:40:53,910  SyncTask.java:66 - [repair 
#b13e3740-a7d7-11e5-b568-f565b837eb0d] Endpoints /10.200.177.33 and 
/10.200.177.32 are consistent for counter1
INFO  [RepairJobTask:1] 2015-12-21 11:40:53,910  RepairJob.java:145 - [repair 
#b13e3740-a7d7-11e5-b568-f565b837eb0d] counter1 is fully synced
INFO  [AntiEntropyStage:1] 2015-12-21 11:40:54,823  Validator.java:272 - 
[repair #b17a2ed0-a7d7-11e5-ada8-8304f5629908] Sending completed merkle tree to 
/10.200.177.33 for keyspace1.counter1
ERROR [ValidationExecutor:3] 2015-12-21 11:40:55,104  
CompactionManager.java:1065 - Cannot start multiple repair sessions over the 
same sstables
ERROR [ValidationExecutor:3] 2015-12-21 11:40:55,105  Validator.java:259 - 
Failed creating a merkle tree for [repair #b17a2ed0-a7d7-11e5-ada8-8304f5629908 
on keyspace1/standard1, [(10,-9223372036854775808]]], /10.200.177.33 (see log 
for details)
ERROR [ValidationExecutor:3] 2015-12-21 11:40:55,110  CassandraDaemon.java:195 
- Exception in thread Thread[ValidationExecutor:3,1,main]
java.lang.RuntimeException: Cannot start multiple repair sessions over the same 
sstables
at 
org.apache.cassandra.db.compaction.CompactionManager.doValidationCompaction(CompactionManager.java:1066)
 ~[cassandra-all-3.0.1.777.jar:3.0.1.777]
at 
org.apache.cassandra.db.compaction.CompactionManager.access$700(CompactionManager.java:80)
 ~[cassandra-all-3.0.1.777.jar:3.0.1.777]
at 
org.apache.cassandra.db.compaction.CompactionManager$10.call(CompactionManager.java:679)
 ~[cassandra-all-3.0.1.777.jar:3.0.1.777]
at java.util.concurrent.FutureTask.run(FutureTask.java:266) 
~[na:1.8.0_40]
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) 
~[na:1.8.0_40]
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) 
[na:1.8.0_40]
at java.lang.Thread.run(Thread.java:745) [na:1.8.0_40]
ERROR [AntiEntropyStage:1] 2015-12-21 11:40:55,174  
RepairMessageVerbHandler.java:161 - Got error, removing parent repair session
INFO  [CompactionExecutor:3] 2015-12-21 11:40:55,175  
CompactionManager.java:489 - Starting anticompaction for keyspace1.counter1 on 
0/[] sstables
INFO  [CompactionExecutor:3] 2015-12-21 11:40:55,176  
CompactionManager.java:547 - Completed anticompaction successfully
ERROR [AntiEntropyStage:1] 2015-12-21 11:40:55,179  CassandraDaemon.java:195 - 
Exception in thread Thread[AntiEntropyStage:1,5,main]
java.lang.RuntimeException: java.lang.NullPointerException
at 

[jira] [Created] (CASSANDRA-10910) Materialized view remained rows

2015-12-21 Thread JIRA
Gábor Auth created CASSANDRA-10910:
--

 Summary: Materialized view remained rows
 Key: CASSANDRA-10910
 URL: https://issues.apache.org/jira/browse/CASSANDRA-10910
 Project: Cassandra
  Issue Type: Bug
 Environment: Cassandra 3.0.0
Reporter: Gábor Auth


I've created a table and a materialized view.
{code}
> CREATE TABLE test (id text PRIMARY KEY, key text, value int);
> CREATE MATERIALIZED VIEW test_view AS SELECT * FROM test WHERE key IS NOT 
> NULL PRIMARY KEY(key, id);
{code}

I've put a value into the table:
{code}
> update test set key='key', value=1 where id='id';
> select * from test; select * from test_view ;

 id | key | value
+-+---
 id | key | 1

(1 rows)

 key | id | value
-++---
 key | id | 1

(1 rows)
{code}

I've updated the value without specifying the key of the materialized view:
{code}
> update test set value=2 where id='id';
> select * from test; select * from test_view ;

 id | key | value
+-+---
 id | key | 2

(1 rows)

 key | id | value
-++---
 key | id | 2

(1 rows)
{code}
It works as I expected...

...but I've updated the key of the materialized view:
{code}
> update test set key='newKey' where id='id';
> select * from test; select * from test_view ;

 id | key| value
++---
 id | newKey | 2

(1 rows)

 key| id | value
++---
key | id | 2
 newKey | id | 2

(2 rows)
> delete from test where id='id';
> select * from test; select * from test_view ;

 id | key | value
+-+---

(0 rows)

 key | id | value
-++---
 key | id | 2

(1 rows)
{code}

Is it a bug?
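
A quick way to see the leftover row is to query the view by the old partition 
key, which still returns the stale entry even though the base row was deleted 
(expected output reconstructed from the listings above):

{code}
> select * from test_view where key = 'key';

 key | id | value
-----+----+-------
 key | id |     2

(1 rows)
{code}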



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-10877) Unable to read obsolete message version 1; The earliest version supported is 2.0.0

2015-12-21 Thread Jim Witschey (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10877?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=1500#comment-1500
 ] 

Jim Witschey commented on CASSANDRA-10877:
--

[~esala] Sounds like this isn't a Cassandra issue -- is that correct?

> Unable to read obsolete message version 1; The earliest version supported is 
> 2.0.0
> --
>
> Key: CASSANDRA-10877
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10877
> Project: Cassandra
>  Issue Type: Bug
>  Components: Streaming and Messaging
>Reporter: esala wona
>
> I'm using Cassandra version 2.1.2, but I get the following error:
> {code}
>  error message 
> ERROR [Thread-83674153] 2015-12-15 10:54:42,980 CassandraDaemon.java:153 - 
> Exception in thread Thread[Thread-83674153,5,main]
> java.lang.UnsupportedOperationException: Unable to read obsolete message 
> version 1; The earliest version supported is 2.0.0
> at 
> org.apache.cassandra.net.IncomingTcpConnection.run(IncomingTcpConnection.java:78)
>  ~[apache-cassandra-2.1.2.jar:2.1.2]
> = end 
> {code}
> {code}
> == nodetool information 
> ces@ICESSuse3631:/opt/ces/cassandra/bin> ./nodetool gossipinfo
> /192.168.0.1
> generation:1450148624
> heartbeat:299069
> SCHEMA:194168a3-f66e-3ab9-b811-4f7a7c3b89ca
> STATUS:NORMAL,-111061256928956495
> RELEASE_VERSION:2.1.2
> RACK:rack1
> DC:datacenter1
> RPC_ADDRESS:192.144.36.32
> HOST_ID:11f793f0-999b-4ba8-8bdd-0f0c73ae2e23
> NET_VERSION:8
> SEVERITY:0.0
> LOAD:1.3757700946E10
> /192.168.0.2
> generation:1450149068
> heartbeat:297714
> SCHEMA:194168a3-f66e-3ab9-b811-4f7a7c3b89ca
> RELEASE_VERSION:2.1.2
> STATUS:NORMAL,-1108435478195556849
> RACK:rack1
> DC:datacenter1
> RPC_ADDRESS:192.144.36.33
> HOST_ID:0f1a2dab-1d39-4419-bb68-03386c1a79df
> NET_VERSION:8
> SEVERITY:7.611548900604248
> LOAD:8.295301191E9
> end=
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

