[jira] [Commented] (CASSANDRA-8147) Secondary indexing of map keys does not work properly when mixing contains and contains_key

2014-10-21 Thread Benjamin Lerer (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8147?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14178080#comment-14178080
 ] 

Benjamin Lerer commented on CASSANDRA-8147:
---

The empty result is not correct, no matter how you look at it. There are only 
two things that can happen in response to the query:
you either get the correct result back or you get an error with a message telling 
you what you did wrong.
I agree that querying on both a key and a value of a map does not make a lot of 
sense, so an error message would be perfectly acceptable.

 Secondary indexing of map keys does not work properly when mixing contains 
 and contains_key
 ---

 Key: CASSANDRA-8147
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8147
 Project: Cassandra
  Issue Type: Bug
Reporter: Benjamin Lerer
Priority: Minor

 If you have a table with a map column and an index on the map keys, selecting 
 data using both CONTAINS KEY and CONTAINS will not return the expected data.
 The problem can be reproduced using the following unit test:
 {code}
 @Test
 public void testMapKeyContainsAndValueContains() throws Throwable
 {
     createTable("CREATE TABLE %s (account text, id int, categories map<text,text>, PRIMARY KEY (account, id))");
     createIndex("CREATE INDEX ON %s(keys(categories))");
     execute("INSERT INTO %s (account, id , categories) VALUES (?, ?, ?)", "test", 5, map("lmn", "foo"));
     assertRows(execute("SELECT * FROM %s WHERE account = ? AND id = ? AND categories CONTAINS KEY ? AND categories CONTAINS ? ALLOW FILTERING", "test", 5, "lmn", "foo"),
                row("test", 5, map("lmn", "foo")));
 }
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Git Push Summary

2014-10-21 Thread slebresne
Repository: cassandra
Updated Tags:  refs/tags/2.1.1-tentative [created] ec866fa16


Git Push Summary

2014-10-21 Thread slebresne
Repository: cassandra
Updated Tags:  refs/tags/2.1.1-tentative [deleted] 42f859042


[jira] [Commented] (CASSANDRA-8147) Secondary indexing of map keys does not work properly when mixing contains and contains_key

2014-10-21 Thread Sylvain Lebresne (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8147?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14178171#comment-14178171
 ] 

Sylvain Lebresne commented on CASSANDRA-8147:
-

I don't totally agree with all this. It is correct that we currently only allow 
one index per CQL column, and so one cannot index both the keys and values of a 
given map, but that's not what the test does. The test only has an index on the 
keys.

Regarding the {{SELECT}}, provided you do have an indexed clause (which that 
example has), it's allowed to have other non-indexed clauses (they will require 
{{ALLOW FILTERING}}, but that is used in the example too). So I'm not sure why 
this doesn't work, but it should (it's worth testing on the current 2.1 branch 
though, maybe this has been fixed since 2.1.0).

bq. I agree that querying on both a key and a value of a map does not make a lot of 
sense

Out of curiosity, why wouldn't that make sense?

 Secondary indexing of map keys does not work properly when mixing contains 
 and contains_key
 ---

 Key: CASSANDRA-8147
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8147
 Project: Cassandra
  Issue Type: Bug
Reporter: Benjamin Lerer
Priority: Minor

 If you have a table with a map column and an index on the map keys, selecting 
 data using both CONTAINS KEY and CONTAINS will not return the expected data.
 The problem can be reproduced using the following unit test:
 {code}
 @Test
 public void testMapKeyContainsAndValueContains() throws Throwable
 {
     createTable("CREATE TABLE %s (account text, id int, categories map<text,text>, PRIMARY KEY (account, id))");
     createIndex("CREATE INDEX ON %s(keys(categories))");
     execute("INSERT INTO %s (account, id , categories) VALUES (?, ?, ?)", "test", 5, map("lmn", "foo"));
     assertRows(execute("SELECT * FROM %s WHERE account = ? AND id = ? AND categories CONTAINS KEY ? AND categories CONTAINS ? ALLOW FILTERING", "test", 5, "lmn", "foo"),
                row("test", 5, map("lmn", "foo")));
 }
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-8147) Secondary indexing of map keys does not work properly when mixing contains and contains_key

2014-10-21 Thread Sylvain Lebresne (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8147?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sylvain Lebresne updated CASSANDRA-8147:

Assignee: Benjamin Lerer

 Secondary indexing of map keys does not work properly when mixing contains 
 and contains_key
 ---

 Key: CASSANDRA-8147
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8147
 Project: Cassandra
  Issue Type: Bug
Reporter: Benjamin Lerer
Assignee: Benjamin Lerer
Priority: Minor

 If you have a table with a map column and an index on the map keys, selecting 
 data using both CONTAINS KEY and CONTAINS will not return the expected data.
 The problem can be reproduced using the following unit test:
 {code}
 @Test
 public void testMapKeyContainsAndValueContains() throws Throwable
 {
     createTable("CREATE TABLE %s (account text, id int, categories map<text,text>, PRIMARY KEY (account, id))");
     createIndex("CREATE INDEX ON %s(keys(categories))");
     execute("INSERT INTO %s (account, id , categories) VALUES (?, ?, ?)", "test", 5, map("lmn", "foo"));
     assertRows(execute("SELECT * FROM %s WHERE account = ? AND id = ? AND categories CONTAINS KEY ? AND categories CONTAINS ? ALLOW FILTERING", "test", 5, "lmn", "foo"),
                row("test", 5, map("lmn", "foo")));
 }
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8147) Secondary indexing of map keys does not work properly when mixing contains and contains_key

2014-10-21 Thread Benjamin Lerer (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8147?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14178201#comment-14178201
 ] 

Benjamin Lerer commented on CASSANDRA-8147:
---

{quote}(it's worth testing on the current 2.1 branch though, maybe this has been 
fixed since 2.1.0).{quote}
I tested it on the latest 2.1.

{quote}Out of curiosity, why wouldn't that make sense?{quote}
As a map can only have one value associated with a given key, using such a query 
means that you want to check that the key exists and that the value is the one 
you think it should be. If you select using CONTAINS KEY only, you will get 
the same information, but you will also know whether it is the key that is 
missing or the value that is not what you expect.
That is why I think that it does not make a lot of sense, and an error 
message would be fine for me as a user.
Now, as a user, it is also true that it would give me a better sense of 
robustness if the query were handled properly ;-)


 Secondary indexing of map keys does not work properly when mixing contains 
 and contains_key
 ---

 Key: CASSANDRA-8147
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8147
 Project: Cassandra
  Issue Type: Bug
Reporter: Benjamin Lerer
Assignee: Benjamin Lerer
Priority: Minor

 If you have a table with a map column and an index on the map keys, selecting 
 data using both CONTAINS KEY and CONTAINS will not return the expected data.
 The problem can be reproduced using the following unit test:
 {code}
 @Test
 public void testMapKeyContainsAndValueContains() throws Throwable
 {
     createTable("CREATE TABLE %s (account text, id int, categories map<text,text>, PRIMARY KEY (account, id))");
     createIndex("CREATE INDEX ON %s(keys(categories))");
     execute("INSERT INTO %s (account, id , categories) VALUES (?, ?, ?)", "test", 5, map("lmn", "foo"));
     assertRows(execute("SELECT * FROM %s WHERE account = ? AND id = ? AND categories CONTAINS KEY ? AND categories CONTAINS ? ALLOW FILTERING", "test", 5, "lmn", "foo"),
                row("test", 5, map("lmn", "foo")));
 }
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8147) Secondary indexing of map keys does not work properly when mixing contains and contains_key

2014-10-21 Thread Sylvain Lebresne (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8147?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14178212#comment-14178212
 ] 

Sylvain Lebresne commented on CASSANDRA-8147:
-

bq. As a map can only have one value associated with a given key, using such a 
query means that you want to check that the key exists and that the value is 
the one you think it should be.

That's not what the query means, no. Asking for maps that contain a given key 
and a given value does not imply that said value must be associated with 
said key.
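To illustrate with a made-up example (hypothetical table and data, not from the ticket), the following row would be expected to match even though 'foo' is the value of a different key than 'lmn':
{code}
CREATE TABLE t (pk int PRIMARY KEY, categories map<text, text>);
CREATE INDEX ON t (keys(categories));
INSERT INTO t (pk, categories) VALUES (1, {'lmn': 'bar', 'xyz': 'foo'});

-- matches row 1: it contains key 'lmn' and it contains value 'foo',
-- even though 'foo' is not the value associated with 'lmn'
SELECT * FROM t WHERE categories CONTAINS KEY 'lmn' AND categories CONTAINS 'foo' ALLOW FILTERING;
{code}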
Besides, even if that were what the query meant, the query would still make sense. It 
might not be terribly useful, but it makes sense, so I still think 
throwing an error would not be very user friendly.

 Secondary indexing of map keys does not work properly when mixing contains 
 and contains_key
 ---

 Key: CASSANDRA-8147
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8147
 Project: Cassandra
  Issue Type: Bug
Reporter: Benjamin Lerer
Assignee: Benjamin Lerer
Priority: Minor

 If you have a table with a map column and an index on the map keys, selecting 
 data using both CONTAINS KEY and CONTAINS will not return the expected data.
 The problem can be reproduced using the following unit test:
 {code}
 @Test
 public void testMapKeyContainsAndValueContains() throws Throwable
 {
     createTable("CREATE TABLE %s (account text, id int, categories map<text,text>, PRIMARY KEY (account, id))");
     createIndex("CREATE INDEX ON %s(keys(categories))");
     execute("INSERT INTO %s (account, id , categories) VALUES (?, ?, ?)", "test", 5, map("lmn", "foo"));
     assertRows(execute("SELECT * FROM %s WHERE account = ? AND id = ? AND categories CONTAINS KEY ? AND categories CONTAINS ? ALLOW FILTERING", "test", 5, "lmn", "foo"),
                row("test", 5, map("lmn", "foo")));
 }
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8131) Short-circuited query results from collection index query

2014-10-21 Thread Sylvain Lebresne (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8131?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14178215#comment-14178215
 ] 

Sylvain Lebresne commented on CASSANDRA-8131:
-

bq. What 2.1 minor version do you think will have this fix available?

If you mean in which minor version the query will return the right result 
(i.e. nothing), then it will be 2.1.1. The patch on this ticket will probably 
only make 2.1.2, but that patch only fixes the {{ALLOW FILTERING}} validation 
issue.

 Short-circuited query results from collection index query
 -

 Key: CASSANDRA-8131
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8131
 Project: Cassandra
  Issue Type: Bug
  Components: Core
 Environment: Debian Wheezy, Oracle JDK, Cassandra 2.1
Reporter: Catalin Alexandru Zamfir
Assignee: Benjamin Lerer
  Labels: collections, cql3, cqlsh, query, queryparser, triaged
 Fix For: 2.1.2

 Attachments: CASSANDRA-8131.txt


 After watching Jonathan's 2014 summit video, I wanted to give collection 
 indexes a try as they seem to be a fit for a search by key/values usage 
 pattern we have in our setup. Doing some test queries that I expect users 
 would do against the table, a short-circuit behavior came up:
 Here's the whole transcript:
 {noformat}
 CREATE TABLE by_sets (id int PRIMARY KEY, datakeys set<text>, datavars set<text>);
 CREATE INDEX by_sets_datakeys ON by_sets (datakeys);
 CREATE INDEX by_sets_datavars ON by_sets (datavars);
 INSERT INTO by_sets (id, datakeys, datavars) VALUES (1, {'a'}, {'b'});
 INSERT INTO by_sets (id, datakeys, datavars) VALUES (2, {'c'}, {'d'});
 INSERT INTO by_sets (id, datakeys, datavars) VALUES (3, {'e'}, {'f'});
 INSERT INTO by_sets (id, datakeys, datavars) VALUES (4, {'a'}, {'z'});
 SELECT * FROM by_sets;
  id | datakeys | datavars
 +--+--
   1 |{'a'} |{'b'}
   2 |{'c'} |{'d'}
   4 |{'a'} |{'z'}
   3 |{'e'} |{'f'}
 {noformat}
 We then tried this query which short-circuited:
 {noformat}
 SELECT * FROM by_sets WHERE datakeys CONTAINS 'a' AND datakeys CONTAINS 'c';
  id | datakeys | datavars
 +--+--
   1 |{'a'} |{'b'}
   4 |{'a'} |{'z'}
 (2 rows)
 {noformat}
 Instead of receiving 3 rows, which match the datakeys CONTAINS 'a' AND 
 datakeys CONTAINS 'c', we only got the first.
 Doing the same, but with CONTAINS 'c' first, ignores the second AND.
 {noformat}
 SELECT * FROM by_sets WHERE datakeys CONTAINS 'c' AND datakeys CONTAINS 'a' ;
  id | datakeys | datavars
 +--+--
   2 |{'c'} |{'d'}
 (1 rows)
 {noformat}
 Also, on a side-note, I have two indexes on both datakeys and datavars. But 
 when trying to run a query such as:
 {noformat}
 select * from by_sets WHERE datakeys CONTAINS 'a' AND datavars CONTAINS 'z';
 code=2200 [Invalid query] message=Cannot execute this query as it might 
 involve data filtering and thus may have unpredictable performance. 
 If you want to execute this query despite the performance unpredictability, 
 use ALLOW FILTERING
 {noformat}
 The second column after AND (even if I invert the order) requires an ALLOW 
 FILTERING clause, yet the column is indexed and an in-memory join of the 
 primary keys of these sets on the coordinator could build up the result.
 Could anyone explain the short-circuit behavior?
 And the requirement for ALLOW FILTERING on a second indexed column?
 If they're not bugs but intended, they should be documented better, at least 
 their limitations.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8131) Short-circuited query results from collection index query

2014-10-21 Thread Sylvain Lebresne (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8131?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14178225#comment-14178225
 ] 

Sylvain Lebresne commented on CASSANDRA-8131:
-

Regarding the patch, mostly lgtm but 2 minor nits:
* The {{Iterables.getLast}} in {{needFiltering}} feels misleading in that there 
is no reason to use the last element. It just happens that we're in a branch where 
{{stmt.restrictedColumns}} can only have one element and we want that single 
element. So I'd rather use {{Iterables.getOnlyElement}} (see the sketch after 
this list).
* In {{isRestrictedByMultipleContains}}, I'd either check that 
{{metadataRestrictions.get()}} doesn't return {{null}} or at least assert it (I 
know it can't be {{null}} in the context of this patch, but it would be easy to 
misuse that method in the future). Similarly, I'd add an {{instanceof Contains}} 
check on the result of that {{get}} so that we don't have a bug waiting to happen if 
we ever allow other types of restrictions on collections.
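For what it's worth, a minimal standalone sketch of the difference between the two helpers (assuming Guava on the classpath; not code from the patch):
{code}
import java.util.Arrays;
import java.util.List;

import com.google.common.collect.Iterables;

public class GetOnlyElementSketch
{
    public static void main(String[] args)
    {
        List<String> one = Arrays.asList("categories");
        // Both return "categories" when there is exactly one element...
        System.out.println(Iterables.getLast(one));
        System.out.println(Iterables.getOnlyElement(one));

        List<String> two = Arrays.asList("categories", "tags");
        // ...but if the single-element assumption is ever broken, getLast
        // silently returns "tags" while getOnlyElement throws
        // IllegalArgumentException, surfacing the bug immediately.
        System.out.println(Iterables.getLast(two));
        System.out.println(Iterables.getOnlyElement(two)); // throws
    }
}
{code}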

 Short-circuited query results from collection index query
 -

 Key: CASSANDRA-8131
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8131
 Project: Cassandra
  Issue Type: Bug
  Components: Core
 Environment: Debian Wheezy, Oracle JDK, Cassandra 2.1
Reporter: Catalin Alexandru Zamfir
Assignee: Benjamin Lerer
  Labels: collections, cql3, cqlsh, query, queryparser, triaged
 Fix For: 2.1.2

 Attachments: CASSANDRA-8131.txt


 After watching Jonathan's 2014 summit video, I wanted to give collection 
 indexes a try as they seem to be a fit for a search by key/values usage 
 pattern we have in our setup. Doing some test queries that I expect users 
 would do against the table, a short-circuit behavior came up:
 Here's the whole transcript:
 {noformat}
 CREATE TABLE by_sets (id int PRIMARY KEY, datakeys set<text>, datavars set<text>);
 CREATE INDEX by_sets_datakeys ON by_sets (datakeys);
 CREATE INDEX by_sets_datavars ON by_sets (datavars);
 INSERT INTO by_sets (id, datakeys, datavars) VALUES (1, {'a'}, {'b'});
 INSERT INTO by_sets (id, datakeys, datavars) VALUES (2, {'c'}, {'d'});
 INSERT INTO by_sets (id, datakeys, datavars) VALUES (3, {'e'}, {'f'});
 INSERT INTO by_sets (id, datakeys, datavars) VALUES (4, {'a'}, {'z'});
 SELECT * FROM by_sets;
  id | datakeys | datavars
 +--+--
   1 |{'a'} |{'b'}
   2 |{'c'} |{'d'}
   4 |{'a'} |{'z'}
   3 |{'e'} |{'f'}
 {noformat}
 We then tried this query which short-circuited:
 {noformat}
 SELECT * FROM by_sets WHERE datakeys CONTAINS 'a' AND datakeys CONTAINS 'c';
  id | datakeys | datavars
 +--+--
   1 |{'a'} |{'b'}
   4 |{'a'} |{'z'}
 (2 rows)
 {noformat}
 Instead of receiving 3 rows, which match the datakeys CONTAINS 'a' AND 
 datakeys CONTAINS 'c', we only got the first.
 Doing the same, but with CONTAINS 'c' first, ignores the second AND.
 {noformat}
 SELECT * FROM by_sets WHERE datakeys CONTAINS 'c' AND datakeys CONTAINS 'a' ;
  id | datakeys | datavars
 +--+--
   2 |{'c'} |{'d'}
 (1 rows)
 {noformat}
 Also, on a side-note, I have two indexes on both datakeys and datavars. But 
 when trying to run a query such as:
 {noformat}
 select * from by_sets WHERE datakeys CONTAINS 'a' AND datavars CONTAINS 'z';
 code=2200 [Invalid query] message=Cannot execute this query as it might 
 involve data filtering and thus may have unpredictable performance. 
 If you want to execute this query despite the performance unpredictability, 
 use ALLOW FILTERING
 {noformat}
 The second column after AND (even if I invert the order) requires an ALLOW 
 FILTERING clause, yet the column is indexed and an in-memory join of the 
 primary keys of these sets on the coordinator could build up the result.
 Could anyone explain the short-circuit behavior?
 And the requirement for ALLOW FILTERING on a second indexed column?
 If they're not bugs but intended, they should be documented better, at least 
 their limitations.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-5125) Support indexes on composite column components (clustered columns)

2014-10-21 Thread Sylvain Lebresne (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-5125?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14178232#comment-14178232
 ] 

Sylvain Lebresne commented on CASSANDRA-5125:
-

[~denis.angilella] Correct, the validation during index creation is broken. Would 
you mind creating a ticket so we can track the fix?

 Support indexes on composite column components (clustered columns)
 --

 Key: CASSANDRA-5125
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5125
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Reporter: Jonathan Ellis
Assignee: Sylvain Lebresne
 Fix For: 2.0 beta 1

 Attachments: 0001-Refactor-aliases-into-column_metadata.txt, 
 0002-Generalize-CompositeIndex-for-all-column-type.txt, 
 0003-Handle-new-type-of-IndexExpression.txt, 
 0004-Handle-partition-key-indexing.txt


 Given
 {code}
 CREATE TABLE foo (
   a int,
   b int,
   c int,
   PRIMARY KEY (a, b)
 );
 {code}
 We should support {{CREATE INDEX ON foo(b)}}.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-7886) TombstoneOverwhelmingException should not wait for timeout

2014-10-21 Thread Sylvain Lebresne (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7886?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14178262#comment-14178262
 ] 

Sylvain Lebresne commented on CASSANDRA-7886:
-

bq. I assume you worry about clients not being able to handle the new code

Yes.

bq. In my opinion any client-code that does not have a default-case should be 
punished. So I would not hesitate to add it

Allow me to disagree. Even if drivers have a default case, they will still not 
know what that new exception code is about, so they will likely throw some 
generic ShouldNotHappen exception, which almost surely the client hasn't 
taken into account (or at least not in the same way they've taken a timeout exception 
into account, which is what is thrown currently). There's a reason we version 
the protocol, and it's so that clients can have the assurance that we don't 
change anything from under them. If we fail at that, we should be the ones 
who get punished.
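A rough, hypothetical sketch of the situation (class and exception names made up, not any real driver's code): the default case does fire, but what it produces is nothing the application was written to handle, unlike the read timeout it sees today.
{code}
// Hypothetical driver-side error decoding, for illustration only.
final class ErrorDecodingSketch
{
    static RuntimeException decode(int errorCode, String message)
    {
        switch (errorCode)
        {
            case 0x1200: // existing read-timeout error code, which applications already handle
                return new RuntimeException("ReadTimeout: " + message);
            default:     // a brand-new error code, as seen by an old driver
                return new RuntimeException("Unknown error code " + errorCode + ": " + message);
        }
    }
}
{code}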

bq. I assume with CQL 4 (CASSANDRA-8043) a clean error-code handling and additional 
fields can be implemented for read_failures?

Yes, and I'm saying that such handling should be part of the patch (but please 
don't call it CQL 4 or you'll confuse everyone: it's just version 4 of the 
binary protocol, not of the language).



 TombstoneOverwhelmingException should not wait for timeout
 --

 Key: CASSANDRA-7886
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7886
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
 Environment: Tested with Cassandra 2.0.8
Reporter: Christian Spriegel
Assignee: Christian Spriegel
Priority: Minor
 Fix For: 3.0

 Attachments: 7886_v1.txt


 *Issue*
 When you have TombstoneOverwhelmingExceptions occurring in queries, the 
 query is simply dropped on every data node, but no response is 
 sent back to the coordinator. Instead the coordinator waits for the specified 
 read_request_timeout_in_ms.
 On the application side this can cause memory issues, since the application 
 is waiting for the timeout interval for every request. Therefore, if our 
 application runs into TombstoneOverwhelmingExceptions, then (sooner or later) 
 our entire application cluster goes down :-(
 *Proposed solution*
 I think the data nodes should send an error message to the coordinator when 
 they run into a TombstoneOverwhelmingException. Then the coordinator does not 
 have to wait for the timeout interval.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (CASSANDRA-8156) Support indexes on composite column components of COMPACT tables

2014-10-21 Thread Denis Angilella (JIRA)
Denis Angilella created CASSANDRA-8156:
--

 Summary: Support indexes on composite column components of COMPACT 
tables
 Key: CASSANDRA-8156
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8156
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Reporter: Denis Angilella


CASSANDRA-5125 added support for indexes on composite column components for 
*non-compact* tables (see the CASSANDRA-5125 comments for additional information).
This is a follow-up for *compact* tables.

Using compact tables it is possible to CREATE INDEX on composite primary key 
columns, but the queries return no results for the tests below.

{code:sql}
CREATE TABLE users2 (
   userID uuid,
   fname text,
   zip int,
   state text,
  PRIMARY KEY ((userID, fname))
) WITH COMPACT STORAGE;

CREATE INDEX ON users2 (userID);
CREATE INDEX ON users2 (fname);

INSERT INTO users2 (userID, fname, zip, state) VALUES 
(b3e3bc33-b237-4b55-9337-3d41de9a5649, 'John', 10007, 'NY');

-- the following queries return 0 rows instead of the 1 expected
SELECT * FROM users2 WHERE fname='John'; 
SELECT * FROM users2 WHERE userid=b3e3bc33-b237-4b55-9337-3d41de9a5649;
SELECT * FROM users2 WHERE userid=b3e3bc33-b237-4b55-9337-3d41de9a5649 AND 
fname='John';

-- dropping the secondary indexes restores normal behavior
{code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-5125) Support indexes on composite column components (clustered columns)

2014-10-21 Thread Denis Angilella (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-5125?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14178274#comment-14178274
 ] 

Denis Angilella commented on CASSANDRA-5125:


[~slebresne]: I created CASSANDRA-8156 to track the fix.

 Support indexes on composite column components (clustered columns)
 --

 Key: CASSANDRA-5125
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5125
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Reporter: Jonathan Ellis
Assignee: Sylvain Lebresne
 Fix For: 2.0 beta 1

 Attachments: 0001-Refactor-aliases-into-column_metadata.txt, 
 0002-Generalize-CompositeIndex-for-all-column-type.txt, 
 0003-Handle-new-type-of-IndexExpression.txt, 
 0004-Handle-partition-key-indexing.txt


 Given
 {code}
 CREATE TABLE foo (
   a int,
   b int,
   c int,
   PRIMARY KEY (a, b)
 );
 {code}
 We should support {{CREATE INDEX ON foo(b)}}.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Git Push Summary

2014-10-21 Thread slebresne
Repository: cassandra
Updated Tags:  refs/tags/2.1.1-tentative [created] ec866fa16


Git Push Summary

2014-10-21 Thread slebresne
Repository: cassandra
Updated Tags:  refs/tags/2.1.1-tentative [deleted] ec866fa16


[jira] [Commented] (CASSANDRA-7886) TombstoneOverwhelmingException should not wait for timeout

2014-10-21 Thread Christian Spriegel (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7886?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14178282#comment-14178282
 ] 

Christian Spriegel commented on CASSANDRA-7886:
---

[~slebresne]: Does it make sense that I prepare a patch on trunk that includes 
the error-handling? Also, I would do some (manual) testing on trunk.


 TombstoneOverwhelmingException should not wait for timeout
 --

 Key: CASSANDRA-7886
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7886
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
 Environment: Tested with Cassandra 2.0.8
Reporter: Christian Spriegel
Assignee: Christian Spriegel
Priority: Minor
 Fix For: 3.0

 Attachments: 7886_v1.txt


 *Issue*
 When you have TombstoneOverwhelmingExceptions occurring in queries, the 
 query is simply dropped on every data node, but no response is 
 sent back to the coordinator. Instead the coordinator waits for the specified 
 read_request_timeout_in_ms.
 On the application side this can cause memory issues, since the application 
 is waiting for the timeout interval for every request. Therefore, if our 
 application runs into TombstoneOverwhelmingExceptions, then (sooner or later) 
 our entire application cluster goes down :-(
 *Proposed solution*
 I think the data nodes should send an error message to the coordinator when 
 they run into a TombstoneOverwhelmingException. Then the coordinator does not 
 have to wait for the timeout interval.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-7886) TombstoneOverwhelmingException should not wait for timeout

2014-10-21 Thread Sylvain Lebresne (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7886?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14178289#comment-14178289
 ] 

Sylvain Lebresne commented on CASSANDRA-7886:
-

bq. Does it make sense that I prepare a patch on trunk that includes the 
error-handling?

Yes.

 TombstoneOverwhelmingException should not wait for timeout
 --

 Key: CASSANDRA-7886
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7886
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
 Environment: Tested with Cassandra 2.0.8
Reporter: Christian Spriegel
Assignee: Christian Spriegel
Priority: Minor
 Fix For: 3.0

 Attachments: 7886_v1.txt


 *Issue*
 When you have TombstoneOverwhelmingExceptions occurring in queries, the 
 query is simply dropped on every data node, but no response is 
 sent back to the coordinator. Instead the coordinator waits for the specified 
 read_request_timeout_in_ms.
 On the application side this can cause memory issues, since the application 
 is waiting for the timeout interval for every request. Therefore, if our 
 application runs into TombstoneOverwhelmingExceptions, then (sooner or later) 
 our entire application cluster goes down :-(
 *Proposed solution*
 I think the data nodes should send an error message to the coordinator when 
 they run into a TombstoneOverwhelmingException. Then the coordinator does not 
 have to wait for the timeout interval.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-8131) Short-circuited query results from collection index query

2014-10-21 Thread Benjamin Lerer (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8131?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Benjamin Lerer updated CASSANDRA-8131:
--
Attachment: CASSANDRA-8131-V2.txt

This patch uses {{getOnlyElement}} instead of {{getLast}} and adds an 
{{instanceof Contains}} check in {{isRestrictedByMultipleContains}}.
I did not add a {{null}} check because the {{instanceof}} already covers it.
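(A quick standalone illustration of that point, not code from the patch: {{instanceof}} evaluates to false for {{null}}, so a separate null check before it would be redundant.)
{code}
public class InstanceofNullSketch
{
    public static void main(String[] args)
    {
        Object restriction = null;
        System.out.println(restriction instanceof CharSequence); // prints false, no NPE
    }
}
{code}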

 Short-circuited query results from collection index query
 -

 Key: CASSANDRA-8131
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8131
 Project: Cassandra
  Issue Type: Bug
  Components: Core
 Environment: Debian Wheezy, Oracle JDK, Cassandra 2.1
Reporter: Catalin Alexandru Zamfir
Assignee: Benjamin Lerer
  Labels: collections, cql3, cqlsh, query, queryparser, triaged
 Fix For: 2.1.2

 Attachments: CASSANDRA-8131-V2.txt, CASSANDRA-8131.txt


 After watching Jonathan's 2014 summit video, I wanted to give collection 
 indexes a try as they seem to be a fit for a search by key/values usage 
 pattern we have in our setup. Doing some test queries that I expect users 
 would do against the table, a short-circuit behavior came up:
 Here's the whole transcript:
 {noformat}
 CREATE TABLE by_sets (id int PRIMARY KEY, datakeys set<text>, datavars set<text>);
 CREATE INDEX by_sets_datakeys ON by_sets (datakeys);
 CREATE INDEX by_sets_datavars ON by_sets (datavars);
 INSERT INTO by_sets (id, datakeys, datavars) VALUES (1, {'a'}, {'b'});
 INSERT INTO by_sets (id, datakeys, datavars) VALUES (2, {'c'}, {'d'});
 INSERT INTO by_sets (id, datakeys, datavars) VALUES (3, {'e'}, {'f'});
 INSERT INTO by_sets (id, datakeys, datavars) VALUES (4, {'a'}, {'z'});
 SELECT * FROM by_sets;
  id | datakeys | datavars
 +--+--
   1 |{'a'} |{'b'}
   2 |{'c'} |{'d'}
   4 |{'a'} |{'z'}
   3 |{'e'} |{'f'}
 {noformat}
 We then tried this query which short-circuited:
 {noformat}
 SELECT * FROM by_sets WHERE datakeys CONTAINS 'a' AND datakeys CONTAINS 'c';
  id | datakeys | datavars
 +--+--
   1 |{'a'} |{'b'}
   4 |{'a'} |{'z'}
 (2 rows)
 {noformat}
 Instead of receiving 3 rows, which match the datakeys CONTAINS 'a' AND 
 datakeys CONTAINS 'c', we only got the first.
 Doing the same, but with CONTAINS 'c' first, ignores the second AND.
 {noformat}
 SELECT * FROM by_sets WHERE datakeys CONTAINS 'c' AND datakeys CONTAINS 'a' ;
  id | datakeys | datavars
 +--+--
   2 |{'c'} |{'d'}
 (1 rows)
 {noformat}
 Also, on a side-note, I have two indexes on both datakeys and datavars. But 
 when trying to run a query such as:
 {noformat}
 select * from by_sets WHERE datakeys CONTAINS 'a' AND datavars CONTAINS 'z';
 code=2200 [Invalid query] message=Cannot execute this query as it might 
 involve data filtering and thus may have unpredictable performance. 
 If you want to execute this query despite the performance unpredictability, 
 use ALLOW FILTERING
 {noformat}
 The second column after AND (even if I invert the order) requires an ALLOW 
 FILTERING clause, yet the column is indexed and an in-memory join of the 
 primary keys of these sets on the coordinator could build up the result.
 Could anyone explain the short-circuit behavior?
 And the requirement for ALLOW FILTERING on a second indexed column?
 If they're not bugs but intended, they should be documented better, at least 
 their limitations.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (CASSANDRA-8157) Opening results early with leveled compactions broken

2014-10-21 Thread Marcus Eriksson (JIRA)
Marcus Eriksson created CASSANDRA-8157:
--

 Summary: Opening results early with leveled compactions broken
 Key: CASSANDRA-8157
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8157
 Project: Cassandra
  Issue Type: Bug
Reporter: Marcus Eriksson
Assignee: Marcus Eriksson
 Fix For: 2.1.1


CASSANDRA-8034 notifies the listeners whenever we replace an sstable to make 
sure we track the right instance.

The problem, though, is that when we open early and finish a compaction, we try to 
re-add the same sstable to the manifest, which drops it to level 0 since it 
overlaps with the one that is already there.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-8157) Opening results early with leveled compactions broken

2014-10-21 Thread Marcus Eriksson (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8157?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marcus Eriksson updated CASSANDRA-8157:
---
Priority: Critical  (was: Major)

 Opening results early with leveled compactions broken
 -

 Key: CASSANDRA-8157
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8157
 Project: Cassandra
  Issue Type: Bug
Reporter: Marcus Eriksson
Assignee: Marcus Eriksson
Priority: Critical
 Fix For: 2.1.1


 CASSANDRA-8034 notifies the listeners whenever we replace an sstable to make 
 sure we track the right instance.
 The problem, though, is that when we open early and finish a compaction, we try to 
 re-add the same sstable to the manifest, which drops it to level 0 since it 
 overlaps with the one that is already there.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-8157) Opening results early with leveled compactions broken

2014-10-21 Thread Marcus Eriksson (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8157?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marcus Eriksson updated CASSANDRA-8157:
---
Attachment: 0001-dont-notify-when-replacing-fake-files.patch

 Opening results early with leveled compactions broken
 -

 Key: CASSANDRA-8157
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8157
 Project: Cassandra
  Issue Type: Bug
Reporter: Marcus Eriksson
Assignee: Marcus Eriksson
Priority: Critical
 Fix For: 2.1.1

 Attachments: 0001-dont-notify-when-replacing-fake-files.patch


 CASSANDRA-8034 notifies the listeners whenever we replace an sstable to make 
 sure we track the right instance.
 The problem, though, is that when we open early and finish a compaction, we try to 
 re-add the same sstable to the manifest, which drops it to level 0 since it 
 overlaps with the one that is already there.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


git commit: Integrate JMH into build system

2014-10-21 Thread jake
Repository: cassandra
Updated Branches:
  refs/heads/trunk fd63d2ae2 -> c23560347


Integrate JMH into build system

patch by tjake; reviewed by Jason Brown for CASSANDRA-8151


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/c2356034
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/c2356034
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/c2356034

Branch: refs/heads/trunk
Commit: c235603475358e458c6ce9fed983b68e604d5e25
Parents: fd63d2a
Author: T Jake Luciani j...@apache.org
Authored: Tue Oct 21 09:57:12 2014 -0400
Committer: T Jake Luciani j...@apache.org
Committed: Tue Oct 21 09:57:12 2014 -0400

--
 CHANGES.txt |   1 +
 build.xml   |  26 
 .../cassandra/test/microbench/Sample.java   | 130 +++
 3 files changed, 157 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/c2356034/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index c1d262d..85cb24b 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 3.0
+ * Integrate JMH for microbenchmarks (CASSANDRA-8151)
  * Keep sstable levels when bootstrapping (CASSANDRA-7460)
  * Add Sigar library and perform basic OS settings check on startup 
(CASSANDRA-7838)
  * Support for aggregation functions (CASSANDRA-4914)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/c2356034/build.xml
--
diff --git a/build.xml b/build.xml
index 07d0578..9cd80c2 100644
--- a/build.xml
+++ b/build.xml
@@ -56,10 +56,12 @@
     <property name="test.conf" value="${test.dir}/conf"/>
     <property name="test.data" value="${test.dir}/data"/>
     <property name="test.name" value="*Test"/>
+    <property name="benchmark.name" value=""/>
     <property name="test.methods" value=""/>
     <property name="test.runners" value="1"/>
     <property name="test.unit.src" value="${test.dir}/unit"/>
     <property name="test.long.src" value="${test.dir}/long"/>
+    <property name="test.microbench.src" value="${test.dir}/microbench"/>
     <property name="test.pig.src" value="${test.dir}/pig"/>
     <property name="dist.dir" value="${build.dir}/dist"/>

@@ -356,6 +358,9 @@
       <dependency groupId="org.jacoco" artifactId="org.jacoco.agent" version="${jacoco.version}"/>
       <dependency groupId="org.jacoco" artifactId="org.jacoco.ant" version="${jacoco.version}"/>

+      <dependency groupId="org.openjdk.jmh" artifactId="jmh-core" version="1.1.1"/>
+      <dependency groupId="org.openjdk.jmh" artifactId="jmh-generator-annprocess" version="1.1.1"/>
+
       <dependency groupId="org.apache.cassandra" artifactId="cassandra-all" version="${version}" />
       <dependency groupId="org.apache.cassandra" artifactId="cassandra-thrift" version="${version}" />
       <dependency groupId="com.yammer.metrics" artifactId="metrics-core" version="2.2.0" />
@@ -415,6 +420,8 @@
         <dependency groupId="com.datastax.cassandra" artifactId="cassandra-driver-core"/>
         <dependency groupId="net.ju-n.compile-command-annotations" artifactId="compile-command-annotations"/>
         <dependency groupId="org.javassist" artifactId="javassist"/>
+        <dependency groupId="org.openjdk.jmh" artifactId="jmh-core"/>
+        <dependency groupId="org.openjdk.jmh" artifactId="jmh-generator-annprocess"/>
       </artifact:pom>

       <artifact:pom id="coverage-deps-pom"
@@ -1062,6 +1069,7 @@
       <src path="${test.unit.src}"/>
       <src path="${test.long.src}"/>
       <src path="${test.pig.src}"/>
+      <src path="${test.microbench.src}"/>
     </javac>

     <!-- Non-java resources needed by the test suite -->
@@ -1510,6 +1518,24 @@
     ]]> </script>
   </target>

+  <!-- run microbenchmarks suite -->
+  <target name="microbench" depends="build-test">
+      <java classname="org.openjdk.jmh.Main"
+            fork="true"
+            failonerror="true">
+          <classpath>
+              <path refid="cassandra.classpath" />
+              <pathelement location="${test.classes}"/>
+              <path refid="cobertura.classpath"/>
+              <pathelement location="${test.conf}"/>
+              <fileset dir="${test.lib}">
+                  <include name="**/*.jar" />
+              </fileset>
+          </classpath>
+          <arg value=".*microbench.*${benchmark.name}"/>
+      </java>
+  </target>
+
   <!-- Generate Eclipse project description files -->
   <target name="generate-eclipse-files" depends="build" description="Generate eclipse files">
     <echo file=".project"><![CDATA[<?xml version="1.0" encoding="UTF-8"?>

http://git-wip-us.apache.org/repos/asf/cassandra/blob/c2356034/test/microbench/org/apache/cassandra/test/microbench/Sample.java
--
diff --git a/test/microbench/org/apache/cassandra/test/microbench/Sample.java 

[jira] [Resolved] (CASSANDRA-8151) Add build support for JMH microbenchmarks

2014-10-21 Thread T Jake Luciani (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8151?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

T Jake Luciani resolved CASSANDRA-8151.
---
Resolution: Fixed

committed

 Add build support for JMH microbenchmarks
 -

 Key: CASSANDRA-8151
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8151
 Project: Cassandra
  Issue Type: Improvement
Reporter: T Jake Luciani
Assignee: T Jake Luciani
Priority: Minor
 Fix For: 3.0

 Attachments: 8185.txt


 Making performance changes to C* often requires verification by stress or 
 cstar.datastax.com.  Other changes can be verified with a microbenchmark tool 
 like JMH.  As a developer, maintaining a separate project to run these is a 
 pain.
 This patch adds support to run microbenchmarks as a separate ant target.  
 Also adds a sample showing some of the benchmark annotation options.
 {code}
 ant microbench -Dbenchmark.name=Sample
 ...
  [java] Benchmark                 (duplicateLookback)  (pageSize)  (randomRatio)  (randomRunLength)  (uniquePages)   Mode  Samples   Score    Error   Units
  [java] o.a.c.t.m.Sample.lz4                   4..128       65536            0.1              4..16           1024  thrpt        5  27.215 ±  0.717  ops/ms
  [java] o.a.c.t.m.Sample.snappy                4..128       65536            0.1              4..16           1024  thrpt        5  15.306 ±  0.779  ops/ms
 {code}
 If you skip the benchmark.name property, it will run all microbenchmarks 
 (under the microbenchmark package namespace).
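 For reference, a minimal hypothetical benchmark of the shape the target picks up (not the Sample.java from the patch; class name and parameters made up):
 {code}
 package org.apache.cassandra.test.microbench;

 import java.util.Arrays;
 import java.util.concurrent.TimeUnit;

 import org.openjdk.jmh.annotations.*;

 @BenchmarkMode(Mode.Throughput)
 @OutputTimeUnit(TimeUnit.MILLISECONDS)
 @Warmup(iterations = 5)
 @Measurement(iterations = 5)
 @Fork(1)
 @State(Scope.Benchmark)
 public class CopySketch
 {
     @Param({"1024", "65536"})
     int size;

     private byte[] src;

     @Setup
     public void setup()
     {
         src = new byte[size];
     }

     @Benchmark
     public byte[] copy()
     {
         return Arrays.copyOf(src, size);
     }
 }
 {code}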
 One annoying thing about this patch is that it now adds the following 
 harmless (but annoying) warning on build:
 {code}
 [javac] warning: Supported source version 'RELEASE_6' from annotation 
 processor 'org.openjdk.jmh.generators.BenchmarkProcessor' less than -source 
 '1.7'
 {code}
 https://bugs.openjdk.java.net/browse/JDK-8037955



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Git Push Summary

2014-10-21 Thread slebresne
Repository: cassandra
Updated Tags:  refs/tags/2.1.1-tentative [created] ec866fa16


[jira] [Commented] (CASSANDRA-8090) NullPointerException when using prepared statements

2014-10-21 Thread Sylvain Lebresne (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8090?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14178450#comment-14178450
 ] 

Sylvain Lebresne commented on CASSANDRA-8090:
-

Ok. I'm still bugged by the fact that we now have to copy the selectors even when 
there are no aggregations, but I don't have a very good alternative to suggest, 
so that will do for now.
Mostly good on the patch but 2 minor nits:
* since selection is now quite a few classes, it could make sense to move what's 
selection-related to a cql3/selection package (instead of having it all in 
cql3/statements).
* I'd rename {{containsNoAggregateSelectorFactories}} to 
{{containsAggregateFunction}}: we use it only negated and the double negation 
is harder to read (I also don't think the SelectorFactories part adds much 
since the method is on {{SelectorFactories}}, but that's more of a personal 
taste). Similarly, {{containsOnlyAggretateSelectorFactories}} could just be 
{{containsScalarFunction}}.


 NullPointerException when using prepared statements
 ---

 Key: CASSANDRA-8090
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8090
 Project: Cassandra
  Issue Type: Bug
Reporter: Carl Yeksigian
Assignee: Benjamin Lerer
 Fix For: 3.0


 Due to the changes in CASSANDRA-4914, using a prepared statement from 
 multiple threads leads to a race condition where the simple selection may be 
 reset from a different thread, causing the following NPE:
 {noformat}
 java.lang.NullPointerException: null
   at org.apache.cassandra.cql3.ResultSet.addRow(ResultSet.java:63) 
 ~[main/:na]
   at 
 org.apache.cassandra.cql3.statements.Selection$ResultSetBuilder.build(Selection.java:372)
  ~[main/:na]
   at 
 org.apache.cassandra.cql3.statements.SelectStatement.process(SelectStatement.java:1120)
  ~[main/:na]
   at 
 org.apache.cassandra.cql3.statements.SelectStatement.processResults(SelectStatement.java:283)
  ~[main/:na]
   at 
 org.apache.cassandra.cql3.statements.SelectStatement.execute(SelectStatement.java:260)
  ~[main/:na]
   at 
 org.apache.cassandra.cql3.statements.SelectStatement.execute(SelectStatement.java:213)
  ~[main/:na]
   at 
 org.apache.cassandra.cql3.statements.SelectStatement.execute(SelectStatement.java:63)
  ~[main/:na]
   at 
 org.apache.cassandra.cql3.QueryProcessor.processStatement(QueryProcessor.java:226)
  ~[main/:na]
   at 
 org.apache.cassandra.cql3.QueryProcessor.processPrepared(QueryProcessor.java:481)
  ~[main/:na]
   at 
 org.apache.cassandra.transport.messages.ExecuteMessage.execute(ExecuteMessage.java:133)
  ~[main/:na]
   at 
 org.apache.cassandra.transport.Message$Dispatcher.channelRead0(Message.java:438)
  [main/:na]
   at 
 org.apache.cassandra.transport.Message$Dispatcher.channelRead0(Message.java:334)
  [main/:na]
   at 
 io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105)
  [netty-all-4.0.23.Final.jar:4.0.23.Final]
   at 
 io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:333)
  [netty-all-4.0.23.Final.jar:4.0.23.Final]
   at 
 io.netty.channel.AbstractChannelHandlerContext.access$700(AbstractChannelHandlerContext.java:32)
  [netty-all-4.0.23.Final.jar:4.0.23.Final]
   at 
 io.netty.channel.AbstractChannelHandlerContext$8.run(AbstractChannelHandlerContext.java:324)
  [netty-all-4.0.23.Final.jar:4.0.23.Final]
   at 
 java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471) 
 [na:1.7.0_67]
   at 
 org.apache.cassandra.concurrent.AbstractTracingAwareExecutorService$FutureTask.run(AbstractTracingAwareExecutorService.java:163)
  [main/:na]
   at org.apache.cassandra.concurrent.SEPWorker.run(SEPWorker.java:103) 
 [main/:na]
   at java.lang.Thread.run(Thread.java:745) [na:1.7.0_67]
 {noformat}
 Reproduced this using the stress tool:
 {noformat}
  ./tools/bin/cassandra-stress user profile=tools/cqlstress-example.yaml 
 ops\(insert=1,simple1=1\)
 {noformat}
 You'll need to change the {noformat}select:{noformat} line to be /1000 to 
 prevent the illegal query exceptions.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Git Push Summary

2014-10-21 Thread slebresne
Repository: cassandra
Updated Tags:  refs/tags/2.1.1-tentative [deleted] ec866fa16


Git Push Summary

2014-10-21 Thread slebresne
Repository: cassandra
Updated Tags:  refs/tags/2.1.1-tentative [deleted] ec866fa16


Git Push Summary

2014-10-21 Thread slebresne
Repository: cassandra
Updated Tags:  refs/tags/2.0.11-tentative [deleted] 3c8a2a766


git commit: Update versions for 2.0.11

2014-10-21 Thread slebresne
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-2.0 b353aa34e -> 02b83d9a8


Update versions for 2.0.11


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/02b83d9a
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/02b83d9a
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/02b83d9a

Branch: refs/heads/cassandra-2.0
Commit: 02b83d9a8c240ad94461fb305cc90f275fba03b3
Parents: b353aa3
Author: Sylvain Lebresne sylv...@datastax.com
Authored: Fri Oct 17 13:02:29 2014 +0200
Committer: Sylvain Lebresne sylv...@datastax.com
Committed: Tue Oct 21 16:23:52 2014 +0200

--
 NEWS.txt | 8 
 build.xml| 2 +-
 debian/changelog | 6 ++
 3 files changed, 15 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/02b83d9a/NEWS.txt
--
diff --git a/NEWS.txt b/NEWS.txt
index 102a87b..6f6b795 100644
--- a/NEWS.txt
+++ b/NEWS.txt
@@ -15,14 +15,22 @@ using the provided 'sstableupgrade' tool.
 
 2.0.11
 ==
+
+Upgrading
+-
+- Nothing specific to this release, but refer to previous entries if you
+  are upgrading from a previous version.
+
 New features
 
 - DateTieredCompactionStrategy added, optimized for time series data and groups
   data that is written closely in time (CASSANDRA-6602 for details). Consider
   this experimental for now.
 
+
 2.0.10
 ==
+
 New features
 
 - CqlPaginRecordReader and CqlPagingInputFormat have both been removed.

http://git-wip-us.apache.org/repos/asf/cassandra/blob/02b83d9a/build.xml
--
diff --git a/build.xml b/build.xml
index 829c873..8c23407 100644
--- a/build.xml
+++ b/build.xml
@@ -25,7 +25,7 @@
 property name=debuglevel value=source,lines,vars/
 
 !-- default version and SCM information --
-property name=base.version value=2.0.10/
+property name=base.version value=2.0.11/
 property name=scm.connection 
value=scm:git://git.apache.org/cassandra.git/
 property name=scm.developerConnection 
value=scm:git://git.apache.org/cassandra.git/
 property name=scm.url 
value=http://git-wip-us.apache.org/repos/asf?p=cassandra.git;a=tree/

http://git-wip-us.apache.org/repos/asf/cassandra/blob/02b83d9a/debian/changelog
--
diff --git a/debian/changelog b/debian/changelog
index e0b1eae..39d9520 100644
--- a/debian/changelog
+++ b/debian/changelog
@@ -1,3 +1,9 @@
+cassandra (2.0.11) unstable; urgency=medium
+
+  * New release
+
+ -- Sylvain Lebresne slebre...@apache.org  Fri, 17 Oct 2014 13:01:02 +0200
+
 cassandra (2.0.10) unstable; urgency=medium
 
   * New release



Git Push Summary

2014-10-21 Thread slebresne
Repository: cassandra
Updated Tags:  refs/tags/2.0.11-tentative [created] 02b83d9a8


git commit: Fix validation with multiple CONTAINS clauses

2014-10-21 Thread slebresne
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-2.1 ec866fa16 -> 18cd3e320


Fix validation with multiple CONTAINS clauses

patch by blerer; reviewed by slebresne for CASSANDRA-8131


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/18cd3e32
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/18cd3e32
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/18cd3e32

Branch: refs/heads/cassandra-2.1
Commit: 18cd3e3205ac3301ebce0b047aa444e50388083b
Parents: ec866fa
Author: Benjamin Lerer benjamin.le...@datastax.com
Authored: Tue Oct 21 16:39:35 2014 +0200
Committer: Sylvain Lebresne sylv...@datastax.com
Committed: Tue Oct 21 16:40:38 2014 +0200

--
 CHANGES.txt |  1 +
 .../cql3/statements/SelectStatement.java| 39 +++-
 .../statements/SingleColumnRestriction.java | 10 +
 .../cassandra/cql3/ContainsRelationTest.java| 22 +++
 4 files changed, 71 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/18cd3e32/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 815bce1..09ab91b 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 2.1.1
+ * Fix validation with multiple CONTAINS clause (CASSANDRA-8131)
  * Fix validation of collections in TriggerExecutor (CASSANDRA-8146)
  * Fix IllegalArgumentException when a list of IN values containing tuples
is passed as a single arg to a prepared statement with the v1 or v2

http://git-wip-us.apache.org/repos/asf/cassandra/blob/18cd3e32/src/java/org/apache/cassandra/cql3/statements/SelectStatement.java
--
diff --git a/src/java/org/apache/cassandra/cql3/statements/SelectStatement.java 
b/src/java/org/apache/cassandra/cql3/statements/SelectStatement.java
index a8c9d44..233f3db 100644
--- a/src/java/org/apache/cassandra/cql3/statements/SelectStatement.java
+++ b/src/java/org/apache/cassandra/cql3/statements/SelectStatement.java
@@ -31,6 +31,7 @@ import org.github.jamm.MemoryMeter;
 
 import org.apache.cassandra.auth.Permission;
 import org.apache.cassandra.cql3.*;
+import org.apache.cassandra.cql3.statements.SingleColumnRestriction.Contains;
 import org.apache.cassandra.db.composites.*;
 import org.apache.cassandra.transport.messages.ResultMessage;
 import org.apache.cassandra.config.CFMetaData;
@@ -1349,6 +1350,27 @@ public class SelectStatement implements CQLStatement, 
MeasurableForPreparedCache
                 throw new InvalidRequestException(String.format("SELECT DISTINCT queries must request all the partition key columns (missing %s)", def.name));
 }
 
+    /**
+     * Checks if the specified column is restricted by multiple contains or contains key.
+     *
+     * @param columnDef the definition of the column to check
+     * @return <code>true</code> if the specified column is restricted by multiple contains or contains key,
+     * <code>false</code> otherwise
+     */
+    private boolean isRestrictedByMultipleContains(ColumnDefinition columnDef)
+    {
+        if (!columnDef.type.isCollection())
+            return false;
+
+        Restriction restriction = metadataRestrictions.get(columnDef.name);
+
+        if (!(restriction instanceof Contains))
+            return false;
+
+        Contains contains = (Contains) restriction;
+        return (contains.numberOfValues() + contains.numberOfKeys()) > 1;
+    }
+
 public static class RawStatement extends CFStatement
 {
 private final Parameters parameters;
@@ -2011,7 +2033,7 @@ public class SelectStatement implements CQLStatement, 
MeasurableForPreparedCache
             // We will potentially filter data if either:
             //  - Have more than one IndexExpression
             //  - Have no index expression and the column filter is not the identity
-            if (stmt.restrictedColumns.size() > 1 || (stmt.restrictedColumns.isEmpty() && !stmt.columnFilterIsIdentity()))
+            if (needFiltering(stmt))
                 throw new InvalidRequestException("Cannot execute this query as it might involve data filtering and " +
                                                   "thus may have unpredictable performance. If you want to execute " +
                                                   "this query despite the performance unpredictability, use ALLOW FILTERING");
@@ -2036,6 +2058,21 @@ public class SelectStatement implements CQLStatement, 
MeasurableForPreparedCache
 }
 }
 
+/**
+ * Checks if the specified statement will need to filter the data.
+ *
+ * @param stmt the statement to test.
+ * @return 

[jira] [Commented] (CASSANDRA-8152) Cassandra crashes with Native memory allocation failure

2014-10-21 Thread Brandon Williams (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8152?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14178473#comment-14178473
 ] 

Brandon Williams commented on CASSANDRA-8152:
-

Perhaps you ran out of file handles?

 Cassandra crashes with Native memory allocation failure
 ---

 Key: CASSANDRA-8152
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8152
 Project: Cassandra
  Issue Type: Bug
 Environment: EC2 (i2.xlarge)
Reporter: Babar Tareen
Assignee: Brandon Williams
Priority: Minor
 Attachments: db06_hs_err_pid26159.log.zip, 
 db_05_hs_err_pid25411.log.zip


 On a 6-node Cassandra (datastax-community-2.1) cluster running on EC2 
 (i2.xlarge) instances, the JVM hosting the Cassandra service randomly crashes 
 with the following error.
 {code}
 #
 # There is insufficient memory for the Java Runtime Environment to continue.
 # Native memory allocation (malloc) failed to allocate 12288 bytes for 
 committing reserved memory.
 # Possible reasons:
 #   The system is out of physical RAM or swap space
 #   In 32 bit mode, the process size limit was hit
 # Possible solutions:
 #   Reduce memory load on the system
 #   Increase physical memory or swap space
 #   Check if swap backing store is full
 #   Use 64 bit Java on a 64 bit OS
 #   Decrease Java heap size (-Xmx/-Xms)
 #   Decrease number of Java threads
 #   Decrease Java thread stack sizes (-Xss)
 #   Set larger code cache with -XX:ReservedCodeCacheSize=
 # This output file may be truncated or incomplete.
 #
 #  Out of Memory Error (os_linux.cpp:2747), pid=26159, tid=140305605682944
 #
 # JRE version: Java(TM) SE Runtime Environment (7.0_60-b19) (build 
 1.7.0_60-b19)
 # Java VM: Java HotSpot(TM) 64-Bit Server VM (24.60-b09 mixed mode 
 linux-amd64 compressed oops)
 # Failed to write core dump. Core dumps have been disabled. To enable core 
 dumping, try ulimit -c unlimited before starting Java again
 #
 ---  T H R E A D  ---
 Current thread (0x08341000):  JavaThread MemtableFlushWriter:2055 
 daemon [_thread_new, id=23336, stack(0x7f9b71c56000,0x7f9b71c97000)]
 Stack: [0x7f9b71c56000,0x7f9b71c97000],  sp=0x7f9b71c95820,  free 
 space=254k
 Native frames: (J=compiled Java code, j=interpreted, Vv=VM code, C=native 
 code)
 V  [libjvm.so+0x99e7ca]  VMError::report_and_die()+0x2ea
 V  [libjvm.so+0x496fbb]  report_vm_out_of_memory(char const*, int, unsigned 
 long, char const*)+0x9b
 V  [libjvm.so+0x81d81e]  os::Linux::commit_memory_impl(char*, unsigned long, 
 bool)+0xfe
 V  [libjvm.so+0x81d8dc]  os::pd_commit_memory(char*, unsigned long, bool)+0xc
 V  [libjvm.so+0x81565a]  os::commit_memory(char*, unsigned long, bool)+0x2a
 V  [libjvm.so+0x81bdcd]  os::pd_create_stack_guard_pages(char*, unsigned 
 long)+0x6d
 V  [libjvm.so+0x9522de]  JavaThread::create_stack_guard_pages()+0x5e
 V  [libjvm.so+0x958c24]  JavaThread::run()+0x34
 V  [libjvm.so+0x81f7f8]  java_start(Thread*)+0x108
 {code}
 Changes in cassandra-env.sh settings
 {code}
 MAX_HEAP_SIZE=8G
 HEAP_NEWSIZE=800M
 JVM_OPTS=$JVM_OPTS -XX:TargetSurvivorRatio=50
 JVM_OPTS=$JVM_OPTS -XX:+AggressiveOpts
 JVM_OPTS=$JVM_OPTS -XX:+UseLargePages
 {code}
 Writes are about 10K-15K/sec and there are very few reads. Cassandra 2.0.9 
 with the same settings never crashed. JVM crash logs are attached from two 
 machines.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-8152) Cassandra crashes with Native memory allocation failure

2014-10-21 Thread Brandon Williams (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8152?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brandon Williams updated CASSANDRA-8152:

Priority: Minor  (was: Critical)

 Cassandra crashes with Native memory allocation failure
 ---

 Key: CASSANDRA-8152
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8152
 Project: Cassandra
  Issue Type: Bug
 Environment: EC2 (i2.xlarge)
Reporter: Babar Tareen
Assignee: Brandon Williams
Priority: Minor
 Attachments: db06_hs_err_pid26159.log.zip, 
 db_05_hs_err_pid25411.log.zip


 On a 6-node Cassandra (datastax-community-2.1) cluster running on EC2 
 (i2.xlarge) instances, the JVM hosting the Cassandra service randomly crashes 
 with the following error.
 {code}
 #
 # There is insufficient memory for the Java Runtime Environment to continue.
 # Native memory allocation (malloc) failed to allocate 12288 bytes for 
 committing reserved memory.
 # Possible reasons:
 #   The system is out of physical RAM or swap space
 #   In 32 bit mode, the process size limit was hit
 # Possible solutions:
 #   Reduce memory load on the system
 #   Increase physical memory or swap space
 #   Check if swap backing store is full
 #   Use 64 bit Java on a 64 bit OS
 #   Decrease Java heap size (-Xmx/-Xms)
 #   Decrease number of Java threads
 #   Decrease Java thread stack sizes (-Xss)
 #   Set larger code cache with -XX:ReservedCodeCacheSize=
 # This output file may be truncated or incomplete.
 #
 #  Out of Memory Error (os_linux.cpp:2747), pid=26159, tid=140305605682944
 #
 # JRE version: Java(TM) SE Runtime Environment (7.0_60-b19) (build 
 1.7.0_60-b19)
 # Java VM: Java HotSpot(TM) 64-Bit Server VM (24.60-b09 mixed mode 
 linux-amd64 compressed oops)
 # Failed to write core dump. Core dumps have been disabled. To enable core 
 dumping, try "ulimit -c unlimited" before starting Java again
 #
 ---  T H R E A D  ---
 Current thread (0x08341000):  JavaThread "MemtableFlushWriter:2055" 
 daemon [_thread_new, id=23336, stack(0x7f9b71c56000,0x7f9b71c97000)]
 Stack: [0x7f9b71c56000,0x7f9b71c97000],  sp=0x7f9b71c95820,  free 
 space=254k
 Native frames: (J=compiled Java code, j=interpreted, Vv=VM code, C=native 
 code)
 V  [libjvm.so+0x99e7ca]  VMError::report_and_die()+0x2ea
 V  [libjvm.so+0x496fbb]  report_vm_out_of_memory(char const*, int, unsigned 
 long, char const*)+0x9b
 V  [libjvm.so+0x81d81e]  os::Linux::commit_memory_impl(char*, unsigned long, 
 bool)+0xfe
 V  [libjvm.so+0x81d8dc]  os::pd_commit_memory(char*, unsigned long, bool)+0xc
 V  [libjvm.so+0x81565a]  os::commit_memory(char*, unsigned long, bool)+0x2a
 V  [libjvm.so+0x81bdcd]  os::pd_create_stack_guard_pages(char*, unsigned 
 long)+0x6d
 V  [libjvm.so+0x9522de]  JavaThread::create_stack_guard_pages()+0x5e
 V  [libjvm.so+0x958c24]  JavaThread::run()+0x34
 V  [libjvm.so+0x81f7f8]  java_start(Thread*)+0x108
 {code}
 Changes in cassandra-env.sh settings
 {code}
 MAX_HEAP_SIZE="8G"
 HEAP_NEWSIZE="800M"
 JVM_OPTS="$JVM_OPTS -XX:TargetSurvivorRatio=50"
 JVM_OPTS="$JVM_OPTS -XX:+AggressiveOpts"
 JVM_OPTS="$JVM_OPTS -XX:+UseLargePages"
 {code}
 Writes are about 10K-15K/sec and there are very few reads. Cassandra 2.0.9 
 with the same settings never crashed. JVM crash logs are attached from two 
 machines.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8157) Opening results early with leveled compactions broken

2014-10-21 Thread Yuki Morishita (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8157?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14178476#comment-14178476
 ] 

Yuki Morishita commented on CASSANDRA-8157:
---

+1

 Opening results early with leveled compactions broken
 -

 Key: CASSANDRA-8157
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8157
 Project: Cassandra
  Issue Type: Bug
Reporter: Marcus Eriksson
Assignee: Marcus Eriksson
Priority: Critical
 Fix For: 2.1.1

 Attachments: 0001-dont-notify-when-replacing-fake-files.patch


 CASSANDRA-8034 notifies the listeners whenever we replace an sstable to make 
 sure we track the right instance.
 The problem, though, is that when we open early and finish a compaction, we try to 
 re-add the same sstable to the manifest, which drops it to level 0 since it 
 overlaps with the one that is already there.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-7979) Acceptable time skew for C*

2014-10-21 Thread Joshua McKenzie (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7979?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14178475#comment-14178475
 ] 

Joshua McKenzie commented on CASSANDRA-7979:


h5. 2.0
h6. General:
# I don't see anything in there to limit the amount of sampling we're doing - 
right now it looks like we're sampling all updates rather than min delta for 
all columns as Benedict mentioned earlier.

h6. AtomicSortedColumns
# nit: Spacing on addAllWithSizeDelta. Remove extra space after assignment of pair
# Update javadoc for return type

h6. ColumnFamilyStore
# nit: extra space after 'timeDelta  ='

h5. trunk
h6. AtomicBTreeColumns
# In ColumnUpdater.apply, the Math.min check is redundant.  Anything is always 
going to be <= Long.MAX_VALUE


Looks pretty straightforward and appears to work as expected.  Also - we should 
probably have a 2.0 patch and a 2.1 and merge 2.1 up to trunk.
Once we've limited it to min delta per column on update we should be good to go.

 Acceptable time skew for C*
 ---

 Key: CASSANDRA-7979
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7979
 Project: Cassandra
  Issue Type: Improvement
Reporter: sankalp kohli
Assignee: sankalp kohli
Priority: Minor
 Attachments: 2.0_7979.diff, trunk_7979.diff


 It is very hard to know the bounds on clock skew required for C* to work 
 properly. Since the resolution is based on time and is at thrift column 
 level, it depends on the application. How fast is the application updating 
 the same column? If you update a column after, say, 5 milliseconds and the clock 
 skew is more than that, you might not see the updates in the correct order. 
 In this JIRA, I am proposing a change which will answer this question: How 
 much clock skew is acceptable for a given application. This will help answer 
 the question whether the system needs some alternate NTP algorithms to keep 
 time in sync. 
 If we measure the time difference between two updates to the same column,  we 
 will be able to answer the question on clock skew. 
 We can implement this in the memtable (AtomicSortedColumns.addColumn). If we find 
 that a column is updated within, say, 100 milliseconds, add the diff to a 
 histogram. Since this might have performance issues, we might want to have 
 some throttling like randomization or only enable it for a small time via 
 nodetool. 
 With this histogram, we will know what is an acceptable clock skew. 
 Also apart from column resolution, is there any other area which will be 
 affected by clock skew? 
 Note: For the sake of argument, I am not talking about back date deletes or 
 application modified timestamps. 
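 A minimal sketch of the sampling idea (illustrative only, not the attached 2.0_7979.diff; the class name, units and bucket scheme are assumptions):
 {code}
import java.util.concurrent.atomic.AtomicLongArray;

final class ColumnUpdateDeltaHistogram
{
    private static final long WINDOW_MICROS = 100_000;          // only sample deltas under ~100 ms
    private final AtomicLongArray buckets = new AtomicLongArray(64);

    // called when a column is overwritten in the memtable
    void record(long previousTimestampMicros, long newTimestampMicros)
    {
        long delta = newTimestampMicros - previousTimestampMicros;
        if (delta <= 0 || delta > WINDOW_MICROS)
            return;                                              // ignore out-of-order or distant updates
        buckets.incrementAndGet(63 - Long.numberOfLeadingZeros(delta));  // log2 bucket
    }
}
 {code}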



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8138) replace_address cannot find node to be replaced node after seed node restart

2014-10-21 Thread Brandon Williams (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8138?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14178477#comment-14178477
 ] 

Brandon Williams commented on CASSANDRA-8138:
-

Can you explain why the dead node wasn't loaded from the system table?

 replace_address cannot find node to be replaced node after seed node restart
 

 Key: CASSANDRA-8138
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8138
 Project: Cassandra
  Issue Type: Bug
Reporter: Oleg Anastasyev
 Attachments: ReplaceAfterSeedRestart.txt


 If a node failed and a cluster was restarted (which is a common case during massive 
 outages), replace_address fails with
 {code}
 Caused by: java.lang.RuntimeException: Cannot replace_address /172.19.56.97 
 because it doesn't exist in gossip
 jvm 1|at 
 org.apache.cassandra.service.StorageService.prepareReplacementInfo(StorageService.java:472)
 jvm 1|at 
 org.apache.cassandra.service.StorageService.joinTokenRing(StorageService.java:724)
 jvm 1|at 
 org.apache.cassandra.service.StorageService.initServer(StorageService.java:686)
 jvm 1|at 
 org.apache.cassandra.service.StorageService.initServer(StorageService.java:562)
 {code}
 Although the necessary information is saved in system tables on seed nodes, it 
 is not loaded into gossip on the seed node, so a replacement node cannot get this 
 info.
 The attached patch loads all information from system tables into gossip with 
 generation 0 and fixes some bugs around this info in the shadow gossip round.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8138) replace_address cannot find node to be replaced node after seed node restart

2014-10-21 Thread Oleg Anastasyev (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8138?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14178512#comment-14178512
 ] 

Oleg Anastasyev commented on CASSANDRA-8138:


This is because the info about the tokens, host id and DC:RACK of the dead node from 
the system tables is loaded only into TokenMetadata on startup, but not into 
gossip's state. The loading code only calls Gossip.addSavedEndpoint(InetAddr), 
which only adds the inet address of the dead node with generation 0.
If the dead node has not participated in gossip since the restart, there are no TOKENS, 
HOST_ID, etc. app states for it in EndpointState. 
But replace_node uses the gossip shadow round to detect the necessary information 
about the dead node so it can replace it, and all it can get from gossip is just 
its inet address. There is also a bug in Gossip.examineGossiper which 
prevents even this info from being sent to the replacing node, so in fact the replacing 
node gets no information about this dead node at all, as if it never existed 
before. 

I believe the same would apply to a bootstrapping node if there was a full 
cluster restart after some node went dead and a new node is then added to the 
cluster. That would lead to wrong token metadata on the freshly bootstrapped node 
(I did not test this case, though).
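
A toy model of the gap described above (illustrative names only, not the real Gossiper/EndpointState/SystemKeyspace classes): seeding only the address leaves the endpoint state empty, so the shadow round has nothing useful to hand to the replacing node.
{code}
import java.util.HashMap;
import java.util.Map;

// Toy model only; the real code lives in Gossiper, EndpointState and SystemKeyspace.
public class SavedEndpointModel
{
    final Map<String, Map<String, String>> endpointStates = new HashMap<String, Map<String, String>>();

    // current behaviour: address only, generation 0, no application states
    void addSavedEndpoint(String address)
    {
        if (!endpointStates.containsKey(address))
            endpointStates.put(address, new HashMap<String, String>());
    }

    // patched idea: also seed the persisted app states (tokens, host id, DC/rack)
    void addSavedEndpoint(String address, Map<String, String> savedAppStates)
    {
        addSavedEndpoint(address);
        endpointStates.get(address).putAll(savedAppStates);
    }

    public static void main(String[] args)
    {
        SavedEndpointModel gossip = new SavedEndpointModel();
        gossip.addSavedEndpoint("/172.19.56.97");
        System.out.println(gossip.endpointStates.get("/172.19.56.97")); // {} -> nothing for replace_address

        Map<String, String> saved = new HashMap<String, String>();
        saved.put("TOKENS", "...");
        saved.put("HOST_ID", "...");
        gossip.addSavedEndpoint("/172.19.56.97", saved);
        System.out.println(gossip.endpointStates.get("/172.19.56.97")); // tokens/host id now visible
    }
}
{code}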

 replace_address cannot find node to be replaced node after seed node restart
 

 Key: CASSANDRA-8138
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8138
 Project: Cassandra
  Issue Type: Bug
Reporter: Oleg Anastasyev
 Attachments: ReplaceAfterSeedRestart.txt


 If a node failed and a cluster was restarted (which is a common case during massive 
 outages), replace_address fails with
 {code}
 Caused by: java.lang.RuntimeException: Cannot replace_address /172.19.56.97 
 because it doesn't exist in gossip
 jvm 1|at 
 org.apache.cassandra.service.StorageService.prepareReplacementInfo(StorageService.java:472)
 jvm 1|at 
 org.apache.cassandra.service.StorageService.joinTokenRing(StorageService.java:724)
 jvm 1|at 
 org.apache.cassandra.service.StorageService.initServer(StorageService.java:686)
 jvm 1|at 
 org.apache.cassandra.service.StorageService.initServer(StorageService.java:562)
 {code}
 Although the necessary information is saved in system tables on seed nodes, it 
 is not loaded into gossip on the seed node, so a replacement node cannot get this 
 info.
 The attached patch loads all information from system tables into gossip with 
 generation 0 and fixes some bugs around this info in the shadow gossip round.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (CASSANDRA-8155) confusing error when erroneously querying map secondary index

2014-10-21 Thread Philip Thompson (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8155?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14178531#comment-14178531
 ] 

Philip Thompson edited comment on CASSANDRA-8155 at 10/21/14 3:38 PM:
--

This is probably the same problem that was affecting 7891


was (Author: philipthompson):
This probably the same problem that was affecting 7891

 confusing error when erroneously querying map secondary index
 -

 Key: CASSANDRA-8155
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8155
 Project: Cassandra
  Issue Type: Bug
Reporter: Russ Hatch
Priority: Minor
  Labels: cqlsh, lhf

 With a secondary index on values, attempting to query by key returns an error 
 message of "list index out of range".
 This is kinda a similar issue to CASSANDRA-8147 (but that scenario results in 
 no error when there probably should be one).
 To repro:
 {noformat}
 cqlsh:test> CREATE TABLE test.foo (
 ... id1 text,
 ... id2 text,
 ... categories map<text, text>,
 ... PRIMARY KEY (id1, id2));
 cqlsh:test> CREATE INDEX foo_categories_idx ON test.foo (categories);
 cqlsh:test> insert into foo (id1, id2, categories) values ('foo', 'bar', 
 {'firstkey':'one', 'secondkey':'two'});
 {noformat}
 Now try to query the existing values index by key:
 {noformat}
 cqlsh:test> select * from foo where categories contains key 'firstkey';
 list index out of range
 {noformat}
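 For contrast, CONTAINS KEY is meant to be served by a keys() index (hedged example; the index name is made up, and the values index above would have to be dropped first since only one index per column is currently allowed):
 {noformat}
 cqlsh:test> DROP INDEX test.foo_categories_idx;
 cqlsh:test> CREATE INDEX foo_categories_keys_idx ON test.foo (keys(categories));
 cqlsh:test> select * from foo where categories contains key 'firstkey';
 {noformat}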



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8155) confusing error when erroneously querying map secondary index

2014-10-21 Thread Philip Thompson (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8155?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14178533#comment-14178533
 ] 

Philip Thompson commented on CASSANDRA-8155:


[~rhatch] Does this error also happen outside of cqlsh?

 confusing error when erroneously querying map secondary index
 -

 Key: CASSANDRA-8155
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8155
 Project: Cassandra
  Issue Type: Bug
Reporter: Russ Hatch
Priority: Minor
  Labels: cqlsh, lhf

 With a secondary index on values, attempting to query by key returns an error 
 message of "list index out of range".
 This is kinda a similar issue to CASSANDRA-8147 (but that scenario results in 
 no error when there probably should be one).
 To repro:
 {noformat}
 cqlsh:test> CREATE TABLE test.foo (
 ... id1 text,
 ... id2 text,
 ... categories map<text, text>,
 ... PRIMARY KEY (id1, id2));
 cqlsh:test> CREATE INDEX foo_categories_idx ON test.foo (categories);
 cqlsh:test> insert into foo (id1, id2, categories) values ('foo', 'bar', 
 {'firstkey':'one', 'secondkey':'two'});
 {noformat}
 Now try to query the existing values index by key:
 {noformat}
 cqlsh:test> select * from foo where categories contains key 'firstkey';
 list index out of range
 {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8155) confusing error when erroneously querying map secondary index

2014-10-21 Thread Philip Thompson (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8155?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14178531#comment-14178531
 ] 

Philip Thompson commented on CASSANDRA-8155:


This probably the same problem that was affecting 7891

 confusing error when erroneously querying map secondary index
 -

 Key: CASSANDRA-8155
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8155
 Project: Cassandra
  Issue Type: Bug
Reporter: Russ Hatch
Priority: Minor
  Labels: cqlsh, lhf

 With a secondary index on values, attempting to query by key returns an error 
 message of "list index out of range".
 This is kinda a similar issue to CASSANDRA-8147 (but that scenario results in 
 no error when there probably should be one).
 To repro:
 {noformat}
 cqlsh:test> CREATE TABLE test.foo (
 ... id1 text,
 ... id2 text,
 ... categories map<text, text>,
 ... PRIMARY KEY (id1, id2));
 cqlsh:test> CREATE INDEX foo_categories_idx ON test.foo (categories);
 cqlsh:test> insert into foo (id1, id2, categories) values ('foo', 'bar', 
 {'firstkey':'one', 'secondkey':'two'});
 {noformat}
 Now try to query the existing values index by key:
 {noformat}
 cqlsh:test> select * from foo where categories contains key 'firstkey';
 list index out of range
 {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


git commit: Dont notify when replacing tmplink-files

2014-10-21 Thread marcuse
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-2.1 18cd3e320 -> 3261d5e66


Dont notify when replacing tmplink-files

Patch by marcuse; reviewed by yukim for CASSANDRA-8157


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/3261d5e6
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/3261d5e6
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/3261d5e6

Branch: refs/heads/cassandra-2.1
Commit: 3261d5e668f341464fc322b6bc424b33ce3adffd
Parents: 18cd3e3
Author: Marcus Eriksson marc...@apache.org
Authored: Tue Oct 21 15:46:05 2014 +0200
Committer: Marcus Eriksson marc...@apache.org
Committed: Tue Oct 21 17:57:32 2014 +0200

--
 CHANGES.txt   |  1 +
 src/java/org/apache/cassandra/db/DataTracker.java |  4 ++--
 .../cassandra/io/sstable/IndexSummaryManager.java |  2 +-
 .../cassandra/io/sstable/SSTableRewriter.java | 18 +-
 .../io/sstable/IndexSummaryManagerTest.java   |  2 +-
 .../cassandra/io/sstable/SSTableReaderTest.java   |  2 +-
 6 files changed, 15 insertions(+), 14 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/3261d5e6/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 09ab91b..96a5e23 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 2.1.1
+ * Dont notify when replacing tmplink files (CASSANDRA-8157)
  * Fix validation with multiple CONTAINS clause (CASSANDRA-8131)
  * Fix validation of collections in TriggerExecutor (CASSANDRA-8146)
  * Fix IllegalArgumentException when a list of IN values containing tuples

http://git-wip-us.apache.org/repos/asf/cassandra/blob/3261d5e6/src/java/org/apache/cassandra/db/DataTracker.java
--
diff --git a/src/java/org/apache/cassandra/db/DataTracker.java 
b/src/java/org/apache/cassandra/db/DataTracker.java
index 2ff040c..7393323 100644
--- a/src/java/org/apache/cassandra/db/DataTracker.java
+++ b/src/java/org/apache/cassandra/db/DataTracker.java
@@ -358,7 +358,7 @@ public class DataTracker
  * @param oldSSTables replaced readers
  * @param newSSTables replacement readers
  */
-public void replaceReaders(Collection<SSTableReader> oldSSTables, 
Collection<SSTableReader> newSSTables)
+public void replaceReaders(Collection<SSTableReader> oldSSTables, 
Collection<SSTableReader> newSSTables, boolean notify)
 {
 View currentView, newView;
 do
@@ -368,7 +368,7 @@ public class DataTracker
 }
 while (!view.compareAndSet(currentView, newView));
 
-if (!oldSSTables.isEmpty())
+if (!oldSSTables.isEmpty() && notify)
 notifySSTablesChanged(oldSSTables, newSSTables, 
OperationType.COMPACTION);
 
 for (SSTableReader sstable : newSSTables)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/3261d5e6/src/java/org/apache/cassandra/io/sstable/IndexSummaryManager.java
--
diff --git a/src/java/org/apache/cassandra/io/sstable/IndexSummaryManager.java 
b/src/java/org/apache/cassandra/io/sstable/IndexSummaryManager.java
index d5b7364..e39d75d 100644
--- a/src/java/org/apache/cassandra/io/sstable/IndexSummaryManager.java
+++ b/src/java/org/apache/cassandra/io/sstable/IndexSummaryManager.java
@@ -426,7 +426,7 @@ public class IndexSummaryManager implements 
IndexSummaryManagerMBean
 
 for (DataTracker tracker : replacedByTracker.keySet())
 {
-tracker.replaceReaders(replacedByTracker.get(tracker), 
replacementsByTracker.get(tracker));
+tracker.replaceReaders(replacedByTracker.get(tracker), 
replacementsByTracker.get(tracker), true);
 newSSTables.addAll(replacementsByTracker.get(tracker));
 }
 

http://git-wip-us.apache.org/repos/asf/cassandra/blob/3261d5e6/src/java/org/apache/cassandra/io/sstable/SSTableRewriter.java
--
diff --git a/src/java/org/apache/cassandra/io/sstable/SSTableRewriter.java 
b/src/java/org/apache/cassandra/io/sstable/SSTableRewriter.java
index 4055b42..76677ac 100644
--- a/src/java/org/apache/cassandra/io/sstable/SSTableRewriter.java
+++ b/src/java/org/apache/cassandra/io/sstable/SSTableRewriter.java
@@ -174,7 +174,7 @@ public class SSTableRewriter
 SSTableReader reader = writer.openEarly(maxAge);
 if (reader != null)
 {
-replaceReader(currentlyOpenedEarly, reader);
+replaceReader(currentlyOpenedEarly, reader, false);
 currentlyOpenedEarly = reader;
 currentlyOpenedEarlyAt = writer.getFilePointer();
  

[2/3] git commit: Dont notify when replacing tmplink-files

2014-10-21 Thread marcuse
Dont notify when replacing tmplink-files

Patch by marcuse; reviewed by yukim for CASSANDRA-8157


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/3261d5e6
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/3261d5e6
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/3261d5e6

Branch: refs/heads/trunk
Commit: 3261d5e668f341464fc322b6bc424b33ce3adffd
Parents: 18cd3e3
Author: Marcus Eriksson marc...@apache.org
Authored: Tue Oct 21 15:46:05 2014 +0200
Committer: Marcus Eriksson marc...@apache.org
Committed: Tue Oct 21 17:57:32 2014 +0200

--
 CHANGES.txt   |  1 +
 src/java/org/apache/cassandra/db/DataTracker.java |  4 ++--
 .../cassandra/io/sstable/IndexSummaryManager.java |  2 +-
 .../cassandra/io/sstable/SSTableRewriter.java | 18 +-
 .../io/sstable/IndexSummaryManagerTest.java   |  2 +-
 .../cassandra/io/sstable/SSTableReaderTest.java   |  2 +-
 6 files changed, 15 insertions(+), 14 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/3261d5e6/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 09ab91b..96a5e23 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 2.1.1
+ * Dont notify when replacing tmplink files (CASSANDRA-8157)
  * Fix validation with multiple CONTAINS clause (CASSANDRA-8131)
  * Fix validation of collections in TriggerExecutor (CASSANDRA-8146)
  * Fix IllegalArgumentException when a list of IN values containing tuples

http://git-wip-us.apache.org/repos/asf/cassandra/blob/3261d5e6/src/java/org/apache/cassandra/db/DataTracker.java
--
diff --git a/src/java/org/apache/cassandra/db/DataTracker.java 
b/src/java/org/apache/cassandra/db/DataTracker.java
index 2ff040c..7393323 100644
--- a/src/java/org/apache/cassandra/db/DataTracker.java
+++ b/src/java/org/apache/cassandra/db/DataTracker.java
@@ -358,7 +358,7 @@ public class DataTracker
  * @param oldSSTables replaced readers
  * @param newSSTables replacement readers
  */
-public void replaceReaders(Collection<SSTableReader> oldSSTables, 
Collection<SSTableReader> newSSTables)
+public void replaceReaders(Collection<SSTableReader> oldSSTables, 
Collection<SSTableReader> newSSTables, boolean notify)
 {
 View currentView, newView;
 do
@@ -368,7 +368,7 @@ public class DataTracker
 }
 while (!view.compareAndSet(currentView, newView));
 
-if (!oldSSTables.isEmpty())
+if (!oldSSTables.isEmpty() && notify)
 notifySSTablesChanged(oldSSTables, newSSTables, 
OperationType.COMPACTION);
 
 for (SSTableReader sstable : newSSTables)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/3261d5e6/src/java/org/apache/cassandra/io/sstable/IndexSummaryManager.java
--
diff --git a/src/java/org/apache/cassandra/io/sstable/IndexSummaryManager.java 
b/src/java/org/apache/cassandra/io/sstable/IndexSummaryManager.java
index d5b7364..e39d75d 100644
--- a/src/java/org/apache/cassandra/io/sstable/IndexSummaryManager.java
+++ b/src/java/org/apache/cassandra/io/sstable/IndexSummaryManager.java
@@ -426,7 +426,7 @@ public class IndexSummaryManager implements 
IndexSummaryManagerMBean
 
 for (DataTracker tracker : replacedByTracker.keySet())
 {
-tracker.replaceReaders(replacedByTracker.get(tracker), 
replacementsByTracker.get(tracker));
+tracker.replaceReaders(replacedByTracker.get(tracker), 
replacementsByTracker.get(tracker), true);
 newSSTables.addAll(replacementsByTracker.get(tracker));
 }
 

http://git-wip-us.apache.org/repos/asf/cassandra/blob/3261d5e6/src/java/org/apache/cassandra/io/sstable/SSTableRewriter.java
--
diff --git a/src/java/org/apache/cassandra/io/sstable/SSTableRewriter.java 
b/src/java/org/apache/cassandra/io/sstable/SSTableRewriter.java
index 4055b42..76677ac 100644
--- a/src/java/org/apache/cassandra/io/sstable/SSTableRewriter.java
+++ b/src/java/org/apache/cassandra/io/sstable/SSTableRewriter.java
@@ -174,7 +174,7 @@ public class SSTableRewriter
 SSTableReader reader = writer.openEarly(maxAge);
 if (reader != null)
 {
-replaceReader(currentlyOpenedEarly, reader);
+replaceReader(currentlyOpenedEarly, reader, false);
 currentlyOpenedEarly = reader;
 currentlyOpenedEarlyAt = writer.getFilePointer();
 moveStarts(reader, Functions.constant(reader.last), false);
@@ -197,7 +197,7 @@ 

[1/3] git commit: Fix validation with multiple CONTAINS clauses

2014-10-21 Thread marcuse
Repository: cassandra
Updated Branches:
  refs/heads/trunk c23560347 -> a0a30e03a


Fix validation with multiple CONTAINS clauses

patch by blerer; reviewed by slebresne for CASSANDRA-8131


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/18cd3e32
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/18cd3e32
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/18cd3e32

Branch: refs/heads/trunk
Commit: 18cd3e3205ac3301ebce0b047aa444e50388083b
Parents: ec866fa
Author: Benjamin Lerer benjamin.le...@datastax.com
Authored: Tue Oct 21 16:39:35 2014 +0200
Committer: Sylvain Lebresne sylv...@datastax.com
Committed: Tue Oct 21 16:40:38 2014 +0200

--
 CHANGES.txt |  1 +
 .../cql3/statements/SelectStatement.java| 39 +++-
 .../statements/SingleColumnRestriction.java | 10 +
 .../cassandra/cql3/ContainsRelationTest.java| 22 +++
 4 files changed, 71 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/18cd3e32/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 815bce1..09ab91b 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 2.1.1
+ * Fix validation with multiple CONTAINS clause (CASSANDRA-8131)
  * Fix validation of collections in TriggerExecutor (CASSANDRA-8146)
  * Fix IllegalArgumentException when a list of IN values containing tuples
is passed as a single arg to a prepared statement with the v1 or v2

http://git-wip-us.apache.org/repos/asf/cassandra/blob/18cd3e32/src/java/org/apache/cassandra/cql3/statements/SelectStatement.java
--
diff --git a/src/java/org/apache/cassandra/cql3/statements/SelectStatement.java 
b/src/java/org/apache/cassandra/cql3/statements/SelectStatement.java
index a8c9d44..233f3db 100644
--- a/src/java/org/apache/cassandra/cql3/statements/SelectStatement.java
+++ b/src/java/org/apache/cassandra/cql3/statements/SelectStatement.java
@@ -31,6 +31,7 @@ import org.github.jamm.MemoryMeter;
 
 import org.apache.cassandra.auth.Permission;
 import org.apache.cassandra.cql3.*;
+import org.apache.cassandra.cql3.statements.SingleColumnRestriction.Contains;
 import org.apache.cassandra.db.composites.*;
 import org.apache.cassandra.transport.messages.ResultMessage;
 import org.apache.cassandra.config.CFMetaData;
@@ -1349,6 +1350,27 @@ public class SelectStatement implements CQLStatement, 
MeasurableForPreparedCache
  throw new InvalidRequestException(String.format("SELECT 
DISTINCT queries must request all the partition key columns (missing %s)", 
def.name));
 }
 
+/**
+ * Checks if the specified column is restricted by multiple contains or 
contains key.
+ *
+ * @param columnDef the definition of the column to check
+ * @return <code>true</code> the specified column is restricted by 
multiple contains or contains key,
+ * <code>false</code> otherwise
+ */
+private boolean isRestrictedByMultipleContains(ColumnDefinition columnDef)
+{
+if (!columnDef.type.isCollection())
+return false;
+
+Restriction restriction = metadataRestrictions.get(columnDef.name);
+
+if (!(restriction instanceof Contains))
+return false;
+
+Contains contains = (Contains) restriction;
+return (contains.numberOfValues() + contains.numberOfKeys()) > 1;
+}
+
 public static class RawStatement extends CFStatement
 {
 private final Parameters parameters;
@@ -2011,7 +2033,7 @@ public class SelectStatement implements CQLStatement, 
MeasurableForPreparedCache
 // We will potentially filter data if either:
 //  - Have more than one IndexExpression
 //  - Have no index expression and the column filter is not 
the identity
-if (stmt.restrictedColumns.size() > 1 || 
(stmt.restrictedColumns.isEmpty() && !stmt.columnFilterIsIdentity()))
+if (needFiltering(stmt))
  throw new InvalidRequestException("Cannot execute this 
query as it might involve data filtering and " +
  "thus may have 
unpredictable performance. If you want to execute " +
  "this query despite the 
performance unpredictability, use ALLOW FILTERING");
@@ -2036,6 +2058,21 @@ public class SelectStatement implements CQLStatement, 
MeasurableForPreparedCache
 }
 }
 
+/**
+ * Checks if the specified statement will need to filter the data.
+ *
+ * @param stmt the statement to test.
+ * @return <code>true</code> if the 

[3/3] git commit: Merge branch 'cassandra-2.1' into trunk

2014-10-21 Thread marcuse
Merge branch 'cassandra-2.1' into trunk


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/a0a30e03
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/a0a30e03
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/a0a30e03

Branch: refs/heads/trunk
Commit: a0a30e03a1ff571b0804ea20c1575fa39542eb6b
Parents: c235603 3261d5e
Author: Marcus Eriksson marc...@apache.org
Authored: Tue Oct 21 17:57:50 2014 +0200
Committer: Marcus Eriksson marc...@apache.org
Committed: Tue Oct 21 17:57:50 2014 +0200

--
 CHANGES.txt |  2 +
 .../cql3/statements/SelectStatement.java| 39 +++-
 .../statements/SingleColumnRestriction.java | 10 +
 .../org/apache/cassandra/db/DataTracker.java|  4 +-
 .../io/sstable/IndexSummaryManager.java |  2 +-
 .../cassandra/io/sstable/SSTableRewriter.java   | 18 -
 .../cassandra/cql3/ContainsRelationTest.java| 22 +++
 .../io/sstable/IndexSummaryManagerTest.java |  2 +-
 .../cassandra/io/sstable/SSTableReaderTest.java |  2 +-
 9 files changed, 86 insertions(+), 15 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/a0a30e03/CHANGES.txt
--
diff --cc CHANGES.txt
index 85cb24b,96a5e23..cdae72a
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -1,35 -1,6 +1,37 @@@
 +3.0
 + * Integrate JMH for microbenchmarks (CASSANDRA-8151)
 + * Keep sstable levels when bootstrapping (CASSANDRA-7460)
 + * Add Sigar library and perform basic OS settings check on startup 
(CASSANDRA-7838)
 + * Support for aggregation functions (CASSANDRA-4914)
 + * Remove cassandra-cli (CASSANDRA-7920)
 + * Accept dollar quoted strings in CQL (CASSANDRA-7769)
 + * Make assassinate a first class command (CASSANDRA-7935)
 + * Support IN clause on any clustering column (CASSANDRA-4762)
 + * Improve compaction logging (CASSANDRA-7818)
 + * Remove YamlFileNetworkTopologySnitch (CASSANDRA-7917)
 + * Do anticompaction in groups (CASSANDRA-6851)
 + * Support pure user-defined functions (CASSANDRA-7395, 7526, 7562, 7740, 
7781, 7929,
 +   7924, 7812, 8063)
 + * Permit configurable timestamps with cassandra-stress (CASSANDRA-7416)
 + * Move sstable RandomAccessReader to nio2, which allows using the
 +   FILE_SHARE_DELETE flag on Windows (CASSANDRA-4050)
 + * Remove CQL2 (CASSANDRA-5918)
 + * Add Thrift get_multi_slice call (CASSANDRA-6757)
 + * Optimize fetching multiple cells by name (CASSANDRA-6933)
 + * Allow compilation in java 8 (CASSANDRA-7028)
 + * Make incremental repair default (CASSANDRA-7250)
 + * Enable code coverage thru JaCoCo (CASSANDRA-7226)
 + * Switch external naming of 'column families' to 'tables' (CASSANDRA-4369) 
 + * Shorten SSTable path (CASSANDRA-6962)
 + * Use unsafe mutations for most unit tests (CASSANDRA-6969)
 + * Fix race condition during calculation of pending ranges (CASSANDRA-7390)
 + * Fail on very large batch sizes (CASSANDRA-8011)
 + * improve concurrency of repair (CASSANDRA-6455)
 +
 +
  2.1.1
+  * Dont notify when replacing tmplink files (CASSANDRA-8157)
+  * Fix validation with multiple CONTAINS clause (CASSANDRA-8131)
   * Fix validation of collections in TriggerExecutor (CASSANDRA-8146)
   * Fix IllegalArgumentException when a list of IN values containing tuples
 is passed as a single arg to a prepared statement with the v1 or v2

http://git-wip-us.apache.org/repos/asf/cassandra/blob/a0a30e03/src/java/org/apache/cassandra/cql3/statements/SelectStatement.java
--
diff --cc src/java/org/apache/cassandra/cql3/statements/SelectStatement.java
index ff905c1,233f3db..30c5390
--- a/src/java/org/apache/cassandra/cql3/statements/SelectStatement.java
+++ b/src/java/org/apache/cassandra/cql3/statements/SelectStatement.java
@@@ -31,8 -31,8 +31,9 @@@ import org.github.jamm.MemoryMeter
  
  import org.apache.cassandra.auth.Permission;
  import org.apache.cassandra.cql3.*;
+ import org.apache.cassandra.cql3.statements.SingleColumnRestriction.Contains;
  import org.apache.cassandra.db.composites.*;
 +import org.apache.cassandra.db.composites.Composite.EOC;
  import org.apache.cassandra.transport.messages.ResultMessage;
  import org.apache.cassandra.config.CFMetaData;
  import org.apache.cassandra.config.ColumnDefinition;

http://git-wip-us.apache.org/repos/asf/cassandra/blob/a0a30e03/test/unit/org/apache/cassandra/cql3/ContainsRelationTest.java
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/a0a30e03/test/unit/org/apache/cassandra/io/sstable/IndexSummaryManagerTest.java
--


Git Push Summary

2014-10-21 Thread slebresne
Repository: cassandra
Updated Tags:  refs/tags/2.1.1-tentative [created] 3261d5e66


[jira] [Assigned] (CASSANDRA-8154) desc table output shows key-only index ambiguously

2014-10-21 Thread Tyler Hobbs (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8154?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tyler Hobbs reassigned CASSANDRA-8154:
--

Assignee: Tyler Hobbs

 desc table output shows key-only index ambiguously
 --

 Key: CASSANDRA-8154
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8154
 Project: Cassandra
  Issue Type: Bug
Reporter: Russ Hatch
Assignee: Tyler Hobbs
Priority: Minor
  Labels: cqlsh

 When creating a secondary index on a map type, for keys, the DESC TABLE 
 output does not produce correct DDL (it omits the keys() part). So if someone 
 uses describe to recreate a schema they could end up with a values index 
 instead of a keys index.
 First, create a table and add an index:
 {noformat}
 CREATE TABLE test.foo (
 id1 text,
 id2 text,
 categories map<text, text>,
 PRIMARY KEY (id1, id2));
 create index on foo(keys(categories));
 {noformat}
 Now DESC TABLE and you'll see the incomplete index DDL:
 {noformat}
 CREATE TABLE test.foo (
 id1 text,
 id2 text,
 categories map<text, text>,
 PRIMARY KEY (id1, id2)
 ) WITH CLUSTERING ORDER BY (id2 ASC)
 ...snip..
 CREATE INDEX foo_categories_idx ON test.foo (categories);
 {noformat}
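 Presumably the DESC TABLE output needs to emit the keys() form so the schema round-trips, along the lines of:
 {noformat}
 CREATE INDEX foo_categories_idx ON test.foo (keys(categories));
 {noformat}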



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (CASSANDRA-8158) network_topology_test is failing with inconsistent results

2014-10-21 Thread Philip Thompson (JIRA)
Philip Thompson created CASSANDRA-8158:
--

 Summary: network_topology_test is failing with inconsistent results
 Key: CASSANDRA-8158
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8158
 Project: Cassandra
  Issue Type: Test
Reporter: Philip Thompson


replication_test.py:ReplicationTest.network_topology_test is a no-vnode test 
that has been failing in 2.0 and 2.1 for quite a while. Sample cassci output 
here:
http://cassci.datastax.com/job/cassandra-2.1_novnode_dtest/lastCompletedBuild/testReport/replication_test/ReplicationTest/network_topology_test/

The missing replicas marked in the failure output are very inconsistent. Due to 
the fact that it is failing on practically every version, and no bugs have been 
filed relating to a failure of the feature this is testing, there is most 
likely an issue with the test.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-8144) Creating CQL2 tables fails in C* 2.1

2014-10-21 Thread Aleksey Yeschenko (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8144?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksey Yeschenko updated CASSANDRA-8144:
-
Attachment: 8144.txt

 Creating CQL2 tables fails in C* 2.1
 

 Key: CASSANDRA-8144
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8144
 Project: Cassandra
  Issue Type: Bug
Reporter: Sam Tunnicliffe
Assignee: Aleksey Yeschenko
 Attachments: 8144.txt


 Although cql2 has been deprecated and removed from cqlsh, the functionality 
 is still accessible using thrift. However, it seems that creation of new 
 tables via cql2 is broken in 2.1.
 {code}
 CREATE KEYSPACE test_ks WITH strategy_class='SimpleStrategy' AND 
 replication_factor = '1';
 CREATE TABLE test_cf (id text PRIMARY KEY, value text, test text);
 {code}
 fails with the following stacktrace on the server:
 {code}
 ERROR [MigrationStage:1] 2014-10-20 13:53:29,506 CassandraDaemon.java:153 - 
 Exception in thread Thread[MigrationStage:1,5,main]
 java.lang.IndexOutOfBoundsException: Index: 0, Size: 0
 at java.util.ArrayList.rangeCheck(ArrayList.java:635) ~[na:1.7.0_51]
 at java.util.ArrayList.set(ArrayList.java:426) ~[na:1.7.0_51]
 at org.apache.cassandra.config.CFMetaData.rebuild(CFMetaData.java:2072) 
 ~[main/:na]
 at 
 org.apache.cassandra.config.CFMetaData.fromSchemaNoTriggers(CFMetaData.java:1842)
  ~[main/:na]
 at 
 org.apache.cassandra.config.CFMetaData.fromSchema(CFMetaData.java:1882) 
 ~[main/:na]
 at 
 org.apache.cassandra.config.KSMetaData.deserializeColumnFamilies(KSMetaData.java:320)
  ~[main/:na]
 at 
 org.apache.cassandra.db.DefsTables.mergeColumnFamilies(DefsTables.java:279) 
 ~[main/:na]
 at 
 org.apache.cassandra.db.DefsTables.mergeSchemaInternal(DefsTables.java:193) 
 ~[main/:na]
 at org.apache.cassandra.db.DefsTables.mergeSchema(DefsTables.java:165) 
 ~[main/:na]
 at 
 org.apache.cassandra.service.MigrationManager$2.runMayThrow(MigrationManager.java:393)
  ~[main/:na]
 at 
 org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28) 
 ~[main/:na]
 at 
 java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471) 
 ~[na:1.7.0_51]
 at java.util.concurrent.FutureTask.run(FutureTask.java:262) ~[na:1.7.0_51]
 at 
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
  ~[na:1.7.0_51]
 at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
  [na:1.7.0_51]
 at java.lang.Thread.run(Thread.java:744) [na:1.7.0_51]
 ERROR [Thrift:1] 2014-10-20 13:53:29,506 CustomTThreadPoolServer.java:219 - 
 Error occurred during processing of message.
 java.lang.RuntimeException: java.util.concurrent.ExecutionException: 
 java.lang.IndexOutOfBoundsException: Index: 0, Size: 0
 at 
 org.apache.cassandra.utils.FBUtilities.waitOnFuture(FBUtilities.java:397) 
 ~[main/:na]
 at 
 org.apache.cassandra.service.MigrationManager.announce(MigrationManager.java:374)
  ~[main/:na]
 at 
 org.apache.cassandra.service.MigrationManager.announceNewColumnFamily(MigrationManager.java:249)
  ~[main/:na]
 at 
 org.apache.cassandra.service.MigrationManager.announceNewColumnFamily(MigrationManager.java:235)
  ~[main/:na]
 at 
 org.apache.cassandra.cql.QueryProcessor.processStatement(QueryProcessor.java:662)
  ~[main/:na]
 at 
 org.apache.cassandra.cql.QueryProcessor.process(QueryProcessor.java:802) 
 ~[main/:na]
 at 
 org.apache.cassandra.thrift.CassandraServer.execute_cql_query(CassandraServer.java:1941)
  ~[main/:na]
 at 
 org.apache.cassandra.thrift.Cassandra$Processor$execute_cql_query.getResult(Cassandra.java:4558)
  ~[thrift/:na]
 at 
 org.apache.cassandra.thrift.Cassandra$Processor$execute_cql_query.getResult(Cassandra.java:4542)
  ~[thrift/:na]
 at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:39) 
 ~[libthrift-0.9.1.jar:0.9.1]
 at org.apache.thrift.TBaseProcessor.process(TBaseProcessor.java:39) 
 ~[libthrift-0.9.1.jar:0.9.1]
 at 
 org.apache.cassandra.thrift.CustomTThreadPoolServer$WorkerProcess.run(CustomTThreadPoolServer.java:201)
  ~[main/:na]
 at 
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
  [na:1.7.0_51]
 at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
  [na:1.7.0_51]
 at java.lang.Thread.run(Thread.java:744) [na:1.7.0_51]
 Caused by: java.util.concurrent.ExecutionException: 
 java.lang.IndexOutOfBoundsException: Index: 0, Size: 0
 at java.util.concurrent.FutureTask.report(FutureTask.java:122) 
 ~[na:1.7.0_51]
 at java.util.concurrent.FutureTask.get(FutureTask.java:188) ~[na:1.7.0_51]
 at 
 org.apache.cassandra.utils.FBUtilities.waitOnFuture(FBUtilities.java:393) 
 ~[main/:na]
 ... 14 common frames omitted
 Caused by: 

[jira] [Created] (CASSANDRA-8159) NPE in SSTableReader

2014-10-21 Thread Alexander Sterligov (JIRA)
Alexander Sterligov created CASSANDRA-8159:
--

 Summary: NPE in SSTableReader
 Key: CASSANDRA-8159
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8159
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Reporter: Alexander Sterligov
Priority: Minor


Log file contains a lot of the following exceptions:

{quote}
WARN  [CompactionExecutor:15674] 2014-10-21 20:57:56,838 OutputHandler.java:52 
- Out of order row detected (DecoratedKey(8937955371032053430, 
39352e3130382e3234322e32302d6765744d656d42756572734d62) found
 after DecoratedKey(9186481584950194146, 
80010001000c62617463685f6d757461746556640d00010b0d000100))
{quote}

I tried to scrub sstables by nodetool and got:

{quote}
ERROR [CompactionExecutor:15674] 2014-10-21 20:57:57,229 
CassandraDaemon.java:166 - Exception in thread 
Thread[CompactionExecutor:15674,1,RMI Runtime]
java.lang.NullPointerException: null
at 
org.apache.cassandra.io.sstable.SSTableReader.cloneWithNewStart(SSTableReader.java:942)
 ~[apache-cassandra-2.1.0.jar:2.1.0]
at 
org.apache.cassandra.io.sstable.SSTableRewriter.moveStarts(SSTableRewriter.java:238)
 ~[apache-cassandra-2.1.0.jar:2.1.0]
at 
org.apache.cassandra.io.sstable.SSTableRewriter.finish(SSTableRewriter.java:318)
 ~[apache-cassandra-2.1.0.jar:2.1.0]
at org.apache.cassandra.db.compaction.Scrubber.scrub(Scrubber.java:257) 
~[apache-cassandra-2.1.0.jar:2.1.0]
at 
org.apache.cassandra.db.compaction.CompactionManager.scrubOne(CompactionManager.java:592)
 ~[apache-cassandra-2.1.0.jar:2.1.0]
at 
org.apache.cassandra.db.compaction.CompactionManager.access$300(CompactionManager.java:100)
 ~[apache-cassandra-2.1.0.jar:2.1.0]
at 
org.apache.cassandra.db.compaction.CompactionManager$3.execute(CompactionManager.java:315)
 ~[apache-cassandra-2.1.0.jar:2.1.0]
at 
org.apache.cassandra.db.compaction.CompactionManager$2.call(CompactionManager.java:270)
 ~[apache-cassandra-2.1.0.jar:2.1.0]
at java.util.concurrent.FutureTask.run(FutureTask.java:262) 
~[na:1.7.0_51]
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) 
~[na:1.7.0_51]
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) 
[na:1.7.0_51]
at java.lang.Thread.run(Thread.java:744) [na:1.7.0_51]
{quote}

sstablescrub successfully fixed sstables.
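
For reference, the offline tool is invoked roughly like this (keyspace/table names are placeholders, not taken from this report):
{code}
# stop the node first; sstablescrub is the offline counterpart of nodetool scrub
sstablescrub <keyspace> <table>
{code}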



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-8159) NPE in SSTableReader

2014-10-21 Thread Alexander Sterligov (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8159?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexander Sterligov updated CASSANDRA-8159:
---
Description: 
Log file contained a lot of the following exceptions:

{quote}
WARN  [CompactionExecutor:15674] 2014-10-21 20:57:56,838 OutputHandler.java:52 
- Out of order row detected (DecoratedKey(8937955371032053430, 
39352e3130382e3234322e32302d6765744d656d42756572734d62) found
 after DecoratedKey(9186481584950194146, 
80010001000c62617463685f6d757461746556640d00010b0d000100))
{quote}

I tried to scrub sstables by nodetool and got:

{quote}
ERROR [CompactionExecutor:15674] 2014-10-21 20:57:57,229 
CassandraDaemon.java:166 - Exception in thread 
Thread[CompactionExecutor:15674,1,RMI Runtime]
java.lang.NullPointerException: null
at 
org.apache.cassandra.io.sstable.SSTableReader.cloneWithNewStart(SSTableReader.java:942)
 ~[apache-cassandra-2.1.0.jar:2.1.0]
at 
org.apache.cassandra.io.sstable.SSTableRewriter.moveStarts(SSTableRewriter.java:238)
 ~[apache-cassandra-2.1.0.jar:2.1.0]
at 
org.apache.cassandra.io.sstable.SSTableRewriter.finish(SSTableRewriter.java:318)
 ~[apache-cassandra-2.1.0.jar:2.1.0]
at org.apache.cassandra.db.compaction.Scrubber.scrub(Scrubber.java:257) 
~[apache-cassandra-2.1.0.jar:2.1.0]
at 
org.apache.cassandra.db.compaction.CompactionManager.scrubOne(CompactionManager.java:592)
 ~[apache-cassandra-2.1.0.jar:2.1.0]
at 
org.apache.cassandra.db.compaction.CompactionManager.access$300(CompactionManager.java:100)
 ~[apache-cassandra-2.1.0.jar:2.1.0]
at 
org.apache.cassandra.db.compaction.CompactionManager$3.execute(CompactionManager.java:315)
 ~[apache-cassandra-2.1.0.jar:2.1.0]
at 
org.apache.cassandra.db.compaction.CompactionManager$2.call(CompactionManager.java:270)
 ~[apache-cassandra-2.1.0.jar:2.1.0]
at java.util.concurrent.FutureTask.run(FutureTask.java:262) 
~[na:1.7.0_51]
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) 
~[na:1.7.0_51]
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) 
[na:1.7.0_51]
at java.lang.Thread.run(Thread.java:744) [na:1.7.0_51]
{quote}

sstablescrub successfully fixed sstables.

  was:
Log file contains a lot of following exceptions:

{quote}
WARN  [CompactionExecutor:15674] 2014-10-21 20:57:56,838 OutputHandler.java:52 
- Out of order row detected (DecoratedKey(8937955371032053430, 
39352e3130382e3234322e32302d6765744d656d42756572734d62) found
 after DecoratedKey(9186481584950194146, 
80010001000c62617463685f6d757461746556640d00010b0d000100))
{quote}

I tried to scrub sstables by nodetool and got:

{quote}
ERROR [CompactionExecutor:15674] 2014-10-21 20:57:57,229 
CassandraDaemon.java:166 - Exception in thread 
Thread[CompactionExecutor:15674,1,RMI Runtime]
java.lang.NullPointerException: null
at 
org.apache.cassandra.io.sstable.SSTableReader.cloneWithNewStart(SSTableReader.java:942)
 ~[apache-cassandra-2.1.0.jar:2.1.0]
at 
org.apache.cassandra.io.sstable.SSTableRewriter.moveStarts(SSTableRewriter.java:238)
 ~[apache-cassandra-2.1.0.jar:2.1.0]
at 
org.apache.cassandra.io.sstable.SSTableRewriter.finish(SSTableRewriter.java:318)
 ~[apache-cassandra-2.1.0.jar:2.1.0]
at org.apache.cassandra.db.compaction.Scrubber.scrub(Scrubber.java:257) 
~[apache-cassandra-2.1.0.jar:2.1.0]
at 
org.apache.cassandra.db.compaction.CompactionManager.scrubOne(CompactionManager.java:592)
 ~[apache-cassandra-2.1.0.jar:2.1.0]
at 
org.apache.cassandra.db.compaction.CompactionManager.access$300(CompactionManager.java:100)
 ~[apache-cassandra-2.1.0.jar:2.1.0]
at 
org.apache.cassandra.db.compaction.CompactionManager$3.execute(CompactionManager.java:315)
 ~[apache-cassandra-2.1.0.jar:2.1.0]
at 
org.apache.cassandra.db.compaction.CompactionManager$2.call(CompactionManager.java:270)
 ~[apache-cassandra-2.1.0.jar:2.1.0]
at java.util.concurrent.FutureTask.run(FutureTask.java:262) 
~[na:1.7.0_51]
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) 
~[na:1.7.0_51]
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) 
[na:1.7.0_51]
at java.lang.Thread.run(Thread.java:744) [na:1.7.0_51]
{quote}

sstablescrub successfully fixed sstables.


 NPE in SSTableReader
 

 Key: CASSANDRA-8159
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8159
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Reporter: Alexander Sterligov
Priority: Minor

 Log file contained a lot of the following exceptions:
 {quote}
 WARN  [CompactionExecutor:15674] 2014-10-21 20:57:56,838 
 OutputHandler.java:52 - Out of order row detected 
 

[jira] [Updated] (CASSANDRA-8144) Creating CQL2 tables fails in C* 2.1

2014-10-21 Thread Aleksey Yeschenko (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8144?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksey Yeschenko updated CASSANDRA-8144:
-
Attachment: (was: 8144.txt)

 Creating CQL2 tables fails in C* 2.1
 

 Key: CASSANDRA-8144
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8144
 Project: Cassandra
  Issue Type: Bug
Reporter: Sam Tunnicliffe
Assignee: Aleksey Yeschenko
 Fix For: 2.1.2


 Although cql2 has been deprecated and removed from cqlsh, the functionality 
 is still accessible using thrift. However, it seems that creation of new 
 tables via cql2 is broken in 2.1.
 {code}
 CREATE KEYSPACE test_ks WITH strategy_class='SimpleStrategy' AND 
 replication_factor = '1';
 CREATE TABLE test_cf (id text PRIMARY KEY, value text, test text);
 {code}
 fails with the following stacktrace on the server:
 {code}
 ERROR [MigrationStage:1] 2014-10-20 13:53:29,506 CassandraDaemon.java:153 - 
 Exception in thread Thread[MigrationStage:1,5,main]
 java.lang.IndexOutOfBoundsException: Index: 0, Size: 0
 at java.util.ArrayList.rangeCheck(ArrayList.java:635) ~[na:1.7.0_51]
 at java.util.ArrayList.set(ArrayList.java:426) ~[na:1.7.0_51]
 at org.apache.cassandra.config.CFMetaData.rebuild(CFMetaData.java:2072) 
 ~[main/:na]
 at 
 org.apache.cassandra.config.CFMetaData.fromSchemaNoTriggers(CFMetaData.java:1842)
  ~[main/:na]
 at 
 org.apache.cassandra.config.CFMetaData.fromSchema(CFMetaData.java:1882) 
 ~[main/:na]
 at 
 org.apache.cassandra.config.KSMetaData.deserializeColumnFamilies(KSMetaData.java:320)
  ~[main/:na]
 at 
 org.apache.cassandra.db.DefsTables.mergeColumnFamilies(DefsTables.java:279) 
 ~[main/:na]
 at 
 org.apache.cassandra.db.DefsTables.mergeSchemaInternal(DefsTables.java:193) 
 ~[main/:na]
 at org.apache.cassandra.db.DefsTables.mergeSchema(DefsTables.java:165) 
 ~[main/:na]
 at 
 org.apache.cassandra.service.MigrationManager$2.runMayThrow(MigrationManager.java:393)
  ~[main/:na]
 at 
 org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28) 
 ~[main/:na]
 at 
 java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471) 
 ~[na:1.7.0_51]
 at java.util.concurrent.FutureTask.run(FutureTask.java:262) ~[na:1.7.0_51]
 at 
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
  ~[na:1.7.0_51]
 at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
  [na:1.7.0_51]
 at java.lang.Thread.run(Thread.java:744) [na:1.7.0_51]
 ERROR [Thrift:1] 2014-10-20 13:53:29,506 CustomTThreadPoolServer.java:219 - 
 Error occurred during processing of message.
 java.lang.RuntimeException: java.util.concurrent.ExecutionException: 
 java.lang.IndexOutOfBoundsException: Index: 0, Size: 0
 at 
 org.apache.cassandra.utils.FBUtilities.waitOnFuture(FBUtilities.java:397) 
 ~[main/:na]
 at 
 org.apache.cassandra.service.MigrationManager.announce(MigrationManager.java:374)
  ~[main/:na]
 at 
 org.apache.cassandra.service.MigrationManager.announceNewColumnFamily(MigrationManager.java:249)
  ~[main/:na]
 at 
 org.apache.cassandra.service.MigrationManager.announceNewColumnFamily(MigrationManager.java:235)
  ~[main/:na]
 at 
 org.apache.cassandra.cql.QueryProcessor.processStatement(QueryProcessor.java:662)
  ~[main/:na]
 at 
 org.apache.cassandra.cql.QueryProcessor.process(QueryProcessor.java:802) 
 ~[main/:na]
 at 
 org.apache.cassandra.thrift.CassandraServer.execute_cql_query(CassandraServer.java:1941)
  ~[main/:na]
 at 
 org.apache.cassandra.thrift.Cassandra$Processor$execute_cql_query.getResult(Cassandra.java:4558)
  ~[thrift/:na]
 at 
 org.apache.cassandra.thrift.Cassandra$Processor$execute_cql_query.getResult(Cassandra.java:4542)
  ~[thrift/:na]
 at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:39) 
 ~[libthrift-0.9.1.jar:0.9.1]
 at org.apache.thrift.TBaseProcessor.process(TBaseProcessor.java:39) 
 ~[libthrift-0.9.1.jar:0.9.1]
 at 
 org.apache.cassandra.thrift.CustomTThreadPoolServer$WorkerProcess.run(CustomTThreadPoolServer.java:201)
  ~[main/:na]
 at 
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
  [na:1.7.0_51]
 at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
  [na:1.7.0_51]
 at java.lang.Thread.run(Thread.java:744) [na:1.7.0_51]
 Caused by: java.util.concurrent.ExecutionException: 
 java.lang.IndexOutOfBoundsException: Index: 0, Size: 0
 at java.util.concurrent.FutureTask.report(FutureTask.java:122) 
 ~[na:1.7.0_51]
 at java.util.concurrent.FutureTask.get(FutureTask.java:188) ~[na:1.7.0_51]
 at 
 org.apache.cassandra.utils.FBUtilities.waitOnFuture(FBUtilities.java:393) 
 ~[main/:na]
 ... 14 common frames omitted
 Caused 

[jira] [Updated] (CASSANDRA-8144) Creating CQL2 tables fails in C* 2.1

2014-10-21 Thread Aleksey Yeschenko (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8144?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksey Yeschenko updated CASSANDRA-8144:
-
Attachment: 8144.txt

 Creating CQL2 tables fails in C* 2.1
 

 Key: CASSANDRA-8144
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8144
 Project: Cassandra
  Issue Type: Bug
Reporter: Sam Tunnicliffe
Assignee: Aleksey Yeschenko
 Fix For: 2.1.2

 Attachments: 8144.txt


 Although cql2 has been deprecated and removed from cqlsh, the functionality 
 is still accessible using thrift. However, it seems that creation of new 
 tables via cql2 is broken in 2.1.
 {code}
 CREATE KEYSPACE test_ks WITH strategy_class='SimpleStrategy' AND 
 replication_factor = '1';
 CREATE TABLE test_cf (id text PRIMARY KEY, value text, test text);
 {code}
 fails with the following stacktrace on the server:
 {code}
 ERROR [MigrationStage:1] 2014-10-20 13:53:29,506 CassandraDaemon.java:153 - 
 Exception in thread Thread[MigrationStage:1,5,main]
 java.lang.IndexOutOfBoundsException: Index: 0, Size: 0
 at java.util.ArrayList.rangeCheck(ArrayList.java:635) ~[na:1.7.0_51]
 at java.util.ArrayList.set(ArrayList.java:426) ~[na:1.7.0_51]
 at org.apache.cassandra.config.CFMetaData.rebuild(CFMetaData.java:2072) 
 ~[main/:na]
 at 
 org.apache.cassandra.config.CFMetaData.fromSchemaNoTriggers(CFMetaData.java:1842)
  ~[main/:na]
 at 
 org.apache.cassandra.config.CFMetaData.fromSchema(CFMetaData.java:1882) 
 ~[main/:na]
 at 
 org.apache.cassandra.config.KSMetaData.deserializeColumnFamilies(KSMetaData.java:320)
  ~[main/:na]
 at 
 org.apache.cassandra.db.DefsTables.mergeColumnFamilies(DefsTables.java:279) 
 ~[main/:na]
 at 
 org.apache.cassandra.db.DefsTables.mergeSchemaInternal(DefsTables.java:193) 
 ~[main/:na]
 at org.apache.cassandra.db.DefsTables.mergeSchema(DefsTables.java:165) 
 ~[main/:na]
 at 
 org.apache.cassandra.service.MigrationManager$2.runMayThrow(MigrationManager.java:393)
  ~[main/:na]
 at 
 org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28) 
 ~[main/:na]
 at 
 java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471) 
 ~[na:1.7.0_51]
 at java.util.concurrent.FutureTask.run(FutureTask.java:262) ~[na:1.7.0_51]
 at 
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
  ~[na:1.7.0_51]
 at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
  [na:1.7.0_51]
 at java.lang.Thread.run(Thread.java:744) [na:1.7.0_51]
 ERROR [Thrift:1] 2014-10-20 13:53:29,506 CustomTThreadPoolServer.java:219 - 
 Error occurred during processing of message.
 java.lang.RuntimeException: java.util.concurrent.ExecutionException: 
 java.lang.IndexOutOfBoundsException: Index: 0, Size: 0
 at 
 org.apache.cassandra.utils.FBUtilities.waitOnFuture(FBUtilities.java:397) 
 ~[main/:na]
 at 
 org.apache.cassandra.service.MigrationManager.announce(MigrationManager.java:374)
  ~[main/:na]
 at 
 org.apache.cassandra.service.MigrationManager.announceNewColumnFamily(MigrationManager.java:249)
  ~[main/:na]
 at 
 org.apache.cassandra.service.MigrationManager.announceNewColumnFamily(MigrationManager.java:235)
  ~[main/:na]
 at 
 org.apache.cassandra.cql.QueryProcessor.processStatement(QueryProcessor.java:662)
  ~[main/:na]
 at 
 org.apache.cassandra.cql.QueryProcessor.process(QueryProcessor.java:802) 
 ~[main/:na]
 at 
 org.apache.cassandra.thrift.CassandraServer.execute_cql_query(CassandraServer.java:1941)
  ~[main/:na]
 at 
 org.apache.cassandra.thrift.Cassandra$Processor$execute_cql_query.getResult(Cassandra.java:4558)
  ~[thrift/:na]
 at 
 org.apache.cassandra.thrift.Cassandra$Processor$execute_cql_query.getResult(Cassandra.java:4542)
  ~[thrift/:na]
 at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:39) 
 ~[libthrift-0.9.1.jar:0.9.1]
 at org.apache.thrift.TBaseProcessor.process(TBaseProcessor.java:39) 
 ~[libthrift-0.9.1.jar:0.9.1]
 at 
 org.apache.cassandra.thrift.CustomTThreadPoolServer$WorkerProcess.run(CustomTThreadPoolServer.java:201)
  ~[main/:na]
 at 
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
  [na:1.7.0_51]
 at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
  [na:1.7.0_51]
 at java.lang.Thread.run(Thread.java:744) [na:1.7.0_51]
 Caused by: java.util.concurrent.ExecutionException: 
 java.lang.IndexOutOfBoundsException: Index: 0, Size: 0
 at java.util.concurrent.FutureTask.report(FutureTask.java:122) 
 ~[na:1.7.0_51]
 at java.util.concurrent.FutureTask.get(FutureTask.java:188) ~[na:1.7.0_51]
 at 
 org.apache.cassandra.utils.FBUtilities.waitOnFuture(FBUtilities.java:393) 
 ~[main/:na]
 ... 14 common 

[jira] [Commented] (CASSANDRA-8139) The WRITETIME function returns null for negative timestamp values

2014-10-21 Thread Tyler Hobbs (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8139?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14178673#comment-14178673
 ] 

Tyler Hobbs commented on CASSANDRA-8139:


bq. it's true technically that we don't forbid negative timestamps so I suppose 
we should either start forbidding them or fix this, and it's probably easier to 
just fix this so attaching a simple patch.

At least for the v3 protocol, we explicitly disallow negative timestamps.  See 
the discussion on CASSANDRA-6855 (and this commit in particular: 
https://github.com/pcmanus/cassandra/commit/d9a584efa94b9c3deb35746985a45573f16bb9bd).
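
For illustration only (this is not the attached 8139.txt patch, and the class and 
method names are made up for the example), the v3-protocol behaviour described 
above amounts to a guard of roughly this shape on client-supplied timestamps:

{code}
// Hedged sketch, not the committed change: protocol v3 (per CASSANDRA-6855)
// rejects negative client-supplied default timestamps up front instead of
// letting them reach storage and confuse functions like WRITETIME.
public final class TimestampValidation
{
    private TimestampValidation() {}

    /** Returns the timestamp unchanged if valid, otherwise rejects the request. */
    public static long validateTimestamp(long epochMicros)
    {
        if (epochMicros < 0)
            throw new IllegalArgumentException("Out of bounds timestamp: " + epochMicros);
        return epochMicros;
    }
}
{code}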

 The WRITETIME function returns null for negative timestamp values
 -

 Key: CASSANDRA-8139
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8139
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Reporter: Richard Bremner
Assignee: Sylvain Lebresne
Priority: Minor
 Fix For: 2.0.12

 Attachments: 8139.txt


 Insert a column with a negative timestamp value:
 {code}
 INSERT INTO my_table (col1, col2, col3)
 VALUES ('val1', 'val2', 'val3') 
 USING TIMESTAMP -1413614886750020;
 {code}
 Then attempt to read the *writetime*:
 {code}
 SELECT WRITETIME(col3) FROM my_table WHERE col1 = 'val1'
 {code}
 The result is *null*.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8116) HSHA fails with default rpc_max_threads setting

2014-10-21 Thread Tyler Hobbs (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8116?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14178689#comment-14178689
 ] 

Tyler Hobbs commented on CASSANDRA-8116:


bq. I suppose you could check and throw if the value is Integer.MAX_VALUE but 
you aren't going to be able to check every value.

unlimited is the default value, so I think there's still quite a bit of value 
in checking for just that.  I'll put together a patch.

 HSHA fails with default rpc_max_threads setting
 ---

 Key: CASSANDRA-8116
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8116
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Reporter: Mike Adamson
Assignee: Mike Adamson
Priority: Minor
 Fix For: 2.0.11, 2.1.1

 Attachments: 8116.txt


 The HSHA server fails with 'Out of heap space' error if the rpc_max_threads 
 is left at its default setting (unlimited) in cassandra.yaml.
 I'm not proposing any code change for this but have submitted a patch for a 
 comment change in cassandra.yaml to indicate that rpc_max_threads needs to be 
 changed if you use HSHA.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8159) NPE in SSTableReader

2014-10-21 Thread Brandon Williams (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8159?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14178686#comment-14178686
 ] 

Brandon Williams commented on CASSANDRA-8159:
-

So, you had corruption and sstablescrub worked as designed?

 NPE in SSTableReader
 

 Key: CASSANDRA-8159
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8159
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Reporter: Alexander Sterligov
Priority: Minor

 Log file contained a lot of following exceptions:
 {quote}
 WARN  [CompactionExecutor:15674] 2014-10-21 20:57:56,838 
 OutputHandler.java:52 - Out of order row detected 
 (DecoratedKey(8937955371032053430, 
 39352e3130382e3234322e32302d6765744d656d42756572734d62) found
  after DecoratedKey(9186481584950194146, 
 80010001000c62617463685f6d757461746556640d00010b0d000100))
 {quote}
 I tried to scrub sstables by nodetool and got:
 {quote}
 ERROR [CompactionExecutor:15674] 2014-10-21 20:57:57,229 
 CassandraDaemon.java:166 - Exception in thread 
 Thread[CompactionExecutor:15674,1,RMI Runtime]
 java.lang.NullPointerException: null
 at 
 org.apache.cassandra.io.sstable.SSTableReader.cloneWithNewStart(SSTableReader.java:942)
  ~[apache-cassandra-2.1.0.jar:2.1.0]
 at 
 org.apache.cassandra.io.sstable.SSTableRewriter.moveStarts(SSTableRewriter.java:238)
  ~[apache-cassandra-2.1.0.jar:2.1.0]
 at 
 org.apache.cassandra.io.sstable.SSTableRewriter.finish(SSTableRewriter.java:318)
  ~[apache-cassandra-2.1.0.jar:2.1.0]
 at 
 org.apache.cassandra.db.compaction.Scrubber.scrub(Scrubber.java:257) 
 ~[apache-cassandra-2.1.0.jar:2.1.0]
 at 
 org.apache.cassandra.db.compaction.CompactionManager.scrubOne(CompactionManager.java:592)
  ~[apache-cassandra-2.1.0.jar:2.1.0]
 at 
 org.apache.cassandra.db.compaction.CompactionManager.access$300(CompactionManager.java:100)
  ~[apache-cassandra-2.1.0.jar:2.1.0]
 at 
 org.apache.cassandra.db.compaction.CompactionManager$3.execute(CompactionManager.java:315)
  ~[apache-cassandra-2.1.0.jar:2.1.0]
 at 
 org.apache.cassandra.db.compaction.CompactionManager$2.call(CompactionManager.java:270)
  ~[apache-cassandra-2.1.0.jar:2.1.0]
 at java.util.concurrent.FutureTask.run(FutureTask.java:262) 
 ~[na:1.7.0_51]
 at 
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
  ~[na:1.7.0_51]
 at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
  [na:1.7.0_51]
 at java.lang.Thread.run(Thread.java:744) [na:1.7.0_51]
 {quote}
 sstablescrub successfully fixed sstables.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8155) confusing error when erroneously querying map secondary index

2014-10-21 Thread Russ Hatch (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8155?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14178690#comment-14178690
 ] 

Russ Hatch commented on CASSANDRA-8155:
---

This does seem to be isolated to cqlsh. Testing directly with the python driver 
just yields no results.

Though perhaps there should be an error here anyway: since we're doing a non-indexed 
query, wouldn't this require ALLOW FILTERING to be a valid query?

 confusing error when erroneously querying map secondary index
 -

 Key: CASSANDRA-8155
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8155
 Project: Cassandra
  Issue Type: Bug
Reporter: Russ Hatch
Priority: Minor
  Labels: cqlsh, lhf

 With a secondary index on values, attempting to query by key returns an error 
 message of list index out of range.
 This is a somewhat similar issue to CASSANDRA-8147 (but that scenario results in 
 no error when there probably should be one).
 To repro:
 {noformat}
 cqlsh:test> CREATE TABLE test.foo (
 ... id1 text,
 ... id2 text,
 ... categories map<text, text>,
 ... PRIMARY KEY (id1, id2));
 cqlsh:test> CREATE INDEX foo_categories_idx ON test.foo (categories);
 cqlsh:test> insert into foo (id1, id2, categories) values ('foo', 'bar', 
 {'firstkey':'one', 'secondkey':'two'});
 {noformat}
 Now try to query the existing values index by key:
 {noformat}
 cqlsh:test> select * from foo where categories contains key 'firstkey';
 list index out of range
 {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8159) NPE in SSTableReader

2014-10-21 Thread Alexander Sterligov (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8159?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14178698#comment-14178698
 ] 

Alexander Sterligov commented on CASSANDRA-8159:


Yes, sstablescrub worked.

 NPE in SSTableReader
 

 Key: CASSANDRA-8159
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8159
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Reporter: Alexander Sterligov
Priority: Minor

 Log file contained a lot of following exceptions:
 {quote}
 WARN  [CompactionExecutor:15674] 2014-10-21 20:57:56,838 
 OutputHandler.java:52 - Out of order row detected 
 (DecoratedKey(8937955371032053430, 
 39352e3130382e3234322e32302d6765744d656d42756572734d62) found
  after DecoratedKey(9186481584950194146, 
 80010001000c62617463685f6d757461746556640d00010b0d000100))
 {quote}
 I tried to scrub sstables by nodetool and got:
 {quote}
 ERROR [CompactionExecutor:15674] 2014-10-21 20:57:57,229 
 CassandraDaemon.java:166 - Exception in thread 
 Thread[CompactionExecutor:15674,1,RMI Runtime]
 java.lang.NullPointerException: null
 at 
 org.apache.cassandra.io.sstable.SSTableReader.cloneWithNewStart(SSTableReader.java:942)
  ~[apache-cassandra-2.1.0.jar:2.1.0]
 at 
 org.apache.cassandra.io.sstable.SSTableRewriter.moveStarts(SSTableRewriter.java:238)
  ~[apache-cassandra-2.1.0.jar:2.1.0]
 at 
 org.apache.cassandra.io.sstable.SSTableRewriter.finish(SSTableRewriter.java:318)
  ~[apache-cassandra-2.1.0.jar:2.1.0]
 at 
 org.apache.cassandra.db.compaction.Scrubber.scrub(Scrubber.java:257) 
 ~[apache-cassandra-2.1.0.jar:2.1.0]
 at 
 org.apache.cassandra.db.compaction.CompactionManager.scrubOne(CompactionManager.java:592)
  ~[apache-cassandra-2.1.0.jar:2.1.0]
 at 
 org.apache.cassandra.db.compaction.CompactionManager.access$300(CompactionManager.java:100)
  ~[apache-cassandra-2.1.0.jar:2.1.0]
 at 
 org.apache.cassandra.db.compaction.CompactionManager$3.execute(CompactionManager.java:315)
  ~[apache-cassandra-2.1.0.jar:2.1.0]
 at 
 org.apache.cassandra.db.compaction.CompactionManager$2.call(CompactionManager.java:270)
  ~[apache-cassandra-2.1.0.jar:2.1.0]
 at java.util.concurrent.FutureTask.run(FutureTask.java:262) 
 ~[na:1.7.0_51]
 at 
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
  ~[na:1.7.0_51]
 at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
  [na:1.7.0_51]
 at java.lang.Thread.run(Thread.java:744) [na:1.7.0_51]
 {quote}
 sstablescrub successfully fixed sstables.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (CASSANDRA-8159) NPE in SSTableReader

2014-10-21 Thread Brandon Williams (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8159?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brandon Williams resolved CASSANDRA-8159.
-
Resolution: Not a Problem

 NPE in SSTableReader
 

 Key: CASSANDRA-8159
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8159
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Reporter: Alexander Sterligov
Priority: Minor

 Log file contained a lot of following exceptions:
 {quote}
 WARN  [CompactionExecutor:15674] 2014-10-21 20:57:56,838 
 OutputHandler.java:52 - Out of order row detected 
 (DecoratedKey(8937955371032053430, 
 39352e3130382e3234322e32302d6765744d656d42756572734d62) found
  after DecoratedKey(9186481584950194146, 
 80010001000c62617463685f6d757461746556640d00010b0d000100))
 {quote}
 I tried to scrub sstables by nodetool and got:
 {quote}
 ERROR [CompactionExecutor:15674] 2014-10-21 20:57:57,229 
 CassandraDaemon.java:166 - Exception in thread 
 Thread[CompactionExecutor:15674,1,RMI Runtime]
 java.lang.NullPointerException: null
 at 
 org.apache.cassandra.io.sstable.SSTableReader.cloneWithNewStart(SSTableReader.java:942)
  ~[apache-cassandra-2.1.0.jar:2.1.0]
 at 
 org.apache.cassandra.io.sstable.SSTableRewriter.moveStarts(SSTableRewriter.java:238)
  ~[apache-cassandra-2.1.0.jar:2.1.0]
 at 
 org.apache.cassandra.io.sstable.SSTableRewriter.finish(SSTableRewriter.java:318)
  ~[apache-cassandra-2.1.0.jar:2.1.0]
 at 
 org.apache.cassandra.db.compaction.Scrubber.scrub(Scrubber.java:257) 
 ~[apache-cassandra-2.1.0.jar:2.1.0]
 at 
 org.apache.cassandra.db.compaction.CompactionManager.scrubOne(CompactionManager.java:592)
  ~[apache-cassandra-2.1.0.jar:2.1.0]
 at 
 org.apache.cassandra.db.compaction.CompactionManager.access$300(CompactionManager.java:100)
  ~[apache-cassandra-2.1.0.jar:2.1.0]
 at 
 org.apache.cassandra.db.compaction.CompactionManager$3.execute(CompactionManager.java:315)
  ~[apache-cassandra-2.1.0.jar:2.1.0]
 at 
 org.apache.cassandra.db.compaction.CompactionManager$2.call(CompactionManager.java:270)
  ~[apache-cassandra-2.1.0.jar:2.1.0]
 at java.util.concurrent.FutureTask.run(FutureTask.java:262) 
 ~[na:1.7.0_51]
 at 
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
  ~[na:1.7.0_51]
 at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
  [na:1.7.0_51]
 at java.lang.Thread.run(Thread.java:744) [na:1.7.0_51]
 {quote}
 sstablescrub successfully fixed sstables.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-8116) HSHA fails with default rpc_max_threads setting

2014-10-21 Thread Tyler Hobbs (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8116?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tyler Hobbs updated CASSANDRA-8116:
---
Attachment: 8116-throw-exc-2.0.txt

8116-throw-exc-2.0.txt throws a ConfigurationException when hsha is used with 
unlimited rpc_max_threads.
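
For readers following along, a minimal sketch of the check described above; the 
attached 8116-throw-exc-2.0.txt is the authoritative change, and the class, method 
and exception names below are illustrative only:

{code}
// Hedged sketch, not the attached patch: reject the hsha + unlimited combination
// at startup. The cassandra.yaml default of "unlimited" for rpc_max_threads ends
// up as Integer.MAX_VALUE, and the hsha server allocates resources proportional to
// that value, so an unbounded pool can exhaust the heap.
public final class RpcConfigCheck
{
    public static class ConfigurationException extends Exception
    {
        public ConfigurationException(String message) { super(message); }
    }

    public static void validate(String rpcServerType, int rpcMaxThreads) throws ConfigurationException
    {
        if ("hsha".equalsIgnoreCase(rpcServerType) && rpcMaxThreads == Integer.MAX_VALUE)
            throw new ConfigurationException(
                "rpc_max_threads must be set to a finite value (not 'unlimited') when rpc_server_type is hsha");
    }
}
{code}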

 HSHA fails with default rpc_max_threads setting
 ---

 Key: CASSANDRA-8116
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8116
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Reporter: Mike Adamson
Assignee: Mike Adamson
Priority: Minor
 Fix For: 2.0.11, 2.1.1

 Attachments: 8116-throw-exc-2.0.txt, 8116.txt


 The HSHA server fails with 'Out of heap space' error if the rpc_max_threads 
 is left at its default setting (unlimited) in cassandra.yaml.
 I'm not proposing any code change for this but have submitted a patch for a 
 comment change in cassandra.yaml to indicate that rpc_max_threads needs to be 
 changed if you use HSHA.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (CASSANDRA-8160) CF level option to call posix_fadvise for sstables on creation and startup

2014-10-21 Thread Matt Stump (JIRA)
Matt Stump created CASSANDRA-8160:
-

 Summary: CF level option to call posix_fadvise for sstables on 
creation and startup
 Key: CASSANDRA-8160
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8160
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: Matt Stump
Priority: Critical


We should have a CF-level configuration which will result in posix_fadvise being 
called for the sstables of that CF. It should be called on node startup and for 
new sstables. This should be configurable per CF to allow some CFs to be 
prioritized above others. Not sure if we should use POSIX_FADV_SEQUENTIAL or 
POSIX_FADV_WILLNEED. 
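
As a rough illustration of the system call involved (not a proposed patch, and 
Cassandra would presumably go through its own native-call wrapper rather than raw 
JNA), a minimal sketch that advises the kernel to pre-read a file; it assumes the 
JNA library on the classpath (5.x for Native.load) and uses the Linux constant 
values:

{code}
// Hedged sketch only: calls posix_fadvise(POSIX_FADV_WILLNEED) on a file via JNA.
import com.sun.jna.Library;
import com.sun.jna.Native;

public final class FadviseExample
{
    public interface CLib extends Library
    {
        CLib INSTANCE = Native.load("c", CLib.class);
        int open(String path, int flags);
        int posix_fadvise(int fd, long offset, long len, int advice);
        int close(int fd);
    }

    private static final int O_RDONLY = 0;
    private static final int POSIX_FADV_SEQUENTIAL = 2; // expect sequential reads
    private static final int POSIX_FADV_WILLNEED   = 3; // pre-populate the page cache

    public static void willNeed(String sstablePath)
    {
        int fd = CLib.INSTANCE.open(sstablePath, O_RDONLY);
        if (fd < 0)
            return; // real code would surface the errno instead of silently returning
        try
        {
            // offset 0 with length 0 means "advise over the whole file"
            CLib.INSTANCE.posix_fadvise(fd, 0, 0, POSIX_FADV_WILLNEED);
        }
        finally
        {
            CLib.INSTANCE.close(fd);
        }
    }
}
{code}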



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-8144) Creating CQL2 tables fails in C* 2.1

2014-10-21 Thread Sam Tunnicliffe (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8144?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sam Tunnicliffe updated CASSANDRA-8144:
---
Attachment: repro.py

nope sorry, that isn't the patch you're looking for

 Creating CQL2 tables fails in C* 2.1
 

 Key: CASSANDRA-8144
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8144
 Project: Cassandra
  Issue Type: Bug
Reporter: Sam Tunnicliffe
Assignee: Aleksey Yeschenko
 Fix For: 2.1.2

 Attachments: 8144.txt, repro.py


 Although cql2 has been deprecated and removed from cqlsh, the functionality 
 is still accessible using thrift. However, it seems that creation of new 
 tables via cql2 is broken in 2.1.
 {code}
 CREATE KEYSPACE test_ks WITH strategy_class='SimpleStrategy' AND 
 replication_factor = '1';
 CREATE TABLE test_cf (id text PRIMARY KEY, value text, test text);
 {code}
 fails with the following stacktrace on the server:
 {code}
 ERROR [MigrationStage:1] 2014-10-20 13:53:29,506 CassandraDaemon.java:153 - 
 Exception in thread Thread[MigrationStage:1,5,main]
 java.lang.IndexOutOfBoundsException: Index: 0, Size: 0
 at java.util.ArrayList.rangeCheck(ArrayList.java:635) ~[na:1.7.0_51]
 at java.util.ArrayList.set(ArrayList.java:426) ~[na:1.7.0_51]
 at org.apache.cassandra.config.CFMetaData.rebuild(CFMetaData.java:2072) 
 ~[main/:na]
 at 
 org.apache.cassandra.config.CFMetaData.fromSchemaNoTriggers(CFMetaData.java:1842)
  ~[main/:na]
 at 
 org.apache.cassandra.config.CFMetaData.fromSchema(CFMetaData.java:1882) 
 ~[main/:na]
 at 
 org.apache.cassandra.config.KSMetaData.deserializeColumnFamilies(KSMetaData.java:320)
  ~[main/:na]
 at 
 org.apache.cassandra.db.DefsTables.mergeColumnFamilies(DefsTables.java:279) 
 ~[main/:na]
 at 
 org.apache.cassandra.db.DefsTables.mergeSchemaInternal(DefsTables.java:193) 
 ~[main/:na]
 at org.apache.cassandra.db.DefsTables.mergeSchema(DefsTables.java:165) 
 ~[main/:na]
 at 
 org.apache.cassandra.service.MigrationManager$2.runMayThrow(MigrationManager.java:393)
  ~[main/:na]
 at 
 org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28) 
 ~[main/:na]
 at 
 java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471) 
 ~[na:1.7.0_51]
 at java.util.concurrent.FutureTask.run(FutureTask.java:262) ~[na:1.7.0_51]
 at 
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
  ~[na:1.7.0_51]
 at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
  [na:1.7.0_51]
 at java.lang.Thread.run(Thread.java:744) [na:1.7.0_51]
 ERROR [Thrift:1] 2014-10-20 13:53:29,506 CustomTThreadPoolServer.java:219 - 
 Error occurred during processing of message.
 java.lang.RuntimeException: java.util.concurrent.ExecutionException: 
 java.lang.IndexOutOfBoundsException: Index: 0, Size: 0
 at 
 org.apache.cassandra.utils.FBUtilities.waitOnFuture(FBUtilities.java:397) 
 ~[main/:na]
 at 
 org.apache.cassandra.service.MigrationManager.announce(MigrationManager.java:374)
  ~[main/:na]
 at 
 org.apache.cassandra.service.MigrationManager.announceNewColumnFamily(MigrationManager.java:249)
  ~[main/:na]
 at 
 org.apache.cassandra.service.MigrationManager.announceNewColumnFamily(MigrationManager.java:235)
  ~[main/:na]
 at 
 org.apache.cassandra.cql.QueryProcessor.processStatement(QueryProcessor.java:662)
  ~[main/:na]
 at 
 org.apache.cassandra.cql.QueryProcessor.process(QueryProcessor.java:802) 
 ~[main/:na]
 at 
 org.apache.cassandra.thrift.CassandraServer.execute_cql_query(CassandraServer.java:1941)
  ~[main/:na]
 at 
 org.apache.cassandra.thrift.Cassandra$Processor$execute_cql_query.getResult(Cassandra.java:4558)
  ~[thrift/:na]
 at 
 org.apache.cassandra.thrift.Cassandra$Processor$execute_cql_query.getResult(Cassandra.java:4542)
  ~[thrift/:na]
 at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:39) 
 ~[libthrift-0.9.1.jar:0.9.1]
 at org.apache.thrift.TBaseProcessor.process(TBaseProcessor.java:39) 
 ~[libthrift-0.9.1.jar:0.9.1]
 at 
 org.apache.cassandra.thrift.CustomTThreadPoolServer$WorkerProcess.run(CustomTThreadPoolServer.java:201)
  ~[main/:na]
 at 
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
  [na:1.7.0_51]
 at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
  [na:1.7.0_51]
 at java.lang.Thread.run(Thread.java:744) [na:1.7.0_51]
 Caused by: java.util.concurrent.ExecutionException: 
 java.lang.IndexOutOfBoundsException: Index: 0, Size: 0
 at java.util.concurrent.FutureTask.report(FutureTask.java:122) 
 ~[na:1.7.0_51]
 at java.util.concurrent.FutureTask.get(FutureTask.java:188) ~[na:1.7.0_51]
 at 
 

[jira] [Commented] (CASSANDRA-6285) 2.0 HSHA server introduces corrupt data

2014-10-21 Thread Alexander Sterligov (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6285?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14178814#comment-14178814
 ] 

Alexander Sterligov commented on CASSANDRA-6285:


It looks like this is not fixed in 2.1.0. We have cassandra under heavy load 
through binary interface and only OpsCenter by thrift. OpsCenter rollups are 
corrupted in about an hour after scrub.

{quote}
ERROR [CompactionExecutor:71] 2014-10-21 22:16:39,950 CassandraDaemon.java:166 
- Exception in thread Thread[CompactionExecutor:71,1,main]
java.lang.RuntimeException: Last written key DecoratedKey(-7581200918995348250, 
39352e3130382e3234322e32302d6973732d73686172645f696e666f2d676574426c6f6f6d46696c74657246616c7365506f73697469766573)
 >= current key DecoratedKey(-8301289422298317140, 
80010001000c62617463685f6d75746174650006d04a0d00010b0d00010025) 
writing into 
/ssd/cassandra/data/OpsCenter/rollups60/OpsCenter-rollups60-tmp-ka-9128-Data.db
at 
org.apache.cassandra.io.sstable.SSTableWriter.beforeAppend(SSTableWriter.java:172)
 ~[apache-cassandra-2.1.0.jar:2.1.0]
at 
org.apache.cassandra.io.sstable.SSTableWriter.append(SSTableWriter.java:196) 
~[apache-cassandra-2.1.0.jar:2.1.0]
at 
org.apache.cassandra.io.sstable.SSTableRewriter.append(SSTableRewriter.java:110)
 ~[apache-cassandra-2.1.0.jar:2.1.0]
at 
org.apache.cassandra.db.compaction.CompactionTask.runWith(CompactionTask.java:177)
 ~[apache-cassandra-2.1.0.jar:2.1.0]
at 
org.apache.cassandra.io.util.DiskAwareRunnable.runMayThrow(DiskAwareRunnable.java:48)
 ~[apache-cassandra-2.1.0.jar:2.1.0]
at 
org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28) 
~[apache-cassandra-2.1.0.jar:2.1.0]
at 
org.apache.cassandra.db.compaction.CompactionTask.executeInternal(CompactionTask.java:74)
 ~[apache-cassandra-2.1.0.jar:2.1.0]
at 
org.apache.cassandra.db.compaction.AbstractCompactionTask.execute(AbstractCompactionTask.java:59)
 ~[apache-cassandra-2.1.0.jar:2.1.0]
at 
org.apache.cassandra.db.compaction.CompactionManager$BackgroundCompactionTask.run(CompactionManager.java:235)
 ~[apache-cassandra-2.1.0.jar:2.1.0]
at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471) 
~[na:1.7.0_51]
at java.util.concurrent.FutureTask.run(FutureTask.java:262) 
~[na:1.7.0_51]
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) 
~[na:1.7.0_51]
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) 
[na:1.7.0_51]
at java.lang.Thread.run(Thread.java:744) [na:1.7.0_51]
{quote}

We'll try to switch to sync and see what will happen.

Is it possible that streaming hangs because of that exception? Is it possible 
that this exception affects minor compactions of other keyspaces?

 2.0 HSHA server introduces corrupt data
 ---

 Key: CASSANDRA-6285
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6285
 Project: Cassandra
  Issue Type: Bug
  Components: Core
 Environment: 4 nodes, shortly updated from 1.2.11 to 2.0.2
Reporter: David Sauer
Assignee: Pavel Yaskevich
Priority: Critical
 Fix For: 2.0.8

 Attachments: 6285_testnotes1.txt, 
 CASSANDRA-6285-disruptor-heap.patch, cassandra-attack-src.zip, 
 compaction_test.py, disruptor-high-cpu.patch, 
 disruptor-memory-corruption.patch, enable_reallocate_buffers.txt


 After altering everything to LCS the table OpsCenter.rollups60 and one other 
 non-OpsCenter table got stuck with everything hanging around in L0.
 The compaction started and ran until the logs showed this:
 ERROR [CompactionExecutor:111] 2013-11-01 19:14:53,865 CassandraDaemon.java 
 (line 187) Exception in thread Thread[CompactionExecutor:111,1,RMI Runtime]
 java.lang.RuntimeException: Last written key 
 DecoratedKey(1326283851463420237, 
 37382e34362e3132382e3139382d6a7576616c69735f6e6f72785f696e6465785f323031335f31305f30382d63616368655f646f63756d656e74736c6f6f6b75702d676574426c6f6f6d46696c746572537061636555736564)
  >= current key DecoratedKey(954210699457429663, 
 37382e34362e3132382e3139382d6a7576616c69735f6e6f72785f696e6465785f323031335f31305f30382d63616368655f646f63756d656e74736c6f6f6b75702d676574546f74616c4469736b5370616365557365640b0f)
  writing into 
 /var/lib/cassandra/data/OpsCenter/rollups60/OpsCenter-rollups60-tmp-jb-58656-Data.db
   at 
 org.apache.cassandra.io.sstable.SSTableWriter.beforeAppend(SSTableWriter.java:141)
   at 
 org.apache.cassandra.io.sstable.SSTableWriter.append(SSTableWriter.java:164)
   at 
 org.apache.cassandra.db.compaction.CompactionTask.runWith(CompactionTask.java:160)
   at 
 org.apache.cassandra.io.util.DiskAwareRunnable.runMayThrow(DiskAwareRunnable.java:48)
   at 
 

[jira] [Comment Edited] (CASSANDRA-6285) 2.0 HSHA server introduces corrupt data

2014-10-21 Thread Alexander Sterligov (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6285?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14178814#comment-14178814
 ] 

Alexander Sterligov edited comment on CASSANDRA-6285 at 10/21/14 6:42 PM:
--

It looks like this is not fixed in 2.1.0. We have cassandra under heavy load 
through binary interface and only OpsCenter by thrift. OpsCenter rollups are 
corrupted in about an hour after scrub.

{quote}
ERROR [CompactionExecutor:71] 2014-10-21 22:16:39,950 CassandraDaemon.java:166 
- Exception in thread Thread[CompactionExecutor:71,1,main]
java.lang.RuntimeException: Last written key DecoratedKey(-7581200918995348250, 
39352e3130382e3234322e32302d6973732d73686172645f696e666f2d676574426c6f6f6d46696c74657246616c7365506f73697469766573)
 >= current key DecoratedKey(-8301289422298317140, 
80010001000c62617463685f6d75746174650006d04a0d00010b0d00010025) 
writing into 
/ssd/cassandra/data/OpsCenter/rollups60/OpsCenter-rollups60-tmp-ka-9128-Data.db
at 
org.apache.cassandra.io.sstable.SSTableWriter.beforeAppend(SSTableWriter.java:172)
 ~[apache-cassandra-2.1.0.jar:2.1.0]
at 
org.apache.cassandra.io.sstable.SSTableWriter.append(SSTableWriter.java:196) 
~[apache-cassandra-2.1.0.jar:2.1.0]
at 
org.apache.cassandra.io.sstable.SSTableRewriter.append(SSTableRewriter.java:110)
 ~[apache-cassandra-2.1.0.jar:2.1.0]
at 
org.apache.cassandra.db.compaction.CompactionTask.runWith(CompactionTask.java:177)
 ~[apache-cassandra-2.1.0.jar:2.1.0]
at 
org.apache.cassandra.io.util.DiskAwareRunnable.runMayThrow(DiskAwareRunnable.java:48)
 ~[apache-cassandra-2.1.0.jar:2.1.0]
at 
org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28) 
~[apache-cassandra-2.1.0.jar:2.1.0]
at 
org.apache.cassandra.db.compaction.CompactionTask.executeInternal(CompactionTask.java:74)
 ~[apache-cassandra-2.1.0.jar:2.1.0]
at 
org.apache.cassandra.db.compaction.AbstractCompactionTask.execute(AbstractCompactionTask.java:59)
 ~[apache-cassandra-2.1.0.jar:2.1.0]
at 
org.apache.cassandra.db.compaction.CompactionManager$BackgroundCompactionTask.run(CompactionManager.java:235)
 ~[apache-cassandra-2.1.0.jar:2.1.0]
at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471) 
~[na:1.7.0_51]
at java.util.concurrent.FutureTask.run(FutureTask.java:262) 
~[na:1.7.0_51]
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) 
~[na:1.7.0_51]
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) 
[na:1.7.0_51]
at java.lang.Thread.run(Thread.java:744) [na:1.7.0_51]
{quote}

We'll try to switch to sync and see what will happen.

Is it possible that streaming hangs because of that exception? Is it possible 
that this exception affects minor compactions of other keyspaces?


was (Author: sterligovak):
It looks like this is not fixed in 2.1.0. We have cassandra under heavy load 
through binary interface and only OpsCenter by thrift. OpsCenter rollups are 
corrupted in about an hour minutes after scrub.

{quote}
ERROR [CompactionExecutor:71] 2014-10-21 22:16:39,950 CassandraDaemon.java:166 
- Exception in thread Thread[CompactionExecutor:71,1,main]
java.lang.RuntimeException: Last written key DecoratedKey(-7581200918995348250, 
39352e3130382e3234322e32302d6973732d73686172645f696e666f2d676574426c6f6f6d46696c74657246616c7365506f73697469766573)
 >= current key DecoratedKey(-8301289422298317140, 
80010001000c62617463685f6d75746174650006d04a0d00010b0d00010025) 
writing into 
/ssd/cassandra/data/OpsCenter/rollups60/OpsCenter-rollups60-tmp-ka-9128-Data.db
at 
org.apache.cassandra.io.sstable.SSTableWriter.beforeAppend(SSTableWriter.java:172)
 ~[apache-cassandra-2.1.0.jar:2.1.0]
at 
org.apache.cassandra.io.sstable.SSTableWriter.append(SSTableWriter.java:196) 
~[apache-cassandra-2.1.0.jar:2.1.0]
at 
org.apache.cassandra.io.sstable.SSTableRewriter.append(SSTableRewriter.java:110)
 ~[apache-cassandra-2.1.0.jar:2.1.0]
at 
org.apache.cassandra.db.compaction.CompactionTask.runWith(CompactionTask.java:177)
 ~[apache-cassandra-2.1.0.jar:2.1.0]
at 
org.apache.cassandra.io.util.DiskAwareRunnable.runMayThrow(DiskAwareRunnable.java:48)
 ~[apache-cassandra-2.1.0.jar:2.1.0]
at 
org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28) 
~[apache-cassandra-2.1.0.jar:2.1.0]
at 
org.apache.cassandra.db.compaction.CompactionTask.executeInternal(CompactionTask.java:74)
 ~[apache-cassandra-2.1.0.jar:2.1.0]
at 
org.apache.cassandra.db.compaction.AbstractCompactionTask.execute(AbstractCompactionTask.java:59)
 ~[apache-cassandra-2.1.0.jar:2.1.0]
at 
org.apache.cassandra.db.compaction.CompactionManager$BackgroundCompactionTask.run(CompactionManager.java:235)
 

[jira] [Updated] (CASSANDRA-8144) Creating CQL2 tables fails in C* 2.1

2014-10-21 Thread Aleksey Yeschenko (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8144?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksey Yeschenko updated CASSANDRA-8144:
-
Attachment: (was: 8144.txt)

 Creating CQL2 tables fails in C* 2.1
 

 Key: CASSANDRA-8144
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8144
 Project: Cassandra
  Issue Type: Bug
Reporter: Sam Tunnicliffe
Assignee: Aleksey Yeschenko
 Fix For: 2.1.2

 Attachments: repro.py


 Although cql2 has been deprecated and removed from cqlsh, the functionality 
 is still accessible using thrift. However, it seems that creation of new 
 tables via cql2 is broken in 2.1.
 {code}
 CREATE KEYSPACE test_ks WITH strategy_class='SimpleStrategy' AND 
 replication_factor = '1';
 CREATE TABLE test_cf (id text PRIMARY KEY, value text, test text);
 {code}
 fails with the following stacktrace on the server:
 {code}
 ERROR [MigrationStage:1] 2014-10-20 13:53:29,506 CassandraDaemon.java:153 - 
 Exception in thread Thread[MigrationStage:1,5,main]
 java.lang.IndexOutOfBoundsException: Index: 0, Size: 0
 at java.util.ArrayList.rangeCheck(ArrayList.java:635) ~[na:1.7.0_51]
 at java.util.ArrayList.set(ArrayList.java:426) ~[na:1.7.0_51]
 at org.apache.cassandra.config.CFMetaData.rebuild(CFMetaData.java:2072) 
 ~[main/:na]
 at 
 org.apache.cassandra.config.CFMetaData.fromSchemaNoTriggers(CFMetaData.java:1842)
  ~[main/:na]
 at 
 org.apache.cassandra.config.CFMetaData.fromSchema(CFMetaData.java:1882) 
 ~[main/:na]
 at 
 org.apache.cassandra.config.KSMetaData.deserializeColumnFamilies(KSMetaData.java:320)
  ~[main/:na]
 at 
 org.apache.cassandra.db.DefsTables.mergeColumnFamilies(DefsTables.java:279) 
 ~[main/:na]
 at 
 org.apache.cassandra.db.DefsTables.mergeSchemaInternal(DefsTables.java:193) 
 ~[main/:na]
 at org.apache.cassandra.db.DefsTables.mergeSchema(DefsTables.java:165) 
 ~[main/:na]
 at 
 org.apache.cassandra.service.MigrationManager$2.runMayThrow(MigrationManager.java:393)
  ~[main/:na]
 at 
 org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28) 
 ~[main/:na]
 at 
 java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471) 
 ~[na:1.7.0_51]
 at java.util.concurrent.FutureTask.run(FutureTask.java:262) ~[na:1.7.0_51]
 at 
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
  ~[na:1.7.0_51]
 at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
  [na:1.7.0_51]
 at java.lang.Thread.run(Thread.java:744) [na:1.7.0_51]
 ERROR [Thrift:1] 2014-10-20 13:53:29,506 CustomTThreadPoolServer.java:219 - 
 Error occurred during processing of message.
 java.lang.RuntimeException: java.util.concurrent.ExecutionException: 
 java.lang.IndexOutOfBoundsException: Index: 0, Size: 0
 at 
 org.apache.cassandra.utils.FBUtilities.waitOnFuture(FBUtilities.java:397) 
 ~[main/:na]
 at 
 org.apache.cassandra.service.MigrationManager.announce(MigrationManager.java:374)
  ~[main/:na]
 at 
 org.apache.cassandra.service.MigrationManager.announceNewColumnFamily(MigrationManager.java:249)
  ~[main/:na]
 at 
 org.apache.cassandra.service.MigrationManager.announceNewColumnFamily(MigrationManager.java:235)
  ~[main/:na]
 at 
 org.apache.cassandra.cql.QueryProcessor.processStatement(QueryProcessor.java:662)
  ~[main/:na]
 at 
 org.apache.cassandra.cql.QueryProcessor.process(QueryProcessor.java:802) 
 ~[main/:na]
 at 
 org.apache.cassandra.thrift.CassandraServer.execute_cql_query(CassandraServer.java:1941)
  ~[main/:na]
 at 
 org.apache.cassandra.thrift.Cassandra$Processor$execute_cql_query.getResult(Cassandra.java:4558)
  ~[thrift/:na]
 at 
 org.apache.cassandra.thrift.Cassandra$Processor$execute_cql_query.getResult(Cassandra.java:4542)
  ~[thrift/:na]
 at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:39) 
 ~[libthrift-0.9.1.jar:0.9.1]
 at org.apache.thrift.TBaseProcessor.process(TBaseProcessor.java:39) 
 ~[libthrift-0.9.1.jar:0.9.1]
 at 
 org.apache.cassandra.thrift.CustomTThreadPoolServer$WorkerProcess.run(CustomTThreadPoolServer.java:201)
  ~[main/:na]
 at 
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
  [na:1.7.0_51]
 at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
  [na:1.7.0_51]
 at java.lang.Thread.run(Thread.java:744) [na:1.7.0_51]
 Caused by: java.util.concurrent.ExecutionException: 
 java.lang.IndexOutOfBoundsException: Index: 0, Size: 0
 at java.util.concurrent.FutureTask.report(FutureTask.java:122) 
 ~[na:1.7.0_51]
 at java.util.concurrent.FutureTask.get(FutureTask.java:188) ~[na:1.7.0_51]
 at 
 org.apache.cassandra.utils.FBUtilities.waitOnFuture(FBUtilities.java:393) 
 ~[main/:na]
 ... 

[jira] [Commented] (CASSANDRA-6285) 2.0 HSHA server introduces corrupt data

2014-10-21 Thread Nikolai Grigoriev (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6285?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14178828#comment-14178828
 ] 

Nikolai Grigoriev commented on CASSANDRA-6285:
--

[~sterligovak] I was always wondering why I always saw these problems 
appearing for the OpsCenter keyspace. My keyspace had much more traffic, but when I 
had this problem it always manifested itself with the OpsCenter keyspace, even 
when I was also using Thrift (we use the native protocol now).

I even remember disabling OpsCenter to prove the point :) 



 2.0 HSHA server introduces corrupt data
 ---

 Key: CASSANDRA-6285
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6285
 Project: Cassandra
  Issue Type: Bug
  Components: Core
 Environment: 4 nodes, shortly updated from 1.2.11 to 2.0.2
Reporter: David Sauer
Assignee: Pavel Yaskevich
Priority: Critical
 Fix For: 2.0.8

 Attachments: 6285_testnotes1.txt, 
 CASSANDRA-6285-disruptor-heap.patch, cassandra-attack-src.zip, 
 compaction_test.py, disruptor-high-cpu.patch, 
 disruptor-memory-corruption.patch, enable_reallocate_buffers.txt


 After altering everything to LCS the table OpsCenter.rollups60 and one other 
 non-OpsCenter table got stuck with everything hanging around in L0.
 The compaction started and ran until the logs showed this:
 ERROR [CompactionExecutor:111] 2013-11-01 19:14:53,865 CassandraDaemon.java 
 (line 187) Exception in thread Thread[CompactionExecutor:111,1,RMI Runtime]
 java.lang.RuntimeException: Last written key 
 DecoratedKey(1326283851463420237, 
 37382e34362e3132382e3139382d6a7576616c69735f6e6f72785f696e6465785f323031335f31305f30382d63616368655f646f63756d656e74736c6f6f6b75702d676574426c6f6f6d46696c746572537061636555736564)
  >= current key DecoratedKey(954210699457429663, 
 37382e34362e3132382e3139382d6a7576616c69735f6e6f72785f696e6465785f323031335f31305f30382d63616368655f646f63756d656e74736c6f6f6b75702d676574546f74616c4469736b5370616365557365640b0f)
  writing into 
 /var/lib/cassandra/data/OpsCenter/rollups60/OpsCenter-rollups60-tmp-jb-58656-Data.db
   at 
 org.apache.cassandra.io.sstable.SSTableWriter.beforeAppend(SSTableWriter.java:141)
   at 
 org.apache.cassandra.io.sstable.SSTableWriter.append(SSTableWriter.java:164)
   at 
 org.apache.cassandra.db.compaction.CompactionTask.runWith(CompactionTask.java:160)
   at 
 org.apache.cassandra.io.util.DiskAwareRunnable.runMayThrow(DiskAwareRunnable.java:48)
   at 
 org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28)
   at 
 org.apache.cassandra.db.compaction.CompactionTask.executeInternal(CompactionTask.java:60)
   at 
 org.apache.cassandra.db.compaction.AbstractCompactionTask.execute(AbstractCompactionTask.java:59)
   at 
 org.apache.cassandra.db.compaction.CompactionManager$6.runMayThrow(CompactionManager.java:296)
   at 
 org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28)
   at 
 java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
   at java.util.concurrent.FutureTask.run(FutureTask.java:262)
   at 
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
   at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
   at java.lang.Thread.run(Thread.java:724)
 Moving back to STC worked to keep the compactions running.
 Especially my own table is one I would like to move to LCS.
 After a major compaction with STC, the move to LCS fails with the same 
 exception.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (CASSANDRA-8161) Spin loop when streaming tries to acquire SSTable references

2014-10-21 Thread Yuki Morishita (JIRA)
Yuki Morishita created CASSANDRA-8161:
-

 Summary: Spin loop when streaming tries to acquire SSTable 
references
 Key: CASSANDRA-8161
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8161
 Project: Cassandra
  Issue Type: Bug
Reporter: Yuki Morishita
Priority: Minor


When streaming files, the stream session tries to acquire references on the SSTables 
to stream. When there are multiple sessions trying to stream the same part of the 
SSTables, they sometimes wait a *very* long time (nearly a deadlock) trying to 
acquire references.

I think we need something like what we did to AtomicSortedColumns in CASSANDRA-7546 
to reduce contention.
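
To make the contention pattern concrete, here is a toy model of the 
acquire-all-or-roll-back-and-retry approach with a back-off added; this is an 
illustrative sketch, not Cassandra's actual SSTableReader/streaming code, and all 
names are made up:

{code}
// Hedged sketch: CAS-based reference counting, acquiring a whole set atomically by
// "acquire all or roll back and retry". Without the parkNanos back-off, competing
// sessions grabbing overlapping sets can spin against each other for a long time.
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;
import java.util.concurrent.locks.LockSupport;

final class Ref
{
    private final AtomicInteger count = new AtomicInteger(1); // 1 = live, 0 = released

    boolean acquire()
    {
        while (true)
        {
            int c = count.get();
            if (c <= 0)
                return false;                     // already released
            if (count.compareAndSet(c, c + 1))
                return true;
        }
    }

    void release()
    {
        count.decrementAndGet();
    }
}

final class RefAcquisition
{
    /** Acquire a reference on every element or none (rolls back on failure). */
    static boolean tryAcquireAll(List<Ref> refs)
    {
        for (int i = 0; i < refs.size(); i++)
        {
            if (!refs.get(i).acquire())
            {
                for (int j = 0; j < i; j++)       // roll back the partial acquisition
                    refs.get(j).release();
                return false;
            }
        }
        return true;
    }

    /**
     * Retry with exponential back-off instead of spinning hot. A real implementation
     * would also refresh the candidate set between attempts, since a failed acquire
     * usually means the view of live sstables is stale.
     */
    static boolean acquireAllWithBackoff(List<Ref> refs, int maxAttempts)
    {
        long backoffNanos = 1_000;
        for (int attempt = 0; attempt < maxAttempts; attempt++)
        {
            if (tryAcquireAll(refs))
                return true;
            LockSupport.parkNanos(backoffNanos);
            backoffNanos = Math.min(backoffNanos * 2, 1_000_000);
        }
        return false;
    }
}
{code}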



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8152) Cassandra crashes with Native memory allocation failure

2014-10-21 Thread Babar Tareen (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8152?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14178887#comment-14178887
 ] 

Babar Tareen commented on CASSANDRA-8152:
-

Under load conditions similar to those when the crash occurred, the number of open 
files held by the Cassandra process on all nodes varies between 6000 and 14000 
(lsof -p <cassandra pid> | wc -l).

 Cassandra crashes with Native memory allocation failure
 ---

 Key: CASSANDRA-8152
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8152
 Project: Cassandra
  Issue Type: Bug
 Environment: EC2 (i2.xlarge)
Reporter: Babar Tareen
Assignee: Brandon Williams
Priority: Minor
 Attachments: db06_hs_err_pid26159.log.zip, 
 db_05_hs_err_pid25411.log.zip


 On a 6 node Cassandra (datastax-community-2.1) cluster running on EC2 
 (i2.xlarge) instances, the JVM hosting the Cassandra service randomly crashes 
 with the following error.
 {code}
 #
 # There is insufficient memory for the Java Runtime Environment to continue.
 # Native memory allocation (malloc) failed to allocate 12288 bytes for 
 committing reserved memory.
 # Possible reasons:
 #   The system is out of physical RAM or swap space
 #   In 32 bit mode, the process size limit was hit
 # Possible solutions:
 #   Reduce memory load on the system
 #   Increase physical memory or swap space
 #   Check if swap backing store is full
 #   Use 64 bit Java on a 64 bit OS
 #   Decrease Java heap size (-Xmx/-Xms)
 #   Decrease number of Java threads
 #   Decrease Java thread stack sizes (-Xss)
 #   Set larger code cache with -XX:ReservedCodeCacheSize=
 # This output file may be truncated or incomplete.
 #
 #  Out of Memory Error (os_linux.cpp:2747), pid=26159, tid=140305605682944
 #
 # JRE version: Java(TM) SE Runtime Environment (7.0_60-b19) (build 
 1.7.0_60-b19)
 # Java VM: Java HotSpot(TM) 64-Bit Server VM (24.60-b09 mixed mode 
 linux-amd64 compressed oops)
 # Failed to write core dump. Core dumps have been disabled. To enable core 
 dumping, try ulimit -c unlimited before starting Java again
 #
 ---  T H R E A D  ---
 Current thread (0x08341000):  JavaThread MemtableFlushWriter:2055 
 daemon [_thread_new, id=23336, stack(0x7f9b71c56000,0x7f9b71c97000)]
 Stack: [0x7f9b71c56000,0x7f9b71c97000],  sp=0x7f9b71c95820,  free 
 space=254k
 Native frames: (J=compiled Java code, j=interpreted, Vv=VM code, C=native 
 code)
 V  [libjvm.so+0x99e7ca]  VMError::report_and_die()+0x2ea
 V  [libjvm.so+0x496fbb]  report_vm_out_of_memory(char const*, int, unsigned 
 long, char const*)+0x9b
 V  [libjvm.so+0x81d81e]  os::Linux::commit_memory_impl(char*, unsigned long, 
 bool)+0xfe
 V  [libjvm.so+0x81d8dc]  os::pd_commit_memory(char*, unsigned long, bool)+0xc
 V  [libjvm.so+0x81565a]  os::commit_memory(char*, unsigned long, bool)+0x2a
 V  [libjvm.so+0x81bdcd]  os::pd_create_stack_guard_pages(char*, unsigned 
 long)+0x6d
 V  [libjvm.so+0x9522de]  JavaThread::create_stack_guard_pages()+0x5e
 V  [libjvm.so+0x958c24]  JavaThread::run()+0x34
 V  [libjvm.so+0x81f7f8]  java_start(Thread*)+0x108
 {code}
 Changes in cassandra-env.sh settings
 {code}
 MAX_HEAP_SIZE=8G
 HEAP_NEWSIZE=800M
 JVM_OPTS=$JVM_OPTS -XX:TargetSurvivorRatio=50
 JVM_OPTS=$JVM_OPTS -XX:+AggressiveOpts
 JVM_OPTS=$JVM_OPTS -XX:+UseLargePages
 {code}
 Writes are about 10K-15K/sec and there are very few reads. Cassandra 2.0.9 
 with the same settings never crashed. JVM crash logs from two machines are 
 attached.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-8109) Avoid constant boxing in ColumnStats.{Min/Max}Tracker

2014-10-21 Thread Tyler Hobbs (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8109?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tyler Hobbs updated CASSANDRA-8109:
---
Reviewer: Tyler Hobbs

 Avoid constant boxing in ColumnStats.{Min/Max}Tracker
 -

 Key: CASSANDRA-8109
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8109
 Project: Cassandra
  Issue Type: Improvement
Reporter: Sylvain Lebresne
Assignee: Rajanarayanan Thottuvaikkatumana
Priority: Minor
  Labels: lhf
 Fix For: 3.0

 Attachments: cassandra-trunk-8109.txt


 We use the {{ColumnStats.MinTracker}} and {{ColumnStats.MaxTracker}} to track 
 timestamps and deletion times in sstables. Those classes are generic, but we 
 only ever use them for longs and integers. The consequence is that every 
 call to their {{update}} method (called for every cell during an sstable write) 
 boxes its argument (since we don't store the cell timestamps and deletion times 
 boxed). That feels like a waste that is easy to fix: we could just make those 
 work on longs only, for instance, and convert back to int at the end when 
 that's what we need.
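
A stripped-down sketch of the idea (the committed classes are the 
MinLongTracker/MaxLongTracker/MaxIntTracker variants visible in the commit diff 
later in this digest; this illustration is simplified):

{code}
// Simplified sketch of the no-boxing tracker idea; the committed code lives in
// org.apache.cassandra.io.sstable.ColumnStats (MinLongTracker / MaxLongTracker /
// MaxIntTracker). Not the actual implementation.
public final class MinLongTracker
{
    private final long defaultValue;
    private boolean isSet;
    private long value;

    public MinLongTracker(long defaultValue)
    {
        this.defaultValue = defaultValue;
    }

    // Primitive parameter: called once per cell with no Long allocation.
    public void update(long candidate)
    {
        if (!isSet || candidate < value)
        {
            value = candidate;
            isSet = true;
        }
    }

    public long get()
    {
        return isSet ? value : defaultValue;
    }
}
{code}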



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-8144) Creating CQL2 tables fails in C* 2.1

2014-10-21 Thread Aleksey Yeschenko (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8144?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksey Yeschenko updated CASSANDRA-8144:
-
Attachment: 8144-v2.txt

 Creating CQL2 tables fails in C* 2.1
 

 Key: CASSANDRA-8144
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8144
 Project: Cassandra
  Issue Type: Bug
Reporter: Sam Tunnicliffe
Assignee: Aleksey Yeschenko
 Fix For: 2.1.2

 Attachments: 8144-v2.txt, repro.py


 Although cql2 has been deprecated and removed from cqlsh, the functionality 
 is still accessible using thrift. However, it seems that creation of new 
 tables via cql2 is broken in 2.1.
 {code}
 CREATE KEYSPACE test_ks WITH strategy_class='SimpleStrategy' AND 
 replication_factor = '1';
 CREATE TABLE test_cf (id text PRIMARY KEY, value text, test text);
 {code}
 fails with the following stacktrace on the server:
 {code}
 ERROR [MigrationStage:1] 2014-10-20 13:53:29,506 CassandraDaemon.java:153 - 
 Exception in thread Thread[MigrationStage:1,5,main]
 java.lang.IndexOutOfBoundsException: Index: 0, Size: 0
 at java.util.ArrayList.rangeCheck(ArrayList.java:635) ~[na:1.7.0_51]
 at java.util.ArrayList.set(ArrayList.java:426) ~[na:1.7.0_51]
 at org.apache.cassandra.config.CFMetaData.rebuild(CFMetaData.java:2072) 
 ~[main/:na]
 at 
 org.apache.cassandra.config.CFMetaData.fromSchemaNoTriggers(CFMetaData.java:1842)
  ~[main/:na]
 at 
 org.apache.cassandra.config.CFMetaData.fromSchema(CFMetaData.java:1882) 
 ~[main/:na]
 at 
 org.apache.cassandra.config.KSMetaData.deserializeColumnFamilies(KSMetaData.java:320)
  ~[main/:na]
 at 
 org.apache.cassandra.db.DefsTables.mergeColumnFamilies(DefsTables.java:279) 
 ~[main/:na]
 at 
 org.apache.cassandra.db.DefsTables.mergeSchemaInternal(DefsTables.java:193) 
 ~[main/:na]
 at org.apache.cassandra.db.DefsTables.mergeSchema(DefsTables.java:165) 
 ~[main/:na]
 at 
 org.apache.cassandra.service.MigrationManager$2.runMayThrow(MigrationManager.java:393)
  ~[main/:na]
 at 
 org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28) 
 ~[main/:na]
 at 
 java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471) 
 ~[na:1.7.0_51]
 at java.util.concurrent.FutureTask.run(FutureTask.java:262) ~[na:1.7.0_51]
 at 
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
  ~[na:1.7.0_51]
 at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
  [na:1.7.0_51]
 at java.lang.Thread.run(Thread.java:744) [na:1.7.0_51]
 ERROR [Thrift:1] 2014-10-20 13:53:29,506 CustomTThreadPoolServer.java:219 - 
 Error occurred during processing of message.
 java.lang.RuntimeException: java.util.concurrent.ExecutionException: 
 java.lang.IndexOutOfBoundsException: Index: 0, Size: 0
 at 
 org.apache.cassandra.utils.FBUtilities.waitOnFuture(FBUtilities.java:397) 
 ~[main/:na]
 at 
 org.apache.cassandra.service.MigrationManager.announce(MigrationManager.java:374)
  ~[main/:na]
 at 
 org.apache.cassandra.service.MigrationManager.announceNewColumnFamily(MigrationManager.java:249)
  ~[main/:na]
 at 
 org.apache.cassandra.service.MigrationManager.announceNewColumnFamily(MigrationManager.java:235)
  ~[main/:na]
 at 
 org.apache.cassandra.cql.QueryProcessor.processStatement(QueryProcessor.java:662)
  ~[main/:na]
 at 
 org.apache.cassandra.cql.QueryProcessor.process(QueryProcessor.java:802) 
 ~[main/:na]
 at 
 org.apache.cassandra.thrift.CassandraServer.execute_cql_query(CassandraServer.java:1941)
  ~[main/:na]
 at 
 org.apache.cassandra.thrift.Cassandra$Processor$execute_cql_query.getResult(Cassandra.java:4558)
  ~[thrift/:na]
 at 
 org.apache.cassandra.thrift.Cassandra$Processor$execute_cql_query.getResult(Cassandra.java:4542)
  ~[thrift/:na]
 at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:39) 
 ~[libthrift-0.9.1.jar:0.9.1]
 at org.apache.thrift.TBaseProcessor.process(TBaseProcessor.java:39) 
 ~[libthrift-0.9.1.jar:0.9.1]
 at 
 org.apache.cassandra.thrift.CustomTThreadPoolServer$WorkerProcess.run(CustomTThreadPoolServer.java:201)
  ~[main/:na]
 at 
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
  [na:1.7.0_51]
 at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
  [na:1.7.0_51]
 at java.lang.Thread.run(Thread.java:744) [na:1.7.0_51]
 Caused by: java.util.concurrent.ExecutionException: 
 java.lang.IndexOutOfBoundsException: Index: 0, Size: 0
 at java.util.concurrent.FutureTask.report(FutureTask.java:122) 
 ~[na:1.7.0_51]
 at java.util.concurrent.FutureTask.get(FutureTask.java:188) ~[na:1.7.0_51]
 at 
 org.apache.cassandra.utils.FBUtilities.waitOnFuture(FBUtilities.java:393) 
 ~[main/:na]
 

git commit: Avoid boxing in ColumnStats min/max trackers

2014-10-21 Thread tylerhobbs
Repository: cassandra
Updated Branches:
  refs/heads/trunk a0a30e03a - f2673082c


Avoid boxing in ColumnStats min/max trackers

Patch by Rajanarayanan Thottuvaikkatumana; reviewed by Tyler Hobbs for
CASSANDRA-8109


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/f2673082
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/f2673082
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/f2673082

Branch: refs/heads/trunk
Commit: f2673082ce379d2be72871d017b9f47be4dcfa87
Parents: a0a30e0
Author: Rajanarayanan Thottuvaikkatumana rnambood...@gmail.com
Authored: Tue Oct 21 15:06:03 2014 -0500
Committer: Tyler Hobbs ty...@datastax.com
Committed: Tue Oct 21 15:06:03 2014 -0500

--
 CHANGES.txt |  1 +
 .../org/apache/cassandra/db/ColumnFamily.java   |  6 +-
 .../db/compaction/LazilyCompactedRow.java   |  6 +-
 .../cassandra/io/sstable/ColumnStats.java   | 63 +++-
 .../cassandra/io/sstable/SSTableWriter.java |  6 +-
 5 files changed, 59 insertions(+), 23 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/f2673082/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index cdae72a..bc0269f 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 3.0
+ * Avoid boxing in ColumnStats min/max trackers (CASSANDRA-8109)
  * Integrate JMH for microbenchmarks (CASSANDRA-8151)
  * Keep sstable levels when bootstrapping (CASSANDRA-7460)
  * Add Sigar library and perform basic OS settings check on startup 
(CASSANDRA-7838)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/f2673082/src/java/org/apache/cassandra/db/ColumnFamily.java
--
diff --git a/src/java/org/apache/cassandra/db/ColumnFamily.java 
b/src/java/org/apache/cassandra/db/ColumnFamily.java
index 06139bb..25904ae 100644
--- a/src/java/org/apache/cassandra/db/ColumnFamily.java
+++ b/src/java/org/apache/cassandra/db/ColumnFamily.java
@@ -401,10 +401,10 @@ public abstract class ColumnFamily implements 
Iterable<Cell>, IRowCacheEntry
 // note that we default to MIN_VALUE/MAX_VALUE here to be able to 
override them later in this method
 // we are checking row/range tombstones and actual cells - there 
should always be data that overrides
 // these with actual values
-        ColumnStats.MinTracker<Long> minTimestampTracker = new ColumnStats.MinTracker<>(Long.MIN_VALUE);
-        ColumnStats.MaxTracker<Long> maxTimestampTracker = new ColumnStats.MaxTracker<>(Long.MAX_VALUE);
+        ColumnStats.MinLongTracker minTimestampTracker = new ColumnStats.MinLongTracker(Long.MIN_VALUE);
+        ColumnStats.MaxLongTracker maxTimestampTracker = new ColumnStats.MaxLongTracker(Long.MAX_VALUE);
         StreamingHistogram tombstones = new StreamingHistogram(SSTable.TOMBSTONE_HISTOGRAM_BIN_SIZE);
-        ColumnStats.MaxTracker<Integer> maxDeletionTimeTracker = new ColumnStats.MaxTracker<>(Integer.MAX_VALUE);
+        ColumnStats.MaxIntTracker maxDeletionTimeTracker = new ColumnStats.MaxIntTracker(Integer.MAX_VALUE);
         List<ByteBuffer> minColumnNamesSeen = Collections.emptyList();
         List<ByteBuffer> maxColumnNamesSeen = Collections.emptyList();
 boolean hasLegacyCounterShards = false;

http://git-wip-us.apache.org/repos/asf/cassandra/blob/f2673082/src/java/org/apache/cassandra/db/compaction/LazilyCompactedRow.java
--
diff --git 
a/src/java/org/apache/cassandra/db/compaction/LazilyCompactedRow.java 
b/src/java/org/apache/cassandra/db/compaction/LazilyCompactedRow.java
index fa59dba..cfdbd17 100644
--- a/src/java/org/apache/cassandra/db/compaction/LazilyCompactedRow.java
+++ b/src/java/org/apache/cassandra/db/compaction/LazilyCompactedRow.java
@@ -198,11 +198,11 @@ public class LazilyCompactedRow extends 
AbstractCompactedRow
 // if the row tombstone is 'live' we need to set timestamp to 
MAX_VALUE to be able to overwrite it later
 // markedForDeleteAt is MIN_VALUE for 'live' row tombstones (which we 
use to default maxTimestampSeen)
 
-        ColumnStats.MinTracker<Long> minTimestampTracker = new ColumnStats.MinTracker<>(Long.MIN_VALUE);
-        ColumnStats.MaxTracker<Long> maxTimestampTracker = new ColumnStats.MaxTracker<>(Long.MAX_VALUE);
+        ColumnStats.MinLongTracker minTimestampTracker = new ColumnStats.MinLongTracker(Long.MIN_VALUE);
+        ColumnStats.MaxLongTracker maxTimestampTracker = new ColumnStats.MaxLongTracker(Long.MAX_VALUE);
 // we need to set MIN_VALUE if we are 'live' since we want to 
overwrite it later
 // we are bound to have either a RangeTombstone or standard cells will 
set 

[jira] [Resolved] (CASSANDRA-8109) Avoid constant boxing in ColumnStats.{Min/Max}Tracker

2014-10-21 Thread Tyler Hobbs (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8109?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tyler Hobbs resolved CASSANDRA-8109.

Resolution: Fixed

+1

Committed without MinIntTracker (since it's currently unused) as f267308.  
Thanks!

 Avoid constant boxing in ColumnStats.{Min/Max}Tracker
 -

 Key: CASSANDRA-8109
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8109
 Project: Cassandra
  Issue Type: Improvement
Reporter: Sylvain Lebresne
Assignee: Rajanarayanan Thottuvaikkatumana
Priority: Minor
  Labels: lhf
 Fix For: 3.0

 Attachments: cassandra-trunk-8109.txt


 We use the {{ColumnStats.MinTracker}} and {{ColumnStats.MaxTracker}} to track 
 timestamps and deletion times in sstables. Those classes are generic, but we 
 only ever use them for longs and integers. The consequence is that every 
 call to their {{update}} method (called for every cell during an sstable write) 
 boxes its argument (since we don't store the cell timestamps and deletion times 
 boxed). That feels like a waste that is easy to fix: we could just make those 
 work on longs only, for instance, and convert back to int at the end when 
 that's what we need.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-6283) Windows 7 data files kept open / can't be deleted after compaction.

2014-10-21 Thread Daniel Nuriyev (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6283?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14179060#comment-14179060
 ] 

Daniel Nuriyev commented on CASSANDRA-6283:
---

Any hope of having this fixed in the current version? The inability to delete unused 
files on Windows is critical and leads to the accumulation of gigabytes on the 
disk. Waiting for version 3.0 is too long.

 Windows 7 data files kept open / can't be deleted after compaction.
 ---

 Key: CASSANDRA-6283
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6283
 Project: Cassandra
  Issue Type: Bug
  Components: Core
 Environment: Windows 7 (32) / Java 1.7.0.45
Reporter: Andreas Schnitzerling
Assignee: Joshua McKenzie
  Labels: Windows, compaction
 Fix For: 3.0

 Attachments: 6283_StreamWriter_patch.txt, leakdetect.patch, 
 neighbor-log.zip, root-log.zip, screenshot-1.jpg, system.log


 Files cannot be deleted; patch CASSANDRA-5383 (Win7 deleting problem) doesn't 
 help on Windows 7 on Cassandra 2.0.2. Even the 2.1 snapshot is not working. The cause 
 is: opened file handles seem to be lost and not closed properly. Windows 7 
 complains that another process is still using the file (but it's obviously 
 Cassandra). Only a restart of the server makes the files deletable. But after 
 heavy use (changes) of tables, there are about 24K files in the data folder 
 (instead of 35 after every restart) and Cassandra crashes. I experimented and 
 found out that a finalizer fixes the problem. So after GC the files will 
 be deleted (not optimal, but working fine). It has now run for 2 days continuously 
 without problems. Possible fix/test:
 I wrote the following finalizer at the end of class 
 org.apache.cassandra.io.util.RandomAccessReader:
 {code:title=RandomAccessReader.java|borderStyle=solid}
 @Override
 protected void finalize() throws Throwable {
   deallocate();
   super.finalize();
 }
 {code}
 Can somebody test / develop / patch it? Thx.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-7510) Notify clients that bootstrap is finished over binary protocol

2014-10-21 Thread Tyler Hobbs (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7510?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14179082#comment-14179082
 ] 

Tyler Hobbs commented on CASSANDRA-7510:


I tested this out with a ccm cluster, and there's still a pretty large gap (8 
seconds) between when the notification is sent and when the native protocol 
server is actually started.  It looks like most of that is due to waiting for 
gossip to settle:

{noformat}
 INFO [main] 2014-10-21 15:43:50,430 StorageService.java (line 1515) Node 
/127.0.0.4 state jump to normal
 INFO [main] 2014-10-21 15:43:50,444 CassandraDaemon.java (line 543) Waiting 
for gossip to settle before accepting client requests...
 INFO [CompactionExecutor:7] 2014-10-21 15:43:53,066 ColumnFamilyStore.java 
(line 794) Enqueuing flush of Memtable-compactions_in_progress@1446920230(0/0 
serialized/live bytes, 1 ops)
 INFO [FlushWriter:1] 2014-10-21 15:43:53,067 Memtable.java (line 355) Writing 
Memtable-compactions_in_progress@1446920230(0/0 serialized/live bytes, 1 ops)
 INFO [FlushWriter:1] 2014-10-21 15:43:53,084 Memtable.java (line 395) 
Completed flushing 
/home/thobbs/.ccm/devcluster/node4/data/system/compactions_in_progress/system-compactions_in_progress-jb-2-Data.db
 (42 bytes) for commitlog position ReplayPosition(segmentId=1413924189183, 
position=13357643)
 INFO [CompactionExecutor:7] 2014-10-21 15:43:53,091 CompactionTask.java (line 
287) Compacted 7 sstables to 
[/home/thobbs/.ccm/devcluster/node4/data/duration_test/ints/duration_test-ints-jb-8,].
  2,672,491 bytes to 2,098,615 (~78% of original) in 2,755ms = 0.726459MB/s.  
4,167 total partitions merged to 3,325.  Partition merge counts were {1:2497, 
2:814, 3:14, }
 INFO [OptionalTasks:1] 2014-10-21 15:43:56,468 MeteredFlusher.java (line 58) 
flushing high-traffic column family CFS(Keyspace='duration_test', 
ColumnFamily='ints') (estimated 20392963 bytes)
 INFO [OptionalTasks:1] 2014-10-21 15:43:56,469 ColumnFamilyStore.java (line 
794) Enqueuing flush of Memtable-ints@1788959623(3053505/20392963 
serialized/live bytes, 136710 ops)
 INFO [FlushWriter:1] 2014-10-21 15:43:56,470 Memtable.java (line 355) Writing 
Memtable-ints@1788959623(3053505/20392963 serialized/live bytes, 136710 ops)
 INFO [FlushWriter:1] 2014-10-21 15:43:56,963 Memtable.java (line 395) 
Completed flushing 
/home/thobbs/.ccm/devcluster/node4/data/duration_test/ints/duration_test-ints-jb-9-Data.db
 (281136 bytes) for commitlog position ReplayPosition(segmentId=1413924189183, 
position=14376181)
 INFO [main] 2014-10-21 15:43:58,445 CassandraDaemon.java (line 575) No gossip 
backlog; proceeding
 INFO [main] 2014-10-21 15:43:58,551 Server.java (line 156) Starting listening 
for CQL clients on /127.0.0.4:9042...
{noformat}

I expect that in larger clusters the gap will be even larger.

I'm not sure that we can do anything about this without adding something new to 
gossip, though.
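
For reference, the settle wait is essentially a polling loop over gossip's pending work; here is a hedged sketch of the pattern (the {{pendingGossipTasks}} hook and the poll counts are made up for illustration, this is not CassandraDaemon's actual code):

{code}
// Hedged sketch of a "wait for gossip to settle" loop: poll a pending-task count
// and only proceed once it has been quiet for several consecutive polls.
public final class GossipSettleWait
{
    interface PendingTaskSource { int pendingGossipTasks(); }   // hypothetical hook

    public static void await(PendingTaskSource gossip) throws InterruptedException
    {
        final int requiredQuietPolls = 3;   // illustrative threshold
        final long pollMillis = 1000;
        int quietPolls = 0;
        while (quietPolls < requiredQuietPolls)
        {
            Thread.sleep(pollMillis);
            if (gossip.pendingGossipTasks() == 0)
                quietPolls++;               // one more quiet round
            else
                quietPolls = 0;             // backlog seen, start counting again
        }
        // equivalent of "No gossip backlog; proceeding" -- the native server starts after this
    }
}
{code}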

 Notify clients that bootstrap is finished over binary protocol
 --

 Key: CASSANDRA-7510
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7510
 Project: Cassandra
  Issue Type: Bug
Reporter: Joost Reuzel
Assignee: Brandon Williams
Priority: Minor
 Fix For: 2.0.12

 Attachments: 7510.txt


 Currently, Cassandra will notify clients when a new node is added to a 
 cluster. However, that node is typically not usable yet. It first needs to 
 gossip its key range and finish loading all its assigned data before it 
 allows clients to connect. Depending on the amount of data, this may take 
 quite a while. The clients in the meantime have no clue about the bootstrap 
 status of that node. The only thing they can do is periodically check whether it 
 will accept a connection. 
 My proposal would be to send an additional UP event when the bootstrap is 
 done; this allows clients to mark the node initially as down/unavailable and 
 simply wait for the UP event to arrive.
 Kind regards,
 Joost
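
For illustration, a client-side sketch of how such an extra UP notification could be consumed (event and method names here are hypothetical, not any particular driver's API): the node is tracked as bootstrapping when first announced and only routed to after the UP event arrives.

{code}
// Hedged sketch: track newly announced nodes as BOOTSTRAPPING and only mark them
// usable once the proposed UP event arrives. Names are illustrative only.
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public final class NodeTracker
{
    enum State { BOOTSTRAPPING, UP, DOWN }

    private final Map<String, State> nodes = new ConcurrentHashMap<>();

    public void onNewNode(String address) { nodes.put(address, State.BOOTSTRAPPING); } // announced, not usable yet
    public void onUp(String address)      { nodes.put(address, State.UP); }            // bootstrap finished
    public void onDown(String address)    { nodes.put(address, State.DOWN); }

    public boolean usable(String address) { return nodes.get(address) == State.UP; }
}
{code}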



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (CASSANDRA-8162) Log client address in query trace

2014-10-21 Thread Tyler Hobbs (JIRA)
Tyler Hobbs created CASSANDRA-8162:
--

 Summary: Log client address in query trace
 Key: CASSANDRA-8162
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8162
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: Tyler Hobbs
Assignee: Tyler Hobbs
Priority: Minor
 Fix For: 2.1.2


With probabilistic tracing, it can be helpful to log the source IP for queries.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (CASSANDRA-8155) confusing error when erroneously querying map secondary index

2014-10-21 Thread Tyler Hobbs (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8155?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tyler Hobbs reassigned CASSANDRA-8155:
--

Assignee: Tyler Hobbs

 confusing error when erroneously querying map secondary index
 -

 Key: CASSANDRA-8155
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8155
 Project: Cassandra
  Issue Type: Bug
Reporter: Russ Hatch
Assignee: Tyler Hobbs
Priority: Minor
  Labels: cqlsh, lhf

 With a secondary index on values, attempting to query by key returns the error 
 message list index out of range.
 This is a somewhat similar issue to CASSANDRA-8147 (but that scenario results in 
 no error when there probably should be one).
 To repro:
 {noformat}
 cqlsh:test> CREATE TABLE test.foo (
 ... id1 text,
 ... id2 text,
 ... categories map<text, text>,
 ... PRIMARY KEY (id1, id2));
 cqlsh:test> CREATE INDEX foo_categories_idx ON test.foo (categories);
 cqlsh:test> insert into foo (id1, id2, categories) values ('foo', 'bar', 
 {'firstkey':'one', 'secondkey':'two'});
 {noformat}
 Now try to query the existing values index by key:
 {noformat}
 cqlsh:test> select * from foo where categories contains key 'firstkey';
 list index out of range
 {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8115) Windows install scripts fail to set logdir and datadir

2014-10-21 Thread Philip Thompson (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8115?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14179153#comment-14179153
 ] 

Philip Thompson commented on CASSANDRA-8115:


+1. Incorrect arguments are being rejected, and service installation/removal is 
working perfectly.

 Windows install scripts fail to set logdir and datadir
 --

 Key: CASSANDRA-8115
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8115
 Project: Cassandra
  Issue Type: Bug
Reporter: Joshua McKenzie
Assignee: Joshua McKenzie
  Labels: Windows
 Fix For: 2.1.2

 Attachments: 8115_v1.txt, 8115_v2.txt


 After CASSANDRA-7136, the install scripts to run Cassandra as a service fail 
 on both the legacy and the powershell paths.  Looks like they need to have
 {code}
 ++JvmOptions=-Dcassandra.logdir=%CASSANDRA_HOME%\logs ^
 ++JvmOptions=-Dcassandra.storagedir=%CASSANDRA_HOME%\data
 {code}
 added to function correctly.
 We should take this opportunity to make sure the source of the java options 
 is uniform for both running and installation to prevent mismatches like this 
 in the future.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8155) confusing error when erroneously querying map secondary index

2014-10-21 Thread Tyler Hobbs (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8155?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14179164#comment-14179164
 ] 

Tyler Hobbs commented on CASSANDRA-8155:


Yes, there's a problem with both cqlsh and filtering.

 confusing error when erroneously querying map secondary index
 -

 Key: CASSANDRA-8155
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8155
 Project: Cassandra
  Issue Type: Bug
Reporter: Russ Hatch
Assignee: Tyler Hobbs
Priority: Minor
  Labels: cqlsh, lhf

 With a secondary index on values, attempting to query by key returns the error 
 message list index out of range.
 This is a somewhat similar issue to CASSANDRA-8147 (but that scenario results in 
 no error when there probably should be one).
 To repro:
 {noformat}
 cqlsh:test> CREATE TABLE test.foo (
 ... id1 text,
 ... id2 text,
 ... categories map<text, text>,
 ... PRIMARY KEY (id1, id2));
 cqlsh:test> CREATE INDEX foo_categories_idx ON test.foo (categories);
 cqlsh:test> insert into foo (id1, id2, categories) values ('foo', 'bar', 
 {'firstkey':'one', 'secondkey':'two'});
 {noformat}
 Now try to query the existing values index by key:
 {noformat}
 cqlsh:test> select * from foo where categories contains key 'firstkey';
 list index out of range
 {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-8155) confusing error when erroneously querying map secondary index

2014-10-21 Thread Tyler Hobbs (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8155?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tyler Hobbs updated CASSANDRA-8155:
---
Attachment: 8155.txt

8155.txt fixes the cqlsh grammar and pulls out some of the improved secondary 
index validation from the patch on CASSANDRA-7859.

 confusing error when erroneously querying map secondary index
 -

 Key: CASSANDRA-8155
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8155
 Project: Cassandra
  Issue Type: Bug
Reporter: Russ Hatch
Assignee: Tyler Hobbs
Priority: Minor
  Labels: cqlsh, lhf
 Attachments: 8155.txt


 With a secondary index on values, attempting to query by key returns the error 
 message list index out of range.
 This is a somewhat similar issue to CASSANDRA-8147 (but that scenario results in 
 no error when there probably should be one).
 To repro:
 {noformat}
 cqlsh:test> CREATE TABLE test.foo (
 ... id1 text,
 ... id2 text,
 ... categories map<text, text>,
 ... PRIMARY KEY (id1, id2));
 cqlsh:test> CREATE INDEX foo_categories_idx ON test.foo (categories);
 cqlsh:test> insert into foo (id1, id2, categories) values ('foo', 'bar', 
 {'firstkey':'one', 'secondkey':'two'});
 {noformat}
 Now try to query the existing values index by key:
 {noformat}
 cqlsh:test> select * from foo where categories contains key 'firstkey';
 list index out of range
 {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (CASSANDRA-8163) Complete restriction of a user to given keyspace

2014-10-21 Thread Vishy Kasar (JIRA)
Vishy Kasar created CASSANDRA-8163:
--

 Summary: Complete restriction of a user to given keyspace
 Key: CASSANDRA-8163
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8163
 Project: Cassandra
  Issue Type: Improvement
Reporter: Vishy Kasar


We have a cluster like this:

project1_keyspace
    table101
    table102

project2_keyspace
    table201
    table202

We have set up the following users and grants:

project1_user has all access to project1_keyspace
project2_user has all access to project2_keyspace

However, project1_user can still do a 'describe schema' and get the schema for 
project2_keyspace as well. We do not want project1_user to have any knowledge 
of project2 in any way (cqlsh/java-driver etc.).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-6285) 2.0 HSHA server introduces corrupt data

2014-10-21 Thread Alexander Sterligov (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6285?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14179321#comment-14179321
 ] 

Alexander Sterligov commented on CASSANDRA-6285:


Have you proven that it's really related to OpsCenter?

We've switched to sync, but we still get corrupted sstables. Now we get the 
exception not during compaction, but at startup:
{quote}
ERROR [SSTableBatchOpen:10] 2014-10-22 02:47:48,762 CassandraDaemon.java:166 - 
Exception in thread Thread[SSTableBatchOpen:10,5,main]
java.lang.IllegalStateException: SSTable first key 
DecoratedKey(4206305143314087741, 
80010001000c62617463685f6d757461746510250d00010b0d0001004e33372e3134302e3134312e3231322d6973732d736c6f745f636f6e66696775726174696f6e5f746172)
  last key DecoratedKey(-4632241097675266745, 
80010001000c62617463685f6d757461746510260d00010b0d0001005133372e3134302e3134312e3231322d6973732d736c6f745f636f6e66696775726174696f6e5f746172676574)
at 
org.apache.cassandra.io.sstable.SSTableReader.validate(SSTableReader.java:1083) 
~[apache-cassandra-2.1.0.jar:2.1.0]
at 
org.apache.cassandra.io.sstable.SSTableReader.open(SSTableReader.java:398) 
~[apache-cassandra-2.1.0.jar:2.1.0]
at 
org.apache.cassandra.io.sstable.SSTableReader.open(SSTableReader.java:294) 
~[apache-cassandra-2.1.0.jar:2.1.0]
at 
org.apache.cassandra.io.sstable.SSTableReader$4.run(SSTableReader.java:430) 
~[apache-cassandra-2.1.0.jar:2.1.0]
at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471) 
~[na:1.7.0_51]
at java.util.concurrent.FutureTask.run(FutureTask.java:262) 
~[na:1.7.0_51]
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) 
~[na:1.7.0_51]
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) 
[na:1.7.0_51]
at java.lang.Thread.run(Thread.java:744) [na:1.7.0_51]
{quote}
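
For context, the validation that throws here is, in spirit, a sanity check that an sstable's first decorated key does not sort after its last key; a hedged sketch of that idea (not the actual SSTableReader code):

{code}
// Hedged sketch of the ordering sanity check behind the IllegalStateException above.
public final class KeyRangeCheck
{
    /** Throws if an sstable's first key sorts after its last key (a sign of corruption). */
    public static <T extends Comparable<T>> void validate(T first, T last)
    {
        if (first.compareTo(last) > 0)
            throw new IllegalStateException("SSTable first key " + first + " > last key " + last);
    }

    public static void main(String[] args)
    {
        validate("aaa", "zzz");   // ok
        validate("zzz", "aaa");   // throws, mirroring the startup error in the stack trace
    }
}
{code}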

 2.0 HSHA server introduces corrupt data
 ---

 Key: CASSANDRA-6285
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6285
 Project: Cassandra
  Issue Type: Bug
  Components: Core
 Environment: 4 nodes, shortly updated from 1.2.11 to 2.0.2
Reporter: David Sauer
Assignee: Pavel Yaskevich
Priority: Critical
 Fix For: 2.0.8

 Attachments: 6285_testnotes1.txt, 
 CASSANDRA-6285-disruptor-heap.patch, cassandra-attack-src.zip, 
 compaction_test.py, disruptor-high-cpu.patch, 
 disruptor-memory-corruption.patch, enable_reallocate_buffers.txt


 After altering everything to LCS, the table OpsCenter.rollups60 and one other 
 non-OpsCenter table got stuck with everything hanging around in L0.
 The compaction started and ran until the logs showed this:
 ERROR [CompactionExecutor:111] 2013-11-01 19:14:53,865 CassandraDaemon.java 
 (line 187) Exception in thread Thread[CompactionExecutor:111,1,RMI Runtime]
 java.lang.RuntimeException: Last written key 
 DecoratedKey(1326283851463420237, 
 37382e34362e3132382e3139382d6a7576616c69735f6e6f72785f696e6465785f323031335f31305f30382d63616368655f646f63756d656e74736c6f6f6b75702d676574426c6f6f6d46696c746572537061636555736564)
  = current key DecoratedKey(954210699457429663, 
 37382e34362e3132382e3139382d6a7576616c69735f6e6f72785f696e6465785f323031335f31305f30382d63616368655f646f63756d656e74736c6f6f6b75702d676574546f74616c4469736b5370616365557365640b0f)
  writing into 
 /var/lib/cassandra/data/OpsCenter/rollups60/OpsCenter-rollups60-tmp-jb-58656-Data.db
   at 
 org.apache.cassandra.io.sstable.SSTableWriter.beforeAppend(SSTableWriter.java:141)
   at 
 org.apache.cassandra.io.sstable.SSTableWriter.append(SSTableWriter.java:164)
   at 
 org.apache.cassandra.db.compaction.CompactionTask.runWith(CompactionTask.java:160)
   at 
 org.apache.cassandra.io.util.DiskAwareRunnable.runMayThrow(DiskAwareRunnable.java:48)
   at 
 org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28)
   at 
 org.apache.cassandra.db.compaction.CompactionTask.executeInternal(CompactionTask.java:60)
   at 
 org.apache.cassandra.db.compaction.AbstractCompactionTask.execute(AbstractCompactionTask.java:59)
   at 
 org.apache.cassandra.db.compaction.CompactionManager$6.runMayThrow(CompactionManager.java:296)
   at 
 org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28)
   at 
 java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
   at java.util.concurrent.FutureTask.run(FutureTask.java:262)
   at 
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
   at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
   at java.lang.Thread.run(Thread.java:724)
 Moving back to STC worked to keep the compactions running.
 

[jira] [Comment Edited] (CASSANDRA-6285) 2.0 HSHA server introduces corrupt data

2014-10-21 Thread Alexander Sterligov (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6285?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14179321#comment-14179321
 ] 

Alexander Sterligov edited comment on CASSANDRA-6285 at 10/21/14 11:41 PM:
---

Have you proven that it's really related to OpsCenter?

We've switched to sync, but we still get corrupted sstables. Now we get the 
exception not during compaction, but at startup:
{quote}
ERROR [SSTableBatchOpen:10] 2014-10-22 02:47:48,762 CassandraDaemon.java:166 - 
Exception in thread Thread[SSTableBatchOpen:10,5,main]
java.lang.IllegalStateException: SSTable first key 
DecoratedKey(4206305143314087741, 
80010001000c62617463685f6d757461746510250d00010b0d0001004e33372e3134302e3134312e3231322d6973732d736c6f745f636f6e66696775726174696f6e5f746172)
  last key DecoratedKey(-4632241097675266745, 
80010001000c62617463685f6d757461746510260d00010b0d0001005133372e3134302e3134312e3231322d6973732d736c6f745f636f6e66696775726174696f6e5f746172676574)
at 
org.apache.cassandra.io.sstable.SSTableReader.validate(SSTableReader.java:1083) 
~[apache-cassandra-2.1.0.jar:2.1.0]
at 
org.apache.cassandra.io.sstable.SSTableReader.open(SSTableReader.java:398) 
~[apache-cassandra-2.1.0.jar:2.1.0]
at 
org.apache.cassandra.io.sstable.SSTableReader.open(SSTableReader.java:294) 
~[apache-cassandra-2.1.0.jar:2.1.0]
at 
org.apache.cassandra.io.sstable.SSTableReader$4.run(SSTableReader.java:430) 
~[apache-cassandra-2.1.0.jar:2.1.0]
at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471) 
~[na:1.7.0_51]
at java.util.concurrent.FutureTask.run(FutureTask.java:262) 
~[na:1.7.0_51]
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) 
~[na:1.7.0_51]
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) 
[na:1.7.0_51]
at java.lang.Thread.run(Thread.java:744) [na:1.7.0_51]
{quote}

And nodetool scrub doesn't help. It finds no errors, and after a restart we get the 
same exceptions.


was (Author: sterligovak):
Have you proven that it's really related to OpsCenter?

We've switched to sync, but we still get corrupted sstables. Now we get the 
exception not during compaction, but at startup:
{quote}
ERROR [SSTableBatchOpen:10] 2014-10-22 02:47:48,762 CassandraDaemon.java:166 - 
Exception in thread Thread[SSTableBatchOpen:10,5,main]
java.lang.IllegalStateException: SSTable first key 
DecoratedKey(4206305143314087741, 
80010001000c62617463685f6d757461746510250d00010b0d0001004e33372e3134302e3134312e3231322d6973732d736c6f745f636f6e66696775726174696f6e5f746172)
  last key DecoratedKey(-4632241097675266745, 
80010001000c62617463685f6d757461746510260d00010b0d0001005133372e3134302e3134312e3231322d6973732d736c6f745f636f6e66696775726174696f6e5f746172676574)
at 
org.apache.cassandra.io.sstable.SSTableReader.validate(SSTableReader.java:1083) 
~[apache-cassandra-2.1.0.jar:2.1.0]
at 
org.apache.cassandra.io.sstable.SSTableReader.open(SSTableReader.java:398) 
~[apache-cassandra-2.1.0.jar:2.1.0]
at 
org.apache.cassandra.io.sstable.SSTableReader.open(SSTableReader.java:294) 
~[apache-cassandra-2.1.0.jar:2.1.0]
at 
org.apache.cassandra.io.sstable.SSTableReader$4.run(SSTableReader.java:430) 
~[apache-cassandra-2.1.0.jar:2.1.0]
at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471) 
~[na:1.7.0_51]
at java.util.concurrent.FutureTask.run(FutureTask.java:262) 
~[na:1.7.0_51]
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) 
~[na:1.7.0_51]
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) 
[na:1.7.0_51]
at java.lang.Thread.run(Thread.java:744) [na:1.7.0_51]
{quote}

 2.0 HSHA server introduces corrupt data
 ---

 Key: CASSANDRA-6285
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6285
 Project: Cassandra
  Issue Type: Bug
  Components: Core
 Environment: 4 nodes, shortly updated from 1.2.11 to 2.0.2
Reporter: David Sauer
Assignee: Pavel Yaskevich
Priority: Critical
 Fix For: 2.0.8

 Attachments: 6285_testnotes1.txt, 
 CASSANDRA-6285-disruptor-heap.patch, cassandra-attack-src.zip, 
 compaction_test.py, disruptor-high-cpu.patch, 
 disruptor-memory-corruption.patch, enable_reallocate_buffers.txt


 After altering everything to LCS, the table OpsCenter.rollups60 and one other 
 non-OpsCenter table got stuck with everything hanging around in L0.
 The compaction started and ran until the logs showed this:
 ERROR [CompactionExecutor:111] 2013-11-01 19:14:53,865 CassandraDaemon.java 
 (line 187) Exception in thread 

[jira] [Commented] (CASSANDRA-6283) Windows 7 data files kept open / can't be deleted after compaction.

2014-10-21 Thread Joshua McKenzie (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6283?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14179339#comment-14179339
 ] 

Joshua McKenzie commented on CASSANDRA-6283:


[~daniel.nuriyev] What unused files are you unable to delete on 2.1, and how are 
they created?  Other than manually created snapshots, we should have the 
snapshot-based operations (repair specifically) bypassed, which is where these 
problems originate.

The changes in CASSANDRA-4050 that truly fix this issue are very low-level and 
too high-risk to put into the 2.X branch, unfortunately.

 Windows 7 data files kept open / can't be deleted after compaction.
 ---

 Key: CASSANDRA-6283
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6283
 Project: Cassandra
  Issue Type: Bug
  Components: Core
 Environment: Windows 7 (32) / Java 1.7.0.45
Reporter: Andreas Schnitzerling
Assignee: Joshua McKenzie
  Labels: Windows, compaction
 Fix For: 3.0

 Attachments: 6283_StreamWriter_patch.txt, leakdetect.patch, 
 neighbor-log.zip, root-log.zip, screenshot-1.jpg, system.log


 Files cannot be deleted; the patch from CASSANDRA-5383 (Win7 deleting problem) doesn't 
 help on Windows 7 on Cassandra 2.0.2. Even the 2.1 snapshot is not working. The cause 
 is that opened file handles seem to get lost and are not closed properly. Windows 7 
 complains that another process is still using the file (but it is obviously 
 Cassandra). Only a restart of the server lets the files be deleted. But after 
 heavy use (changes) of tables, there are about 24K files in the data folder 
 (instead of 35 after every restart) and Cassandra crashes. I experimented and 
 found out that a finalizer fixes the problem, so after GC the files get 
 deleted (not optimal, but working fine). It has now run for 2 days continuously 
 without problems. Possible fix/test:
 I wrote the following finalizer at the end of class 
 org.apache.cassandra.io.util.RandomAccessReader:
 {code:title=RandomAccessReader.java|borderStyle=solid}
 @Override
 protected void finalize() throws Throwable {
   deallocate();
   super.finalize();
 }
 {code}
 Can somebody test / develop / patch it? Thx.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (CASSANDRA-6890) Standardize on a single read path

2014-10-21 Thread graham sanderson (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6890?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14179429#comment-14179429
 ] 

graham sanderson edited comment on CASSANDRA-6890 at 10/22/14 12:55 AM:


I haven't followed this entire thread - I just happened across it - so this may 
be moot at this point, but I thought I'd throw it out there.

bq. mmap is better for the number of reasons but mostly because we have to do 
less syscalls and less copies (especially cross mode boundary). So I would, as 
well, vote to leave it and remove buffered I/O instead.

We just observed this in the wild on similar high-end machines to those on which 
we run C* (40 cores, lots of high-perf disks, RAM, etc.). With a large number of 
reader threads (in this case doing random pread64 I/O against a single fd) we 
were seeing a *significant* cliff where performance dropped off alarmingly 
quickly (very high user and kernel CPU, with little to no actual disk I/O). In 
this case the data files fit in the page cache, which obviously makes the 
problem more visible. Anyway.


was (Author: graham sanderson):
I haven't followed this entire thread - I just happened across it - so this may 
be moot at this point, but I thought I'd throw it out there.

bq. mmap is better for the number of reasons but mostly because we have to do 
less syscalls and less copies (especially cross mode boundary). So I would, as 
well, vote to leave it and remove buffered I/O instead.

We just observed this in the wild on similar high-end machines to those on which 
we run C* (40 cores, lots of high-perf disks, RAM, etc.). With a large number of 
reader threads (in this case doing random pread64 I/O against a single fd) we 
were seeing a *significant* cliff where performance dropped off alarmingly 
quickly (very high user and kernel CPU, with little to no actual disk I/O). In 
this case the data files fit in the page cache, which obviously makes the 
problem more visible. Anyway.

 Standardize on a single read path
 -

 Key: CASSANDRA-6890
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6890
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: Joshua McKenzie
Assignee: Joshua McKenzie
  Labels: performance
 Fix For: 3.0

 Attachments: 6890_v1.txt, mmap_gc.jpg, mmap_jstat.txt, mmap_perf.txt, 
 nommap_gc.jpg, nommap_jstat.txt


 Since we actively unmap unreferenced SSTR's and also copy data out of those 
 readers on the read path, the current memory mapped i/o is a lot of 
 complexity for very little payoff.  Clean out the mmapp'ed i/o on the read 
 path.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-6890) Standardize on a single read path

2014-10-21 Thread graham sanderson (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6890?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14179429#comment-14179429
 ] 

graham sanderson commented on CASSANDRA-6890:
-

I haven't followed this entire thread - I just happened across it - so this may 
be moot at this point, but I thought I'd throw it out there.

bq. mmap is better for the number of reasons but mostly because we have to do 
less syscalls and less copies (especially cross mode boundary). So I would, as 
well, vote to leave it and remove buffered I/O instead.

We just observed this in the wild on similar high-end machines to those on which 
we run C* (40 cores, lots of high-perf disks, RAM, etc.). With a large number of 
reader threads (in this case doing random pread64 I/O against a single fd) we 
were seeing a *significant* cliff where performance dropped off alarmingly 
quickly (very high user and kernel CPU, with little to no actual disk I/O). In 
this case the data files fit in the page cache, which obviously makes the 
problem more visible. Anyway.

 Standardize on a single read path
 -

 Key: CASSANDRA-6890
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6890
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: Joshua McKenzie
Assignee: Joshua McKenzie
  Labels: performance
 Fix For: 3.0

 Attachments: 6890_v1.txt, mmap_gc.jpg, mmap_jstat.txt, mmap_perf.txt, 
 nommap_gc.jpg, nommap_jstat.txt


 Since we actively unmap unreferenced SSTR's and also copy data out of those 
 readers on the read path, the current memory mapped i/o is a lot of 
 complexity for very little payoff.  Clean out the mmapp'ed i/o on the read 
 path.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-6890) Standardize on a single read path

2014-10-21 Thread graham sanderson (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6890?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14179432#comment-14179432
 ] 

graham sanderson commented on CASSANDRA-6890:
-

I guess I should point out that the pread64s were coming from NIO(1) 
{{FileChannel.read(ByteBuffer dst, long position)}}
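
For anyone reproducing the access pattern, it is just positional reads (pread-style) against a shared channel; a minimal single-threaded sketch of that call (path and sizes are placeholders, the real workload used many reader threads):

{code}
// Minimal sketch of the call in question: FileChannel.read(ByteBuffer, long position),
// which shows up as pread64 at the syscall level. Path and sizes are placeholders.
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Paths;
import java.nio.file.StandardOpenOption;
import java.util.concurrent.ThreadLocalRandom;

public final class PositionalReadLoop
{
    public static void main(String[] args) throws IOException
    {
        try (FileChannel channel = FileChannel.open(Paths.get("/tmp/data.db"), StandardOpenOption.READ))
        {
            ByteBuffer buffer = ByteBuffer.allocate(4096);
            long bound = Math.max(1, channel.size() - buffer.capacity());
            for (int i = 0; i < 1000000; i++)
            {
                long position = ThreadLocalRandom.current().nextLong(bound);
                buffer.clear();
                channel.read(buffer, position);   // positional read: no shared file-position state
            }
        }
    }
}
{code}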

 Standardize on a single read path
 -

 Key: CASSANDRA-6890
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6890
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: Joshua McKenzie
Assignee: Joshua McKenzie
  Labels: performance
 Fix For: 3.0

 Attachments: 6890_v1.txt, mmap_gc.jpg, mmap_jstat.txt, mmap_perf.txt, 
 nommap_gc.jpg, nommap_jstat.txt


 Since we actively unmap unreferenced SSTR's and also copy data out of those 
 readers on the read path, the current memory mapped i/o is a lot of 
 complexity for very little payoff.  Clean out the mmapp'ed i/o on the read 
 path.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-6285) 2.0 HSHA server introduces corrupt data

2014-10-21 Thread Nikolai Grigoriev (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6285?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14179431#comment-14179431
 ] 

Nikolai Grigoriev commented on CASSANDRA-6285:
--

I think this is the kind of error that you cannot fix by scrubbing: a corrupted sstable. 
I was fixing those by deleting the sstables and doing repairs. Unfortunately, 
if that happens on many nodes there is a risk of data loss.

As for OpsCenter - do not get me wrong ;) I did not want to say that 
OpsCenter was directly responsible for these troubles. But I do believe that 
OpsCenter does something particular that reveals the bug in the hsha server. At 
least this was my impression. After disabling OpsCenter and fixing the 
outstanding problems, I do not recall seeing those errors anymore. And I was 
also using Thrift, and I was writing and reading 100x more data than OpsCenter.



 2.0 HSHA server introduces corrupt data
 ---

 Key: CASSANDRA-6285
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6285
 Project: Cassandra
  Issue Type: Bug
  Components: Core
 Environment: 4 nodes, shortly updated from 1.2.11 to 2.0.2
Reporter: David Sauer
Assignee: Pavel Yaskevich
Priority: Critical
 Fix For: 2.0.8

 Attachments: 6285_testnotes1.txt, 
 CASSANDRA-6285-disruptor-heap.patch, cassandra-attack-src.zip, 
 compaction_test.py, disruptor-high-cpu.patch, 
 disruptor-memory-corruption.patch, enable_reallocate_buffers.txt


 After altering everything to LCS, the table OpsCenter.rollups60 and one other 
 non-OpsCenter table got stuck with everything hanging around in L0.
 The compaction started and ran until the logs showed this:
 ERROR [CompactionExecutor:111] 2013-11-01 19:14:53,865 CassandraDaemon.java 
 (line 187) Exception in thread Thread[CompactionExecutor:111,1,RMI Runtime]
 java.lang.RuntimeException: Last written key 
 DecoratedKey(1326283851463420237, 
 37382e34362e3132382e3139382d6a7576616c69735f6e6f72785f696e6465785f323031335f31305f30382d63616368655f646f63756d656e74736c6f6f6b75702d676574426c6f6f6d46696c746572537061636555736564)
  = current key DecoratedKey(954210699457429663, 
 37382e34362e3132382e3139382d6a7576616c69735f6e6f72785f696e6465785f323031335f31305f30382d63616368655f646f63756d656e74736c6f6f6b75702d676574546f74616c4469736b5370616365557365640b0f)
  writing into 
 /var/lib/cassandra/data/OpsCenter/rollups60/OpsCenter-rollups60-tmp-jb-58656-Data.db
   at 
 org.apache.cassandra.io.sstable.SSTableWriter.beforeAppend(SSTableWriter.java:141)
   at 
 org.apache.cassandra.io.sstable.SSTableWriter.append(SSTableWriter.java:164)
   at 
 org.apache.cassandra.db.compaction.CompactionTask.runWith(CompactionTask.java:160)
   at 
 org.apache.cassandra.io.util.DiskAwareRunnable.runMayThrow(DiskAwareRunnable.java:48)
   at 
 org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28)
   at 
 org.apache.cassandra.db.compaction.CompactionTask.executeInternal(CompactionTask.java:60)
   at 
 org.apache.cassandra.db.compaction.AbstractCompactionTask.execute(AbstractCompactionTask.java:59)
   at 
 org.apache.cassandra.db.compaction.CompactionManager$6.runMayThrow(CompactionManager.java:296)
   at 
 org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28)
   at 
 java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
   at java.util.concurrent.FutureTask.run(FutureTask.java:262)
   at 
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
   at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
   at java.lang.Thread.run(Thread.java:724)
 Moving back to STC worked to keep the compactions running.
 Especially my own table I would like to move to LCS.
 After a major compaction with STC, the move to LCS fails with the same 
 exception.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-7668) Make gc_grace_seconds 7 days for system tables

2014-10-21 Thread sankalp kohli (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7668?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14179493#comment-14179493
 ] 

sankalp kohli commented on CASSANDRA-7668:
--

If people are migrating to 1.2.19 and have done some schema changes in the last 
7 days, it will cause problems during migration.
This is because tombstones written within the last 7 days will be picked up by 
1.2.19 nodes while 1.2.17 nodes will not add them to the schema version. 
This will cause the schema versions to differ until all nodes have been upgraded. 

 Make gc_grace_seconds 7 days for system tables
 --

 Key: CASSANDRA-7668
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7668
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Reporter: Tyler Hobbs
Assignee: Tyler Hobbs
Priority: Minor
 Fix For: 1.2.19, 2.0.10, 2.1 rc5

 Attachments: 7668-1.2.txt, 7668-2.0.txt


 The system tables have had a {{gc_grace_seconds}} of 8640 (about 2.4 hours) since 
 CASSANDRA-4018.  This was probably a typo and was intended to be 10 days 
 (864000).  In CASSANDRA-6717 we will set gc_grace to seven days, so that would be a 
 reasonable value to use here.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)