[jira] [Commented] (CASSANDRA-6246) EPaxos

2015-01-07 Thread sankalp kohli (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6246?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14267389#comment-14267389
 ] 

sankalp kohli commented on CASSANDRA-6246:
--

Sure. Makes sense.

 EPaxos
 --

 Key: CASSANDRA-6246
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6246
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: Jonathan Ellis
Assignee: Blake Eggleston
Priority: Minor

 One reason we haven't optimized our Paxos implementation with Multi-Paxos is 
 that Multi-Paxos requires leader election and hence a period of 
 unavailability when the leader dies.
 EPaxos is a Paxos variant that (1) requires fewer messages than Multi-Paxos, 
 (2) is particularly useful across multiple datacenters, and (3) allows any 
 node to act as coordinator: 
 http://sigops.org/sosp/sosp13/papers/p358-moraru.pdf
 However, there is substantial additional complexity involved if we choose to 
 implement it.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-7970) JSON support for CQL

2015-01-07 Thread Sylvain Lebresne (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7970?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14267409#comment-14267409
 ] 

Sylvain Lebresne commented on CASSANDRA-7970:
-

I think it might be worth clarifying what we're talking about here, because it 
appears I don't have the same understanding as some of you guys.

I'm not at all in favor of making CQL weakly typed. We removed the "accept a 
string for everything" behavior in CQL early on, and I'm still convinced it was 
a good thing.

My understanding of Tyler's suggestion is that it's only related to JSON 
support. It's to say that if we have the table:
{noformat}
CREATE TABLE foo (
   c1 int PRIMARY KEY,
   c2 float,
   c3 map<int, timestamp>
)
{noformat}
then we'll agree to map to it a JSON literal like
{noformat}
{
  'c1' : '3',
  'c2' : '4.2',
  'c3' : { '4' : '2011-02-03 04:05' }
}
{noformat}
and this even though the JSON uses strings everywhere. And I think it's ok to 
do this because JSON is not really a typed thing in the first place: it has 
different kinds of literals, but it's not typed per se, and it doesn't support 
all the literals that CQL supports anyway (typically uuid literals, which is 
why we will have to accept strings for uuids at the very least). But I think 
it's ok to do this for the JSON mapping (which, again, I just see as treating 
JSON as untyped/weakly typed) without going as far as making CQL itself weakly 
typed. If we disagree on that part, and you guys think it would be horribly 
inconsistent to accept that kind of thing in the JSON translation without 
weakening the CQL typing, then count me as a *strong* -1 on the whole thing.
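For illustration, the coercion being proposed can be sketched outside Cassandra as follows. This is a minimal sketch against a hypothetical two-column schema; none of these names are Cassandra APIs:

```python
# Sketch: accept JSON string literals for typed columns, as in the mapping
# discussed above. Purely illustrative; not Cassandra code.
SCHEMA = {"c1": int, "c2": float}  # hypothetical table foo: c1 int, c2 float

def coerce(column, value, schema=SCHEMA):
    """Accept either a native JSON literal or its string form."""
    target = schema[column]
    if isinstance(value, target):
        return value               # already the right type: pass through
    return target(value)           # e.g. int("3") -> 3, float("4.2") -> 4.2

row = {c: coerce(c, v) for c, v in {"c1": "3", "c2": "4.2"}.items()}
print(row)  # {'c1': 3, 'c2': 4.2}
```

The point is that the string form is only tolerated at the JSON boundary; the resulting row is fully typed before it reaches the storage layer.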

 JSON support for CQL
 

 Key: CASSANDRA-7970
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7970
 Project: Cassandra
  Issue Type: New Feature
  Components: API
Reporter: Jonathan Ellis
Assignee: Tyler Hobbs
 Fix For: 3.0


 JSON is popular enough that not supporting it is becoming a competitive 
 weakness.  We can add JSON support in a way that is compatible with our 
 performance goals by *mapping* JSON to an existing schema: one JSON document 
 maps to one CQL row.
 Thus, it is NOT a goal to support schemaless documents, which is a misfeature 
 [1] [2] [3].  Rather, it is to provide a convenient way to turn a JSON 
 document from a service or a user into a CQL row, with all the validation 
 that entails.
 Since we are not looking to support schemaless documents, we will not be 
 adding a JSON data type (CASSANDRA-6833) a la PostgreSQL.  Rather, we will 
 map the JSON to UDTs, collections, and primitive CQL types.
 Here's how this might look:
 {code}
 CREATE TYPE address (
   street text,
   city text,
   zip_code int,
   phones set<text>
 );
 CREATE TABLE users (
   id uuid PRIMARY KEY,
   name text,
   addresses map<text, address>
 );
 INSERT INTO users JSON
 {'id': 4b856557-7153,
'name': 'jbellis',
'address': {"home": {"street": "123 Cassandra Dr",
 "city": "Austin",
 "zip_code": 78747,
 "phones": [2101234567]}}};
 SELECT JSON id, address FROM users;
 {code}
 (We would also want to_json and from_json functions to allow mapping a single 
 column's worth of data.  These would not require extra syntax.)
 [1] http://rustyrazorblade.com/2014/07/the-myth-of-schema-less/
 [2] https://blog.compose.io/schema-less-is-usually-a-lie/
 [3] http://dl.acm.org/citation.cfm?id=2481247





[jira] [Commented] (CASSANDRA-7970) JSON support for CQL

2015-01-07 Thread Sylvain Lebresne (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7970?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14267412#comment-14267412
 ] 

Sylvain Lebresne commented on CASSANDRA-7970:
-

Or to clarify: if Tyler's "everywhere" means everywhere in CQL, then I'm a 
strong -1. I had understood it as "systematically in the JSON translation", 
which I'm fine with. If I misunderstood, my bad.

 JSON support for CQL
 

 Key: CASSANDRA-7970
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7970
 Project: Cassandra
  Issue Type: New Feature
  Components: API
Reporter: Jonathan Ellis
Assignee: Tyler Hobbs
 Fix For: 3.0







[jira] [Commented] (CASSANDRA-7970) JSON support for CQL

2015-01-07 Thread Robert Stupp (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7970?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14267422#comment-14267422
 ] 

Robert Stupp commented on CASSANDRA-7970:
-

-1 on accepting strings for anything _outside_ of JSON *map keys* (object 
names).

So don't allow it on JSON object values, just on [JSON object 
names|http://www.json.org/]:
{noformat}
{
  'c1' : 3,
  'c2' : 4.2,
  'c3' : { '4' : '2011-02-03 04:05' }
}
{noformat}


 JSON support for CQL
 

 Key: CASSANDRA-7970
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7970
 Project: Cassandra
  Issue Type: New Feature
  Components: API
Reporter: Jonathan Ellis
Assignee: Tyler Hobbs
 Fix For: 3.0







[jira] [Commented] (CASSANDRA-8303) Provide strict mode for CQL Queries

2015-01-07 Thread Sam Tunnicliffe (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8303?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14267496#comment-14267496
 ] 

Sam Tunnicliffe commented on CASSANDRA-8303:


TBH, I was thinking of something simpler than defaults and layering. Basically, 
if a restriction is enabled in the yaml, it gets applied to all relevant 
queries by any user (excluding superusers), regardless of authz configuration.

That would support restrictions even on clusters providing anonymous access 
(i.e. those configured with AllowAllAuthenticator).
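The gist of that check can be sketched as follows. This is purely illustrative: the flag names and the function are hypothetical, not Cassandra's actual yaml options or code:

```python
# Hypothetical yaml-driven restrictions applied to every non-superuser query,
# independent of authz configuration (illustrative only).
RESTRICTIONS = {"allow_filtering": False, "secondary_index_query": False}

def query_permitted(query_features, is_superuser=False):
    # Superusers bypass restrictions; everyone else (including anonymous
    # users under AllowAllAuthenticator) is checked against the yaml flags.
    if is_superuser:
        return True
    # A feature not mentioned in the yaml is allowed by default.
    return all(RESTRICTIONS.get(f, True) for f in query_features)
```

The design choice being described is that the restriction layer sits entirely outside the permissions system, so it works even when no authenticator is configured.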

 Provide strict mode for CQL Queries
 -

 Key: CASSANDRA-8303
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8303
 Project: Cassandra
  Issue Type: Improvement
Reporter: Anupam Arora
 Fix For: 3.0


 Please provide a strict mode option in Cassandra that will reject any CQL 
 queries that are expensive, e.g. any query with ALLOW FILTERING, 
 multi-partition queries, secondary index queries, etc.





[jira] [Updated] (CASSANDRA-8281) CQLSSTableWriter close does not work

2015-01-07 Thread Benjamin Lerer (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8281?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Benjamin Lerer updated CASSANDRA-8281:
--
Fix Version/s: 2.1.3

 CQLSSTableWriter close does not work
 

 Key: CASSANDRA-8281
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8281
 Project: Cassandra
  Issue Type: Bug
  Components: API
 Environment: Cassandra 2.1.1
Reporter: Xu Zhongxing
Assignee: Benjamin Lerer
 Fix For: 2.1.3

 Attachments: CASSANDRA-8281.txt


 I called CQLSSTableWriter.close(), but the program still cannot exit, even 
 though the same code works fine on Cassandra 2.0.10.
 It seems that CQLSSTableWriter cannot be closed and blocks the program from 
 exiting.





[jira] [Commented] (CASSANDRA-8374) Better support of null for UDF

2015-01-07 Thread Sylvain Lebresne (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8374?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14267515#comment-14267515
 ] 

Sylvain Lebresne commented on CASSANDRA-8374:
-

bq. 99% of the time nobody will notice, 1% of the time this will cause hours of 
head scratching.

Mostly, I don't buy that the alternatives suggested will avoid this. The 
potential problem you're talking about (and correct me if I'm wrong) is the 
case where someone does {{UPDATE ... SET v = fct(?) ...}} and has a bug in their 
code that makes it pass null for the bind marker when it shouldn't. And I 
*don't* disagree that finding such a bug is made harder by the function silently 
returning {{null}} in that case. But I disagree that any choice we make for the 
default we're discussing will change that fact.  Because:
1. whatever that default is, most functions *will* end up returning null on null 
anyway. Because, as I've argued at length already, while doing so has the 
inconvenience described above, the only other concrete alternative for 99% of 
functions would be to throw an exception, and that option would make the 
function unusable in a select clause, which is, imo, just not ok (and I've seen 
no argument offered to the contrary so far). Again, that's the reasoning that 
has made us return null on null for all our existing functions, and I don't see 
why any future hard-coded functions would do differently.
2. the potential head scratching is due to the ultimate behavior of the 
function returning null on null (which, again, is the lesser evil in 
practice). It is not due to what the default at creation time is. You might 
argue that forcing the user to choose the behavior at creation time will make 
them aware of said behavior, and that awareness will reduce the time of head 
scratching. But I don't think that argument stands terribly well in practice, 
because it assumes that the user scratching their head is the one that 
defined the function in the first place. This won't be the case for 
standard (hard-coded) functions, which, provided we add a reasonably good 
standard library of functions (which we should do soonish, as there is no point 
in having every user reinvent the wheel), might just be the most often used 
functions. And even for UDFs, there is no reason for this to be the case in 
general for any organisation with more than one developer.

So basically, I agree that we should try to make people generally aware that 
most functions return null on null so they can more easily find the problem 
described above if they run into it, but I'm just not convinced that forcing 
the choice of behavior at function creation time (for the sake of education, 
since again 99% of the time people would choose {{RETURNS NULL ON NULL INPUT}} 
for the reasons discussed above) is a very good way to create that awareness 
(because it doesn't help for standard functions). And on the flip side, 
forcing the choice will be annoying every time you create a UDF (and aren't 
defaults exactly made to reduce annoyance when you know that one of the options 
will be the right choice 99% of the time?).

Anyway, I continue to think that {{RETURNS NULL ON NULL}} is likely the right 
default. I've tried to explain my reasoning as clearly as I can and I don't 
think I can do any better. If the majority still disagrees, so be it (though 
I'll admit being fuzzy on the actual counter-arguments to my reasoning and 
would certainly love to understand them better). For what it's worth, if we 
don't go with {{RETURNS NULL ON NULL}}, I think I prefer forcing the choice of 
behavior explicitly, because at least that might somehow help create 
awareness of the actual behavior (even though I've explained why I don't find 
it a very good argument).  The only argument for {{CALLED ON NULL INPUT}} as 
default I've seen is that it's this way in other DBs, but that's not an argument 
in itself in my book if we can't come up with a good reasoning for why it's a 
good default, and I haven't really seen one.
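The two options under discussion differ only in whether the UDF body ever sees a null argument. A rough sketch of the semantics (in Python for brevity; this illustrates the SQL-standard behavior being debated, not Cassandra's implementation):

```python
# RETURNS NULL ON NULL INPUT: the engine short-circuits before calling the body.
def returns_null_on_null(fn):
    def wrapper(*args):
        if any(a is None for a in args):
            return None          # body never runs; it may assume non-null args
        return fn(*args)
    return wrapper

@returns_null_on_null
def add_two(val):
    return val + 2               # safe: val is guaranteed non-null here

# CALLED ON NULL INPUT: the body runs regardless and must handle None itself.
def add_two_called_on_null(val):
    return None if val is None else val + 2
```

Under the first mode the function author writes no null handling at all, which is exactly why the comment argues it should be the default.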



 Better support of null for UDF
 --

 Key: CASSANDRA-8374
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8374
 Project: Cassandra
  Issue Type: Bug
Reporter: Sylvain Lebresne
Assignee: Robert Stupp
 Fix For: 3.0

 Attachments: 8473-1.txt, 8473-2.txt


 Currently, every function needs to deal with its arguments potentially being 
 {{null}}. There are many cases where that's just annoying; users should be 
 able to define a function like:
 {noformat}
 CREATE FUNCTION addTwo(val int) RETURNS int LANGUAGE JAVA AS 'return val + 2;'
 {noformat}
 without having it crash as soon as a column it's applied to doesn't have a 
 value for some rows (I'll note that this definition apparently cannot be 
 compiled currently, which should be 

[jira] [Comment Edited] (CASSANDRA-8374) Better support of null for UDF

2015-01-07 Thread Sylvain Lebresne (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8374?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14267515#comment-14267515
 ] 

Sylvain Lebresne edited comment on CASSANDRA-8374 at 1/7/15 11:03 AM:
--

bq. 99% of the time nobody will notice, 1% of the time this will cause hours of 
head scratching.

Mostly, I don't buy that the alternatives suggested will avoid this. The 
potential problem you're talking about (and correct me if I'm wrong) is the 
case where someone does {{UPDATE ... SET v = fct\(?\) ...}} and has a bug in 
their code that makes it pass null for the bind marker when it shouldn't. And I 
*don't* disagree that finding such a bug is made harder by the function silently 
returning {{null}} in that case. But I disagree that any choice we make for the 
default we're discussing will change that fact.  Because:
# whatever that default is, most functions *will* end up returning null on null 
anyway. Because, as I've argued at length already, while doing so has the 
inconvenience described above, the only other concrete alternative for 99% of 
functions would be to throw an exception, and that option would make the 
function unusable in a select clause, which is, imo, just not ok (and I've seen 
no argument offered to the contrary so far). Again, that's the reasoning that 
has made us return null on null for all our existing functions, and I don't see 
why any future hard-coded functions would do differently.
# the potential head scratching is due to the ultimate behavior of the function 
returning null on null (which, again, is the lesser evil in practice). 
It is not due to what the default at creation time is. You might argue that 
forcing the user to choose the behavior at creation time will make them aware 
of said behavior, and that awareness will reduce the time of head scratching. 
But I don't think that argument stands terribly well in practice, because it 
assumes that the user scratching their head is the one that defined the 
function in the first place. This won't be the case for standard 
(hard-coded) functions, which, provided we add a reasonably good standard 
library of functions (which we should do soonish, as there is no point in having 
every user reinvent the wheel), might just be the most often used functions. And 
even for UDFs, there is no reason for this to be the case in general for any 
organisation with more than one developer.

So basically, I agree that we should try to make people generally aware that 
most functions return null on null so they can more easily find the problem 
described above if they run into it, but I'm just not convinced that forcing 
the choice of behavior at function creation time (for the sake of education, 
since again 99% of the time people would choose {{RETURNS NULL ON NULL INPUT}} 
for the reasons discussed above) is a very good way to create that awareness 
(because it doesn't help for standard functions). And on the flip side, 
forcing the choice will be annoying every time you create a UDF (and aren't 
defaults exactly made to reduce annoyance when you know that one of the options 
will be the right choice 99% of the time?).

Anyway, I continue to think that {{RETURNS NULL ON NULL}} is likely the right 
default. I've tried to explain my reasoning as clearly as I can and I don't 
think I can do any better. If the majority still disagrees, so be it (though 
I'll admit being fuzzy on the actual counter-arguments to my reasoning and 
would certainly love to understand them better). For what it's worth, if we 
don't go with {{RETURNS NULL ON NULL}}, I think I prefer forcing the choice of 
behavior explicitly, because at least that might somehow help create 
awareness of the actual behavior (even though I've explained why I don't find 
it a very good argument).  The only argument for {{CALLED ON NULL INPUT}} as 
default I've seen is that it's this way in other DBs, but that's not an argument 
in itself in my book if we can't come up with a good reasoning for why it's a 
good default, and I haven't really seen one.





[jira] [Created] (CASSANDRA-8572) Opening a Keyspace trigger the start of the commit log

2015-01-07 Thread Benjamin Lerer (JIRA)
Benjamin Lerer created CASSANDRA-8572:
-

 Summary: Opening a Keyspace trigger the start of the commit log
 Key: CASSANDRA-8572
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8572
 Project: Cassandra
  Issue Type: Bug
Reporter: Benjamin Lerer
Priority: Minor


Due to static dependencies, calling {{Keyspace.open}} triggers the creation of 
the {{CommitLog}} singleton, which in turn triggers the start of all the threads 
used by the {{CommitLog}}.

For a simple client like {{CQLSSTableWriter}} that dependency is an issue, as it 
can prevent the JVM from shutting down if the CommitLog is not shut down 
explicitly.

The following stacktrace shows the initialization chain that triggers the 
{{CommitLog}}:

{code}
CommitLogSegmentManager.<init>() line: 173
CommitLog.<init>() line: 70
CommitLog.<clinit>() line: 55
Memtable.<init>(ColumnFamilyStore) line: 66
DataTracker.<init>() line: 378
DataTracker.<init>(ColumnFamilyStore) line: 54
ColumnFamilyStore.<init>(Keyspace, String, IPartitioner, int, CFMetaData, Directories, boolean) line: 281
ColumnFamilyStore.createColumnFamilyStore(Keyspace, String, IPartitioner, CFMetaData, boolean) line: 443
ColumnFamilyStore.createColumnFamilyStore(Keyspace, String, boolean) line: 414
Keyspace.initCf(UUID, String, boolean) line: 327
Keyspace.<init>(String, boolean) line: 280
Keyspace.open(String, Schema, boolean) line: 122
{code}
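The coupling described above is the classic eager-singleton problem: merely opening a keyspace pays for machinery a lightweight client never uses. A sketch of the shape of the issue and a lazy alternative (illustrative Python, not Cassandra code; the class and parameter names are hypothetical):

```python
# Eager initialization: constructing the singleton starts background
# machinery as a side effect (stands in for CommitLog starting its threads).
class EagerCommitLog:
    instances_started = 0
    def __init__(self):
        EagerCommitLog.instances_started += 1  # stands in for starting threads

# Lazy alternative: the singleton is created only when explicitly requested,
# so a lightweight client (like CQLSSTableWriter) never triggers it.
class Keyspace:
    _commit_log = None
    @classmethod
    def open(cls, use_commit_log=True):
        if use_commit_log and cls._commit_log is None:
            cls._commit_log = EagerCommitLog()
        return cls._commit_log

Keyspace.open(use_commit_log=False)  # no CommitLog created for this client
```

In the JVM case the consequence is sharper: the commit log's non-daemon threads keep the process alive until the singleton is shut down explicitly.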







[jira] [Comment Edited] (CASSANDRA-8374) Better support of null for UDF

2015-01-07 Thread Sylvain Lebresne (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8374?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14267515#comment-14267515
 ] 

Sylvain Lebresne edited comment on CASSANDRA-8374 at 1/7/15 11:32 AM:
--

bq. 99% of the time nobody will notice, 1% of the time this will cause hours of 
head scratching.

Mostly, I don't buy that the alternatives suggested will avoid this. The 
potential problem you're talking about (and correct me if I'm wrong) is the 
case where someone does {{UPDATE ... SET v = fct\(?\) ...}} and has a bug in 
their code that makes it pass null for the bind marker when it shouldn't. And I 
*don't* disagree that finding such a bug is made harder by the function silently 
returning {{null}} in that case. But I disagree that any choice we make for the 
default we're discussing here will change that fact.  Because:
# whatever that default is, most functions *will* end up returning null on null 
anyway. And that despite the inconvenience described above, because the only 
other concrete alternative for 99% of functions would be to throw an exception, 
and that option would make the function unusable in a select clause, which is, 
imo, just not ok (and I've seen no argument offered to the contrary so far). 
Again, that's the reasoning that has made us return null on null for all our 
existing functions, and I don't see why any future hard-coded functions would 
do differently.
# the potential head scratching is due to the ultimate behavior of the function 
returning null on null (which, again, is the lesser evil in practice). 
It is not due to what the default at creation time was. I suppose you might 
argue that forcing the user to choose the behavior at creation time will make 
them aware of said behavior, and that awareness will reduce the time of head 
scratching. But I don't think that argument stands terribly well in practice, 
because it assumes that the user scratching their head is the one that 
defined the function in the first place. This won't be the case for 
standard (hard-coded) functions, which, provided we add a reasonably good 
standard library of functions (which we should do soonish, as there is no point 
in having every user reinvent the wheel), might just be the most often used 
functions. And even for UDFs, there is no reason for this to be the case in 
general for any organisation with more than one developer.

So basically, I agree that we should try to make people generally aware that 
most functions return null on null so they can more easily find the problem 
described above if they run into it, but I'm just not convinced that forcing 
the choice of behavior at function creation time (for the sake of education, 
since again 99% of the time people would choose {{RETURNS NULL ON NULL INPUT}} 
for the reasons discussed above) is a very good way to create that awareness 
(because it doesn't help for standard functions). And on the flip side, 
forcing the choice will be annoying every time you create a UDF (and aren't 
defaults exactly made to reduce annoyance when you know that one of the options 
will be the right choice 99% of the time?).

Anyway, I continue to think that {{RETURNS NULL ON NULL}} is likely the right 
default. I've tried to explain my reasoning as clearly as I can and I don't 
think I can do any better. If the majority still disagrees, so be it (though 
I'll admit being fuzzy on the actual counter-arguments to my reasoning and 
would certainly love to understand them better). For what it's worth, if we 
don't go with {{RETURNS NULL ON NULL}}, I think I prefer forcing the choice of 
behavior explicitly, because at least that might somehow help create 
awareness of the actual behavior (even though I've explained why I don't find 
it a very good argument).  The only argument for {{CALLED ON NULL INPUT}} as 
default I've seen is that it's this way in other DBs, but that's not an argument 
in itself in my book if we can't come up with a good reasoning for why it's a 
good default, and I haven't really seen one.





[jira] [Updated] (CASSANDRA-8572) CQLSSTableWriter shouldn't trigger the start of the commit log

2015-01-07 Thread Sylvain Lebresne (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8572?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sylvain Lebresne updated CASSANDRA-8572:

Summary: CQLSSTableWriter shouldn't trigger the start of the commit log  
(was: Opening a Keyspace trigger the start of the commit log)

 CQLSSTableWriter shouldn't trigger the start of the commit log
 --

 Key: CASSANDRA-8572
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8572
 Project: Cassandra
  Issue Type: Bug
Reporter: Benjamin Lerer
Priority: Minor






[jira] [Resolved] (CASSANDRA-8569) org.apache.cassandra.db.KeyspaceTest failing

2015-01-07 Thread Philip Thompson (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8569?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Philip Thompson resolved CASSANDRA-8569.

Resolution: Duplicate

This was fixed by the patch for CASSANDRA-8570

 org.apache.cassandra.db.KeyspaceTest failing
 

 Key: CASSANDRA-8569
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8569
 Project: Cassandra
  Issue Type: Bug
  Components: Tests
Reporter: Philip Thompson
Assignee: Brandon Williams
 Fix For: 2.1.3


 org.apache.cassandra.db.KeyspaceTest began failing after the patch for 
 CASSANDRA-8245.
 {code}
 java.lang.NullPointerException
   at 
 org.apache.cassandra.db.KeyspaceTest.testGetSliceFromLarge(KeyspaceTest.java:425)
 {code}





[jira] [Resolved] (CASSANDRA-5449) Make sstable compacting status un/marking less error-prone

2015-01-07 Thread Marcus Eriksson (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-5449?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marcus Eriksson resolved CASSANDRA-5449.

Resolution: Duplicate

 Make sstable compacting status un/marking less error-prone
 --

 Key: CASSANDRA-5449
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5449
 Project: Cassandra
  Issue Type: Bug
Reporter: Jonathan Ellis
Assignee: Marcus Eriksson
Priority: Minor
 Fix For: 3.0


 As demonstrated by a393072aac6292412fc465d207c411c4b6b69e0b, it's easy to 
 introduce regressions where we don't unmark the same tables we marked.  This 
 is primarily because the marking and unmarking are usually done by separate 
 methods.  (The opposite problem is also possible -- 
 performAllSSTableOperation unmarks compacting, and so does 
 CompactionTask.execute, which can be part of a pASOp via the scrub path.)
 I suggest making markCompacting return a callable that will wrap the 
 caller-provided code in a try/finally to centralize this.





cassandra git commit: Use the correct repairedAt value when closing writer

2015-01-07 Thread marcuse
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-2.1 55750e07d -> 2afe0e880


Use the correct repairedAt value when closing writer

Patch by marcuse; reviewed by benedict for CASSANDRA-8570


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/2afe0e88
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/2afe0e88
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/2afe0e88

Branch: refs/heads/cassandra-2.1
Commit: 2afe0e8803752b38aa6c803b818e21434034678a
Parents: 55750e0
Author: Marcus Eriksson marc...@apache.org
Authored: Wed Jan 7 13:52:31 2015 +0100
Committer: Marcus Eriksson marc...@apache.org
Committed: Wed Jan 7 14:07:48 2015 +0100

--
 CHANGES.txt | 1 +
 src/java/org/apache/cassandra/io/sstable/SSTableWriter.java | 2 +-
 2 files changed, 2 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/2afe0e88/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 372972d..1f93bf5 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 2.1.3
+ * Use the correct repairedAt value when closing writer (CASSANDRA-8570)
  * (cqlsh) Handle a schema mismatch being detected on startup (CASSANDRA-8512)
  * Properly calculate expected write size during compaction (CASSANDRA-8532)
  * Invalidate affected prepared statements when a table's columns

http://git-wip-us.apache.org/repos/asf/cassandra/blob/2afe0e88/src/java/org/apache/cassandra/io/sstable/SSTableWriter.java
--
diff --git a/src/java/org/apache/cassandra/io/sstable/SSTableWriter.java 
b/src/java/org/apache/cassandra/io/sstable/SSTableWriter.java
index b0365ad..5f78132 100644
--- a/src/java/org/apache/cassandra/io/sstable/SSTableWriter.java
+++ b/src/java/org/apache/cassandra/io/sstable/SSTableWriter.java
@@ -463,7 +463,7 @@ public class SSTableWriter extends SSTable
 {
 Pair<Descriptor, StatsMetadata> p;
 
-p = close(finishType, repairedAt);
+p = close(finishType, repairedAt < 0 ? this.repairedAt : repairedAt);
 Descriptor desc = p.left;
 StatsMetadata metadata = p.right;
 



[2/2] cassandra git commit: Merge branch 'cassandra-2.1' into trunk

2015-01-07 Thread marcuse
Merge branch 'cassandra-2.1' into trunk


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/49dea419
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/49dea419
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/49dea419

Branch: refs/heads/trunk
Commit: 49dea419319177d06652a5253e7bb3f8c65a89a0
Parents: 729ebe0 2afe0e8
Author: Marcus Eriksson marc...@apache.org
Authored: Wed Jan 7 14:12:15 2015 +0100
Committer: Marcus Eriksson marc...@apache.org
Committed: Wed Jan 7 14:12:15 2015 +0100

--
 CHANGES.txt| 1 +
 .../org/apache/cassandra/io/sstable/format/big/BigTableWriter.java | 2 +-
 2 files changed, 2 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/49dea419/CHANGES.txt
--
diff --cc CHANGES.txt
index 0fe2285,1f93bf5..9086774
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -1,48 -1,5 +1,49 @@@
 +3.0
 + * Support index key/value entries on map collections (CASSANDRA-8473)
 + * Modernize schema tables (CASSANDRA-8261)
 + * Support for user-defined aggregation functions (CASSANDRA-8053)
 + * Fix NPE in SelectStatement with empty IN values (CASSANDRA-8419)
 + * Refactor SelectStatement, return IN results in natural order instead
 +   of IN value list order (CASSANDRA-7981)
 + * Support UDTs, tuples, and collections in user-defined
 +   functions (CASSANDRA-7563)
 + * Fix aggregate fn results on empty selection, result column name,
 +   and cqlsh parsing (CASSANDRA-8229)
 + * Mark sstables as repaired after full repair (CASSANDRA-7586)
 + * Extend Descriptor to include a format value and refactor reader/writer
 +   APIs (CASSANDRA-7443)
 + * Integrate JMH for microbenchmarks (CASSANDRA-8151)
 + * Keep sstable levels when bootstrapping (CASSANDRA-7460)
 + * Add Sigar library and perform basic OS settings check on startup 
(CASSANDRA-7838)
 + * Support for aggregation functions (CASSANDRA-4914)
 + * Remove cassandra-cli (CASSANDRA-7920)
 + * Accept dollar quoted strings in CQL (CASSANDRA-7769)
 + * Make assassinate a first class command (CASSANDRA-7935)
 + * Support IN clause on any clustering column (CASSANDRA-4762)
 + * Improve compaction logging (CASSANDRA-7818)
 + * Remove YamlFileNetworkTopologySnitch (CASSANDRA-7917)
 + * Do anticompaction in groups (CASSANDRA-6851)
 + * Support user-defined functions (CASSANDRA-7395, 7526, 7562, 7740, 7781, 
7929,
 +   7924, 7812, 8063, 7813, 7708)
 + * Permit configurable timestamps with cassandra-stress (CASSANDRA-7416)
 + * Move sstable RandomAccessReader to nio2, which allows using the
 +   FILE_SHARE_DELETE flag on Windows (CASSANDRA-4050)
 + * Remove CQL2 (CASSANDRA-5918)
 + * Add Thrift get_multi_slice call (CASSANDRA-6757)
 + * Optimize fetching multiple cells by name (CASSANDRA-6933)
 + * Allow compilation in java 8 (CASSANDRA-7028)
 + * Make incremental repair default (CASSANDRA-7250)
 + * Enable code coverage thru JaCoCo (CASSANDRA-7226)
 + * Switch external naming of 'column families' to 'tables' (CASSANDRA-4369) 
 + * Shorten SSTable path (CASSANDRA-6962)
 + * Use unsafe mutations for most unit tests (CASSANDRA-6969)
 + * Fix race condition during calculation of pending ranges (CASSANDRA-7390)
 + * Fail on very large batch sizes (CASSANDRA-8011)
 + * Improve concurrency of repair (CASSANDRA-6455, 8208)
 +
 +
  2.1.3
+  * Use the correct repairedAt value when closing writer (CASSANDRA-8570)
   * (cqlsh) Handle a schema mismatch being detected on startup (CASSANDRA-8512)
   * Properly calculate expected write size during compaction (CASSANDRA-8532)
   * Invalidate affected prepared statements when a table's columns

http://git-wip-us.apache.org/repos/asf/cassandra/blob/49dea419/src/java/org/apache/cassandra/io/sstable/format/big/BigTableWriter.java
--
diff --cc 
src/java/org/apache/cassandra/io/sstable/format/big/BigTableWriter.java
index 2d34209,000..868ee9f
mode 100644,00..100644
--- a/src/java/org/apache/cassandra/io/sstable/format/big/BigTableWriter.java
+++ b/src/java/org/apache/cassandra/io/sstable/format/big/BigTableWriter.java
@@@ -1,591 -1,0 +1,591 @@@
 +/*
 + * Licensed to the Apache Software Foundation (ASF) under one
 + * or more contributor license agreements.  See the NOTICE file
 + * distributed with this work for additional information
 + * regarding copyright ownership.  The ASF licenses this file
 + * to you under the Apache License, Version 2.0 (the
  + * "License"); you may not use this file except in compliance
 + * with the License.  You may obtain a copy of the License at
 + *
 + * http://www.apache.org/licenses/LICENSE-2.0
 + *
 + * Unless required by applicable law or agreed to in 

[1/2] cassandra git commit: Use the correct repairedAt value when closing writer

2015-01-07 Thread marcuse
Repository: cassandra
Updated Branches:
  refs/heads/trunk 729ebe078 -> 49dea4193


Use the correct repairedAt value when closing writer

Patch by marcuse; reviewed by benedict for CASSANDRA-8570


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/2afe0e88
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/2afe0e88
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/2afe0e88

Branch: refs/heads/trunk
Commit: 2afe0e8803752b38aa6c803b818e21434034678a
Parents: 55750e0
Author: Marcus Eriksson marc...@apache.org
Authored: Wed Jan 7 13:52:31 2015 +0100
Committer: Marcus Eriksson marc...@apache.org
Committed: Wed Jan 7 14:07:48 2015 +0100

--
 CHANGES.txt | 1 +
 src/java/org/apache/cassandra/io/sstable/SSTableWriter.java | 2 +-
 2 files changed, 2 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/2afe0e88/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 372972d..1f93bf5 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 2.1.3
+ * Use the correct repairedAt value when closing writer (CASSANDRA-8570)
  * (cqlsh) Handle a schema mismatch being detected on startup (CASSANDRA-8512)
  * Properly calculate expected write size during compaction (CASSANDRA-8532)
  * Invalidate affected prepared statements when a table's columns

http://git-wip-us.apache.org/repos/asf/cassandra/blob/2afe0e88/src/java/org/apache/cassandra/io/sstable/SSTableWriter.java
--
diff --git a/src/java/org/apache/cassandra/io/sstable/SSTableWriter.java 
b/src/java/org/apache/cassandra/io/sstable/SSTableWriter.java
index b0365ad..5f78132 100644
--- a/src/java/org/apache/cassandra/io/sstable/SSTableWriter.java
+++ b/src/java/org/apache/cassandra/io/sstable/SSTableWriter.java
@@ -463,7 +463,7 @@ public class SSTableWriter extends SSTable
 {
 Pair<Descriptor, StatsMetadata> p;
 
-p = close(finishType, repairedAt);
+p = close(finishType, repairedAt < 0 ? this.repairedAt : repairedAt);
 Descriptor desc = p.left;
 StatsMetadata metadata = p.right;
 



[jira] [Commented] (CASSANDRA-8281) CQLSSTableWriter close does not work

2015-01-07 Thread Benjamin Lerer (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8281?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14267607#comment-14267607
 ] 

Benjamin Lerer commented on CASSANDRA-8281:
---

In the latest 2.1 the prepared statement eviction thread is no longer a problem, 
as it is now a daemon thread. 
Due to a change in the code, what now prevents the application from stopping is 
the commit log threads, as described in CASSANDRA-8572.

The quick fix for this problem is to shut down the {{CommitLog}} in the close 
method of {{CQLSSTableWriter}}.

Another problem is that if {{Config.setClientMode(true)}} is used, the code 
will no longer work on the latest 2.0 and 2.1 ({{setClientMode}} has been 
removed on trunk). Consequently, I will also have to remove the previous fix 
for this ticket. 
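The root cause can be shown with a plain-JDK analog (not the actual Cassandra classes): a non-daemon background thread, like the commit log's, keeps the JVM alive until close() explicitly shuts it down.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class WriterSketch implements AutoCloseable
{
    // Non-daemon executor, analogous to the commit log's background threads:
    // while it is alive, the JVM cannot exit even after main() returns.
    private final ExecutorService commitLog = Executors.newSingleThreadExecutor();

    public void append(Runnable mutation)
    {
        commitLog.execute(mutation);
    }

    @Override
    public void close() throws InterruptedException
    {
        // the quick fix described above: stop the background threads in close()
        commitLog.shutdown();
        commitLog.awaitTermination(1, TimeUnit.MINUTES);
    }
}
```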

 CQLSSTableWriter close does not work
 

 Key: CASSANDRA-8281
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8281
 Project: Cassandra
  Issue Type: Bug
  Components: API
 Environment: Cassandra 2.1.1
Reporter: Xu Zhongxing
Assignee: Benjamin Lerer
 Fix For: 2.1.3

 Attachments: CASSANDRA-8281.txt


 I called CQLSSTableWriter.close(). But the program still cannot exit. But the 
 same code works fine on Cassandra 2.0.10.
 It seems that CQLSSTableWriter cannot be closed, and blocks the program from 
 exit.





[jira] [Commented] (CASSANDRA-7970) JSON support for CQL

2015-01-07 Thread Aleksey Yeschenko (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7970?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14267611#comment-14267611
 ] 

Aleksey Yeschenko commented on CASSANDRA-7970:
--

I'm sure Tyler's 'everywhere' meant 'everywhere in JSON', in which case I'm 
okay with it - we already have to accept it b/c a JSON key can only be a 
string, and we'll need to do type-casting there anyway.
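A minimal sketch of that key type-casting (an illustrative helper, not a proposed API): a parsed JSON object always has String keys, so mapping it onto, say, a map with int keys means parsing each key into the column's declared key type.

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class JsonKeyCoercion
{
    // Coerce the String keys of a parsed JSON object into the column's
    // declared key type (here: int). A key that does not parse is a
    // validation error, exactly like any other CQL type mismatch.
    public static Map<Integer, Object> coerceIntKeys(Map<String, Object> json)
    {
        Map<Integer, Object> out = new LinkedHashMap<>();
        for (Map.Entry<String, Object> e : json.entrySet())
            out.put(Integer.parseInt(e.getKey()), e.getValue());
        return out;
    }
}
```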

The alternative would be to limit tojson/fromjson to the subset of schemas w/ 
types strictly matching to JSON literals' types - when keys are strings, and 
values are strings/numbers/booleans/lists/UDTs. I will not fight for that 
option anymore, though.

 JSON support for CQL
 

 Key: CASSANDRA-7970
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7970
 Project: Cassandra
  Issue Type: New Feature
  Components: API
Reporter: Jonathan Ellis
Assignee: Tyler Hobbs
 Fix For: 3.0


 JSON is popular enough that not supporting it is becoming a competitive 
 weakness.  We can add JSON support in a way that is compatible with our 
 performance goals by *mapping* JSON to an existing schema: one JSON documents 
 maps to one CQL row.
 Thus, it is NOT a goal to support schemaless documents, which is a misfeature 
 [1] [2] [3].  Rather, it is to allow a convenient way to easily turn a JSON 
 document from a service or a user into a CQL row, with all the validation 
 that entails.
 Since we are not looking to support schemaless documents, we will not be 
 adding a JSON data type (CASSANDRA-6833) a la postgresql.  Rather, we will 
 map the JSON to UDT, collections, and primitive CQL types.
 Here's how this might look:
 {code}
 CREATE TYPE address (
   street text,
   city text,
   zip_code int,
   phones set<text>
 );
 CREATE TABLE users (
   id uuid PRIMARY KEY,
   name text,
   addresses map<text, address>
 );
 INSERT INTO users JSON
 {'id': 4b856557-7153,
'name': 'jbellis',
'address': {"home": {"street": "123 Cassandra Dr",
 "city": "Austin",
 "zip_code": 78747,
 "phones": [2101234567]}}};
 SELECT JSON id, address FROM users;
 {code}
 (We would also want to_json and from_json functions to allow mapping a single 
 column's worth of data.  These would not require extra syntax.)
 [1] http://rustyrazorblade.com/2014/07/the-myth-of-schema-less/
 [2] https://blog.compose.io/schema-less-is-usually-a-lie/
 [3] http://dl.acm.org/citation.cfm?id=2481247





[jira] [Commented] (CASSANDRA-3025) PHP/PDO driver for Cassandra CQL

2015-01-07 Thread Sylvain Lebresne (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-3025?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14267621#comment-14267621
 ] 

Sylvain Lebresne commented on CASSANDRA-3025:
-

bq. If this JIRA is not the right place to bring the issue to the table, where 
would that be?

There is a client dev mailing list, 
client-dev-subscr...@cassandra.apache.org, that is meant for that kind of 
discussion (see the mailing list section at the bottom of cassandra.apache.org) 
and you should start there. I won't lie though, I don't think this mailing list 
has too much traffic these days. So if you really can't get much mileage out of 
it, I guess nobody will blame you too much if you bring this discussion to the 
cassandra user mailing list. But this JIRA is for the development of the 
Cassandra server.

 PHP/PDO driver for Cassandra CQL
 

 Key: CASSANDRA-3025
 URL: https://issues.apache.org/jira/browse/CASSANDRA-3025
 Project: Cassandra
  Issue Type: New Feature
  Components: API
Reporter: Mikko Koppanen
Assignee: Mikko Koppanen
  Labels: php
 Attachments: pdo_cassandra-0.1.0.tgz, pdo_cassandra-0.1.1.tgz, 
 pdo_cassandra-0.1.2.tgz, pdo_cassandra-0.1.3.tgz, pdo_cassandra-0.2.0.tgz, 
 pdo_cassandra-0.2.1.tgz, php_test_results_20110818_2317.txt


 Hello,
 attached is the initial version of the PDO driver for Cassandra CQL language. 
 This is a native PHP extension written in what I would call a combination of 
 C and C++, due to PHP being C. The thrift API used is the C++.
 The API looks roughly following:
 {code}
 <?php
 $db = new PDO('cassandra:host=127.0.0.1;port=9160');
 $db->exec ("CREATE KEYSPACE mytest with strategy_class = 'SimpleStrategy' and 
 strategy_options:replication_factor=1;");
 $db->exec ("USE mytest");
 $db->exec ("CREATE COLUMNFAMILY users (
   my_key varchar PRIMARY KEY,
   full_name varchar );");
   
 $stmt = $db->prepare ("INSERT INTO users (my_key, full_name) VALUES (:key, 
 :full_name);");
 $stmt->execute (array (':key' => 'mikko', ':full_name' => 'Mikko K' ));
 {code}
 Currently prepared statements are emulated on the client side but I 
 understand that there is a plan to add prepared statements to Cassandra CQL 
 API as well. I will add this feature in to the extension as soon as they are 
 implemented.
 Additional documentation can be found in github 
 https://github.com/mkoppanen/php-pdo_cassandra, in the form of rendered 
 MarkDown file. Tests are currently not included in the package file and they 
 can be found in the github for now as well.
 I have created documentation in docbook format as well, but have not yet 
 rendered it.
 Comments and feedback are welcome.
 Thanks,
 Mikko





[jira] [Commented] (CASSANDRA-8568) Impose new API on data tracker modifications that makes correct usage obvious and imposes safety

2015-01-07 Thread Marcus Eriksson (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8568?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14267554#comment-14267554
 ] 

Marcus Eriksson commented on CASSANDRA-8568:


I linked CASSANDRA-8506 which I bet will be fixed here

I also tried to put some lipstick on the pig in CASSANDRA-7852 but this sounds 
like the proper fix

 Impose new API on data tracker modifications that makes correct usage obvious 
 and imposes safety
 

 Key: CASSANDRA-8568
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8568
 Project: Cassandra
  Issue Type: Bug
Reporter: Benedict

 DataTracker has become a bit of a quagmire, and not at all obvious to 
 interface with, with many subtly different modifiers. I suspect it is still 
 subtly broken, especially around error recovery.
 I propose piggy-backing on CASSANDRA-7705 to offer RAII (and GC-enforced, for 
 those situations where a try/finally block isn't possible) objects that have 
 transactional behaviour, and with few simple declarative methods that can be 
 composed simply to provide all of the functionality we currently need.
 See CASSANDRA-8399 for context
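A hedged sketch of what such an RAII-style API could look like (type names are illustrative; the real proposal builds on CASSANDRA-7705): modifications happen inside a transaction that rolls itself back in close() unless committed, so a try-with-resources block gets error recovery for free.

```java
import java.util.HashSet;
import java.util.Set;

public class TrackerSketch
{
    private final Set<String> live = new HashSet<>();

    public Set<String> live()
    {
        return live;
    }

    public Transaction begin()
    {
        return new Transaction();
    }

    public class Transaction implements AutoCloseable
    {
        private final Set<String> snapshot = new HashSet<>(live);
        private boolean committed;

        public void add(String sstable)    { live.add(sstable); }
        public void remove(String sstable) { live.remove(sstable); }

        public void commit() { committed = true; }

        // try-with-resources (or a GC hook, where try/finally is impossible)
        // guarantees the rollback runs on every error path
        public void close()
        {
            if (!committed)
            {
                live.clear();
                live.addAll(snapshot);
            }
        }
    }
}
```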





[jira] [Commented] (CASSANDRA-7124) Use JMX Notifications to Indicate Success/Failure of Long-Running Operations

2015-01-07 Thread Rajanarayanan Thottuvaikkatumana (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7124?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14267568#comment-14267568
 ] 

Rajanarayanan Thottuvaikkatumana commented on CASSANDRA-7124:
-

[~yukim], Merged the code for cleanup with the commit - 
https://github.com/rnamboodiri/cassandra/commit/5ec552c326119cf573dd3b07344e88ad0f5d2449

I have run the test {{ant test -Dtest.name=CleanupTest}} and it is working 
fine. Also ran the cleanup command from the command line and looks good. 

Thanks

 Use JMX Notifications to Indicate Success/Failure of Long-Running Operations
 

 Key: CASSANDRA-7124
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7124
 Project: Cassandra
  Issue Type: Improvement
  Components: Tools
Reporter: Tyler Hobbs
Assignee: Rajanarayanan Thottuvaikkatumana
Priority: Minor
  Labels: lhf
 Fix For: 3.0

 Attachments: 7124-wip.txt, cassandra-trunk-compact-7124.txt, 
 cassandra-trunk-decommission-7124.txt


 If {{nodetool cleanup}} or some other long-running operation takes too long 
 to complete, you'll see an error like the one in CASSANDRA-2126, so you can't 
 tell if the operation completed successfully or not.  CASSANDRA-4767 fixed 
 this for repairs with JMX notifications.  We should do something similar for 
 nodetool cleanup, compact, decommission, move, relocate, etc.
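A sketch of the pattern CASSANDRA-4767 established for repair, using only javax.management (the class and notification-type names here are illustrative): the MBean broadcasts a completion/failure notification that nodetool can subscribe to, instead of relying on a long blocking JMX call that may time out.

```java
import java.util.concurrent.atomic.AtomicLong;
import javax.management.Notification;
import javax.management.NotificationBroadcasterSupport;

public class OperationNotifier extends NotificationBroadcasterSupport
{
    private final AtomicLong sequence = new AtomicLong();

    // Called when a long-running operation (cleanup, compact, decommission, ...)
    // finishes, instead of leaving nodetool to infer the outcome from a
    // possibly-timed-out blocking call.
    public void operationFinished(String operation, boolean success)
    {
        String message = operation + (success ? " completed successfully" : " failed");
        sendNotification(new Notification("operation.status", this,
                                          sequence.incrementAndGet(), message));
    }
}
```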





[jira] [Commented] (CASSANDRA-8316) Did not get positive replies from all endpoints error on incremental repair

2015-01-07 Thread Marcus Eriksson (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8316?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14267561#comment-14267561
 ] 

Marcus Eriksson commented on CASSANDRA-8316:


bq. B does not mark sstables as repaired for just receiving prepare message, 
doesn't it?

no - but it keeps the sstables in a set to make sure that we don't start 
multiple repairs including the same sstables - this would be pretty pointless 
as the sstables will be gone after anticompaction and can't be marked

I also think it is 'good enough' for now to let it fail and let users clean up 
manually (since incremental repairs are not default in 2.1)
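A minimal sketch of the guard described above (names are illustrative, not the actual Cassandra code): sstables in an in-flight incremental repair are reserved in a set, and an overlapping prepare fails instead of starting a second repair over the same sstables.

```java
import java.util.HashSet;
import java.util.Set;

public class ActiveRepairGuard
{
    private final Set<String> repairing = new HashSet<>();

    // Returns true if all sstables could be reserved for this repair session;
    // any overlap with an in-flight repair fails the prepare instead.
    public synchronized boolean tryReserve(Set<String> sstables)
    {
        for (String s : sstables)
            if (repairing.contains(s))
                return false;
        repairing.addAll(sstables);
        return true;
    }

    // called once anticompaction has completed (or the session failed)
    public synchronized void release(Set<String> sstables)
    {
        repairing.removeAll(sstables);
    }
}
```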

[~yukim] could you review the patch as well? Pushed rebased here: 
https://github.com/krummas/cassandra/commits/marcuse/8316

  Did not get positive replies from all endpoints error on incremental repair
 --

 Key: CASSANDRA-8316
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8316
 Project: Cassandra
  Issue Type: Bug
  Components: Core
 Environment: cassandra 2.1.2
Reporter: Loic Lambiel
Assignee: Marcus Eriksson
 Fix For: 2.1.3

 Attachments: 0001-patch.patch, 8316-v2.patch, 
 CassandraDaemon-2014-11-25-2.snapshot.tar.gz, 
 CassandraDaemon-2014-12-14.snapshot.tar.gz, test.sh


 Hi,
 I've got an issue with incremental repairs on our production 15 nodes 2.1.2 
 (new cluster, not yet loaded, RF=3)
 After having successfully performed an incremental repair (-par -inc) on 3 
 nodes, I started receiving "Repair failed with error Did not get positive 
 replies from all endpoints." from nodetool on all remaining nodes:
 [2014-11-14 09:12:36,488] Starting repair command #3, repairing 108 ranges 
 for keyspace  (seq=false, full=false)
 [2014-11-14 09:12:47,919] Repair failed with error Did not get positive 
 replies from all endpoints.
 All the nodes are up and running and the local system log shows that the 
 repair commands got started and that's it.
 I've also noticed that soon after the repair, several nodes started having 
 more cpu load indefinitely without any particular reason (no tasks / queries, 
 nothing in the logs). I then restarted C* on these nodes and retried the 
 repair on several nodes, which were successful until facing the issue again.
 I tried to repro on our 3 nodes preproduction cluster without success
 It looks like I'm not the only one having this issue: 
 http://www.mail-archive.com/user%40cassandra.apache.org/msg39145.html
 Any idea?
 Thanks
 Loic





[Cassandra Wiki] Trivial Update of Committers by gdusbabek

2015-01-07 Thread Apache Wiki
Dear Wiki user,

You have subscribed to a wiki page or wiki category on Cassandra Wiki for 
change notification.

The Committers page has been changed by gdusbabek:
https://wiki.apache.org/cassandra/Committers?action=diff&rev1=48&rev2=49

Comment:
Updating employer for Gary Dusbabek

  ||Jun Rao ||Jun 2009 ||!LinkedIn ||PMC member ||
  ||Chris Goffinet ||Sept 2009 ||Twitter ||PMC member ||
  ||Johan Oskarsson ||Nov 2009 ||Twitter ||Also a 
[[http://hadoop.apache.org/|Hadoop]] committer ||
- ||Gary Dusbabek ||Dec 2009 ||Rackspace ||PMC member ||
+ ||Gary Dusbabek ||Dec 2009 ||Silicon Valley Data Science ||PMC member ||
  ||Jaakko Laine ||Dec 2009 ||? || ||
  ||Brandon Williams ||Jun 2010 ||Datastax ||PMC member ||
  ||Jake Luciani ||Jan 2011 ||Datastax ||PMC member, 
[[http://thrift.apache.org/|Thrift]] PMC member ||


[jira] [Updated] (CASSANDRA-8570) org.apache.cassandra.db.compaction.CompactionsPurgeTest failing

2015-01-07 Thread Marcus Eriksson (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8570?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marcus Eriksson updated CASSANDRA-8570:
---
Attachment: 0001-use-the-correct-repairedAt-value.patch

we pass in -1 to indicate that the repairedAt value used when creating the 
writer should be used
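That convention is the one-line fix in the attached patch: a negative repairedAt argument acts as a sentinel meaning "use the value captured when the writer was created".

```java
public class RepairedAtSketch
{
    // Mirrors the patched close() call: repairedAt < 0 selects the value the
    // writer was created with; any non-negative value overrides it.
    public static long effectiveRepairedAt(long requested, long writerDefault)
    {
        return requested < 0 ? writerDefault : requested;
    }
}
```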

 org.apache.cassandra.db.compaction.CompactionsPurgeTest failing
 ---

 Key: CASSANDRA-8570
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8570
 Project: Cassandra
  Issue Type: Bug
  Components: Tests
Reporter: Philip Thompson
Assignee: Marcus Eriksson
 Fix For: 2.1.3

 Attachments: 0001-use-the-correct-repairedAt-value.patch


 The patch for CASSANDRA-8429 broke the tests 
 {{org.apache.cassandra.db.compaction.CompactionsPurgeTest.testCompactionPurgeTombstonedRow}}
  and 
 {{org.apache.cassandra.db.compaction.CompactionsPurgeTest.testRowTombstoneObservedBeforePurging}}
 {code}
 junit.framework.AssertionFailedError: 
   at 
 org.apache.cassandra.db.compaction.CompactionsPurgeTest.testCompactionPurgeTombstonedRow(CompactionsPurgeTest.java:308)
 {code}
 {code}expected:<0> but was:<1>
  Stack Trace
 junit.framework.AssertionFailedError: expected:<0> but was:<1>
   at 
 org.apache.cassandra.db.compaction.CompactionsPurgeTest.testRowTombstoneObservedBeforePurging(CompactionsPurgeTest.java:372)
 {code}





[jira] [Updated] (CASSANDRA-8578) LeveledCompactionStrategyTest.testGrouperLevels failings

2015-01-07 Thread Marcus Eriksson (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8578?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marcus Eriksson updated CASSANDRA-8578:
---
Attachment: 0001-make-sure-we-group-by-the-compaction-strategy-we-hav.patch

 LeveledCompactionStrategyTest.testGrouperLevels failings
 

 Key: CASSANDRA-8578
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8578
 Project: Cassandra
  Issue Type: Bug
  Components: Tests
Reporter: Philip Thompson
Assignee: Marcus Eriksson
Priority: Minor
 Fix For: 3.0

 Attachments: 
 0001-make-sure-we-group-by-the-compaction-strategy-we-hav.patch


 This test is failing on trunk. Here is the jenkins output:
 {code}
  Error Details
 org.apache.cassandra.db.compaction.WrappingCompactionStrategy cannot be cast 
 to org.apache.cassandra.db.compaction.LeveledCompactionStrategy
  Stack Trace
 java.lang.ClassCastException: 
 org.apache.cassandra.db.compaction.WrappingCompactionStrategy cannot be cast 
 to org.apache.cassandra.db.compaction.LeveledCompactionStrategy
   at 
 org.apache.cassandra.db.compaction.LeveledCompactionStrategyTest.testGrouperLevels(LeveledCompactionStrategyTest.java:124)
 {code}





[jira] [Commented] (CASSANDRA-8537) ConcurrentModificationException while executing 'nodetool cleanup'

2015-01-07 Thread Sebastian Estevez (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8537?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14268793#comment-14268793
 ] 

Sebastian Estevez commented on CASSANDRA-8537:
--

Additional details, tried the following procedures with the same result:

restart Cassandra then cleanup.
repair -pr then cleanup (No errors with repair)
repair then cleanup (No errors with repair)
nodetool scrub then cleanup (No errors with scrub)
nodetool rebuild_index (the only index on the table) then cleanup (No errors on 
the rebuild_index)

 ConcurrentModificationException while executing 'nodetool cleanup'
 --

 Key: CASSANDRA-8537
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8537
 Project: Cassandra
  Issue Type: Bug
  Components: Tools
 Environment: Debian 7.7, Oracle JRE 1.7.0_72
Reporter: Noureddine Chatti
Assignee: Marcus Eriksson
Priority: Minor
 Fix For: 2.1.3


 After adding a new node to an existing cluster (7 already started nodes), and 
 waiting a few minutes to be sure that data migration to the new node is 
 completed, I began to use the command nodetool cleanup sequentially on each 
 old node. When I issued this command on the third node, after a few minutes I 
 got a ConcurrentModificationException.
 ~$ nodetool cleanup
 error: null
 -- StackTrace --
 java.util.ConcurrentModificationException
 at java.util.ArrayList$Itr.checkForComodification(Unknown Source)
 at java.util.ArrayList$Itr.next(Unknown Source)
 at 
 org.apache.cassandra.db.index.SecondaryIndexManager.deleteFromIndexes(SecondaryIndexManager.java:476)
 at 
 org.apache.cassandra.db.compaction.CompactionManager$CleanupStrategy$Full.cleanup(CompactionManager.java:833)
 at 
 org.apache.cassandra.db.compaction.CompactionManager.doCleanupOne(CompactionManager.java:704)
 at 
 org.apache.cassandra.db.compaction.CompactionManager.access$400(CompactionManager.java:97)
 at 
 org.apache.cassandra.db.compaction.CompactionManager$5.execute(CompactionManager.java:370)
 at 
 org.apache.cassandra.db.compaction.CompactionManager$2.call(CompactionManager.java:267)
 at java.util.concurrent.FutureTask.run(Unknown Source)
 at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
 at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
 at java.lang.Thread.run(Unknown Source)
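The stack trace is the classic fail-fast iterator failure: the secondary-index list is mutated while something iterates it. A snapshot-style collection sidesteps it, as this plain-Java sketch (an illustration of the failure mode, not the actual fix) shows:

```java
import java.util.List;

public class IndexIterationSketch
{
    // With an ArrayList this loop throws ConcurrentModificationException,
    // because add() runs while the iterator is open. A CopyOnWriteArrayList
    // iterator works on a snapshot, so the loop sees only the original elements.
    public static int iterateWhileMutating(List<String> indexes)
    {
        int seen = 0;
        for (String idx : indexes)
        {
            seen++;
            indexes.add(idx + "-copy"); // concurrent structural modification
        }
        return seen;
    }
}
```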





[jira] [Commented] (CASSANDRA-6983) DirectoriesTest fails when run as root

2015-01-07 Thread Yuki Morishita (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6983?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14268800#comment-14268800
 ] 

Yuki Morishita commented on CASSANDRA-6983:
---

+1. Committed, Thanks!

 DirectoriesTest fails when run as root
 --

 Key: CASSANDRA-6983
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6983
 Project: Cassandra
  Issue Type: Bug
  Components: Tests
Reporter: Brandon Williams
Assignee: Alan Boudreault
Priority: Minor
 Fix For: 2.0.12, 2.1.3

 Attachments: 6983-v2.patch


 When you run the DirectoriesTest as a normal user, it passes because it fails 
 to create the 'bad' directory:
 {noformat}
 [junit] - Standard Error -
 [junit] ERROR 16:16:18,111 Failed to create 
 /tmp/cassandra4119802552776680052unittest/ks/bad directory
 [junit]  WARN 16:16:18,112 Blacklisting 
 /tmp/cassandra4119802552776680052unittest/ks/bad for writes
 [junit] -  ---
 {noformat}
 But when you run the test as root, it succeeds in making the directory, 
 causing an assertion failure that it's unwritable:
 {noformat}
 [junit] Testcase: 
 testDiskFailurePolicy_best_effort(org.apache.cassandra.db.DirectoriesTest):   
 FAILED
 [junit] 
 [junit] junit.framework.AssertionFailedError: 
 [junit] at 
 org.apache.cassandra.db.DirectoriesTest.testDiskFailurePolicy_best_effort(DirectoriesTest.java:199)
 {noformat}
 It seems to me that we shouldn't be relying on failing to make the 
 directory.  If we're just going to test a nonexistent dir, why try to make 
 one at all?  And if that is supposed to succeed, then we have a problem with 
 either the test or blacklisting.





[jira] [Commented] (CASSANDRA-8194) Reading from Auth table should not be in the request path

2015-01-07 Thread Jeremiah Jordan (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8194?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14268810#comment-14268810
 ] 

Jeremiah Jordan commented on CASSANDRA-8194:


My only question would be: should we have a MAX_STALE time or something, after 
which we actually do start to fail queries? Without that, someone could take 
down X nodes and still be able to get at all the rest of the data with an old 
password.
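A sketch of that combination (constants and names here are illustrative, not Cassandra's): serve from cache without blocking, refresh in the background once an entry passes its validity window, and only hard-fail once it passes a MAX_STALE cutoff.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class AuthCacheSketch
{
    static final long VALIDITY_MS  = 5 * 60 * 1000;  // permissions_validity_in_ms
    static final long MAX_STALE_MS = 60 * 60 * 1000; // proposed hard cutoff

    static final class Entry
    {
        final String permissions;
        final long loadedAt;
        Entry(String permissions, long loadedAt) { this.permissions = permissions; this.loadedAt = loadedAt; }
    }

    private final Map<String, Entry> cache = new ConcurrentHashMap<>();

    public void put(String user, String permissions, long nowMs)
    {
        cache.put(user, new Entry(permissions, nowMs));
    }

    public String get(String user, long nowMs)
    {
        Entry e = cache.get(user);
        if (e == null || nowMs - e.loadedAt > MAX_STALE_MS)
            throw new IllegalStateException("permissions missing or too stale; failing request");
        if (nowMs - e.loadedAt > VALIDITY_MS)
            scheduleBackgroundRefresh(user); // stale but usable: refresh off the request path
        return e.permissions;                // never block the request on a reload
    }

    private void scheduleBackgroundRefresh(String user)
    {
        // enqueue an asynchronous reload of the auth tables here (omitted)
    }
}
```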

 Reading from Auth table should not be in the request path
 -

 Key: CASSANDRA-8194
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8194
 Project: Cassandra
  Issue Type: Improvement
Reporter: Vishy Kasar
Assignee: Aleksey Yeschenko
Priority: Minor
 Fix For: 2.0.12, 3.0

 Attachments: 8194-V2.patch, 8194-V3.txt, 8194-V4.txt, 8194.patch, 
 CacheTest2.java


 We use PasswordAuthenticator and PasswordAuthorizer. The system_auth has a RF 
 of 10 per DC over 2 DCs. The permissions_validity_in_ms is 5 minutes. 
 We still have few thousand requests failing each day with the trace below. 
 The reason for this is read cache request realizing that cached entry has 
 expired and doing a blocking request to refresh cache. 
 We should have the cache refreshed periodically in the background only. The user 
 request should simply look at the cache and not try to refresh it. 
 com.google.common.util.concurrent.UncheckedExecutionException: 
 java.lang.RuntimeException: 
 org.apache.cassandra.exceptions.ReadTimeoutException: Operation timed out - 
 received only 0 responses.
   at com.google.common.cache.LocalCache$Segment.get(LocalCache.java:2258)
   at com.google.common.cache.LocalCache.get(LocalCache.java:3990)
   at com.google.common.cache.LocalCache.getOrLoad(LocalCache.java:3994)
   at 
 com.google.common.cache.LocalCache$LocalLoadingCache.get(LocalCache.java:4878)
   at 
 org.apache.cassandra.service.ClientState.authorize(ClientState.java:292)
   at 
 org.apache.cassandra.service.ClientState.ensureHasPermission(ClientState.java:172)
   at 
 org.apache.cassandra.service.ClientState.hasAccess(ClientState.java:165)
   at 
 org.apache.cassandra.service.ClientState.hasColumnFamilyAccess(ClientState.java:149)
   at 
 org.apache.cassandra.cql3.statements.ModificationStatement.checkAccess(ModificationStatement.java:75)
   at 
 org.apache.cassandra.cql3.QueryProcessor.processStatement(QueryProcessor.java:102)
   at 
 org.apache.cassandra.cql3.QueryProcessor.process(QueryProcessor.java:113)
   at 
 org.apache.cassandra.thrift.CassandraServer.execute_cql3_query(CassandraServer.java:1735)
   at 
 org.apache.cassandra.thrift.Cassandra$Processor$execute_cql3_query.getResult(Cassandra.java:4162)
   at 
 org.apache.cassandra.thrift.Cassandra$Processor$execute_cql3_query.getResult(Cassandra.java:4150)
   at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:32)
   at org.apache.thrift.TBaseProcessor.process(TBaseProcessor.java:34)
   at 
 org.apache.cassandra.thrift.CustomTThreadPoolServer$WorkerProcess.run(CustomTThreadPoolServer.java:206)
   at 
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
   at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
   at java.lang.Thread.run(Thread.java:722)
 Caused by: java.lang.RuntimeException: 
 org.apache.cassandra.exceptions.ReadTimeoutException: Operation timed out - 
 received only 0 responses.
   at org.apache.cassandra.auth.Auth.selectUser(Auth.java:256)
   at org.apache.cassandra.auth.Auth.isSuperuser(Auth.java:84)
   at 
 org.apache.cassandra.auth.AuthenticatedUser.isSuper(AuthenticatedUser.java:50)
   at 
 org.apache.cassandra.auth.CassandraAuthorizer.authorize(CassandraAuthorizer.java:68)
   at org.apache.cassandra.service.ClientState$1.load(ClientState.java:278)
   at org.apache.cassandra.service.ClientState$1.load(ClientState.java:275)
   at 
 com.google.common.cache.LocalCache$LoadingValueReference.loadFuture(LocalCache.java:3589)
   at 
 com.google.common.cache.LocalCache$Segment.loadSync(LocalCache.java:2374)
   at 
 com.google.common.cache.LocalCache$Segment.lockedGetOrLoad(LocalCache.java:2337)
   at com.google.common.cache.LocalCache$Segment.get(LocalCache.java:2252)
   ... 19 more
 Caused by: org.apache.cassandra.exceptions.ReadTimeoutException: Operation 
 timed out - received only 0 responses.
   at org.apache.cassandra.service.ReadCallback.get(ReadCallback.java:105)
   at 
 org.apache.cassandra.service.StorageProxy.fetchRows(StorageProxy.java:943)
   at org.apache.cassandra.service.StorageProxy.read(StorageProxy.java:828)
   at 
 

[jira] [Updated] (CASSANDRA-7395) Support for pure user-defined functions (UDF)

2015-01-07 Thread Ryan McGuire (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-7395?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ryan McGuire updated CASSANDRA-7395:

Tester: Shawn Kumar  (was: Ryan McGuire)

 Support for pure user-defined functions (UDF)
 -

 Key: CASSANDRA-7395
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7395
 Project: Cassandra
  Issue Type: New Feature
  Components: API, Core
Reporter: Jonathan Ellis
Assignee: Robert Stupp
  Labels: cql
 Fix For: 3.0

 Attachments: 7395-dtest.txt, 7395.txt, udf-create-syntax.png, 
 udf-drop-syntax.png


 We have some tickets for various aspects of UDF (CASSANDRA-4914, 
 CASSANDRA-5970, CASSANDRA-4998) but they all suffer from various degrees of 
 ocean-boiling.
 Let's start with something simple: allowing pure user-defined functions in 
 the SELECT clause of a CQL query.  That's it.
 By pure I mean it must depend only on the input parameters: no side effects, 
 no exposure to C* internals. Column values in, result out. 
 http://en.wikipedia.org/wiki/Pure_function
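Expressed in plain Java, the distinction being drawn is simply the following (illustrative only, not the proposed UDF API; the method names are made up):

```java
// Illustrative only: the kind of function this ticket would allow in a
// SELECT clause -- column values in, result out, nothing else.
final class PureExamples
{
    // Pure: the output is determined solely by the inputs, no side effects.
    static double fahrenheitToCelsius(double f)
    {
        return (f - 32.0) * 5.0 / 9.0;
    }

    // NOT pure: depends on hidden state (the clock), so two calls with the
    // same argument can disagree; this would be rejected under the ticket.
    static long ageInMillis(long createdAtMillis)
    {
        return System.currentTimeMillis() - createdAtMillis;
    }
}
```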



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-7458) functional indexes

2015-01-07 Thread Ryan McGuire (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-7458?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ryan McGuire updated CASSANDRA-7458:

Tester: Shawn Kumar

 functional indexes
 --

 Key: CASSANDRA-7458
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7458
 Project: Cassandra
  Issue Type: New Feature
  Components: API, Core
Reporter: Jonathan Ellis
Assignee: Mikhail Stepura
 Fix For: 3.0


 Indexing information derived from the row can be powerful.  For example, 
 using the hypothetical {{extract_date}} function,
 {code}
 create table ticks (
 symbol text,
 ticked_at datetime,
 price int,
 tags set<text>,
 PRIMARY KEY (symbol, ticked_at)
 );
 CREATE INDEX ticks_by_day ON ticks(extract_date(ticked_at));
 SELECT * FROM ticks_by_day WHERE extract_date(ticked_at) = '2014-5-13';
 {code}
 http://www.postgresql.org/docs/9.3/static/indexes-expressional.html
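The derived-value indexing described above can be modeled in a few lines of Java. This is a toy in-memory sketch under assumed names (`onInsert`, `lookup`), not Cassandra's secondary-index machinery: the point is that index entries are keyed by the function's output, and the equality predicate becomes a single lookup.

```java
import java.time.Instant;
import java.time.LocalDate;
import java.time.ZoneOffset;
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

// Toy model of a functional index: entries are keyed by a value *derived*
// from the row (extract_date here), not by the raw column value.
final class FunctionalIndexSketch
{
    private final Map<LocalDate, Set<String>> index = new HashMap<>();

    // stand-in for the hypothetical extract_date function
    static LocalDate extractDate(Instant tickedAt)
    {
        return tickedAt.atZone(ZoneOffset.UTC).toLocalDate();
    }

    // On write, apply the function to the column and index the result.
    void onInsert(String rowKey, Instant tickedAt)
    {
        index.computeIfAbsent(extractDate(tickedAt), d -> new HashSet<>()).add(rowKey);
    }

    // WHERE extract_date(ticked_at) = <day> becomes one index lookup.
    Set<String> lookup(LocalDate day)
    {
        return index.getOrDefault(day, Set.of());
    }
}
```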





[jira] [Updated] (CASSANDRA-7526) Defining UDFs using scripting language directly from CQL

2015-01-07 Thread Ryan McGuire (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-7526?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ryan McGuire updated CASSANDRA-7526:

Tester: Shawn Kumar

 Defining UDFs using scripting language directly from CQL
 

 Key: CASSANDRA-7526
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7526
 Project: Cassandra
  Issue Type: New Feature
Reporter: Sylvain Lebresne
Assignee: Robert Stupp
 Fix For: 3.0

 Attachments: 7526.txt, 7526v2.txt, 7526v3.txt, 7526v4.txt, 
 7526v5.txt, 7526v6.txt


 In CASSANDRA-7395 we'll introduce the ability to define user functions by 
 dropping a java class server side. While this is a good first step and a good 
 option to have in any case, it would be nice to provide a simpler way to 
 define those functions directly from CQL. And while we probably don't want to 
 re-invent a new programming language inside CQL, we can reuse one. Typically, 
 with java 8, we could use nashorn. This would allow a syntax along the lines 
 of:
 {noformat}
 CREATE FUNCTION sum (a bigint, b bigint) bigint AS { return a + b; }
 {noformat}
 Note that in this, everything before the AS will be parsed by us, which we'll 
 probably want because we'll probably need to have the types of 
 arguments/return in practice anyway, and it's a good idea to reuse CQL types. 
 The expression after the AS will be given to Nashorn however.
 Please note that in theory we could ultimately support multiple languages 
 after the AS. However, I'd like to focus on supporting just one for this 
 ticket, and I'm keen on using javascript through Nashorn: as it's the one 
 that will ship with java from now on, it feels like a safe default.





[jira] [Updated] (CASSANDRA-4914) Aggregation functions in CQL

2015-01-07 Thread Ryan McGuire (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-4914?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ryan McGuire updated CASSANDRA-4914:

Tester: Shawn Kumar

 Aggregation functions in CQL
 

 Key: CASSANDRA-4914
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4914
 Project: Cassandra
  Issue Type: New Feature
Reporter: Vijay
Assignee: Benjamin Lerer
  Labels: cql, docs
 Fix For: 3.0

 Attachments: CASSANDRA-4914-V2.txt, CASSANDRA-4914-V3.txt, 
 CASSANDRA-4914-V4.txt, CASSANDRA-4914-V5.txt, CASSANDRA-4914.txt


 The requirement is to do aggregation of data in Cassandra (wide rows of column 
 values of int, double, float, etc.), with some basic aggregate functions like 
 AVG, SUM, MEAN, MIN, MAX, etc. (for the columns within a row).
 Example:
 SELECT * FROM emp WHERE empID IN (130) ORDER BY deptID DESC;
 
  empid | deptid | first_name | last_name | salary
 -------+--------+------------+-----------+--------
    130 |      3 | joe        | doe       |   10.1
    130 |      2 | joe        | doe       |    100
    130 |      1 | joe        | doe       |  1e+03
 
 SELECT sum(salary), empid FROM emp WHERE empID IN (130);
 
  sum(salary) | empid
 -------------+-------
       1110.1 |   130
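Until built-in aggregates land, the same result can be computed client-side over the fetched rows. A minimal Java sketch (hypothetical helper; the values come from the sample output above):

```java
import java.util.List;

// Client-side stand-in for SELECT sum(salary)/avg(salary): fold the salary
// column of the rows returned for a partition (e.g. empid 130 above).
final class SalaryAggregates
{
    static double sum(List<Double> salaries)
    {
        double total = 0.0;
        for (double s : salaries)
            total += s;
        return total;
    }

    static double avg(List<Double> salaries)
    {
        return salaries.isEmpty() ? 0.0 : sum(salaries) / salaries.size();
    }
}
```

The obvious drawback, and the motivation for this ticket, is that every row must cross the wire just to produce one number.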





[4/6] cassandra git commit: Merge branch 'cassandra-2.0' into cassandra-2.1

2015-01-07 Thread yukim
Merge branch 'cassandra-2.0' into cassandra-2.1

Conflicts:
test/unit/org/apache/cassandra/db/DirectoriesTest.java


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/5ac5ee66
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/5ac5ee66
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/5ac5ee66

Branch: refs/heads/trunk
Commit: 5ac5ee666d754e39db8dcad474a51e140f3511ef
Parents: dcc90ef ad37533
Author: Yuki Morishita yu...@apache.org
Authored: Wed Jan 7 22:13:15 2015 -0600
Committer: Yuki Morishita yu...@apache.org
Committed: Wed Jan 7 22:13:15 2015 -0600

--
 .../apache/cassandra/db/DirectoriesTest.java| 25 
 1 file changed, 10 insertions(+), 15 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/5ac5ee66/test/unit/org/apache/cassandra/db/DirectoriesTest.java
--
diff --cc test/unit/org/apache/cassandra/db/DirectoriesTest.java
index 34d10d2,c4471e5..b1c51ee
--- a/test/unit/org/apache/cassandra/db/DirectoriesTest.java
+++ b/test/unit/org/apache/cassandra/db/DirectoriesTest.java
@@@ -19,19 -19,13 +19,21 @@@ package org.apache.cassandra.db
  
  import java.io.File;
  import java.io.IOException;
 -import java.util.*;
 -import java.util.concurrent.*;
 +import java.util.ArrayList;
 +import java.util.Arrays;
 +import java.util.HashMap;
 +import java.util.HashSet;
 +import java.util.IdentityHashMap;
 +import java.util.List;
 +import java.util.Map;
 +import java.util.Set;
 +import java.util.concurrent.Callable;
 +import java.util.concurrent.Executors;
 +import java.util.concurrent.Future;
  
+ import org.apache.commons.lang3.StringUtils;
+ 
  import org.junit.AfterClass;
 -import org.junit.Assert;
  import org.junit.BeforeClass;
  import org.junit.Test;
  
@@@ -42,7 -35,8 +44,8 @@@ import org.apache.cassandra.db.Director
  import org.apache.cassandra.io.sstable.Component;
  import org.apache.cassandra.io.sstable.Descriptor;
  import org.apache.cassandra.io.util.FileUtils;
 +import org.apache.cassandra.utils.ByteBufferUtil;
+ import org.apache.cassandra.io.FSWriteError;
  
  import static org.junit.Assert.assertEquals;
  import static org.junit.Assert.assertFalse;
@@@ -194,20 -193,18 +197,18 @@@ public class DirectoriesTes
  try 
  {
  
DatabaseDescriptor.setDiskFailurePolicy(DiskFailurePolicy.best_effort);
- 
- for (DataDirectory dd : Directories.dataDirectories)
+ // Fake a Directory creation failure
 -if (Directories.dataFileLocations.length > 0)
++if (Directories.dataDirectories.length > 0)
  {
- dd.location.setExecutable(false);
- dd.location.setWritable(false);
+ String[] path = new String[] {KS, "bad"};
 -File dir = new 
File(Directories.dataFileLocations[0].location, StringUtils.join(path, 
File.separator));
++File dir = new File(Directories.dataDirectories[0].location, 
StringUtils.join(path, File.separator));
+ FileUtils.handleFSError(new FSWriteError(new 
 IOException("Unable to create directory " + dir), dir));
  }
  
- // nested folders in /tmp is enough to fail on *nix but we need 
to pass the 255 char limit to get a failure on Windows and blacklist
- CFMetaData cfm = new CFMetaData(KS, 
badbadbadbadbadbadbadbadbadbadbadbadbadbadbadbadbadbadbadbadbadbadbadbadbadbadbadbadbadbadbadbadbadbadbadbadbadbadbadbadbadbadbadbadbadbadbadbadbadbadbadbadbadbadbadbadbadbadbadbadbadbadbadbadbadbadbadbadbadbadbadbadbadbadbadbadbadbadbadbadbadbadbadbadbadbadbadbadbadbadbadbadbadbadbadbadbadbadbadbadbadbadbadbadbadbadbadbadbadbadbadbadbadbadbadbadbadbadbadbadbadbad,
 ColumnFamilyType.Standard, null);
- Directories dir = new Directories(cfm);
- 
- for (File file : dir.getCFDirectories())
 -for (DataDirectory dd : Directories.dataFileLocations)
++for (DataDirectory dd : Directories.dataDirectories)
  {
+ File file = new File(dd.location, new File(KS, 
"bad").getPath());
 -Assert.assertTrue(BlacklistedDirectories.isUnwritable(file));
 +assertTrue(BlacklistedDirectories.isUnwritable(file));
  }
  } 
  finally 



[1/6] cassandra git commit: Avoid creating dir in DirectoriesTest

2015-01-07 Thread yukim
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-2.0 1eea31460 - ad3753309
  refs/heads/cassandra-2.1 dcc90ef35 - 5ac5ee666
  refs/heads/trunk 12f17b203 - 9606a17b3


Avoid creating dir in DirectoriesTest

patch by Alan Boudreault; reviewed by yukim for CASSANDRA-6983


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/ad375330
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/ad375330
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/ad375330

Branch: refs/heads/cassandra-2.0
Commit: ad3753309776fb0b7096d15a7535ac76511779e3
Parents: 1eea314
Author: Alan Boudreault a...@alanb.ca
Authored: Wed Jan 7 18:34:01 2015 -0600
Committer: Yuki Morishita yu...@apache.org
Committed: Wed Jan 7 18:36:01 2015 -0600

--
 .../apache/cassandra/db/DirectoriesTest.java| 22 
 1 file changed, 9 insertions(+), 13 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/ad375330/test/unit/org/apache/cassandra/db/DirectoriesTest.java
--
diff --git a/test/unit/org/apache/cassandra/db/DirectoriesTest.java 
b/test/unit/org/apache/cassandra/db/DirectoriesTest.java
index 8754fe0..c4471e5 100644
--- a/test/unit/org/apache/cassandra/db/DirectoriesTest.java
+++ b/test/unit/org/apache/cassandra/db/DirectoriesTest.java
@@ -22,6 +22,8 @@ import java.io.IOException;
 import java.util.*;
 import java.util.concurrent.*;
 
+import org.apache.commons.lang3.StringUtils;
+
 import org.junit.AfterClass;
 import org.junit.Assert;
 import org.junit.BeforeClass;
@@ -34,6 +36,7 @@ import org.apache.cassandra.db.compaction.LeveledManifest;
 import org.apache.cassandra.io.sstable.Component;
 import org.apache.cassandra.io.sstable.Descriptor;
 import org.apache.cassandra.io.util.FileUtils;
+import org.apache.cassandra.io.FSWriteError;
 
 import static org.junit.Assert.assertEquals;
 import static org.junit.Assert.assertFalse;
@@ -190,15 +193,14 @@ public class DirectoriesTest
 try 
 {
 
DatabaseDescriptor.setDiskFailurePolicy(DiskFailurePolicy.best_effort);
-
-for (DataDirectory dd : Directories.dataFileLocations)
+// Fake a Directory creation failure
+if (Directories.dataFileLocations.length > 0)
 {
-dd.location.setExecutable(false);
-dd.location.setWritable(false);
+String[] path = new String[] {KS, "bad"};
+File dir = new File(Directories.dataFileLocations[0].location, 
StringUtils.join(path, File.separator));
+FileUtils.handleFSError(new FSWriteError(new 
 IOException("Unable to create directory " + dir), dir));
 }
-
-Directories.create(KS, "bad");
-
+
 for (DataDirectory dd : Directories.dataFileLocations)
 {
 File file = new File(dd.location, new File(KS, 
"bad").getPath());
@@ -207,12 +209,6 @@ public class DirectoriesTest
 } 
 finally 
 {
-for (DataDirectory dd : Directories.dataFileLocations)
-{
-dd.location.setExecutable(true);
-dd.location.setWritable(true);
-}
-
 DatabaseDescriptor.setDiskFailurePolicy(origPolicy);
 }
 }



[5/6] cassandra git commit: Merge branch 'cassandra-2.0' into cassandra-2.1

2015-01-07 Thread yukim
Merge branch 'cassandra-2.0' into cassandra-2.1

Conflicts:
test/unit/org/apache/cassandra/db/DirectoriesTest.java


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/5ac5ee66
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/5ac5ee66
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/5ac5ee66

Branch: refs/heads/cassandra-2.1
Commit: 5ac5ee666d754e39db8dcad474a51e140f3511ef
Parents: dcc90ef ad37533
Author: Yuki Morishita yu...@apache.org
Authored: Wed Jan 7 22:13:15 2015 -0600
Committer: Yuki Morishita yu...@apache.org
Committed: Wed Jan 7 22:13:15 2015 -0600

--
 .../apache/cassandra/db/DirectoriesTest.java| 25 
 1 file changed, 10 insertions(+), 15 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/5ac5ee66/test/unit/org/apache/cassandra/db/DirectoriesTest.java
--
diff --cc test/unit/org/apache/cassandra/db/DirectoriesTest.java
index 34d10d2,c4471e5..b1c51ee
--- a/test/unit/org/apache/cassandra/db/DirectoriesTest.java
+++ b/test/unit/org/apache/cassandra/db/DirectoriesTest.java
@@@ -19,19 -19,13 +19,21 @@@ package org.apache.cassandra.db
  
  import java.io.File;
  import java.io.IOException;
 -import java.util.*;
 -import java.util.concurrent.*;
 +import java.util.ArrayList;
 +import java.util.Arrays;
 +import java.util.HashMap;
 +import java.util.HashSet;
 +import java.util.IdentityHashMap;
 +import java.util.List;
 +import java.util.Map;
 +import java.util.Set;
 +import java.util.concurrent.Callable;
 +import java.util.concurrent.Executors;
 +import java.util.concurrent.Future;
  
+ import org.apache.commons.lang3.StringUtils;
+ 
  import org.junit.AfterClass;
 -import org.junit.Assert;
  import org.junit.BeforeClass;
  import org.junit.Test;
  
@@@ -42,7 -35,8 +44,8 @@@ import org.apache.cassandra.db.Director
  import org.apache.cassandra.io.sstable.Component;
  import org.apache.cassandra.io.sstable.Descriptor;
  import org.apache.cassandra.io.util.FileUtils;
 +import org.apache.cassandra.utils.ByteBufferUtil;
+ import org.apache.cassandra.io.FSWriteError;
  
  import static org.junit.Assert.assertEquals;
  import static org.junit.Assert.assertFalse;
@@@ -194,20 -193,18 +197,18 @@@ public class DirectoriesTes
  try 
  {
  
DatabaseDescriptor.setDiskFailurePolicy(DiskFailurePolicy.best_effort);
- 
- for (DataDirectory dd : Directories.dataDirectories)
+ // Fake a Directory creation failure
 -if (Directories.dataFileLocations.length > 0)
++if (Directories.dataDirectories.length > 0)
  {
- dd.location.setExecutable(false);
- dd.location.setWritable(false);
+ String[] path = new String[] {KS, "bad"};
 -File dir = new 
File(Directories.dataFileLocations[0].location, StringUtils.join(path, 
File.separator));
++File dir = new File(Directories.dataDirectories[0].location, 
StringUtils.join(path, File.separator));
+ FileUtils.handleFSError(new FSWriteError(new 
 IOException("Unable to create directory " + dir), dir));
  }
  
- // nested folders in /tmp is enough to fail on *nix but we need 
to pass the 255 char limit to get a failure on Windows and blacklist
- CFMetaData cfm = new CFMetaData(KS, 
badbadbadbadbadbadbadbadbadbadbadbadbadbadbadbadbadbadbadbadbadbadbadbadbadbadbadbadbadbadbadbadbadbadbadbadbadbadbadbadbadbadbadbadbadbadbadbadbadbadbadbadbadbadbadbadbadbadbadbadbadbadbadbadbadbadbadbadbadbadbadbadbadbadbadbadbadbadbadbadbadbadbadbadbadbadbadbadbadbadbadbadbadbadbadbadbadbadbadbadbadbadbadbadbadbadbadbadbadbadbadbadbadbadbadbadbadbadbadbadbadbad,
 ColumnFamilyType.Standard, null);
- Directories dir = new Directories(cfm);
- 
- for (File file : dir.getCFDirectories())
 -for (DataDirectory dd : Directories.dataFileLocations)
++for (DataDirectory dd : Directories.dataDirectories)
  {
+ File file = new File(dd.location, new File(KS, 
"bad").getPath());
 -Assert.assertTrue(BlacklistedDirectories.isUnwritable(file));
 +assertTrue(BlacklistedDirectories.isUnwritable(file));
  }
  } 
  finally 



[6/6] cassandra git commit: Merge branch 'cassandra-2.1' into trunk

2015-01-07 Thread yukim
Merge branch 'cassandra-2.1' into trunk


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/9606a17b
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/9606a17b
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/9606a17b

Branch: refs/heads/trunk
Commit: 9606a17b364a10d2aeb2977dd448fd6d51bfcada
Parents: 12f17b2 5ac5ee6
Author: Yuki Morishita yu...@apache.org
Authored: Wed Jan 7 22:13:21 2015 -0600
Committer: Yuki Morishita yu...@apache.org
Committed: Wed Jan 7 22:13:21 2015 -0600

--
 .../apache/cassandra/db/DirectoriesTest.java| 25 
 1 file changed, 10 insertions(+), 15 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/9606a17b/test/unit/org/apache/cassandra/db/DirectoriesTest.java
--



[2/6] cassandra git commit: Avoid creating dir in DirectoriesTest

2015-01-07 Thread yukim
Avoid creating dir in DirectoriesTest

patch by Alan Boudreault; reviewed by yukim for CASSANDRA-6983


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/ad375330
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/ad375330
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/ad375330

Branch: refs/heads/cassandra-2.1
Commit: ad3753309776fb0b7096d15a7535ac76511779e3
Parents: 1eea314
Author: Alan Boudreault a...@alanb.ca
Authored: Wed Jan 7 18:34:01 2015 -0600
Committer: Yuki Morishita yu...@apache.org
Committed: Wed Jan 7 18:36:01 2015 -0600

--
 .../apache/cassandra/db/DirectoriesTest.java| 22 
 1 file changed, 9 insertions(+), 13 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/ad375330/test/unit/org/apache/cassandra/db/DirectoriesTest.java
--
diff --git a/test/unit/org/apache/cassandra/db/DirectoriesTest.java 
b/test/unit/org/apache/cassandra/db/DirectoriesTest.java
index 8754fe0..c4471e5 100644
--- a/test/unit/org/apache/cassandra/db/DirectoriesTest.java
+++ b/test/unit/org/apache/cassandra/db/DirectoriesTest.java
@@ -22,6 +22,8 @@ import java.io.IOException;
 import java.util.*;
 import java.util.concurrent.*;
 
+import org.apache.commons.lang3.StringUtils;
+
 import org.junit.AfterClass;
 import org.junit.Assert;
 import org.junit.BeforeClass;
@@ -34,6 +36,7 @@ import org.apache.cassandra.db.compaction.LeveledManifest;
 import org.apache.cassandra.io.sstable.Component;
 import org.apache.cassandra.io.sstable.Descriptor;
 import org.apache.cassandra.io.util.FileUtils;
+import org.apache.cassandra.io.FSWriteError;
 
 import static org.junit.Assert.assertEquals;
 import static org.junit.Assert.assertFalse;
@@ -190,15 +193,14 @@ public class DirectoriesTest
 try 
 {
 
DatabaseDescriptor.setDiskFailurePolicy(DiskFailurePolicy.best_effort);
-
-for (DataDirectory dd : Directories.dataFileLocations)
+// Fake a Directory creation failure
+if (Directories.dataFileLocations.length > 0)
 {
-dd.location.setExecutable(false);
-dd.location.setWritable(false);
+String[] path = new String[] {KS, "bad"};
+File dir = new File(Directories.dataFileLocations[0].location, 
StringUtils.join(path, File.separator));
+FileUtils.handleFSError(new FSWriteError(new 
 IOException("Unable to create directory " + dir), dir));
 }
-
-Directories.create(KS, "bad");
-
+
 for (DataDirectory dd : Directories.dataFileLocations)
 {
 File file = new File(dd.location, new File(KS, 
"bad").getPath());
@@ -207,12 +209,6 @@ public class DirectoriesTest
 } 
 finally 
 {
-for (DataDirectory dd : Directories.dataFileLocations)
-{
-dd.location.setExecutable(true);
-dd.location.setWritable(true);
-}
-
 DatabaseDescriptor.setDiskFailurePolicy(origPolicy);
 }
 }



[3/6] cassandra git commit: Avoid creating dir in DirectoriesTest

2015-01-07 Thread yukim
Avoid creating dir in DirectoriesTest

patch by Alan Boudreault; reviewed by yukim for CASSANDRA-6983


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/ad375330
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/ad375330
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/ad375330

Branch: refs/heads/trunk
Commit: ad3753309776fb0b7096d15a7535ac76511779e3
Parents: 1eea314
Author: Alan Boudreault a...@alanb.ca
Authored: Wed Jan 7 18:34:01 2015 -0600
Committer: Yuki Morishita yu...@apache.org
Committed: Wed Jan 7 18:36:01 2015 -0600

--
 .../apache/cassandra/db/DirectoriesTest.java| 22 
 1 file changed, 9 insertions(+), 13 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/ad375330/test/unit/org/apache/cassandra/db/DirectoriesTest.java
--
diff --git a/test/unit/org/apache/cassandra/db/DirectoriesTest.java 
b/test/unit/org/apache/cassandra/db/DirectoriesTest.java
index 8754fe0..c4471e5 100644
--- a/test/unit/org/apache/cassandra/db/DirectoriesTest.java
+++ b/test/unit/org/apache/cassandra/db/DirectoriesTest.java
@@ -22,6 +22,8 @@ import java.io.IOException;
 import java.util.*;
 import java.util.concurrent.*;
 
+import org.apache.commons.lang3.StringUtils;
+
 import org.junit.AfterClass;
 import org.junit.Assert;
 import org.junit.BeforeClass;
@@ -34,6 +36,7 @@ import org.apache.cassandra.db.compaction.LeveledManifest;
 import org.apache.cassandra.io.sstable.Component;
 import org.apache.cassandra.io.sstable.Descriptor;
 import org.apache.cassandra.io.util.FileUtils;
+import org.apache.cassandra.io.FSWriteError;
 
 import static org.junit.Assert.assertEquals;
 import static org.junit.Assert.assertFalse;
@@ -190,15 +193,14 @@ public class DirectoriesTest
 try 
 {
 
DatabaseDescriptor.setDiskFailurePolicy(DiskFailurePolicy.best_effort);
-
-for (DataDirectory dd : Directories.dataFileLocations)
+// Fake a Directory creation failure
+if (Directories.dataFileLocations.length > 0)
 {
-dd.location.setExecutable(false);
-dd.location.setWritable(false);
+String[] path = new String[] {KS, "bad"};
+File dir = new File(Directories.dataFileLocations[0].location, 
StringUtils.join(path, File.separator));
+FileUtils.handleFSError(new FSWriteError(new 
 IOException("Unable to create directory " + dir), dir));
 }
-
-Directories.create(KS, "bad");
-
+
 for (DataDirectory dd : Directories.dataFileLocations)
 {
 File file = new File(dd.location, new File(KS, 
"bad").getPath());
@@ -207,12 +209,6 @@ public class DirectoriesTest
 } 
 finally 
 {
-for (DataDirectory dd : Directories.dataFileLocations)
-{
-dd.location.setExecutable(true);
-dd.location.setWritable(true);
-}
-
 DatabaseDescriptor.setDiskFailurePolicy(origPolicy);
 }
 }



[jira] [Resolved] (CASSANDRA-7321) retire jython paging tests and port over to python driver

2015-01-07 Thread Ryan McGuire (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-7321?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ryan McGuire resolved CASSANDRA-7321.
-
Resolution: Fixed

 retire jython paging tests and port over to python driver
 -

 Key: CASSANDRA-7321
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7321
 Project: Cassandra
  Issue Type: Test
Reporter: Russ Hatch
Assignee: Russ Hatch

 The latest version of the python driver now supports paging, so we have no 
 need to test paging with jython anymore.
 We should either create another paging test project to supplant 
 https://github.com/riptano/cassandra-dtest-jython, or better yet get these 
 tests incorporated into cassandra-dtest since we are soon going to port dtest 
 over to the new python driver (away from dbapi2).





cassandra git commit: ninja: fix column name in SELECT stmt error message

2015-01-07 Thread tylerhobbs
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-2.1 96365bf55 - 6afab52b2


ninja: fix column name in SELECT stmt error message


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/6afab52b
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/6afab52b
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/6afab52b

Branch: refs/heads/cassandra-2.1
Commit: 6afab52b27de4b5d8b377d4fb7d15012bc43527c
Parents: 96365bf
Author: Tyler Hobbs ty...@datastax.com
Authored: Wed Jan 7 17:11:01 2015 -0600
Committer: Tyler Hobbs ty...@datastax.com
Committed: Wed Jan 7 17:11:01 2015 -0600

--
 src/java/org/apache/cassandra/cql3/statements/SelectStatement.java | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/6afab52b/src/java/org/apache/cassandra/cql3/statements/SelectStatement.java
--
diff --git a/src/java/org/apache/cassandra/cql3/statements/SelectStatement.java 
b/src/java/org/apache/cassandra/cql3/statements/SelectStatement.java
index 4163315..96d91b9 100644
--- a/src/java/org/apache/cassandra/cql3/statements/SelectStatement.java
+++ b/src/java/org/apache/cassandra/cql3/statements/SelectStatement.java
@@ -1945,7 +1945,7 @@ public class SelectStatement implements CQLStatement
 break;
 }
 throw new InvalidRequestException(String.format(
-"PRIMARY KEY column \"%s\" cannot be 
restricted (preceding column \"%s\" is either not restricted or by a non-EQ 
relation)", cdef.name, previous));
+"PRIMARY KEY column \"%s\" cannot be 
restricted (preceding column \"%s\" is either not restricted or by a non-EQ 
relation)", cdef.name, previous.name));
 }
 }
 else if (restriction.isSlice())



[jira] [Updated] (CASSANDRA-8578) LeveledCompactionStrategyTest.testGrouperLevels failings

2015-01-07 Thread Philip Thompson (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8578?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Philip Thompson updated CASSANDRA-8578:
---
Priority: Minor  (was: Major)

 LeveledCompactionStrategyTest.testGrouperLevels failings
 

 Key: CASSANDRA-8578
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8578
 Project: Cassandra
  Issue Type: Bug
  Components: Tests
Reporter: Philip Thompson
Assignee: Marcus Eriksson
Priority: Minor
 Fix For: 3.0


 This test is failing on trunk. Here is the jenkins output:
 {code}
  Error Details
 org.apache.cassandra.db.compaction.WrappingCompactionStrategy cannot be cast 
 to org.apache.cassandra.db.compaction.LeveledCompactionStrategy
  Stack Trace
 java.lang.ClassCastException: 
 org.apache.cassandra.db.compaction.WrappingCompactionStrategy cannot be cast 
 to org.apache.cassandra.db.compaction.LeveledCompactionStrategy
   at 
 org.apache.cassandra.db.compaction.LeveledCompactionStrategyTest.testGrouperLevels(LeveledCompactionStrategyTest.java:124)
 {code}





[jira] [Created] (CASSANDRA-8578) LeveledCompactionStrategyTest.testGrouperLevels failings

2015-01-07 Thread Philip Thompson (JIRA)
Philip Thompson created CASSANDRA-8578:
--

 Summary: LeveledCompactionStrategyTest.testGrouperLevels failings
 Key: CASSANDRA-8578
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8578
 Project: Cassandra
  Issue Type: Bug
  Components: Tests
Reporter: Philip Thompson
Assignee: Marcus Eriksson
 Fix For: 3.0


This test is failing on trunk. Here is the jenkins output:

{code}
 Error Details

org.apache.cassandra.db.compaction.WrappingCompactionStrategy cannot be cast to 
org.apache.cassandra.db.compaction.LeveledCompactionStrategy

 Stack Trace

java.lang.ClassCastException: 
org.apache.cassandra.db.compaction.WrappingCompactionStrategy cannot be cast to 
org.apache.cassandra.db.compaction.LeveledCompactionStrategy
at 
org.apache.cassandra.db.compaction.LeveledCompactionStrategyTest.testGrouperLevels(LeveledCompactionStrategyTest.java:124)
{code}





[jira] [Updated] (CASSANDRA-8577) Values of set types not loading correctly into Pig

2015-01-07 Thread Oksana Danylyshyn (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8577?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Oksana Danylyshyn updated CASSANDRA-8577:
-
Description: 
Values of set types are not loading correctly from Cassandra (cql3 table, 
Native protocol v3) into Pig using CqlNativeStorage. 
When using Cassandra version 2.1.0 only empty values are loaded, and for newer 
versions (2.1.1 and 2.1.2) the following error is received: 
org.apache.cassandra.serializers.MarshalException: Unexpected extraneous bytes 
after set value
at 
org.apache.cassandra.serializers.SetSerializer.deserializeForNativeProtocol(SetSerializer.java:94)

Steps to reproduce:

{code}cqlsh:socialdata> CREATE TABLE test (
 key varchar PRIMARY KEY,
 tags set<varchar>
   );
cqlsh:socialdata> insert into test (key, tags) values ('key', {'Running', 
'onestep4red', 'running'});
cqlsh:socialdata> select * from test;

 key | tags
-+---
 key | {'Running', 'onestep4red', 'running'}

(1 rows){code}


With version 2.1.0:
{code}grunt> data = load 'cql://socialdata/test' using 
org.apache.cassandra.hadoop.pig.CqlNativeStorage();
grunt> dump data;

(key,()){code}

With version 2.1.2:
{code}grunt> data = load 'cql://socialdata/test' using 
org.apache.cassandra.hadoop.pig.CqlNativeStorage();
grunt> dump data;

org.apache.cassandra.serializers.MarshalException: Unexpected extraneous bytes 
after set value
  at 
org.apache.cassandra.serializers.SetSerializer.deserializeForNativeProtocol(SetSerializer.java:94)
  at 
org.apache.cassandra.serializers.SetSerializer.deserializeForNativeProtocol(SetSerializer.java:27)
  at 
org.apache.cassandra.hadoop.pig.AbstractCassandraStorage.cassandraToObj(AbstractCassandraStorage.java:796)
  at 
org.apache.cassandra.hadoop.pig.CqlStorage.cqlColumnToObj(CqlStorage.java:195)
  at 
org.apache.cassandra.hadoop.pig.CqlNativeStorage.getNext(CqlNativeStorage.java:106)
  at 
org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigRecordReader.nextKeyValue(PigRecordReader.java:211)
  at 
org.apache.hadoop.mapred.MapTask$NewTrackingRecordReader.nextKeyValue(MapTask.java:532)
  at org.apache.hadoop.mapreduce.MapContext.nextKeyValue(MapContext.java:67)
  at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:143)
  at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:764)
  at org.apache.hadoop.mapred.MapTask.run(MapTask.java:370)
  at 
org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:212){code}

Expected result:
{code}(key,(Running,onestep4red,running)){code}

  was:
Values of set types are not loading correctly from Cassandra (cql3 table, 
Native protocol v3) into Pig using CqlNativeStorage. 
When using Cassandra version 2.1.0 only empty values are loaded, and for newer 
versions (2.1.1 and 2.1.2) the following error is received: 
org.apache.cassandra.serializers.MarshalException: Unexpected extraneous bytes 
after set value
at 
org.apache.cassandra.serializers.SetSerializer.deserializeForNativeProtocol(SetSerializer.java:94)


 Values of set types not loading correctly into Pig
 --

 Key: CASSANDRA-8577
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8577
 Project: Cassandra
  Issue Type: Bug
Reporter: Oksana Danylyshyn
Assignee: Brandon Williams
 Fix For: 2.1.3


 Values of set types are not loading correctly from Cassandra (cql3 table, 
 Native protocol v3) into Pig using CqlNativeStorage. 
 When using Cassandra version 2.1.0 only empty values are loaded, and for 
 newer versions (2.1.1 and 2.1.2) the following error is received: 
 org.apache.cassandra.serializers.MarshalException: Unexpected extraneous 
 bytes after set value
 at 
 org.apache.cassandra.serializers.SetSerializer.deserializeForNativeProtocol(SetSerializer.java:94)
 Steps to reproduce:
 {code}cqlsh:socialdata> CREATE TABLE test (
  key varchar PRIMARY KEY,
  tags set<varchar>
    );
 cqlsh:socialdata> insert into test (key, tags) values ('key', {'Running', 
 'onestep4red', 'running'});
 cqlsh:socialdata> select * from test;
  key | tags
 -+---
  key | {'Running', 'onestep4red', 'running'}
 (1 rows){code}
 With version 2.1.0:
 {code}grunt> data = load 'cql://socialdata/test' using 
 org.apache.cassandra.hadoop.pig.CqlNativeStorage();
 grunt> dump data;
 (key,()){code}
 With version 2.1.2:
 {code}grunt> data = load 'cql://socialdata/test' using 
 org.apache.cassandra.hadoop.pig.CqlNativeStorage();
 grunt> dump data;
 org.apache.cassandra.serializers.MarshalException: Unexpected extraneous 
 bytes after set value
   at 
 org.apache.cassandra.serializers.SetSerializer.deserializeForNativeProtocol(SetSerializer.java:94)
   at 
 

[jira] [Commented] (CASSANDRA-7124) Use JMX Notifications to Indicate Success/Failure of Long-Running Operations

2015-01-07 Thread Rajanarayanan Thottuvaikkatumana (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7124?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14268434#comment-14268434
 ] 

Rajanarayanan Thottuvaikkatumana commented on CASSANDRA-7124:
-

[~yukim], Please find the commit for the upgradesstable task - 
https://github.com/rnamboodiri/cassandra/commit/ac90750e1cc6e7b45cbe0fbd29d702d800e759a0

I could not find any automated tests for this task. Any idea how to do that?

Regarding the coding style, I will have a look at it and do the needful. 
Thanks


 Use JMX Notifications to Indicate Success/Failure of Long-Running Operations
 

 Key: CASSANDRA-7124
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7124
 Project: Cassandra
  Issue Type: Improvement
  Components: Tools
Reporter: Tyler Hobbs
Assignee: Rajanarayanan Thottuvaikkatumana
Priority: Minor
  Labels: lhf
 Fix For: 3.0

 Attachments: 7124-wip.txt, cassandra-trunk-compact-7124.txt, 
 cassandra-trunk-decommission-7124.txt


 If {{nodetool cleanup}} or some other long-running operation takes too long 
 to complete, you'll see an error like the one in CASSANDRA-2126, so you can't 
 tell if the operation completed successfully or not.  CASSANDRA-4767 fixed 
 this for repairs with JMX notifications.  We should do something similar for 
 nodetool cleanup, compact, decommission, move, relocate, etc.
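
 The mechanism CASSANDRA-4767 used for repair, and which this ticket proposes to
 extend, is standard JMX notifications. A minimal, hypothetical sketch of the idea
 (the class name {{CleanupNotifier}} and the notification types are invented for
 illustration, not Cassandra's actual code):

 {code}
 import java.util.concurrent.atomic.AtomicLong;
 import java.util.concurrent.atomic.AtomicReference;
 import javax.management.Notification;
 import javax.management.NotificationBroadcasterSupport;

 // Hypothetical MBean support class: broadcasts a notification when a
 // long-running operation finishes, instead of leaving the JMX caller to
 // time out and guess whether the operation succeeded.
 public class CleanupNotifier extends NotificationBroadcasterSupport {
     private final AtomicLong seq = new AtomicLong();

     public void operationFinished(String operation, boolean success) {
         String type = success ? "operation.success" : "operation.failure";
         sendNotification(new Notification(type, this, seq.incrementAndGet(),
                 operation + (success ? " completed" : " failed")));
     }

     public static void main(String[] args) {
         CleanupNotifier notifier = new CleanupNotifier();
         AtomicReference<Notification> seen = new AtomicReference<>();
         // With the no-arg superclass constructor, listeners run in the
         // sender's thread, so the notification is visible right after
         // sendNotification returns.
         notifier.addNotificationListener((n, handback) -> seen.set(n), null, null);
         notifier.operationFinished("cleanup", true);
         System.out.println(seen.get().getType());
     }
 }
 {code}

 A nodetool-side client would subscribe with the same
 {{addNotificationListener}} call against the remote MBean and block until a
 success/failure notification (or connection loss) arrives.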



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[1/2] cassandra git commit: ninja: fix column name in SELECT stmt error message

2015-01-07 Thread tylerhobbs
Repository: cassandra
Updated Branches:
  refs/heads/trunk 2b7522f66 -> 66348bbe5


ninja: fix column name in SELECT stmt error message


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/6afab52b
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/6afab52b
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/6afab52b

Branch: refs/heads/trunk
Commit: 6afab52b27de4b5d8b377d4fb7d15012bc43527c
Parents: 96365bf
Author: Tyler Hobbs ty...@datastax.com
Authored: Wed Jan 7 17:11:01 2015 -0600
Committer: Tyler Hobbs ty...@datastax.com
Committed: Wed Jan 7 17:11:01 2015 -0600

--
 src/java/org/apache/cassandra/cql3/statements/SelectStatement.java | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/6afab52b/src/java/org/apache/cassandra/cql3/statements/SelectStatement.java
--
diff --git a/src/java/org/apache/cassandra/cql3/statements/SelectStatement.java 
b/src/java/org/apache/cassandra/cql3/statements/SelectStatement.java
index 4163315..96d91b9 100644
--- a/src/java/org/apache/cassandra/cql3/statements/SelectStatement.java
+++ b/src/java/org/apache/cassandra/cql3/statements/SelectStatement.java
@@ -1945,7 +1945,7 @@ public class SelectStatement implements CQLStatement
 break;
 }
 throw new InvalidRequestException(String.format(
-PRIMARY KEY column \%s\ cannot be 
restricted (preceding column \%s\ is either not restricted or by a non-EQ 
relation), cdef.name, previous));
+PRIMARY KEY column \%s\ cannot be 
restricted (preceding column \%s\ is either not restricted or by a non-EQ 
relation), cdef.name, previous.name));
 }
 }
 else if (restriction.isSlice())



[2/2] cassandra git commit: Merge branch 'cassandra-2.1' into trunk

2015-01-07 Thread tylerhobbs
Merge branch 'cassandra-2.1' into trunk

Conflicts:
src/java/org/apache/cassandra/cql3/statements/SelectStatement.java


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/66348bbe
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/66348bbe
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/66348bbe

Branch: refs/heads/trunk
Commit: 66348bbe5914581d53e0e76f7c64237d5a625615
Parents: 2b7522f 6afab52
Author: Tyler Hobbs ty...@datastax.com
Authored: Wed Jan 7 17:13:03 2015 -0600
Committer: Tyler Hobbs ty...@datastax.com
Committed: Wed Jan 7 17:13:03 2015 -0600

--

--




[jira] [Assigned] (CASSANDRA-8194) Reading from Auth table should not be in the request path

2015-01-07 Thread Vishy Kasar (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8194?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vishy Kasar reassigned CASSANDRA-8194:
--

Assignee: Aleksey Yeschenko  (was: Sam Tunnicliffe)

Thanks for the new patch Sam. Assigning it to Aleksey Yeschenko.

 Reading from Auth table should not be in the request path
 -

 Key: CASSANDRA-8194
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8194
 Project: Cassandra
  Issue Type: Improvement
Reporter: Vishy Kasar
Assignee: Aleksey Yeschenko
Priority: Minor
 Fix For: 2.0.12, 3.0

 Attachments: 8194-V2.patch, 8194-V3.txt, 8194-V4.txt, 8194.patch, 
 CacheTest2.java


 We use PasswordAuthenticator and PasswordAuthorizer. The system_auth keyspace 
 has an RF of 10 per DC over 2 DCs. The permissions_validity_in_ms is 5 minutes. 
 We still have a few thousand requests failing each day with the trace below. 
 The reason for this is a read cache request realizing that the cached entry has 
 expired and making a blocking request to refresh the cache. 
 The cache should be refreshed periodically, and only in the background. The user 
 request should simply look at the cache and not try to refresh it. 
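
 The approach described above, refreshing only from a background thread while the
 request path does a pure cache read, can be sketched with stdlib primitives. This
 is an illustrative pattern only, not Cassandra's actual permissions cache (in
 practice Guava's {{CacheBuilder.refreshAfterWrite}} with an asynchronous
 {{reload}} serves the same purpose):

 {code}
 import java.util.Map;
 import java.util.concurrent.ConcurrentHashMap;
 import java.util.concurrent.Executors;
 import java.util.concurrent.ScheduledExecutorService;
 import java.util.concurrent.TimeUnit;
 import java.util.function.Function;

 // Illustrative sketch: entries are reloaded periodically by a background
 // thread; get() never blocks on a refresh, so a slow or timing-out reload
 // cannot fail a user request -- it just keeps serving the stale entry.
 public class BackgroundRefreshCache<K, V> {
     private final Map<K, V> entries = new ConcurrentHashMap<>();
     private final Function<K, V> loader;
     private final ScheduledExecutorService refresher =
             Executors.newSingleThreadScheduledExecutor();

     public BackgroundRefreshCache(Function<K, V> loader, long periodMs) {
         this.loader = loader;
         // Reload every known key off the request path.
         refresher.scheduleAtFixedRate(
                 () -> entries.replaceAll((k, old) -> loader.apply(k)),
                 periodMs, periodMs, TimeUnit.MILLISECONDS);
     }

     // Only the very first access for a key loads synchronously.
     public V get(K key) {
         return entries.computeIfAbsent(key, loader);
     }

     public void shutdown() { refresher.shutdownNow(); }
 }
 {code}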
 com.google.common.util.concurrent.UncheckedExecutionException: 
 java.lang.RuntimeException: 
 org.apache.cassandra.exceptions.ReadTimeoutException: Operation timed out - 
 received only 0 responses.
   at com.google.common.cache.LocalCache$Segment.get(LocalCache.java:2258)
   at com.google.common.cache.LocalCache.get(LocalCache.java:3990)
   at com.google.common.cache.LocalCache.getOrLoad(LocalCache.java:3994)
   at 
 com.google.common.cache.LocalCache$LocalLoadingCache.get(LocalCache.java:4878)
   at 
 org.apache.cassandra.service.ClientState.authorize(ClientState.java:292)
   at 
 org.apache.cassandra.service.ClientState.ensureHasPermission(ClientState.java:172)
   at 
 org.apache.cassandra.service.ClientState.hasAccess(ClientState.java:165)
   at 
 org.apache.cassandra.service.ClientState.hasColumnFamilyAccess(ClientState.java:149)
   at 
 org.apache.cassandra.cql3.statements.ModificationStatement.checkAccess(ModificationStatement.java:75)
   at 
 org.apache.cassandra.cql3.QueryProcessor.processStatement(QueryProcessor.java:102)
   at 
 org.apache.cassandra.cql3.QueryProcessor.process(QueryProcessor.java:113)
   at 
 org.apache.cassandra.thrift.CassandraServer.execute_cql3_query(CassandraServer.java:1735)
   at 
 org.apache.cassandra.thrift.Cassandra$Processor$execute_cql3_query.getResult(Cassandra.java:4162)
   at 
 org.apache.cassandra.thrift.Cassandra$Processor$execute_cql3_query.getResult(Cassandra.java:4150)
   at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:32)
   at org.apache.thrift.TBaseProcessor.process(TBaseProcessor.java:34)
   at 
 org.apache.cassandra.thrift.CustomTThreadPoolServer$WorkerProcess.run(CustomTThreadPoolServer.java:206)
   at 
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
   at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
   at java.lang.Thread.run(Thread.java:722)
 Caused by: java.lang.RuntimeException: 
 org.apache.cassandra.exceptions.ReadTimeoutException: Operation timed out - 
 received only 0 responses.
   at org.apache.cassandra.auth.Auth.selectUser(Auth.java:256)
   at org.apache.cassandra.auth.Auth.isSuperuser(Auth.java:84)
   at 
 org.apache.cassandra.auth.AuthenticatedUser.isSuper(AuthenticatedUser.java:50)
   at 
 org.apache.cassandra.auth.CassandraAuthorizer.authorize(CassandraAuthorizer.java:68)
   at org.apache.cassandra.service.ClientState$1.load(ClientState.java:278)
   at org.apache.cassandra.service.ClientState$1.load(ClientState.java:275)
   at 
 com.google.common.cache.LocalCache$LoadingValueReference.loadFuture(LocalCache.java:3589)
   at 
 com.google.common.cache.LocalCache$Segment.loadSync(LocalCache.java:2374)
   at 
 com.google.common.cache.LocalCache$Segment.lockedGetOrLoad(LocalCache.java:2337)
   at com.google.common.cache.LocalCache$Segment.get(LocalCache.java:2252)
   ... 19 more
 Caused by: org.apache.cassandra.exceptions.ReadTimeoutException: Operation 
 timed out - received only 0 responses.
   at org.apache.cassandra.service.ReadCallback.get(ReadCallback.java:105)
   at 
 org.apache.cassandra.service.StorageProxy.fetchRows(StorageProxy.java:943)
   at org.apache.cassandra.service.StorageProxy.read(StorageProxy.java:828)
   at 
 org.apache.cassandra.cql3.statements.SelectStatement.execute(SelectStatement.java:140)
   at org.apache.cassandra.auth.Auth.selectUser(Auth.java:245)
   ... 28 more
 ERROR [Thrift:17232] 2014-10-24 05:06:51,004 

[jira] [Commented] (CASSANDRA-8577) Values of set types not loading correctly into Pig

2015-01-07 Thread Philip Thompson (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8577?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14268482#comment-14268482
 ] 

Philip Thompson commented on CASSANDRA-8577:


[~Oksana Danylyshyn], any chance you can attach some reproduction code to 
assist with debugging? Thanks.

 Values of set types not loading correctly into Pig
 --

 Key: CASSANDRA-8577
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8577
 Project: Cassandra
  Issue Type: Bug
Reporter: Oksana Danylyshyn
Assignee: Brandon Williams
 Fix For: 2.1.3


 Values of set types are not loading correctly from Cassandra (cql3 table, 
 Native protocol v3) into Pig using CqlNativeStorage. 
 When using Cassandra version 2.1.0 only empty values are loaded, and for 
 newer versions (2.1.1 and 2.1.2) the following error is received: 
 org.apache.cassandra.serializers.MarshalException: Unexpected extraneous 
 bytes after set value
 at 
 org.apache.cassandra.serializers.SetSerializer.deserializeForNativeProtocol(SetSerializer.java:94)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-7124) Use JMX Notifications to Indicate Success/Failure of Long-Running Operations

2015-01-07 Thread Rajanarayanan Thottuvaikkatumana (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7124?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14268511#comment-14268511
 ] 

Rajanarayanan Thottuvaikkatumana commented on CASSANDRA-7124:
-

Some of the code formatting issues fixed in 
https://github.com/rnamboodiri/cassandra/commit/e24a698c6960da5038ce0abf464e99a87a70ffb7
Thanks


 Use JMX Notifications to Indicate Success/Failure of Long-Running Operations
 

 Key: CASSANDRA-7124
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7124
 Project: Cassandra
  Issue Type: Improvement
  Components: Tools
Reporter: Tyler Hobbs
Assignee: Rajanarayanan Thottuvaikkatumana
Priority: Minor
  Labels: lhf
 Fix For: 3.0

 Attachments: 7124-wip.txt, cassandra-trunk-compact-7124.txt, 
 cassandra-trunk-decommission-7124.txt


 If {{nodetool cleanup}} or some other long-running operation takes too long 
 to complete, you'll see an error like the one in CASSANDRA-2126, so you can't 
 tell if the operation completed successfully or not.  CASSANDRA-4767 fixed 
 this for repairs with JMX notifications.  We should do something similar for 
 nodetool cleanup, compact, decommission, move, relocate, etc.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8577) Values of set types not loading correctly into Pig

2015-01-07 Thread Oksana Danylyshyn (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8577?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14268524#comment-14268524
 ] 

Oksana Danylyshyn commented on CASSANDRA-8577:
--

[~philipthompson], updated description with reproduction code.

 Values of set types not loading correctly into Pig
 --

 Key: CASSANDRA-8577
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8577
 Project: Cassandra
  Issue Type: Bug
Reporter: Oksana Danylyshyn
Assignee: Brandon Williams
 Fix For: 2.1.3


 Values of set types are not loading correctly from Cassandra (cql3 table, 
 Native protocol v3) into Pig using CqlNativeStorage. 
 When using Cassandra version 2.1.0 only empty values are loaded, and for 
 newer versions (2.1.1 and 2.1.2) the following error is received: 
 org.apache.cassandra.serializers.MarshalException: Unexpected extraneous 
 bytes after set value
 at 
 org.apache.cassandra.serializers.SetSerializer.deserializeForNativeProtocol(SetSerializer.java:94)
 Steps to reproduce:
 {code}cqlsh:socialdata> CREATE TABLE test (
  key varchar PRIMARY KEY,
  tags set<varchar>
    );
 cqlsh:socialdata> insert into test (key, tags) values ('key', {'Running', 
 'onestep4red', 'running'});
 cqlsh:socialdata> select * from test;
  key | tags
 -+---
  key | {'Running', 'onestep4red', 'running'}
 (1 rows){code}
 With version 2.1.0:
 {code}grunt> data = load 'cql://socialdata/test' using 
 org.apache.cassandra.hadoop.pig.CqlNativeStorage();
 grunt> dump data;
 (key,()){code}
 With version 2.1.2:
 {code}grunt> data = load 'cql://socialdata/test' using 
 org.apache.cassandra.hadoop.pig.CqlNativeStorage();
 grunt> dump data;
 org.apache.cassandra.serializers.MarshalException: Unexpected extraneous 
 bytes after set value
   at 
 org.apache.cassandra.serializers.SetSerializer.deserializeForNativeProtocol(SetSerializer.java:94)
   at 
 org.apache.cassandra.serializers.SetSerializer.deserializeForNativeProtocol(SetSerializer.java:27)
   at 
 org.apache.cassandra.hadoop.pig.AbstractCassandraStorage.cassandraToObj(AbstractCassandraStorage.java:796)
   at 
 org.apache.cassandra.hadoop.pig.CqlStorage.cqlColumnToObj(CqlStorage.java:195)
   at 
 org.apache.cassandra.hadoop.pig.CqlNativeStorage.getNext(CqlNativeStorage.java:106)
   at 
 org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigRecordReader.nextKeyValue(PigRecordReader.java:211)
   at 
 org.apache.hadoop.mapred.MapTask$NewTrackingRecordReader.nextKeyValue(MapTask.java:532)
   at org.apache.hadoop.mapreduce.MapContext.nextKeyValue(MapContext.java:67)
   at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:143)
   at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:764)
   at org.apache.hadoop.mapred.MapTask.run(MapTask.java:370)
   at 
 org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:212){code}
 Expected result:
 {code}(key,(Running,onestep4red,running)){code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-8490) DISTINCT queries with LIMITs or paging are incorrect when partitions are deleted

2015-01-07 Thread Tyler Hobbs (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8490?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tyler Hobbs updated CASSANDRA-8490:
---
Attachment: 8490-trunk.txt
8490-2.0.txt

Okay, the attached patches take the approach of using -2 for 
{{compositesToGroup()}}.  I've also pushed a 
[dtest|https://github.com/thobbs/cassandra-dtest/tree/CASSANDRA-8490].

 DISTINCT queries with LIMITs or paging are incorrect when partitions are 
 deleted
 

 Key: CASSANDRA-8490
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8490
 Project: Cassandra
  Issue Type: Bug
 Environment: Driver version: 2.1.3.
 Cassandra version: 2.0.11/2.1.2.
Reporter: Frank Limstrand
Assignee: Tyler Hobbs
 Fix For: 2.0.12, 2.1.3

 Attachments: 8490-2.0.txt, 8490-trunk.txt


 Using paging demo code from 
 https://github.com/PatrickCallaghan/datastax-paging-demo
 The code creates and populates a table with 1000 entries and pages through 
 them with setFetchSize set to 100. If we then delete one entry with 'cqlsh':
 {noformat}
 cqlsh:datastax_paging_demo> delete from datastax_paging_demo.products where 
 productId = 'P142'; (The specified productId is number 6 in the resultset.)
 {noformat}
 and run the same query (Select * from) again we get:
 {noformat}
 [com.datastax.paging.Main.main()] INFO  com.datastax.paging.Main - Paging 
 demo took 0 secs. Total Products : 999
 {noformat}
 which is what we would expect.
 If we then change the select statement in dao/ProductDao.java (line 70) 
 from "Select * from" to "Select DISTINCT productid from" we get this result:
 {noformat}
 [com.datastax.paging.Main.main()] INFO  com.datastax.paging.Main - Paging 
 demo took 0 secs. Total Products : 99
 {noformat}
 So it looks like the tombstone stops the paging behaviour. Is this a bug?
 {noformat}
 DEBUG [Native-Transport-Requests:788] 2014-12-16 10:09:13,431 Message.java 
 (line 319) Received: QUERY Select DISTINCT productid from 
 datastax_paging_demo.products, v=2
 DEBUG [Native-Transport-Requests:788] 2014-12-16 10:09:13,434 
 AbstractQueryPager.java (line 98) Fetched 99 live rows
 DEBUG [Native-Transport-Requests:788] 2014-12-16 10:09:13,434 
 AbstractQueryPager.java (line 115) Got result (99) smaller than page size 
 (100), considering pager exhausted
 {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[4/8] cassandra git commit: Fix test build

2015-01-07 Thread yukim
Fix test build


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/1eea3146
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/1eea3146
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/1eea3146

Branch: refs/heads/trunk
Commit: 1eea314602d50491093cf63bef6a26eed9df3ceb
Parents: eeaa3e0
Author: Yuki Morishita yu...@apache.org
Authored: Wed Jan 7 18:19:28 2015 -0600
Committer: Yuki Morishita yu...@apache.org
Committed: Wed Jan 7 18:19:28 2015 -0600

--
 .../org/apache/cassandra/streaming/StreamTransferTaskTest.java | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/1eea3146/test/unit/org/apache/cassandra/streaming/StreamTransferTaskTest.java
--
diff --git 
a/test/unit/org/apache/cassandra/streaming/StreamTransferTaskTest.java 
b/test/unit/org/apache/cassandra/streaming/StreamTransferTaskTest.java
index 1c28cbd..94fa92c 100644
--- a/test/unit/org/apache/cassandra/streaming/StreamTransferTaskTest.java
+++ b/test/unit/org/apache/cassandra/streaming/StreamTransferTaskTest.java
@@ -64,7 +64,7 @@ public class StreamTransferTaskTest extends SchemaLoader
 {
 List<Range<Token>> ranges = new ArrayList<>();
 ranges.add(new Range<>(sstable.first.getToken(), 
sstable.last.getToken()));
-task.addTransferFile(sstable, 1, 
sstable.getPositionsForRanges(ranges), 0);
+task.addTransferFile(sstable, 1, 
sstable.getPositionsForRanges(ranges));
 }
 assertEquals(2, task.getTotalNumberOfFiles());
 



[7/8] cassandra git commit: Merge branch 'cassandra-2.0' into cassandra-2.1

2015-01-07 Thread yukim
Merge branch 'cassandra-2.0' into cassandra-2.1


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/dcc90ef3
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/dcc90ef3
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/dcc90ef3

Branch: refs/heads/cassandra-2.1
Commit: dcc90ef35c4efa99b3e1823950c4bd44f38265fc
Parents: 6afab52 1eea314
Author: Yuki Morishita yu...@apache.org
Authored: Wed Jan 7 18:24:09 2015 -0600
Committer: Yuki Morishita yu...@apache.org
Committed: Wed Jan 7 18:24:09 2015 -0600

--

--




[2/8] cassandra git commit: Fix race condition in StreamTransferTask that could lead to infinite loops and premature sstable deletion

2015-01-07 Thread yukim
Fix race condition in StreamTransferTask that could lead to
infinite loops and premature sstable deletion

patch by benedict; reviewed by yukim for CASSANDRA-7704


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/eeaa3e01
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/eeaa3e01
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/eeaa3e01

Branch: refs/heads/trunk
Commit: eeaa3e01235c98421fecc46eaed877b207fb5a33
Parents: 8078a58
Author: Benedict Elliott Smith bened...@apache.org
Authored: Wed Jan 7 19:44:00 2015 +
Committer: Benedict Elliott Smith bened...@apache.org
Committed: Wed Jan 7 19:44:00 2015 +

--
 CHANGES.txt |  2 +
 .../cassandra/streaming/StreamTransferTask.java | 73 
 .../streaming/StreamTransferTaskTest.java   | 19 +++--
 3 files changed, 62 insertions(+), 32 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/eeaa3e01/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index c1bb28c..9ccbf45 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,6 @@
 2.0.12:
+ * Fix race condition in StreamTransferTask that could lead to
+   infinite loops and premature sstable deletion (CASSANDRA-7704)
  * Add an extra version check to MigrationTask (CASSANDRA-8462)
  * Ensure SSTableWriter cleans up properly after failure (CASSANDRA-8499)
  * Increase bf true positive count on key cache hit (CASSANDRA-8525)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/eeaa3e01/src/java/org/apache/cassandra/streaming/StreamTransferTask.java
--
diff --git a/src/java/org/apache/cassandra/streaming/StreamTransferTask.java 
b/src/java/org/apache/cassandra/streaming/StreamTransferTask.java
index a543d01..5b7 100644
--- a/src/java/org/apache/cassandra/streaming/StreamTransferTask.java
+++ b/src/java/org/apache/cassandra/streaming/StreamTransferTask.java
@@ -19,8 +19,10 @@ package org.apache.cassandra.streaming;
 
 import java.util.*;
 import java.util.concurrent.*;
+import java.util.concurrent.ScheduledFuture;
 import java.util.concurrent.atomic.AtomicInteger;
 
+import org.apache.cassandra.concurrent.NamedThreadFactory;
 import org.apache.cassandra.io.sstable.SSTableReader;
 import org.apache.cassandra.streaming.messages.OutgoingFileMessage;
 import org.apache.cassandra.utils.Pair;
@@ -30,13 +32,13 @@ import org.apache.cassandra.utils.Pair;
  */
 public class StreamTransferTask extends StreamTask
 {
-private final ScheduledExecutorService timeoutExecutor = 
Executors.newSingleThreadScheduledExecutor();
+private static final ScheduledExecutorService timeoutExecutor = 
Executors.newSingleThreadScheduledExecutor(new 
NamedThreadFactory("StreamingTransferTaskTimeouts"));
 
 private final AtomicInteger sequenceNumber = new AtomicInteger(0);
+private boolean aborted = false;
 
-private final Map<Integer, OutgoingFileMessage> files = new 
ConcurrentHashMap<>();
-
-private final Map<Integer, ScheduledFuture> timeoutTasks = new 
ConcurrentHashMap<>();
+private final Map<Integer, OutgoingFileMessage> files = new HashMap<>();
+private final Map<Integer, ScheduledFuture> timeoutTasks = new HashMap<>();
 
 private long totalSize;
 
@@ -45,7 +47,7 @@ public class StreamTransferTask extends StreamTask
 super(session, cfId);
 }
 
-public void addTransferFile(SSTableReader sstable, long estimatedKeys, 
List<Pair<Long, Long>> sections)
+public synchronized void addTransferFile(SSTableReader sstable, long 
estimatedKeys, List<Pair<Long, Long>> sections)
 {
 assert sstable != null && cfId.equals(sstable.metadata.cfId);
 OutgoingFileMessage message = new OutgoingFileMessage(sstable, 
sequenceNumber.getAndIncrement(), estimatedKeys, sections);
@@ -58,31 +60,42 @@ public class StreamTransferTask extends StreamTask
  *
  * @param sequenceNumber sequence number of file
  */
-public synchronized void complete(int sequenceNumber)
+public void complete(int sequenceNumber)
 {
-OutgoingFileMessage file = files.remove(sequenceNumber);
-if (file != null)
+boolean signalComplete;
+synchronized (this)
 {
-file.sstable.releaseReference();
-// all file sent, notify session this task is complete.
-if (files.isEmpty())
-{
-timeoutExecutor.shutdownNow();
-session.taskCompleted(this);
-}
+ScheduledFuture timeout = timeoutTasks.remove(sequenceNumber);
+if (timeout != null)
+timeout.cancel(false);
+
+OutgoingFileMessage file = 

[8/8] cassandra git commit: Merge branch 'cassandra-2.1' into trunk

2015-01-07 Thread yukim
Merge branch 'cassandra-2.1' into trunk


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/12f17b20
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/12f17b20
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/12f17b20

Branch: refs/heads/trunk
Commit: 12f17b2037043de51714494e63e4521bce58c524
Parents: 66348bb dcc90ef
Author: Yuki Morishita yu...@apache.org
Authored: Wed Jan 7 18:28:44 2015 -0600
Committer: Yuki Morishita yu...@apache.org
Committed: Wed Jan 7 18:28:44 2015 -0600

--

--




[1/8] cassandra git commit: Fix race condition in StreamTransferTask that could lead to infinite loops and premature sstable deletion

2015-01-07 Thread yukim
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-2.0 eeaa3e012 -> 1eea31460
  refs/heads/cassandra-2.1 6afab52b2 -> dcc90ef35
  refs/heads/trunk 66348bbe5 -> 12f17b203


Fix race condition in StreamTransferTask that could lead to
infinite loops and premature sstable deletion

patch by benedict; reviewed by yukim for CASSANDRA-7704


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/eeaa3e01
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/eeaa3e01
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/eeaa3e01

Branch: refs/heads/cassandra-2.1
Commit: eeaa3e01235c98421fecc46eaed877b207fb5a33
Parents: 8078a58
Author: Benedict Elliott Smith bened...@apache.org
Authored: Wed Jan 7 19:44:00 2015 +
Committer: Benedict Elliott Smith bened...@apache.org
Committed: Wed Jan 7 19:44:00 2015 +

--
 CHANGES.txt |  2 +
 .../cassandra/streaming/StreamTransferTask.java | 73 
 .../streaming/StreamTransferTaskTest.java   | 19 +++--
 3 files changed, 62 insertions(+), 32 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/eeaa3e01/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index c1bb28c..9ccbf45 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,6 @@
 2.0.12:
+ * Fix race condition in StreamTransferTask that could lead to
+   infinite loops and premature sstable deletion (CASSANDRA-7704)
  * Add an extra version check to MigrationTask (CASSANDRA-8462)
  * Ensure SSTableWriter cleans up properly after failure (CASSANDRA-8499)
  * Increase bf true positive count on key cache hit (CASSANDRA-8525)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/eeaa3e01/src/java/org/apache/cassandra/streaming/StreamTransferTask.java
--
diff --git a/src/java/org/apache/cassandra/streaming/StreamTransferTask.java 
b/src/java/org/apache/cassandra/streaming/StreamTransferTask.java
index a543d01..5b7 100644
--- a/src/java/org/apache/cassandra/streaming/StreamTransferTask.java
+++ b/src/java/org/apache/cassandra/streaming/StreamTransferTask.java
@@ -19,8 +19,10 @@ package org.apache.cassandra.streaming;
 
 import java.util.*;
 import java.util.concurrent.*;
+import java.util.concurrent.ScheduledFuture;
 import java.util.concurrent.atomic.AtomicInteger;
 
+import org.apache.cassandra.concurrent.NamedThreadFactory;
 import org.apache.cassandra.io.sstable.SSTableReader;
 import org.apache.cassandra.streaming.messages.OutgoingFileMessage;
 import org.apache.cassandra.utils.Pair;
@@ -30,13 +32,13 @@ import org.apache.cassandra.utils.Pair;
  */
 public class StreamTransferTask extends StreamTask
 {
-private final ScheduledExecutorService timeoutExecutor = 
Executors.newSingleThreadScheduledExecutor();
+private static final ScheduledExecutorService timeoutExecutor = 
Executors.newSingleThreadScheduledExecutor(new 
NamedThreadFactory("StreamingTransferTaskTimeouts"));
 
 private final AtomicInteger sequenceNumber = new AtomicInteger(0);
+private boolean aborted = false;
 
-private final Map<Integer, OutgoingFileMessage> files = new 
ConcurrentHashMap<>();
-
-private final Map<Integer, ScheduledFuture> timeoutTasks = new 
ConcurrentHashMap<>();
+private final Map<Integer, OutgoingFileMessage> files = new HashMap<>();
+private final Map<Integer, ScheduledFuture> timeoutTasks = new HashMap<>();
 
 private long totalSize;
 
@@ -45,7 +47,7 @@ public class StreamTransferTask extends StreamTask
 super(session, cfId);
 }
 
-public void addTransferFile(SSTableReader sstable, long estimatedKeys, List<Pair<Long, Long>> sections)
+public synchronized void addTransferFile(SSTableReader sstable, long estimatedKeys, List<Pair<Long, Long>> sections)
 {
 assert sstable != null && cfId.equals(sstable.metadata.cfId);
 OutgoingFileMessage message = new OutgoingFileMessage(sstable, sequenceNumber.getAndIncrement(), estimatedKeys, sections);
@@ -58,31 +60,42 @@ public class StreamTransferTask extends StreamTask
  *
  * @param sequenceNumber sequence number of file
  */
-public synchronized void complete(int sequenceNumber)
+public void complete(int sequenceNumber)
 {
-OutgoingFileMessage file = files.remove(sequenceNumber);
-if (file != null)
+boolean signalComplete;
+synchronized (this)
 {
-file.sstable.releaseReference();
-// all file sent, notify session this task is complete.
-if (files.isEmpty())
-{
-timeoutExecutor.shutdownNow();
-session.taskCompleted(this);
-}
+ScheduledFuture timeout = timeoutTasks.remove(sequenceNumber);

[3/8] cassandra git commit: Fix test build

2015-01-07 Thread yukim
Fix test build


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/1eea3146
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/1eea3146
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/1eea3146

Branch: refs/heads/cassandra-2.1
Commit: 1eea314602d50491093cf63bef6a26eed9df3ceb
Parents: eeaa3e0
Author: Yuki Morishita yu...@apache.org
Authored: Wed Jan 7 18:19:28 2015 -0600
Committer: Yuki Morishita yu...@apache.org
Committed: Wed Jan 7 18:19:28 2015 -0600

--
 .../org/apache/cassandra/streaming/StreamTransferTaskTest.java | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/1eea3146/test/unit/org/apache/cassandra/streaming/StreamTransferTaskTest.java
--
diff --git 
a/test/unit/org/apache/cassandra/streaming/StreamTransferTaskTest.java 
b/test/unit/org/apache/cassandra/streaming/StreamTransferTaskTest.java
index 1c28cbd..94fa92c 100644
--- a/test/unit/org/apache/cassandra/streaming/StreamTransferTaskTest.java
+++ b/test/unit/org/apache/cassandra/streaming/StreamTransferTaskTest.java
@@ -64,7 +64,7 @@ public class StreamTransferTaskTest extends SchemaLoader
 {
 List<Range<Token>> ranges = new ArrayList<>();
 ranges.add(new Range<>(sstable.first.getToken(), sstable.last.getToken()));
-task.addTransferFile(sstable, 1, sstable.getPositionsForRanges(ranges), 0);
+task.addTransferFile(sstable, 1, sstable.getPositionsForRanges(ranges));
 }
 assertEquals(2, task.getTotalNumberOfFiles());
 



[5/8] cassandra git commit: Fix test build

2015-01-07 Thread yukim
Fix test build


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/1eea3146
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/1eea3146
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/1eea3146

Branch: refs/heads/cassandra-2.0
Commit: 1eea314602d50491093cf63bef6a26eed9df3ceb
Parents: eeaa3e0
Author: Yuki Morishita yu...@apache.org
Authored: Wed Jan 7 18:19:28 2015 -0600
Committer: Yuki Morishita yu...@apache.org
Committed: Wed Jan 7 18:19:28 2015 -0600

--
 .../org/apache/cassandra/streaming/StreamTransferTaskTest.java | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/1eea3146/test/unit/org/apache/cassandra/streaming/StreamTransferTaskTest.java
--
diff --git 
a/test/unit/org/apache/cassandra/streaming/StreamTransferTaskTest.java 
b/test/unit/org/apache/cassandra/streaming/StreamTransferTaskTest.java
index 1c28cbd..94fa92c 100644
--- a/test/unit/org/apache/cassandra/streaming/StreamTransferTaskTest.java
+++ b/test/unit/org/apache/cassandra/streaming/StreamTransferTaskTest.java
@@ -64,7 +64,7 @@ public class StreamTransferTaskTest extends SchemaLoader
 {
 List<Range<Token>> ranges = new ArrayList<>();
 ranges.add(new Range<>(sstable.first.getToken(), sstable.last.getToken()));
-task.addTransferFile(sstable, 1, sstable.getPositionsForRanges(ranges), 0);
+task.addTransferFile(sstable, 1, sstable.getPositionsForRanges(ranges));
 }
 assertEquals(2, task.getTotalNumberOfFiles());
 



[6/8] cassandra git commit: Merge branch 'cassandra-2.0' into cassandra-2.1

2015-01-07 Thread yukim
Merge branch 'cassandra-2.0' into cassandra-2.1


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/dcc90ef3
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/dcc90ef3
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/dcc90ef3

Branch: refs/heads/trunk
Commit: dcc90ef35c4efa99b3e1823950c4bd44f38265fc
Parents: 6afab52 1eea314
Author: Yuki Morishita yu...@apache.org
Authored: Wed Jan 7 18:24:09 2015 -0600
Committer: Yuki Morishita yu...@apache.org
Committed: Wed Jan 7 18:24:09 2015 -0600

--

--




[1/2] cassandra git commit: Add an extra version check to MigrationTask

2015-01-07 Thread aleksey
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-2.1 68be72fdc -> 561293d13


Add an extra version check to MigrationTask

patch by Aleksey Yeschenko; reviewed by Tyler Hobbs for CASSANDRA-8462


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/8078a58f
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/8078a58f
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/8078a58f

Branch: refs/heads/cassandra-2.1
Commit: 8078a58f2ee625e497bd938ed35514bb003d03dc
Parents: 3679b1b
Author: Aleksey Yeschenko alek...@apache.org
Authored: Wed Jan 7 22:39:00 2015 +0300
Committer: Aleksey Yeschenko alek...@apache.org
Committed: Wed Jan 7 22:39:00 2015 +0300

--
 CHANGES.txt|  1 +
 .../org/apache/cassandra/service/MigrationManager.java |  2 +-
 .../org/apache/cassandra/service/MigrationTask.java| 13 ++---
 3 files changed, 12 insertions(+), 4 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/8078a58f/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 7aad4c0..c1bb28c 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 2.0.12:
+ * Add an extra version check to MigrationTask (CASSANDRA-8462)
  * Ensure SSTableWriter cleans up properly after failure (CASSANDRA-8499)
  * Increase bf true positive count on key cache hit (CASSANDRA-8525)
  * Move MeteredFlusher to its own thread (CASSANDRA-8485)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/8078a58f/src/java/org/apache/cassandra/service/MigrationManager.java
--
diff --git a/src/java/org/apache/cassandra/service/MigrationManager.java 
b/src/java/org/apache/cassandra/service/MigrationManager.java
index b474bdc..f66b738 100644
--- a/src/java/org/apache/cassandra/service/MigrationManager.java
+++ b/src/java/org/apache/cassandra/service/MigrationManager.java
@@ -141,7 +141,7 @@ public class MigrationManager
 return StageManager.getStage(Stage.MIGRATION).submit(new 
MigrationTask(endpoint));
 }
 
-private static boolean shouldPullSchemaFrom(InetAddress endpoint)
+public static boolean shouldPullSchemaFrom(InetAddress endpoint)
 {
 /*
  * Don't request schema from nodes with a different or unknown major version (may have incompatible schema)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/8078a58f/src/java/org/apache/cassandra/service/MigrationTask.java
--
diff --git a/src/java/org/apache/cassandra/service/MigrationTask.java 
b/src/java/org/apache/cassandra/service/MigrationTask.java
index 93572f0..0944c55 100644
--- a/src/java/org/apache/cassandra/service/MigrationTask.java
+++ b/src/java/org/apache/cassandra/service/MigrationTask.java
@@ -48,7 +48,14 @@ class MigrationTask extends WrappedRunnable
 
 public void runMayThrow() throws Exception
 {
-MessageOut message = new MessageOut(MessagingService.Verb.MIGRATION_REQUEST, null, MigrationManager.MigrationsSerializer.instance);
+// There is a chance that quite some time could have passed between now and the MM#maybeScheduleSchemaPull(),
+// potentially enough for the endpoint node to restart - which is an issue if it does restart upgraded, with
+// a higher major.
+if (!MigrationManager.shouldPullSchemaFrom(endpoint))
+{
+logger.info("Skipped sending a migration request: node {} has a higher major version now.", endpoint);
+return;
+}
 
 if (!FailureDetector.instance.isAlive(endpoint))
 {
@@ -56,9 +63,10 @@ class MigrationTask extends WrappedRunnable
 return;
 }
 
+MessageOut message = new MessageOut(MessagingService.Verb.MIGRATION_REQUEST, null, MigrationManager.MigrationsSerializer.instance);
+
 IAsyncCallback<Collection<RowMutation>> cb = new IAsyncCallback<Collection<RowMutation>>()
 {
-@Override
 public void response(MessageIn<Collection<RowMutation>> message)
 {
 try
@@ -75,7 +83,6 @@ class MigrationTask extends WrappedRunnable
 }
 }
 
-@Override
 public boolean isLatencyForSnitch()
 {
 return false;



[2/2] cassandra git commit: Merge branch 'cassandra-2.0' into cassandra-2.1

2015-01-07 Thread aleksey
Merge branch 'cassandra-2.0' into cassandra-2.1

Conflicts:
src/java/org/apache/cassandra/service/MigrationTask.java


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/561293d1
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/561293d1
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/561293d1

Branch: refs/heads/cassandra-2.1
Commit: 561293d132e3fa73d1e0f43d3bd0c54137f88a15
Parents: 68be72f 8078a58
Author: Aleksey Yeschenko alek...@apache.org
Authored: Wed Jan 7 22:51:42 2015 +0300
Committer: Aleksey Yeschenko alek...@apache.org
Committed: Wed Jan 7 22:51:42 2015 +0300

--
 CHANGES.txt |  3 +--
 .../org/apache/cassandra/service/MigrationManager.java  |  2 +-
 .../org/apache/cassandra/service/MigrationTask.java | 12 ++--
 3 files changed, 12 insertions(+), 5 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/561293d1/CHANGES.txt
--
diff --cc CHANGES.txt
index 58d94ed,c1bb28c..dfed732
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -1,57 -1,5 +1,56 @@@
 -2.0.12:
 +2.1.3
 + * Fix case-sensitivity of index name on CREATE and DROP INDEX
 +   statements (CASSANDRA-8365)
 + * Better detection/logging for corruption in compressed sstables 
(CASSANDRA-8192)
 + * Use the correct repairedAt value when closing writer (CASSANDRA-8570)
 + * (cqlsh) Handle a schema mismatch being detected on startup (CASSANDRA-8512)
 + * Properly calculate expected write size during compaction (CASSANDRA-8532)
 + * Invalidate affected prepared statements when a table's columns
 +   are altered (CASSANDRA-7910)
 + * Stress - user defined writes should populate sequentially (CASSANDRA-8524)
 + * Fix regression in SSTableRewriter causing some rows to become unreadable 
 +   during compaction (CASSANDRA-8429)
 + * Run major compactions for repaired/unrepaired in parallel (CASSANDRA-8510)
 + * (cqlsh) Fix compression options in DESCRIBE TABLE output when compression
 +   is disabled (CASSANDRA-8288)
 + * (cqlsh) Fix DESCRIBE output after keyspaces are altered (CASSANDRA-7623)
 + * Make sure we set lastCompactedKey correctly (CASSANDRA-8463)
 + * (cqlsh) Fix output of CONSISTENCY command (CASSANDRA-8507)
 + * (cqlsh) Fixed the handling of LIST statements (CASSANDRA-8370)
 + * Make sstablescrub check leveled manifest again (CASSANDRA-8432)
 + * Check first/last keys in sstable when giving out positions (CASSANDRA-8458)
 + * Disable mmap on Windows (CASSANDRA-6993)
 + * Add missing ConsistencyLevels to cassandra-stress (CASSANDRA-8253)
 + * Add auth support to cassandra-stress (CASSANDRA-7985)
 + * Fix ArrayIndexOutOfBoundsException when generating error message
 +   for some CQL syntax errors (CASSANDRA-8455)
 + * Scale memtable slab allocation logarithmically (CASSANDRA-7882)
 + * cassandra-stress simultaneous inserts over same seed (CASSANDRA-7964)
 + * Reduce cassandra-stress sampling memory requirements (CASSANDRA-7926)
 + * Ensure memtable flush cannot expire commit log entries from its future 
(CASSANDRA-8383)
 + * Make read defrag async to reclaim memtables (CASSANDRA-8459)
 + * Remove tmplink files for offline compactions (CASSANDRA-8321)
 + * Reduce maxHintsInProgress (CASSANDRA-8415)
 + * BTree updates may call provided update function twice (CASSANDRA-8018)
 + * Release sstable references after anticompaction (CASSANDRA-8386)
 + * Handle abort() in SSTableRewriter properly (CASSANDRA-8320)
 + * Fix high size calculations for prepared statements (CASSANDRA-8231)
 + * Centralize shared executors (CASSANDRA-8055)
 + * Fix filtering for CONTAINS (KEY) relations on frozen collection
 +   clustering columns when the query is restricted to a single
 +   partition (CASSANDRA-8203)
 + * Do more aggressive entire-sstable TTL expiry checks (CASSANDRA-8243)
 + * Add more log info if readMeter is null (CASSANDRA-8238)
 + * add check of the system wall clock time at startup (CASSANDRA-8305)
 + * Support for frozen collections (CASSANDRA-7859)
 + * Fix overflow on histogram computation (CASSANDRA-8028)
 + * Have paxos reuse the timestamp generation of normal queries 
(CASSANDRA-7801)
 + * Fix incremental repair not remove parent session on remote (CASSANDRA-8291)
 + * Improve JBOD disk utilization (CASSANDRA-7386)
 + * Log failed host when preparing incremental repair (CASSANDRA-8228)
 + * Force config client mode in CQLSSTableWriter (CASSANDRA-8281)
 +Merged from 2.0:
- ===
- 2.0.12:
+  * Add an extra version check to MigrationTask (CASSANDRA-8462)
   * Ensure SSTableWriter cleans up properly after failure (CASSANDRA-8499)
   * Increase bf true positive count on key cache hit (CASSANDRA-8525)
   * Move MeteredFlusher to its own thread (CASSANDRA-8485)


cassandra git commit: Fix race condition in StreamTransferTask that could lead to infinite loops and premature sstable deletion

2015-01-07 Thread benedict
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-2.0 8078a58f2 -> eeaa3e012


Fix race condition in StreamTransferTask that could lead to
infinite loops and premature sstable deletion

patch by benedict; reviewed by yukim for CASSANDRA-7704


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/eeaa3e01
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/eeaa3e01
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/eeaa3e01

Branch: refs/heads/cassandra-2.0
Commit: eeaa3e01235c98421fecc46eaed877b207fb5a33
Parents: 8078a58
Author: Benedict Elliott Smith bened...@apache.org
Authored: Wed Jan 7 19:44:00 2015 +
Committer: Benedict Elliott Smith bened...@apache.org
Committed: Wed Jan 7 19:44:00 2015 +

--
 CHANGES.txt |  2 +
 .../cassandra/streaming/StreamTransferTask.java | 73 
 .../streaming/StreamTransferTaskTest.java   | 19 +++--
 3 files changed, 62 insertions(+), 32 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/eeaa3e01/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index c1bb28c..9ccbf45 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,6 @@
 2.0.12:
+ * Fix race condition in StreamTransferTask that could lead to
+   infinite loops and premature sstable deletion (CASSANDRA-7704)
  * Add an extra version check to MigrationTask (CASSANDRA-8462)
  * Ensure SSTableWriter cleans up properly after failure (CASSANDRA-8499)
  * Increase bf true positive count on key cache hit (CASSANDRA-8525)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/eeaa3e01/src/java/org/apache/cassandra/streaming/StreamTransferTask.java
--
diff --git a/src/java/org/apache/cassandra/streaming/StreamTransferTask.java 
b/src/java/org/apache/cassandra/streaming/StreamTransferTask.java
index a543d01..5b7 100644
--- a/src/java/org/apache/cassandra/streaming/StreamTransferTask.java
+++ b/src/java/org/apache/cassandra/streaming/StreamTransferTask.java
@@ -19,8 +19,10 @@ package org.apache.cassandra.streaming;
 
 import java.util.*;
 import java.util.concurrent.*;
+import java.util.concurrent.ScheduledFuture;
 import java.util.concurrent.atomic.AtomicInteger;
 
+import org.apache.cassandra.concurrent.NamedThreadFactory;
 import org.apache.cassandra.io.sstable.SSTableReader;
 import org.apache.cassandra.streaming.messages.OutgoingFileMessage;
 import org.apache.cassandra.utils.Pair;
@@ -30,13 +32,13 @@ import org.apache.cassandra.utils.Pair;
  */
 public class StreamTransferTask extends StreamTask
 {
-private final ScheduledExecutorService timeoutExecutor = Executors.newSingleThreadScheduledExecutor();
+private static final ScheduledExecutorService timeoutExecutor = Executors.newSingleThreadScheduledExecutor(new NamedThreadFactory("StreamingTransferTaskTimeouts"));
 
 private final AtomicInteger sequenceNumber = new AtomicInteger(0);
+private boolean aborted = false;
 
-private final Map<Integer, OutgoingFileMessage> files = new ConcurrentHashMap<>();
-
-private final Map<Integer, ScheduledFuture> timeoutTasks = new ConcurrentHashMap<>();
+private final Map<Integer, OutgoingFileMessage> files = new HashMap<>();
+private final Map<Integer, ScheduledFuture> timeoutTasks = new HashMap<>();
 
 private long totalSize;
 
@@ -45,7 +47,7 @@ public class StreamTransferTask extends StreamTask
 super(session, cfId);
 }
 
-public void addTransferFile(SSTableReader sstable, long estimatedKeys, List<Pair<Long, Long>> sections)
+public synchronized void addTransferFile(SSTableReader sstable, long estimatedKeys, List<Pair<Long, Long>> sections)
 {
 assert sstable != null && cfId.equals(sstable.metadata.cfId);
 OutgoingFileMessage message = new OutgoingFileMessage(sstable, sequenceNumber.getAndIncrement(), estimatedKeys, sections);
@@ -58,31 +60,42 @@ public class StreamTransferTask extends StreamTask
  *
  * @param sequenceNumber sequence number of file
  */
-public synchronized void complete(int sequenceNumber)
+public void complete(int sequenceNumber)
 {
-OutgoingFileMessage file = files.remove(sequenceNumber);
-if (file != null)
+boolean signalComplete;
+synchronized (this)
 {
-file.sstable.releaseReference();
-// all file sent, notify session this task is complete.
-if (files.isEmpty())
-{
-timeoutExecutor.shutdownNow();
-session.taskCompleted(this);
-}
+ScheduledFuture timeout = timeoutTasks.remove(sequenceNumber);
+if (timeout != null)
+
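The CASSANDRA-7704 patch above trades per-map thread safety (`ConcurrentHashMap`) for a single monitor: the compound check-then-act in `complete()` (remove a file, then test whether the task is finished) must happen atomically, or two concurrent completions can both observe a non-empty map and never signal completion, or both observe it empty and signal twice. A minimal sketch of the pattern, with hypothetical names (not Cassandra's classes):

```java
import java.util.HashMap;
import java.util.Map;

// Minimal sketch (hypothetical names) of the locking pattern the patch adopts:
// every compound check-then-act on the map runs under one monitor, so
// "remove the last entry" and "observe the map empty" cannot interleave
// with another thread's completion.
class TransferTracker
{
    private final Map<Integer, String> files = new HashMap<>();
    private boolean completed = false;

    synchronized void add(int seq, String file)
    {
        files.put(seq, file);
    }

    // Returns true exactly once: for the completion that empties the map.
    synchronized boolean complete(int seq)
    {
        files.remove(seq);
        if (files.isEmpty() && !completed)
        {
            completed = true;   // signal "all files sent" exactly once
            return true;
        }
        return false;
    }
}
```

With `ConcurrentHashMap` each individual operation is safe, but the remove-then-isEmpty sequence as a whole is not; the monitor makes the sequence atomic.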

cassandra git commit: Allow mixing token and partition key restrictions

2015-01-07 Thread tylerhobbs
Repository: cassandra
Updated Branches:
  refs/heads/trunk 0a80fe4b5 -> 493859bf6


Allow mixing token and partition key restrictions

Patch by Benjamin Lerer; reviewed by Tyler Hobbs for CASSANDRA-7016


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/493859bf
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/493859bf
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/493859bf

Branch: refs/heads/trunk
Commit: 493859bf617ac80f560d02ad6d471aefd6a0ef91
Parents: 0a80fe4
Author: blerer b_le...@hotmail.com
Authored: Wed Jan 7 14:05:38 2015 -0600
Committer: Tyler Hobbs ty...@datastax.com
Committed: Wed Jan 7 14:06:28 2015 -0600

--
 CHANGES.txt |   1 +
 .../apache/cassandra/cql3/TokenRelation.java|   4 +-
 .../AbstractPrimaryKeyRestrictions.java |  12 +
 .../restrictions/MultiColumnRestriction.java|   4 +-
 .../SingleColumnPrimaryKeyRestrictions.java |  16 +-
 .../restrictions/StatementRestrictions.java |   5 +-
 .../cql3/restrictions/TokenFilter.java  | 237 +++
 .../cql3/restrictions/TokenRestriction.java |  57 +++--
 .../cql3/SelectWithTokenFunctionTest.java   | 139 ++-
 9 files changed, 439 insertions(+), 36 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/493859bf/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 8ccc014..9f946a3 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 3.0
+ * Allow mixing token and partition key restrictions (CASSANDRA-7016)
  * Support index key/value entries on map collections (CASSANDRA-8473)
  * Modernize schema tables (CASSANDRA-8261)
  * Support for user-defined aggregation functions (CASSANDRA-8053)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/493859bf/src/java/org/apache/cassandra/cql3/TokenRelation.java
--
diff --git a/src/java/org/apache/cassandra/cql3/TokenRelation.java 
b/src/java/org/apache/cassandra/cql3/TokenRelation.java
index d1bd265..5896fae 100644
--- a/src/java/org/apache/cassandra/cql3/TokenRelation.java
+++ b/src/java/org/apache/cassandra/cql3/TokenRelation.java
@@ -69,7 +69,7 @@ public final class TokenRelation extends Relation
 {
 List<ColumnDefinition> columnDefs = getColumnDefinitions(cfm);
 Term term = toTerm(toReceivers(cfm, columnDefs), value, cfm.ksName, boundNames);
-return new TokenRestriction.EQ(columnDefs, term);
+return new TokenRestriction.EQ(cfm.getKeyValidatorAsCType(), columnDefs, term);
 }
 
 @Override
@@ -86,7 +86,7 @@ public final class TokenRelation extends Relation
 {
 List<ColumnDefinition> columnDefs = getColumnDefinitions(cfm);
 Term term = toTerm(toReceivers(cfm, columnDefs), value, cfm.ksName, boundNames);
-return new TokenRestriction.Slice(columnDefs, bound, inclusive, term);
+return new TokenRestriction.Slice(cfm.getKeyValidatorAsCType(), columnDefs, bound, inclusive, term);
 }
 
 @Override

http://git-wip-us.apache.org/repos/asf/cassandra/blob/493859bf/src/java/org/apache/cassandra/cql3/restrictions/AbstractPrimaryKeyRestrictions.java
--
diff --git 
a/src/java/org/apache/cassandra/cql3/restrictions/AbstractPrimaryKeyRestrictions.java
 
b/src/java/org/apache/cassandra/cql3/restrictions/AbstractPrimaryKeyRestrictions.java
index f137a77..0107603 100644
--- 
a/src/java/org/apache/cassandra/cql3/restrictions/AbstractPrimaryKeyRestrictions.java
+++ 
b/src/java/org/apache/cassandra/cql3/restrictions/AbstractPrimaryKeyRestrictions.java
@@ -17,11 +17,23 @@
  */
 package org.apache.cassandra.cql3.restrictions;
 
+import org.apache.cassandra.db.composites.CType;
+
 /**
  * Base class for <code>PrimaryKeyRestrictions</code>.
  */
 abstract class AbstractPrimaryKeyRestrictions extends AbstractRestriction 
implements PrimaryKeyRestrictions
 {
+/**
+ * The composite type.
+ */
+protected final CType ctype;
+
+public AbstractPrimaryKeyRestrictions(CType ctype)
+{
+this.ctype = ctype;
+}
+
 @Override
 public final boolean isEmpty()
 {

http://git-wip-us.apache.org/repos/asf/cassandra/blob/493859bf/src/java/org/apache/cassandra/cql3/restrictions/MultiColumnRestriction.java
--
diff --git 
a/src/java/org/apache/cassandra/cql3/restrictions/MultiColumnRestriction.java 
b/src/java/org/apache/cassandra/cql3/restrictions/MultiColumnRestriction.java
index e3b3c4c..2d6deeb 100644
--- 
a/src/java/org/apache/cassandra/cql3/restrictions/MultiColumnRestriction.java
+++ 

cassandra git commit: Remove ref counting in SSTableScanner, fix CompactionTask ordering

2015-01-07 Thread jmckenzie
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-2.1 ddca610c9 -> bdbb071f4


Remove ref counting in SSTableScanner, fix CompactionTask ordering

Patch by jmckenzie; reviewed by belliottsmith as a follow-up for CASSANDRA-8399


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/bdbb071f
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/bdbb071f
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/bdbb071f

Branch: refs/heads/cassandra-2.1
Commit: bdbb071f4f87131d6996aac52f2b75a5833d5238
Parents: ddca610
Author: Joshua McKenzie jmcken...@apache.org
Authored: Wed Jan 7 13:05:31 2015 -0600
Committer: Joshua McKenzie jmcken...@apache.org
Committed: Wed Jan 7 14:05:40 2015 -0600

--
 .../cassandra/db/compaction/CompactionTask.java | 82 ++--
 .../cassandra/io/sstable/SSTableScanner.java|  8 +-
 2 files changed, 45 insertions(+), 45 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/bdbb071f/src/java/org/apache/cassandra/db/compaction/CompactionTask.java
--
diff --git a/src/java/org/apache/cassandra/db/compaction/CompactionTask.java 
b/src/java/org/apache/cassandra/db/compaction/CompactionTask.java
index 4885bc8..d215b4c 100644
--- a/src/java/org/apache/cassandra/db/compaction/CompactionTask.java
+++ b/src/java/org/apache/cassandra/db/compaction/CompactionTask.java
@@ -140,7 +140,6 @@ public class CompactionTask extends AbstractCompactionTask
 
 try (CompactionController controller = 
getCompactionController(sstables);)
 {
-
 Set<SSTableReader> actuallyCompact = Sets.difference(sstables, controller.getFullyExpiredSSTables());
 
 long estimatedTotalKeys = Math.max(cfs.metadata.getMinIndexInterval(), SSTableReader.getApproximateKeyCount(actuallyCompact));
@@ -149,11 +148,16 @@ public class CompactionTask extends AbstractCompactionTask
 long expectedSSTableSize = Math.min(getExpectedWriteSize(), strategy.getMaxSSTableBytes());
 logger.debug("Expected bloom filter size : {}", keysPerSSTable);
 
+List<SSTableReader> newSStables;
+AbstractCompactionIterable ci;
+
+// SSTableScanners need to be closed before markCompactedSSTablesReplaced call as scanners contain references
+// to both ifile and dfile and SSTR will throw deletion errors on Windows if it tries to delete before scanner is closed.
+// See CASSANDRA-8019 and CASSANDRA-8399
 try (AbstractCompactionStrategy.ScannerList scanners = strategy.getScanners(actuallyCompact))
 {
-AbstractCompactionIterable ci = new CompactionIterable(compactionType, scanners.scanners, controller);
+ci = new CompactionIterable(compactionType, scanners.scanners, controller);
 Iterator<AbstractCompactedRow> iter = ci.iterator();
-List<SSTableReader> newSStables;
 // we can't preheat until the tracker has been set. This doesn't happen until we tell the cfs to
 // replace the old entries.  Track entries to preheat here until then.
 long minRepairedAt = getMinRepairedAt(actuallyCompact);
@@ -215,44 +219,44 @@ public class CompactionTask extends AbstractCompactionTask
 if (collector != null)
 collector.finishCompaction(ci);
 }
+}
 
-Collection<SSTableReader> oldSStables = this.sstables;
-if (!offline)
-cfs.getDataTracker().markCompactedSSTablesReplaced(oldSStables, newSStables, compactionType);
-
-// log a bunch of statistics about the result and save to system table compaction_history
-long dTime = TimeUnit.NANOSECONDS.toMillis(System.nanoTime() - start);
-long startsize = SSTableReader.getTotalBytes(oldSStables);
-long endsize = SSTableReader.getTotalBytes(newSStables);
-double ratio = (double) endsize / (double) startsize;
-
-StringBuilder newSSTableNames = new StringBuilder();
-for (SSTableReader reader : newSStables)
-newSSTableNames.append(reader.descriptor.baseFilename()).append(",");
-
-double mbps = dTime > 0 ? (double) endsize / (1024 * 1024) / ((double) dTime / 1000) : 0;
-long totalSourceRows = 0;
-long[] counts = ci.getMergedRowCounts();
-StringBuilder mergeSummary = new StringBuilder(counts.length * 10);
-Map<Integer, Long> mergedRows = new HashMap<>();
-for (int i = 0; i < counts.length; i++)
-{
-

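The try-with-resources restructuring in this commit exists to guarantee ordering: the scanners hold open handles to the sstables' index and data files, and on Windows a file with open handles cannot be deleted, so the scanners must be closed before `markCompactedSSTablesReplaced` runs. A simplified sketch of that resource-lifetime constraint (hypothetical names, modeling only the ordering, not Cassandra's classes):

```java
import java.util.ArrayList;
import java.util.List;

// Simplified illustration of the ordering constraint: the scanner (which holds
// file handles) must be closed before the files it reads may be replaced or
// deleted. Class names here are hypothetical.
class Scanner implements AutoCloseable
{
    static final List<String> log = new ArrayList<>();
    public void scan()  { log.add("scan"); }
    public void close() { log.add("close"); }
}

class CompactionSketch
{
    static List<String> compact()
    {
        Scanner.log.clear();
        try (Scanner scanner = new Scanner())
        {
            scanner.scan();          // read old sstables while handles are open
        }                            // scanner closed here, releasing handles
        Scanner.log.add("replace");  // only now is it safe to delete old files
        return Scanner.log;
    }
}
```

Try-with-resources makes the close happen at the end of the block regardless of exceptions, so the replace step can never observe still-open handles.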
[1/2] cassandra git commit: Remove ref counting in SSTableScanner, fix CompactionTask ordering

2015-01-07 Thread jmckenzie
Repository: cassandra
Updated Branches:
  refs/heads/trunk 493859bf6 -> 65ffc39b1


Remove ref counting in SSTableScanner, fix CompactionTask ordering

Patch by jmckenzie; reviewed by belliottsmith as a follow-up for CASSANDRA-8399


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/bdbb071f
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/bdbb071f
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/bdbb071f

Branch: refs/heads/trunk
Commit: bdbb071f4f87131d6996aac52f2b75a5833d5238
Parents: ddca610
Author: Joshua McKenzie jmcken...@apache.org
Authored: Wed Jan 7 13:05:31 2015 -0600
Committer: Joshua McKenzie jmcken...@apache.org
Committed: Wed Jan 7 14:05:40 2015 -0600

--
 .../cassandra/db/compaction/CompactionTask.java | 82 ++--
 .../cassandra/io/sstable/SSTableScanner.java|  8 +-
 2 files changed, 45 insertions(+), 45 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/bdbb071f/src/java/org/apache/cassandra/db/compaction/CompactionTask.java
--
diff --git a/src/java/org/apache/cassandra/db/compaction/CompactionTask.java 
b/src/java/org/apache/cassandra/db/compaction/CompactionTask.java
index 4885bc8..d215b4c 100644
--- a/src/java/org/apache/cassandra/db/compaction/CompactionTask.java
+++ b/src/java/org/apache/cassandra/db/compaction/CompactionTask.java
@@ -140,7 +140,6 @@ public class CompactionTask extends AbstractCompactionTask
 
 try (CompactionController controller = 
getCompactionController(sstables);)
 {
-
 Set<SSTableReader> actuallyCompact = Sets.difference(sstables, controller.getFullyExpiredSSTables());
 
 long estimatedTotalKeys = Math.max(cfs.metadata.getMinIndexInterval(), SSTableReader.getApproximateKeyCount(actuallyCompact));
@@ -149,11 +148,16 @@ public class CompactionTask extends AbstractCompactionTask
 long expectedSSTableSize = Math.min(getExpectedWriteSize(), strategy.getMaxSSTableBytes());
 logger.debug("Expected bloom filter size : {}", keysPerSSTable);
 
+List<SSTableReader> newSStables;
+AbstractCompactionIterable ci;
+
+// SSTableScanners need to be closed before markCompactedSSTablesReplaced call as scanners contain references
+// to both ifile and dfile and SSTR will throw deletion errors on Windows if it tries to delete before scanner is closed.
+// See CASSANDRA-8019 and CASSANDRA-8399
 try (AbstractCompactionStrategy.ScannerList scanners = strategy.getScanners(actuallyCompact))
 {
-AbstractCompactionIterable ci = new CompactionIterable(compactionType, scanners.scanners, controller);
+ci = new CompactionIterable(compactionType, scanners.scanners, controller);
 Iterator<AbstractCompactedRow> iter = ci.iterator();
-List<SSTableReader> newSStables;
 // we can't preheat until the tracker has been set. This doesn't happen until we tell the cfs to
 // replace the old entries.  Track entries to preheat here until then.
 long minRepairedAt = getMinRepairedAt(actuallyCompact);
@@ -215,44 +219,44 @@ public class CompactionTask extends AbstractCompactionTask
 if (collector != null)
 collector.finishCompaction(ci);
 }
+}
 
-Collection<SSTableReader> oldSStables = this.sstables;
-if (!offline)
-cfs.getDataTracker().markCompactedSSTablesReplaced(oldSStables, newSStables, compactionType);
-
-// log a bunch of statistics about the result and save to system table compaction_history
-long dTime = TimeUnit.NANOSECONDS.toMillis(System.nanoTime() - start);
-long startsize = SSTableReader.getTotalBytes(oldSStables);
-long endsize = SSTableReader.getTotalBytes(newSStables);
-double ratio = (double) endsize / (double) startsize;
-
-StringBuilder newSSTableNames = new StringBuilder();
-for (SSTableReader reader : newSStables)
-newSSTableNames.append(reader.descriptor.baseFilename()).append(",");
-
-double mbps = dTime > 0 ? (double) endsize / (1024 * 1024) / ((double) dTime / 1000) : 0;
-long totalSourceRows = 0;
-long[] counts = ci.getMergedRowCounts();
-StringBuilder mergeSummary = new StringBuilder(counts.length * 10);
-Map<Integer, Long> mergedRows = new HashMap<>();
-for (int i = 0; i < counts.length; i++)
-{
-long count = 

[2/2] cassandra git commit: Merge branch 'cassandra-2.1' into trunk

2015-01-07 Thread tylerhobbs
Merge branch 'cassandra-2.1' into trunk


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/2b7522f6
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/2b7522f6
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/2b7522f6

Branch: refs/heads/trunk
Commit: 2b7522f6678d45b6d9aa207c617cfafac4d5d591
Parents: 65ffc39 96365bf
Author: Tyler Hobbs ty...@datastax.com
Authored: Wed Jan 7 14:29:46 2015 -0600
Committer: Tyler Hobbs ty...@datastax.com
Committed: Wed Jan 7 14:29:46 2015 -0600

--
 src/java/org/apache/cassandra/cql3/Cql.g  | 2 +-
 test/unit/org/apache/cassandra/cql3/UseStatementTest.java | 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/2b7522f6/src/java/org/apache/cassandra/cql3/Cql.g
--



[jira] [Updated] (CASSANDRA-8544) Cassandra could not start with NPE in ColumnFamilyStore.removeUnfinishedCompactionLeftovers

2015-01-07 Thread Joshua McKenzie (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8544?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joshua McKenzie updated CASSANDRA-8544:
---
Attachment: 8544_v1.txt

If turning off AV and Windows Search consistently fixes this, I consider it an 
external/environment problem rather than a problem with our code base with 
regard to stopping. Pre-3.0 (and with memory-mapped I/O when we go back down 
that route), file-sharing violations like this from outside programs are, by 
default, a problem we stop the database for, rather than allowing failed 
deletions to potentially fill a drive on a node. If you'd prefer to allow the 
system to continue, you can reference the following: 
[one|http://www.datastax.com/documentation/cassandra/2.0/cassandra/configuration/configCassandra_yaml_r.html]
 - 
[two|http://www.datastax.com/dev/blog/handling-disk-failures-in-cassandra-1-2].

At the very least, you need to exclude your Cassandra data and logs folders from 
AV scanning and also [remove them from 
indexing|http://windows.microsoft.com/en-us/windows/improve-windows-searches-using-index-faq#1TC=windows-7].

Attaching a v1 that wraps the NPE and instead throws an FSReadError pointing to 
the log file for more information, since our original NPE error isn't terribly 
useful.
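The wrap-and-rethrow approach in v1 can be sketched roughly as follows. Class and helper names here are illustrative stand-ins, not Cassandra's actual FSReadError or the real removeUnfinishedCompactionLeftovers logic:

```java
import java.io.File;

public class LeftoverCleanupSketch
{
    // Illustrative stand-in for Cassandra's FSReadError: carries the cause
    // and names the offending path instead of surfacing a bare NPE.
    static class FSReadErrorSketch extends RuntimeException
    {
        FSReadErrorSketch(Throwable cause, String path)
        {
            super("Failed to scan " + path + "; see the log file for details", cause);
        }
    }

    // Mirrors the shape of the fix: catch the NPE that surfaces when a data
    // directory cannot be listed (e.g. AV holding handles on Windows) and
    // rethrow it as a filesystem error that names the directory.
    static void removeLeftovers(File dataDir)
    {
        try
        {
            File[] files = dataDir.listFiles(); // null if the directory is unreadable or missing
            for (File f : files)                // an NPE like the one reported surfaces here
                f.getName();
        }
        catch (NullPointerException e)
        {
            throw new FSReadErrorSketch(e, dataDir.getPath());
        }
    }

    public static void main(String[] args)
    {
        try
        {
            removeLeftovers(new File("no-such-dir-for-this-sketch"));
        }
        catch (FSReadErrorSketch e)
        {
            System.out.println(e.getMessage()); // names the directory and points at the log
        }
    }
}
```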

 Cassandra could not start with NPE in 
 ColumnFamilyStore.removeUnfinishedCompactionLeftovers
 ---

 Key: CASSANDRA-8544
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8544
 Project: Cassandra
  Issue Type: Bug
 Environment: Windows
Reporter: Leonid Shalupov
Assignee: Joshua McKenzie
  Labels: windows
 Fix For: 2.1.3

 Attachments: 8544_show_npe.txt, 8544_v1.txt


 It happens sometimes after restarts caused by undeletable files under Windows.
 {quote}
 Caused by: java.lang.NullPointerException
 at 
 org.apache.cassandra.db.ColumnFamilyStore.removeUnfinishedCompactionLeftovers(ColumnFamilyStore.java:579)
 at 
 org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:232)
 at 
 org.apache.cassandra.service.CassandraDaemon.init(CassandraDaemon.java:377)
 at 
 com.jetbrains.cassandra.service.CassandraServiceMain.start(CassandraServiceMain.java:81)
 ... 6 more
 {quote}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-8544) Cassandra could not start with NPE in ColumnFamilyStore.removeUnfinishedCompactionLeftovers

2015-01-07 Thread Joshua McKenzie (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8544?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joshua McKenzie updated CASSANDRA-8544:
---
Priority: Minor  (was: Major)

 Cassandra could not start with NPE in 
 ColumnFamilyStore.removeUnfinishedCompactionLeftovers
 ---

 Key: CASSANDRA-8544
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8544
 Project: Cassandra
  Issue Type: Bug
 Environment: Windows
Reporter: Leonid Shalupov
Assignee: Joshua McKenzie
Priority: Minor
  Labels: windows
 Fix For: 2.1.3

 Attachments: 8544_show_npe.txt, 8544_v1.txt


 It happens sometimes after restarts caused by undeletable files under Windows.
 {quote}
 Caused by: java.lang.NullPointerException
 at 
 org.apache.cassandra.db.ColumnFamilyStore.removeUnfinishedCompactionLeftovers(ColumnFamilyStore.java:579)
 at 
 org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:232)
 at 
 org.apache.cassandra.service.CassandraDaemon.init(CassandraDaemon.java:377)
 at 
 com.jetbrains.cassandra.service.CassandraServiceMain.start(CassandraServiceMain.java:81)
 ... 6 more
 {quote}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8390) The process cannot access the file because it is being used by another process

2015-01-07 Thread Joshua McKenzie (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8390?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14268185#comment-14268185
 ] 

Joshua McKenzie commented on CASSANDRA-8390:


In SSTableDeletingTask, we operate under the assumption that once we 
successfully delete the data file, we should be able to delete the other 
components of an SSTable without issue, since all internal handles are closed 
by that time and mapped buffers unmapped.  The problem comes in when external 
applications (A/V, Windows Search, anything that provides a [file 
system 
driver|http://msdn.microsoft.com/en-us/library/windows/hardware/dn641617(v=vs.85).aspx]
 in the kernel) hold handles to a file that we're trying to delete, causing the 
deletion to fail.  Given the impact of a failed file deletion (filling up a 
drive and killing a node), the default disk_failure_policy in 
conf/cassandra.yaml is to stop.

File locks and deletion difficulties should not happen on a Cassandra cluster.  
This is why we've pursued CASSANDRA-4050 for the 3.0 release and have 
temporarily disabled memory-mapped I/O until we have a more robust solution to 
work around file deletion issues on Windows.
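The deletion contract described above can be sketched roughly like this: delete the data file first, attempt the remaining components only on success, and route any failure through a disk-failure policy. Names and the policy handling are illustrative, not Cassandra's SSTableDeletingTask:

```java
import java.io.File;
import java.io.IOException;
import java.util.Arrays;
import java.util.List;

public class SSTableDeleteSketch
{
    enum Policy { STOP, BEST_EFFORT } // stand-in for disk_failure_policy values

    /** Returns true only if every component was deleted. */
    static boolean deleteComponents(File dataFile, List<File> others, Policy policy)
    {
        // Data file goes first; an external handle (A/V, indexer) can make this fail.
        if (!dataFile.delete())
            return fail(policy, dataFile);
        boolean ok = true;
        for (File f : others)
            if (!f.delete())
                ok = fail(policy, f);
        return ok;
    }

    static boolean fail(Policy policy, File f)
    {
        if (policy == Policy.STOP)
            throw new RuntimeException("disk_failure_policy=stop: could not delete " + f);
        return false; // best effort: record the failure and carry on
    }

    public static void main(String[] args) throws IOException
    {
        File data = File.createTempFile("sketch", "-Data.db");
        File index = File.createTempFile("sketch", "-Index.db");
        System.out.println(deleteComponents(data, Arrays.asList(index), Policy.BEST_EFFORT));
    }
}
```

The STOP branch models why a single held handle can take a node down: with the default policy, one undeletable file escalates rather than silently filling the drive.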

 The process cannot access the file because it is being used by another process
 --

 Key: CASSANDRA-8390
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8390
 Project: Cassandra
  Issue Type: Bug
Reporter: Ilya Komolkin
Assignee: Joshua McKenzie
 Fix For: 2.1.3

 Attachments: NoHostAvailableLogs.zip


 {code}21:46:27.810 [NonPeriodicTasks:1] ERROR o.a.c.service.CassandraDaemon - 
 Exception in thread Thread[NonPeriodicTasks:1,5,main]
 org.apache.cassandra.io.FSWriteError: java.nio.file.FileSystemException: 
 E:\Upsource_12391\data\cassandra\data\kernel\filechangehistory_t-a277b560764611e48c8e4915424c75fe\kernel-filechangehistory_t-ka-33-Index.db:
  The process cannot access the file because it is being used by another 
 process.
  
 at 
 org.apache.cassandra.io.util.FileUtils.deleteWithConfirm(FileUtils.java:135) 
 ~[cassandra-all-2.1.1.jar:2.1.1]
 at 
 org.apache.cassandra.io.util.FileUtils.deleteWithConfirm(FileUtils.java:121) 
 ~[cassandra-all-2.1.1.jar:2.1.1]
 at 
 org.apache.cassandra.io.sstable.SSTable.delete(SSTable.java:113) 
 ~[cassandra-all-2.1.1.jar:2.1.1]
 at 
 org.apache.cassandra.io.sstable.SSTableDeletingTask.run(SSTableDeletingTask.java:94)
  ~[cassandra-all-2.1.1.jar:2.1.1]
 at 
 org.apache.cassandra.io.sstable.SSTableReader$6.run(SSTableReader.java:664) 
 ~[cassandra-all-2.1.1.jar:2.1.1]
 at 
 java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471) 
 ~[na:1.7.0_71]
 at java.util.concurrent.FutureTask.run(FutureTask.java:262) 
 ~[na:1.7.0_71]
 at 
 java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:178)
  ~[na:1.7.0_71]
 at 
 java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:292)
  ~[na:1.7.0_71]
 at 
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
  ~[na:1.7.0_71]
 at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
  [na:1.7.0_71]
 at java.lang.Thread.run(Thread.java:745) [na:1.7.0_71]
 Caused by: java.nio.file.FileSystemException: 
 E:\Upsource_12391\data\cassandra\data\kernel\filechangehistory_t-a277b560764611e48c8e4915424c75fe\kernel-filechangehistory_t-ka-33-Index.db:
  The process cannot access the file because it is being used by another 
 process.
  
 at 
 sun.nio.fs.WindowsException.translateToIOException(WindowsException.java:86) 
 ~[na:1.7.0_71]
 at 
 sun.nio.fs.WindowsException.rethrowAsIOException(WindowsException.java:97) 
 ~[na:1.7.0_71]
 at 
 sun.nio.fs.WindowsException.rethrowAsIOException(WindowsException.java:102) 
 ~[na:1.7.0_71]
 at 
 sun.nio.fs.WindowsFileSystemProvider.implDelete(WindowsFileSystemProvider.java:269)
  ~[na:1.7.0_71]
 at 
 sun.nio.fs.AbstractFileSystemProvider.delete(AbstractFileSystemProvider.java:103)
  ~[na:1.7.0_71]
 at java.nio.file.Files.delete(Files.java:1079) ~[na:1.7.0_71]
 at 
 org.apache.cassandra.io.util.FileUtils.deleteWithConfirm(FileUtils.java:131) 
 ~[cassandra-all-2.1.1.jar:2.1.1]
 ... 11 common frames omitted{code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-7704) FileNotFoundException during STREAM-OUT triggers 100% CPU usage

2015-01-07 Thread Benedict (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-7704?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Benedict updated CASSANDRA-7704:

Fix Version/s: (was: 2.1.0)
   (was: 2.0.10)
   2.1.3
   2.0.12

 FileNotFoundException during STREAM-OUT triggers 100% CPU usage
 ---

 Key: CASSANDRA-7704
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7704
 Project: Cassandra
  Issue Type: Bug
Reporter: Rick Branson
Assignee: Benedict
 Fix For: 2.0.12, 2.1.3

 Attachments: 7704-2.1.txt, 7704.txt, backtrace.txt, other-errors.txt


 See attached backtrace which was what triggered this. This stream failed and 
 then ~12 seconds later it emitted that exception. At that point, all CPUs 
 went to 100%. A thread dump shows all the ReadStage threads stuck inside 
 IntervalTree.searchInternal inside of CFS.markReferenced().



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8462) Upgrading a 2.0 to 2.1 breaks CFMetaData on 2.0 nodes

2015-01-07 Thread Benedict (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8462?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14268117#comment-14268117
 ] 

Benedict commented on CASSANDRA-8462:
-

I just pushed a change to 2.0, 2.1 and trunk, and this change seemed not to 
have been merged with 2.1. It looks like the changes are minor, and may not 
even apply to 2.1. I backed them out to what 2.1 was prior to my merge. If 
somebody could have a look and check what should have been applied to 2.1, it 
would be appreciated.

 Upgrading a 2.0 to 2.1 breaks CFMetaData on 2.0 nodes
 -

 Key: CASSANDRA-8462
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8462
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Reporter: Rick Branson
Assignee: Aleksey Yeschenko
 Fix For: 2.0.12, 2.1.3

 Attachments: 8462.txt


 Added a 2.1.2 node to a cluster running 2.0.11. Didn't make any schema 
 changes. When I tried to reboot one of the 2.0 nodes, it failed to boot with 
 this exception. Besides an obvious fix, any workarounds for this?
 {noformat}
 java.lang.IllegalArgumentException: No enum constant 
 org.apache.cassandra.config.CFMetaData.Caching.{"keys":"ALL", 
 "rows_per_partition":"NONE"}
 at java.lang.Enum.valueOf(Enum.java:236)
 at 
 org.apache.cassandra.config.CFMetaData$Caching.valueOf(CFMetaData.java:286)
 at 
 org.apache.cassandra.config.CFMetaData.fromSchemaNoColumnsNoTriggers(CFMetaData.java:1713)
 at 
 org.apache.cassandra.config.CFMetaData.fromSchema(CFMetaData.java:1793)
 at 
 org.apache.cassandra.config.KSMetaData.deserializeColumnFamilies(KSMetaData.java:307)
 at 
 org.apache.cassandra.config.KSMetaData.fromSchema(KSMetaData.java:288)
 at 
 org.apache.cassandra.db.DefsTables.loadFromKeyspace(DefsTables.java:131)
 at 
 org.apache.cassandra.config.DatabaseDescriptor.loadSchemas(DatabaseDescriptor.java:529)
 at 
 org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:270)
 at 
 org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:496)
 at 
 org.apache.cassandra.service.CassandraDaemon.main(CassandraDaemon.java:585)
 {noformat}
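The trace above boils down to 2.0's CFMetaData.Caching being a plain enum, so Enum.valueOf() chokes on the JSON-style caching value that 2.1 writes into the schema tables (the quotes are elided in the trace). A tolerant reader could map the 2.1 representation back onto the 2.0 names; this is purely an illustrative sketch, not the attached patch:

```java
public class CachingParseSketch
{
    // The 2.0-era enum constants.
    enum Caching { ALL, KEYS_ONLY, ROWS_ONLY, NONE }

    static Caching fromSchema(String value)
    {
        try
        {
            return Caching.valueOf(value); // 2.0-style plain enum name
        }
        catch (IllegalArgumentException e)
        {
            // 2.1-style value, e.g. {"keys":"ALL", "rows_per_partition":"NONE"};
            // naive string matching stands in for real JSON parsing.
            boolean keys = value.contains("\"keys\":\"ALL\"");
            boolean rows = !value.contains("\"rows_per_partition\":\"NONE\"");
            if (keys && rows) return Caching.ALL;
            if (keys) return Caching.KEYS_ONLY;
            if (rows) return Caching.ROWS_ONLY;
            return Caching.NONE;
        }
    }

    public static void main(String[] args)
    {
        System.out.println(fromSchema("{\"keys\":\"ALL\", \"rows_per_partition\":\"NONE\"}")); // KEYS_ONLY
    }
}
```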



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-7704) FileNotFoundException during STREAM-OUT triggers 100% CPU usage

2015-01-07 Thread Benedict (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7704?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14268118#comment-14268118
 ] 

Benedict commented on CASSANDRA-7704:
-

Somewhat embarrassingly, I never actually pushed this to the repository. It 
still applied relatively cleanly (with a bit of git butchery), so I've simply 
pushed it. Updating the fix version as well.

 FileNotFoundException during STREAM-OUT triggers 100% CPU usage
 ---

 Key: CASSANDRA-7704
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7704
 Project: Cassandra
  Issue Type: Bug
Reporter: Rick Branson
Assignee: Benedict
 Fix For: 2.0.12, 2.1.3

 Attachments: 7704-2.1.txt, 7704.txt, backtrace.txt, other-errors.txt


 See attached backtrace which was what triggered this. This stream failed and 
 then ~12 seconds later it emitted that exception. At that point, all CPUs 
 went to 100%. A thread dump shows all the ReadStage threads stuck inside 
 IntervalTree.searchInternal inside of CFS.markReferenced().



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-7016) Allow mixing token and partition key restrictions

2015-01-07 Thread Tyler Hobbs (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-7016?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tyler Hobbs updated CASSANDRA-7016:
---
Summary: Allow mixing token and partition key restrictions  (was: can't 
map/reduce over subset of rows with cql)

 Allow mixing token and partition key restrictions
 -

 Key: CASSANDRA-7016
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7016
 Project: Cassandra
  Issue Type: Bug
  Components: Core, Hadoop
Reporter: Jonathan Halliday
Assignee: Benjamin Lerer
Priority: Minor
  Labels: cql, docs
 Fix For: 3.0

 Attachments: CASSANDRA-7016-V2.txt, CASSANDRA-7016-V3.txt, 
 CASSANDRA-7016-V4-trunk.txt, CASSANDRA-7016-V5-trunk.txt, CASSANDRA-7016.txt


 select ... where token(k) > x and token(k) <= y and k in (a,b) allow 
 filtering;
 This fails on 2.0.6: can't restrict k by more than one relation.
 In the context of map/reduce (hence the token range) I want to map over only 
 a subset of the keys (hence the 'in').  Pushing the 'in' filter down to CQL 
 is substantially cheaper than pulling all rows to the client and then 
 discarding most of them.
 Currently this is possible only if the hadoop integration code is altered to 
 apply the AND on the client side and use CQL that contains only the resulting 
 filtered 'in' set.  The problem is not hadoop-specific, though, so IMO it 
 should really be solved in CQL, not the hadoop integration code.
 Most restrictions on CQL syntax seem to exist to prevent unduly expensive 
 queries. This one seems to be doing the opposite.
 Edit: on further thought, and with reference to the code in 
 SelectStatement$RawStatement, it seems to me that token(k) and k should be 
 considered distinct entities for the purposes of processing restrictions. 
 That is, no restriction on the token should conflict with a restriction on 
 the raw key. That way any monolithic query in terms of k can be decomposed 
 into parallel chunks over the token range for the purposes of map/reduce 
 processing, simply by appending an 'and token(k)...' clause to the 
 existing 'where k ...'.
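The decomposition the reporter describes can be sketched by splitting the Murmur3 token ring into contiguous chunks and appending a token clause to an existing key restriction. This is illustrative only (not Cassandra's Hadoop input-split logic); the table and key names are placeholders:

```java
import java.util.ArrayList;
import java.util.List;

public class TokenChunkSketch
{
    // Murmur3Partitioner tokens span the full signed-64-bit range.
    static final long MIN = Long.MIN_VALUE, MAX = Long.MAX_VALUE;

    /** Split the ring into n contiguous half-open chunks (start, end]. */
    static List<long[]> chunks(int n)
    {
        List<long[]> out = new ArrayList<>();
        long step = MAX / n - MIN / n; // ~range/n, computed without overflow
        long start = MIN;
        for (int i = 0; i < n; i++)
        {
            long end = (i == n - 1) ? MAX : start + step; // last chunk absorbs rounding
            out.add(new long[]{ start, end });
            start = end;
        }
        return out;
    }

    /** One map/reduce sub-query: the existing IN restriction plus a token chunk. */
    static String query(long[] chunk)
    {
        return "SELECT * FROM t WHERE k IN (a, b)"
             + " AND token(k) > " + chunk[0]
             + " AND token(k) <= " + chunk[1]
             + " ALLOW FILTERING";
    }

    public static void main(String[] args)
    {
        for (long[] c : chunks(4))
            System.out.println(query(c));
    }
}
```

Each worker runs one chunk's query, so the union of the chunks covers the whole ring while the IN filter is still pushed down to the server.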



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Reopened] (CASSANDRA-8365) CamelCase name is used as index name instead of lowercase

2015-01-07 Thread Tyler Hobbs (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8365?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tyler Hobbs reopened CASSANDRA-8365:


[~blerer] it looks like the tests are breaking in trunk due to the keyspace not 
being handled properly: 
http://cassci.datastax.com/job/trunk_utest/1267/testReport/junit/org.apache.cassandra.cql3/CreateIndexStatementTest/.
  I took a look to see if it would be a quick fix, but it doesn't appear to be, 
so I've reopened this.

 CamelCase name is used as index name instead of lowercase
 -

 Key: CASSANDRA-8365
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8365
 Project: Cassandra
  Issue Type: Bug
Reporter: Pierre Laporte
Assignee: Benjamin Lerer
Priority: Minor
  Labels: cqlsh, docs
 Fix For: 2.1.3

 Attachments: CASSANDRA-8365-V2.txt, CASSANDRA-8365.txt


 In cqlsh, when I execute a CREATE INDEX FooBar ... statement, the CamelCase 
 name is used as index name, even though it is unquoted. Trying to quote the 
 index name results in a syntax error.
 However, when I try to delete the index, I have to quote the index name, 
 otherwise I get an invalid-query error telling me that the index (lowercase) 
 does not exist.
 This seems inconsistent.  Shouldn't the index name be lowercased before the 
 index is created ?
 Here is the code to reproduce the issue :
 {code}
 cqlsh:schemabuilderit> CREATE TABLE IndexTest (a int primary key, b int);
 cqlsh:schemabuilderit> CREATE INDEX FooBar on indextest (b);
 cqlsh:schemabuilderit> DESCRIBE TABLE indextest ;
 CREATE TABLE schemabuilderit.indextest (
     a int PRIMARY KEY,
     b int
 ) ;
 CREATE INDEX "FooBar" ON schemabuilderit.indextest (b);
 cqlsh:schemabuilderit> DROP INDEX FooBar;
 code=2200 [Invalid query] message="Index 'foobar' could not be found in any 
 of the tables of keyspace 'schemabuilderit'"
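The inconsistency above comes down to CQL's identifier rule: unquoted identifiers are case-folded to lowercase, while double-quoted ones keep their exact case (with doubled quotes unescaped). DROP INDEX follows the rule; the index creation in the report did not. A minimal sketch of that normalization (an illustrative helper, not Cassandra's parser):

```java
public class CqlIdentSketch
{
    /** Normalize a CQL identifier: quoted keeps case, unquoted folds to lowercase. */
    static String normalize(String ident)
    {
        if (ident.length() >= 2 && ident.startsWith("\"") && ident.endsWith("\""))
            return ident.substring(1, ident.length() - 1).replace("\"\"", "\""); // strip quotes, unescape ""
        return ident.toLowerCase(java.util.Locale.ROOT); // unquoted: case-fold
    }

    public static void main(String[] args)
    {
        // Unquoted FooBar folds to foobar, which is why DROP INDEX FooBar looks up 'foobar'.
        System.out.println(normalize("FooBar"));
        // Quoted "FooBar" keeps its exact case.
        System.out.println(normalize("\"FooBar\""));
    }
}
```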



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8457) nio MessagingService

2015-01-07 Thread Ariel Weisberg (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8457?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14268221#comment-14268221
 ] 

Ariel Weisberg commented on CASSANDRA-8457:
---

I wanted to account for the impact of coalescing at low concurrency. Low 
concurrency is not a recipe for great performance, but it is part of the 
out-of-the-box experience, and people do compare different databases at low 
concurrency.

In GCE, coalescing provided a 12% increase in throughput in this specific 
message-heavy, high-concurrency workload. The penalty is that at low 
concurrency there is an immediate loss of performance with any coalescing, and 
a larger window has a greater impact at low concurrency, so there is tension 
there: at high concurrency, the larger the window, the better the performance 
bump.

Testing with 3 client threads each running on a dedicated client instance (3 
threads total). This is in GCE.

With TCP no delay on and coalescing
||Coalesce window microseconds|Throughput||
|0| 2191|
|6| 1910|
|12| 1873|
|25| 1867|
|50| 1779|
|100| 1667|
|150| 1566|
|200| 1491|

I also tried disabling coalescing when using HSHA and it didn't seem to make a 
difference, which is surprising considering the impact of 25 microseconds of 
coalescing intra-cluster.

I also experimented with some other things: binding interrupts to cores 0 and 
8 and tasksetting C* off of those cores. I didn't see a big impact.

I did see a small positive impact using 3 clients and 8 servers, which means 
the measurements with 2 clients might be a little suspect. With 3 clients and 
200 microseconds of coalescing, it peaked at 165k in GCE.

I also found out that the banned-CPUs option in irqbalance is broken and has 
no effect, and this has been the case for some time.

RightScale does not offer instances with enhanced networking. To find out 
whether coalescing provides real benefits in EC2/Xen, or milder GCE-like 
benefits, I will have to get my hands on some.
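Window-based coalescing as benchmarked above can be sketched like this: after the first message arrives, keep draining for up to the window so several messages share one network write. This is illustrative only, not Cassandra's OutboundTcpConnection implementation:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.TimeUnit;

public class CoalesceSketch
{
    /** Block for one message, then gather more until the window expires. */
    static <T> List<T> coalesce(BlockingQueue<T> queue, long windowMicros)
    {
        List<T> batch = new ArrayList<>();
        try
        {
            batch.add(queue.take()); // block for the first message
            long deadline = System.nanoTime() + TimeUnit.MICROSECONDS.toNanos(windowMicros);
            long remaining;
            while ((remaining = deadline - System.nanoTime()) > 0)
            {
                T next = queue.poll(remaining, TimeUnit.NANOSECONDS);
                if (next == null)
                    break; // window expired with nothing more to send
                batch.add(next);
            }
        }
        catch (InterruptedException e)
        {
            Thread.currentThread().interrupt();
        }
        return batch; // a window of 0 degenerates to one message per write
    }

    public static void main(String[] args)
    {
        BlockingQueue<Integer> q = new LinkedBlockingQueue<>();
        for (int i = 0; i < 5; i++)
            q.add(i);
        System.out.println(coalesce(q, 5000).size()); // all queued messages land in one batch
    }
}
```

The tension in the numbers above falls out of this shape: a larger window lets more messages pile into one write under load, but at low concurrency the sender mostly just waits out the window.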


 nio MessagingService
 

 Key: CASSANDRA-8457
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8457
 Project: Cassandra
  Issue Type: New Feature
  Components: Core
Reporter: Jonathan Ellis
Assignee: Ariel Weisberg
  Labels: performance
 Fix For: 3.0


 Thread-per-peer (actually two per peer: incoming and outbound) is a big 
 contributor to context switching, especially for larger clusters.  Let's look 
 at switching to nio, possibly via Netty.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8575) creation of StatusCompaction to view current state of autocompaction on node.

2015-01-07 Thread Philip Thompson (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8575?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14268116#comment-14268116
 ] 

Philip Thompson commented on CASSANDRA-8575:


What would this provide that nodetool compactionstats does not?

 creation of StatusCompaction to view current state of autocompaction on 
 node.  
 -

 Key: CASSANDRA-8575
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8575
 Project: Cassandra
  Issue Type: Improvement
  Components: Tools
Reporter: Sean Fuller
Priority: Minor

 request to create a nodetool quick display to check status of AutoCompaction 
 similar to the following status commands:
 {quote}
 status - Print cluster information (state, load, IDs, ...)
 statusbinary   - Status of native transport (binary protocol)
 statusgossip   - Status of gossip
 statusthrift   - Status of thrift server
 {quote}
 The output would just need to be as simple as ON/OFF or similar TRUE/FALSE.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[3/3] cassandra git commit: Merge branch 'cassandra-2.1' into trunk

2015-01-07 Thread aleksey
Merge branch 'cassandra-2.1' into trunk


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/6041d41c
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/6041d41c
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/6041d41c

Branch: refs/heads/trunk
Commit: 6041d41cd867a93d795a7c86f411b2159949073d
Parents: 34f9c97 561293d
Author: Aleksey Yeschenko alek...@apache.org
Authored: Wed Jan 7 22:52:13 2015 +0300
Committer: Aleksey Yeschenko alek...@apache.org
Committed: Wed Jan 7 22:52:13 2015 +0300

--
 CHANGES.txt |  3 +--
 .../org/apache/cassandra/service/MigrationManager.java  |  2 +-
 .../org/apache/cassandra/service/MigrationTask.java | 12 ++--
 3 files changed, 12 insertions(+), 5 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/6041d41c/CHANGES.txt
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/6041d41c/src/java/org/apache/cassandra/service/MigrationManager.java
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/6041d41c/src/java/org/apache/cassandra/service/MigrationTask.java
--



[1/3] cassandra git commit: Add an extra version check to MigrationTask

2015-01-07 Thread aleksey
Repository: cassandra
Updated Branches:
  refs/heads/trunk 34f9c97a5 - 6041d41cd


Add an extra version check to MigrationTask

patch by Aleksey Yeschenko; reviewed by Tyler Hobbs for CASSANDRA-8462


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/8078a58f
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/8078a58f
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/8078a58f

Branch: refs/heads/trunk
Commit: 8078a58f2ee625e497bd938ed35514bb003d03dc
Parents: 3679b1b
Author: Aleksey Yeschenko alek...@apache.org
Authored: Wed Jan 7 22:39:00 2015 +0300
Committer: Aleksey Yeschenko alek...@apache.org
Committed: Wed Jan 7 22:39:00 2015 +0300

--
 CHANGES.txt|  1 +
 .../org/apache/cassandra/service/MigrationManager.java |  2 +-
 .../org/apache/cassandra/service/MigrationTask.java| 13 ++---
 3 files changed, 12 insertions(+), 4 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/8078a58f/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 7aad4c0..c1bb28c 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 2.0.12:
+ * Add an extra version check to MigrationTask (CASSANDRA-8462)
  * Ensure SSTableWriter cleans up properly after failure (CASSANDRA-8499)
  * Increase bf true positive count on key cache hit (CASSANDRA-8525)
  * Move MeteredFlusher to its own thread (CASSANDRA-8485)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/8078a58f/src/java/org/apache/cassandra/service/MigrationManager.java
--
diff --git a/src/java/org/apache/cassandra/service/MigrationManager.java 
b/src/java/org/apache/cassandra/service/MigrationManager.java
index b474bdc..f66b738 100644
--- a/src/java/org/apache/cassandra/service/MigrationManager.java
+++ b/src/java/org/apache/cassandra/service/MigrationManager.java
@@ -141,7 +141,7 @@ public class MigrationManager
 return StageManager.getStage(Stage.MIGRATION).submit(new 
MigrationTask(endpoint));
 }
 
-private static boolean shouldPullSchemaFrom(InetAddress endpoint)
+public static boolean shouldPullSchemaFrom(InetAddress endpoint)
 {
 /*
 * Don't request schema from nodes with a different or unknown major version (may have incompatible schema)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/8078a58f/src/java/org/apache/cassandra/service/MigrationTask.java
--
diff --git a/src/java/org/apache/cassandra/service/MigrationTask.java 
b/src/java/org/apache/cassandra/service/MigrationTask.java
index 93572f0..0944c55 100644
--- a/src/java/org/apache/cassandra/service/MigrationTask.java
+++ b/src/java/org/apache/cassandra/service/MigrationTask.java
@@ -48,7 +48,14 @@ class MigrationTask extends WrappedRunnable
 
 public void runMayThrow() throws Exception
 {
-MessageOut message = new MessageOut(MessagingService.Verb.MIGRATION_REQUEST, null, MigrationManager.MigrationsSerializer.instance);
+// There is a chance that quite some time could have passed between now and the MM#maybeScheduleSchemaPull(),
+// potentially enough for the endpoint node to restart - which is an issue if it does restart upgraded, with
+// a higher major.
+if (!MigrationManager.shouldPullSchemaFrom(endpoint))
+{
+    logger.info("Skipped sending a migration request: node {} has a higher major version now.", endpoint);
+    return;
+}
 
 if (!FailureDetector.instance.isAlive(endpoint))
 {
@@ -56,9 +63,10 @@ class MigrationTask extends WrappedRunnable
 return;
 }
 
+MessageOut message = new MessageOut(MessagingService.Verb.MIGRATION_REQUEST, null, MigrationManager.MigrationsSerializer.instance);
+
 IAsyncCallback<Collection<RowMutation>> cb = new IAsyncCallback<Collection<RowMutation>>()
 {
-    @Override
     public void response(MessageIn<Collection<RowMutation>> message)
 {
 try
@@ -75,7 +83,6 @@ class MigrationTask extends WrappedRunnable
 }
 }
 
-@Override
 public boolean isLatencyForSnitch()
 {
 return false;



[jira] [Updated] (CASSANDRA-8577) Values of set types not loading correctly into Pig

2015-01-07 Thread Philip Thompson (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8577?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Philip Thompson updated CASSANDRA-8577:
---
Reproduced In: 2.1.2, 2.1.1
Fix Version/s: 2.1.3
 Assignee: Brandon Williams

 Values of set types not loading correctly into Pig
 --

 Key: CASSANDRA-8577
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8577
 Project: Cassandra
  Issue Type: Bug
Reporter: Oksana Danylyshyn
Assignee: Brandon Williams
 Fix For: 2.1.3


 Values of set types are not loading correctly from Cassandra (cql3 table, 
 Native protocol v3) into Pig using CqlNativeStorage. 
 When using Cassandra version 2.1.0 only empty values are loaded, and for 
 newer versions (2.1.1 and 2.1.2) the following error is received: 
 org.apache.cassandra.serializers.MarshalException: Unexpected extraneous 
 bytes after set value
 at 
 org.apache.cassandra.serializers.SetSerializer.deserializeForNativeProtocol(SetSerializer.java:94)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8371) DateTieredCompactionStrategy is always compacting

2015-01-07 Thread Jonathan Shook (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8371?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14268134#comment-14268134
 ] 

Jonathan Shook commented on CASSANDRA-8371:
---

[~Bj0rn], [~michaelsembwever]
Is there any new data on this? Any changes to settings or observations since 
the last major update?


 DateTieredCompactionStrategy is always compacting 
 --

 Key: CASSANDRA-8371
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8371
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Reporter: mck
Assignee: Björn Hegerfors
  Labels: compaction, performance
 Attachments: java_gc_counts_rate-month.png, 
 read-latency-recommenders-adview.png, read-latency.png, 
 sstables-recommenders-adviews.png, sstables.png, vg2_iad-month.png


 Running 2.0.11 and having switched a table to 
 [DTCS|https://issues.apache.org/jira/browse/CASSANDRA-6602] we've seen that 
 disk IO and gc count increase, along with the number of reads happening in 
 the compaction hump of cfhistograms.
 Data, and generally performance, looks good, but compactions are always 
 happening, and pending compactions are building up.
 The schema for this is 
 {code}CREATE TABLE search (
   loginid text,
   searchid timeuuid,
   description text,
   searchkey text,
   searchurl text,
   PRIMARY KEY ((loginid), searchid)
 );{code}
 We're sitting on about 82G (per replica) across 6 nodes in 4 DCs.
 CQL executed against this keyspace, and traffic patterns, can be seen in 
 slides 7+8 of https://prezi.com/b9-aj6p2esft/
 Attached are sstables-per-read and read-latency graphs from cfhistograms, and 
 screenshots of our munin graphs as we have gone from STCS, to LCS (week ~44), 
 to DTCS (week ~46).
 These screenshots are also found in the prezi on slides 9-11.
 [~pmcfadin], [~Bj0rn], 
 Can this be a consequence of occasional deleted rows, as is described under 
 (3) in the description of CASSANDRA-6602 ?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[2/2] cassandra git commit: Merge branch 'cassandra-2.1' into trunk

2015-01-07 Thread jmckenzie
Merge branch 'cassandra-2.1' into trunk

Conflicts:
src/java/org/apache/cassandra/db/compaction/CompactionTask.java
src/java/org/apache/cassandra/io/sstable/format/big/BigTableScanner.java


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/65ffc39b
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/65ffc39b
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/65ffc39b

Branch: refs/heads/trunk
Commit: 65ffc39b16c83c6319498874b5f0c56149371aef
Parents: 493859b bdbb071
Author: Joshua McKenzie jmcken...@apache.org
Authored: Wed Jan 7 14:11:04 2015 -0600
Committer: Joshua McKenzie jmcken...@apache.org
Committed: Wed Jan 7 14:11:04 2015 -0600

--
 .../cassandra/db/compaction/CompactionTask.java | 82 ++--
 .../io/sstable/format/big/BigTableScanner.java  |  8 +-
 2 files changed, 45 insertions(+), 45 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/65ffc39b/src/java/org/apache/cassandra/db/compaction/CompactionTask.java
--
diff --cc src/java/org/apache/cassandra/db/compaction/CompactionTask.java
index 2543f47,d215b4c..8b5058b
--- a/src/java/org/apache/cassandra/db/compaction/CompactionTask.java
+++ b/src/java/org/apache/cassandra/db/compaction/CompactionTask.java
@@@ -157,11 -148,16 +156,16 @@@ public class CompactionTask extends Abs
  long expectedSSTableSize = Math.min(getExpectedWriteSize(), strategy.getMaxSSTableBytes());
  logger.debug("Expected bloom filter size : {}", keysPerSSTable);
 
+ List<SSTableReader> newSStables;
+ AbstractCompactionIterable ci;
+ 
+ // SSTableScanners need to be closed before markCompactedSSTablesReplaced call as scanners contain references
+ // to both ifile and dfile and SSTR will throw deletion errors on Windows if it tries to delete before scanner is closed.
+ // See CASSANDRA-8019 and CASSANDRA-8399
  try (AbstractCompactionStrategy.ScannerList scanners = strategy.getScanners(actuallyCompact))
  {
- AbstractCompactionIterable ci = new CompactionIterable(compactionType, scanners.scanners, controller, sstableFormat);
 -ci = new CompactionIterable(compactionType, scanners.scanners, controller);
++ci = new CompactionIterable(compactionType, scanners.scanners, controller, sstableFormat);
  Iterator<AbstractCompactedRow> iter = ci.iterator();
- List<SSTableReader> newSStables;
  // we can't preheat until the tracker has been set. This doesn't happen until we tell the cfs to
  // replace the old entries.  Track entries to preheat here until then.
  long minRepairedAt = getMinRepairedAt(actuallyCompact);
@@@ -223,44 -219,44 +227,44 @@@
  if (collector != null)
  collector.finishCompaction(ci);
  }
+ }
  
- Collection<SSTableReader> oldSStables = this.sstables;
- if (!offline)
- 
cfs.getDataTracker().markCompactedSSTablesReplaced(oldSStables, newSStables, 
compactionType);
- 
- // log a bunch of statistics about the result and save to 
system table compaction_history
- long dTime = TimeUnit.NANOSECONDS.toMillis(System.nanoTime() 
- start);
- long startsize = SSTableReader.getTotalBytes(oldSStables);
- long endsize = SSTableReader.getTotalBytes(newSStables);
- double ratio = (double) endsize / (double) startsize;
- 
- StringBuilder newSSTableNames = new StringBuilder();
- for (SSTableReader reader : newSStables)
- 
newSSTableNames.append(reader.descriptor.baseFilename()).append(",");
- 
- double mbps = dTime > 0 ? (double) endsize / (1024 * 1024) / 
((double) dTime / 1000) : 0;
- long totalSourceRows = 0;
- long[] counts = ci.getMergedRowCounts();
- StringBuilder mergeSummary = new StringBuilder(counts.length 
* 10);
- Map<Integer, Long> mergedRows = new HashMap<>();
- for (int i = 0; i < counts.length; i++)
- {
- long count = counts[i];
- if (count == 0)
- continue;
- 
- int rows = i + 1;
- totalSourceRows += rows * count;
- mergeSummary.append(String.format("%d:%d, ", rows, 
count));
- mergedRows.put(rows, count);
- }
- 
- 

[jira] [Updated] (CASSANDRA-8577) Values of set types not loading correctly into Pig

2015-01-07 Thread Philip Thompson (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8577?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Philip Thompson updated CASSANDRA-8577:
---
Reproduced In: 2.1.2, 2.1.1  (was: 2.1.1, 2.1.2)
   Tester: Philip Thompson

 Values of set types not loading correctly into Pig
 --

 Key: CASSANDRA-8577
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8577
 Project: Cassandra
  Issue Type: Bug
Reporter: Oksana Danylyshyn
Assignee: Brandon Williams
 Fix For: 2.1.3


 Values of set types are not loading correctly from Cassandra (cql3 table, 
 Native protocol v3) into Pig using CqlNativeStorage. 
 When using Cassandra version 2.1.0 only empty values are loaded, and for 
 newer versions (2.1.1 and 2.1.2) the following error is received: 
 org.apache.cassandra.serializers.MarshalException: Unexpected extraneous 
 bytes after set value
 at 
 org.apache.cassandra.serializers.SetSerializer.deserializeForNativeProtocol(SetSerializer.java:94)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[2/3] cassandra git commit: Merge branch 'cassandra-2.0' into cassandra-2.1

2015-01-07 Thread aleksey
Merge branch 'cassandra-2.0' into cassandra-2.1

Conflicts:
src/java/org/apache/cassandra/service/MigrationTask.java


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/561293d1
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/561293d1
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/561293d1

Branch: refs/heads/trunk
Commit: 561293d132e3fa73d1e0f43d3bd0c54137f88a15
Parents: 68be72f 8078a58
Author: Aleksey Yeschenko alek...@apache.org
Authored: Wed Jan 7 22:51:42 2015 +0300
Committer: Aleksey Yeschenko alek...@apache.org
Committed: Wed Jan 7 22:51:42 2015 +0300

--
 CHANGES.txt |  3 +--
 .../org/apache/cassandra/service/MigrationManager.java  |  2 +-
 .../org/apache/cassandra/service/MigrationTask.java | 12 ++--
 3 files changed, 12 insertions(+), 5 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/561293d1/CHANGES.txt
--
diff --cc CHANGES.txt
index 58d94ed,c1bb28c..dfed732
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -1,57 -1,5 +1,56 @@@
 -2.0.12:
 +2.1.3
 + * Fix case-sensitivity of index name on CREATE and DROP INDEX
 +   statements (CASSANDRA-8365)
 + * Better detection/logging for corruption in compressed sstables 
(CASSANDRA-8192)
 + * Use the correct repairedAt value when closing writer (CASSANDRA-8570)
 + * (cqlsh) Handle a schema mismatch being detected on startup (CASSANDRA-8512)
 + * Properly calculate expected write size during compaction (CASSANDRA-8532)
 + * Invalidate affected prepared statements when a table's columns
 +   are altered (CASSANDRA-7910)
 + * Stress - user defined writes should populate sequentially (CASSANDRA-8524)
 + * Fix regression in SSTableRewriter causing some rows to become unreadable 
 +   during compaction (CASSANDRA-8429)
 + * Run major compactions for repaired/unrepaired in parallel (CASSANDRA-8510)
 + * (cqlsh) Fix compression options in DESCRIBE TABLE output when compression
 +   is disabled (CASSANDRA-8288)
 + * (cqlsh) Fix DESCRIBE output after keyspaces are altered (CASSANDRA-7623)
 + * Make sure we set lastCompactedKey correctly (CASSANDRA-8463)
 + * (cqlsh) Fix output of CONSISTENCY command (CASSANDRA-8507)
 + * (cqlsh) Fixed the handling of LIST statements (CASSANDRA-8370)
 + * Make sstablescrub check leveled manifest again (CASSANDRA-8432)
 + * Check first/last keys in sstable when giving out positions (CASSANDRA-8458)
 + * Disable mmap on Windows (CASSANDRA-6993)
 + * Add missing ConsistencyLevels to cassandra-stress (CASSANDRA-8253)
 + * Add auth support to cassandra-stress (CASSANDRA-7985)
 + * Fix ArrayIndexOutOfBoundsException when generating error message
 +   for some CQL syntax errors (CASSANDRA-8455)
 + * Scale memtable slab allocation logarithmically (CASSANDRA-7882)
 + * cassandra-stress simultaneous inserts over same seed (CASSANDRA-7964)
 + * Reduce cassandra-stress sampling memory requirements (CASSANDRA-7926)
 + * Ensure memtable flush cannot expire commit log entries from its future 
(CASSANDRA-8383)
 + * Make read defrag async to reclaim memtables (CASSANDRA-8459)
 + * Remove tmplink files for offline compactions (CASSANDRA-8321)
 + * Reduce maxHintsInProgress (CASSANDRA-8415)
 + * BTree updates may call provided update function twice (CASSANDRA-8018)
 + * Release sstable references after anticompaction (CASSANDRA-8386)
 + * Handle abort() in SSTableRewriter properly (CASSANDRA-8320)
 + * Fix high size calculations for prepared statements (CASSANDRA-8231)
 + * Centralize shared executors (CASSANDRA-8055)
 + * Fix filtering for CONTAINS (KEY) relations on frozen collection
 +   clustering columns when the query is restricted to a single
 +   partition (CASSANDRA-8203)
 + * Do more aggressive entire-sstable TTL expiry checks (CASSANDRA-8243)
 + * Add more log info if readMeter is null (CASSANDRA-8238)
 + * add check of the system wall clock time at startup (CASSANDRA-8305)
 + * Support for frozen collections (CASSANDRA-7859)
 + * Fix overflow on histogram computation (CASSANDRA-8028)
 + * Have paxos reuse the timestamp generation of normal queries 
(CASSANDRA-7801)
 + * Fix incremental repair not remove parent session on remote (CASSANDRA-8291)
 + * Improve JBOD disk utilization (CASSANDRA-7386)
 + * Log failed host when preparing incremental repair (CASSANDRA-8228)
 + * Force config client mode in CQLSSTableWriter (CASSANDRA-8281)
 +Merged from 2.0:
- ===
- 2.0.12:
+  * Add an extra version check to MigrationTask (CASSANDRA-8462)
   * Ensure SSTableWriter cleans up properly after failure (CASSANDRA-8499)
   * Increase bf true positive count on key cache hit (CASSANDRA-8525)
   * Move MeteredFlusher to its own thread (CASSANDRA-8485)


[jira] [Updated] (CASSANDRA-8574) Gracefully degrade SELECT when there are lots of tombstones

2015-01-07 Thread Philip Thompson (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8574?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Philip Thompson updated CASSANDRA-8574:
---
Fix Version/s: 3.0

 Gracefully degrade SELECT when there are lots of tombstones
 ---

 Key: CASSANDRA-8574
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8574
 Project: Cassandra
  Issue Type: Improvement
Reporter: Jens Rantil
 Fix For: 3.0


 *Background:* There's lots of tooling out there to do BigData analysis on 
 Cassandra clusters. Examples are Spark and Hadoop, which are offered by DSE. 
 The problem with both of these so far is that a single partition key with 
 too many tombstones can make the query job fail hard.
 The described scenario happens despite the user setting a rather small 
 PageSize. I assume this is a common scenario if you have larger rows.
 *Proposal:* To allow a CQL SELECT to gracefully degrade to only return a 
 smaller batch of results if there are too many tombstones. The tombstones are 
 ordered according to clustering key and one should be able to page through 
 them. Potentially:
 SELECT * FROM mytable LIMIT 1000 TOMBSTONES;
 would page through a maximum of 1000 tombstones, _or_ 1000 (CQL) rows.
 I understand that this obviously would degrade performance, but it would at 
 least yield a result.
 Additional comment: I haven't dug into Cassandra code, but conceptually I 
 guess this would be doable. Let me know what you think.
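
As a rough illustration of the proposal above, here is a minimal, self-contained Java sketch (all names invented; this is not Cassandra code) of a page scan that stops early once either the row limit or a tombstone budget is exhausted, mirroring the suggested "LIMIT 1000 TOMBSTONES" idea. A null entry stands in for a tombstone.

```java
import java.util.*;

// Hypothetical sketch: a page ends when either rowLimit live rows have been
// collected or tombstoneBudget tombstones have been skipped, whichever
// comes first. A null cell represents a tombstone.
public class TombstonePagingSketch {
    // Scan one partition's cells in clustering order, collecting live values.
    public static List<String> fetchPage(Iterator<String> cells, int rowLimit, int tombstoneBudget) {
        List<String> page = new ArrayList<>();
        int tombstones = 0;
        while (cells.hasNext() && page.size() < rowLimit && tombstones < tombstoneBudget) {
            String cell = cells.next();
            if (cell == null)
                tombstones++;   // skip the tombstone, but charge it to the budget
            else
                page.add(cell);
        }
        return page;
    }

    public static void main(String[] args) {
        // Alternating live rows and tombstones; a budget of 3 tombstones
        // ends the page before the row limit of 100 is reached.
        List<String> cells = Arrays.asList("r0", null, "r2", null, "r4", null, "r6");
        System.out.println(fetchPage(cells.iterator(), 100, 3)); // prints [r0, r2, r4]
    }
}
```

The caller would resume the next page from the last clustering position seen, so a tombstone-heavy partition degrades to smaller pages instead of failing outright.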



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[2/3] cassandra git commit: Fix race condition in StreamTransferTask that could lead to infinite loops and premature sstable deletion

2015-01-07 Thread benedict
Fix race condition in StreamTransferTask that could lead to
infinite loops and premature sstable deletion

patch by benedict; reviewed by yukim for CASSANDRA-7704


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/ddca610c
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/ddca610c
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/ddca610c

Branch: refs/heads/trunk
Commit: ddca610c9c82e9bf527d4f57a814af06ed2a7cb3
Parents: 561293d
Author: Benedict Elliott Smith bened...@apache.org
Authored: Wed Jan 7 19:44:00 2015 +
Committer: Benedict Elliott Smith bened...@apache.org
Committed: Wed Jan 7 20:03:17 2015 +

--
 CHANGES.txt |  2 +
 .../cassandra/streaming/StreamTransferTask.java | 83 +++-
 .../streaming/StreamTransferTaskTest.java   | 17 +++-
 3 files changed, 63 insertions(+), 39 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/ddca610c/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index dfed732..dac555b 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -50,6 +50,8 @@
  * Log failed host when preparing incremental repair (CASSANDRA-8228)
  * Force config client mode in CQLSSTableWriter (CASSANDRA-8281)
 Merged from 2.0:
+ * Fix race condition in StreamTransferTask that could lead to
+   infinite loops and premature sstable deletion (CASSANDRA-7704)
  * Add an extra version check to MigrationTask (CASSANDRA-8462)
  * Ensure SSTableWriter cleans up properly after failure (CASSANDRA-8499)
  * Increase bf true positive count on key cache hit (CASSANDRA-8525)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/ddca610c/src/java/org/apache/cassandra/streaming/StreamTransferTask.java
--
diff --git a/src/java/org/apache/cassandra/streaming/StreamTransferTask.java 
b/src/java/org/apache/cassandra/streaming/StreamTransferTask.java
index 48a7d89..b840ee5 100644
--- a/src/java/org/apache/cassandra/streaming/StreamTransferTask.java
+++ b/src/java/org/apache/cassandra/streaming/StreamTransferTask.java
@@ -19,9 +19,10 @@ package org.apache.cassandra.streaming;
 
 import java.util.*;
 import java.util.concurrent.*;
+import java.util.concurrent.ScheduledFuture;
 import java.util.concurrent.atomic.AtomicInteger;
-import java.util.concurrent.atomic.AtomicBoolean;
 
+import org.apache.cassandra.concurrent.NamedThreadFactory;
 import org.apache.cassandra.io.sstable.SSTableReader;
 import org.apache.cassandra.streaming.messages.OutgoingFileMessage;
 import org.apache.cassandra.utils.Pair;
@@ -31,14 +32,13 @@ import org.apache.cassandra.utils.Pair;
  */
 public class StreamTransferTask extends StreamTask
 {
-private final ScheduledExecutorService timeoutExecutor = 
Executors.newSingleThreadScheduledExecutor();
+private static final ScheduledExecutorService timeoutExecutor = 
Executors.newSingleThreadScheduledExecutor(new 
NamedThreadFactory("StreamingTransferTaskTimeouts"));
 
 private final AtomicInteger sequenceNumber = new AtomicInteger(0);
-private AtomicBoolean aborted = new AtomicBoolean(false);
+private boolean aborted = false;
 
-private final Map<Integer, OutgoingFileMessage> files = new 
ConcurrentHashMap<>();
-
-private final Map<Integer, ScheduledFuture> timeoutTasks = new 
ConcurrentHashMap<>();
+private final Map<Integer, OutgoingFileMessage> files = new HashMap<>();
+private final Map<Integer, ScheduledFuture> timeoutTasks = new HashMap<>();
 
 private long totalSize;
 
@@ -47,7 +47,7 @@ public class StreamTransferTask extends StreamTask
 super(session, cfId);
 }
 
-public void addTransferFile(SSTableReader sstable, long estimatedKeys, 
List<Pair<Long, Long>> sections, long repairedAt)
+public synchronized void addTransferFile(SSTableReader sstable, long 
estimatedKeys, List<Pair<Long, Long>> sections, long repairedAt)
 {
 assert sstable != null && cfId.equals(sstable.metadata.cfId);
 OutgoingFileMessage message = new OutgoingFileMessage(sstable, 
sequenceNumber.getAndIncrement(), estimatedKeys, sections, repairedAt);
@@ -60,35 +60,42 @@ public class StreamTransferTask extends StreamTask
  *
  * @param sequenceNumber sequence number of file
  */
-public synchronized void complete(int sequenceNumber)
+public void complete(int sequenceNumber)
 {
-OutgoingFileMessage file = files.remove(sequenceNumber);
-if (file != null)
+boolean signalComplete;
+synchronized (this)
 {
-file.sstable.releaseReference();
-// all file sent, notify session this task is complete.
-if (files.isEmpty())
-{
-

[1/3] cassandra git commit: Fix race condition in StreamTransferTask that could lead to infinite loops and premature sstable deletion

2015-01-07 Thread benedict
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-2.1 561293d13 -> ddca610c9
  refs/heads/trunk 6041d41cd -> 0a80fe4b5


Fix race condition in StreamTransferTask that could lead to
infinite loops and premature sstable deletion

patch by benedict; reviewed by yukim for CASSANDRA-7704


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/ddca610c
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/ddca610c
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/ddca610c

Branch: refs/heads/cassandra-2.1
Commit: ddca610c9c82e9bf527d4f57a814af06ed2a7cb3
Parents: 561293d
Author: Benedict Elliott Smith bened...@apache.org
Authored: Wed Jan 7 19:44:00 2015 +
Committer: Benedict Elliott Smith bened...@apache.org
Committed: Wed Jan 7 20:03:17 2015 +

--
 CHANGES.txt |  2 +
 .../cassandra/streaming/StreamTransferTask.java | 83 +++-
 .../streaming/StreamTransferTaskTest.java   | 17 +++-
 3 files changed, 63 insertions(+), 39 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/ddca610c/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index dfed732..dac555b 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -50,6 +50,8 @@
  * Log failed host when preparing incremental repair (CASSANDRA-8228)
  * Force config client mode in CQLSSTableWriter (CASSANDRA-8281)
 Merged from 2.0:
+ * Fix race condition in StreamTransferTask that could lead to
+   infinite loops and premature sstable deletion (CASSANDRA-7704)
  * Add an extra version check to MigrationTask (CASSANDRA-8462)
  * Ensure SSTableWriter cleans up properly after failure (CASSANDRA-8499)
  * Increase bf true positive count on key cache hit (CASSANDRA-8525)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/ddca610c/src/java/org/apache/cassandra/streaming/StreamTransferTask.java
--
diff --git a/src/java/org/apache/cassandra/streaming/StreamTransferTask.java 
b/src/java/org/apache/cassandra/streaming/StreamTransferTask.java
index 48a7d89..b840ee5 100644
--- a/src/java/org/apache/cassandra/streaming/StreamTransferTask.java
+++ b/src/java/org/apache/cassandra/streaming/StreamTransferTask.java
@@ -19,9 +19,10 @@ package org.apache.cassandra.streaming;
 
 import java.util.*;
 import java.util.concurrent.*;
+import java.util.concurrent.ScheduledFuture;
 import java.util.concurrent.atomic.AtomicInteger;
-import java.util.concurrent.atomic.AtomicBoolean;
 
+import org.apache.cassandra.concurrent.NamedThreadFactory;
 import org.apache.cassandra.io.sstable.SSTableReader;
 import org.apache.cassandra.streaming.messages.OutgoingFileMessage;
 import org.apache.cassandra.utils.Pair;
@@ -31,14 +32,13 @@ import org.apache.cassandra.utils.Pair;
  */
 public class StreamTransferTask extends StreamTask
 {
-private final ScheduledExecutorService timeoutExecutor = 
Executors.newSingleThreadScheduledExecutor();
+private static final ScheduledExecutorService timeoutExecutor = 
Executors.newSingleThreadScheduledExecutor(new 
NamedThreadFactory("StreamingTransferTaskTimeouts"));
 
 private final AtomicInteger sequenceNumber = new AtomicInteger(0);
-private AtomicBoolean aborted = new AtomicBoolean(false);
+private boolean aborted = false;
 
-private final Map<Integer, OutgoingFileMessage> files = new 
ConcurrentHashMap<>();
-
-private final Map<Integer, ScheduledFuture> timeoutTasks = new 
ConcurrentHashMap<>();
+private final Map<Integer, OutgoingFileMessage> files = new HashMap<>();
+private final Map<Integer, ScheduledFuture> timeoutTasks = new HashMap<>();
 
 private long totalSize;
 
@@ -47,7 +47,7 @@ public class StreamTransferTask extends StreamTask
 super(session, cfId);
 }
 
-public void addTransferFile(SSTableReader sstable, long estimatedKeys, 
List<Pair<Long, Long>> sections, long repairedAt)
+public synchronized void addTransferFile(SSTableReader sstable, long 
estimatedKeys, List<Pair<Long, Long>> sections, long repairedAt)
 {
 assert sstable != null && cfId.equals(sstable.metadata.cfId);
 OutgoingFileMessage message = new OutgoingFileMessage(sstable, 
sequenceNumber.getAndIncrement(), estimatedKeys, sections, repairedAt);
@@ -60,35 +60,42 @@ public class StreamTransferTask extends StreamTask
  *
  * @param sequenceNumber sequence number of file
  */
-public synchronized void complete(int sequenceNumber)
+public void complete(int sequenceNumber)
 {
-OutgoingFileMessage file = files.remove(sequenceNumber);
-if (file != null)
+boolean signalComplete;
+synchronized (this)
 {
-

[jira] [Comment Edited] (CASSANDRA-8462) Upgrading a 2.0 to 2.1 breaks CFMetaData on 2.0 nodes

2015-01-07 Thread Benedict (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8462?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14268117#comment-14268117
 ] 

Benedict edited comment on CASSANDRA-8462 at 1/7/15 8:04 PM:
-

-I just pushed a change to 2.0, 2.1 and trunk, and this change seemed not to 
have been merged with 2.1. It looks like the changes are minor, and may not 
even apply to 2.1. I backed them out to what 2.1 was prior to my merge. If 
somebody could have a look and check what should have been applied to 2.1, it 
would be appreciated.-

Nevermind. We appear to have hit a lengthy race condition, and my upload was 
slow enough to be rejected.


was (Author: benedict):
I just pushed a change to 2.0, 2.1 and trunk, and this change seemed not to 
have been merged with 2.1. It looks like the changes are minor, and may not 
even apply to 2.1. I backed them out to what 2.1 was prior to my merge. If 
somebody could have a look and check what should have been applied to 2.1, it 
would be appreciated.

 Upgrading a 2.0 to 2.1 breaks CFMetaData on 2.0 nodes
 -

 Key: CASSANDRA-8462
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8462
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Reporter: Rick Branson
Assignee: Aleksey Yeschenko
 Fix For: 2.0.12, 2.1.3

 Attachments: 8462.txt


 Added a 2.1.2 node to a cluster running 2.0.11. Didn't make any schema 
 changes. When I tried to reboot one of the 2.0 nodes, it failed to boot with 
 this exception. Besides an obvious fix, any workarounds for this?
 {noformat}
 java.lang.IllegalArgumentException: No enum constant 
 org.apache.cassandra.config.CFMetaData.Caching.{"keys":"ALL", 
 "rows_per_partition":"NONE"}
 at java.lang.Enum.valueOf(Enum.java:236)
 at 
 org.apache.cassandra.config.CFMetaData$Caching.valueOf(CFMetaData.java:286)
 at 
 org.apache.cassandra.config.CFMetaData.fromSchemaNoColumnsNoTriggers(CFMetaData.java:1713)
 at 
 org.apache.cassandra.config.CFMetaData.fromSchema(CFMetaData.java:1793)
 at 
 org.apache.cassandra.config.KSMetaData.deserializeColumnFamilies(KSMetaData.java:307)
 at 
 org.apache.cassandra.config.KSMetaData.fromSchema(KSMetaData.java:288)
 at 
 org.apache.cassandra.db.DefsTables.loadFromKeyspace(DefsTables.java:131)
 at 
 org.apache.cassandra.config.DatabaseDescriptor.loadSchemas(DatabaseDescriptor.java:529)
 at 
 org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:270)
 at 
 org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:496)
 at 
 org.apache.cassandra.service.CassandraDaemon.main(CassandraDaemon.java:585)
 {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[3/3] cassandra git commit: Merge branch 'cassandra-2.1' into trunk

2015-01-07 Thread benedict
Merge branch 'cassandra-2.1' into trunk

Conflicts:
src/java/org/apache/cassandra/streaming/StreamTransferTask.java


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/0a80fe4b
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/0a80fe4b
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/0a80fe4b

Branch: refs/heads/trunk
Commit: 0a80fe4b57c4d56ec65be4c370674cece1283329
Parents: 6041d41 ddca610
Author: Benedict Elliott Smith bened...@apache.org
Authored: Wed Jan 7 20:03:40 2015 +
Committer: Benedict Elliott Smith bened...@apache.org
Committed: Wed Jan 7 20:03:40 2015 +

--
 CHANGES.txt | 2 ++
 1 file changed, 2 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/0a80fe4b/CHANGES.txt
--



cassandra git commit: Ninja: fix minor unit test regression from CASSANDRA-8365

2015-01-07 Thread tylerhobbs
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-2.1 bdbb071f4 -> 96365bf55


Ninja: fix minor unit test regression from CASSANDRA-8365


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/96365bf5
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/96365bf5
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/96365bf5

Branch: refs/heads/cassandra-2.1
Commit: 96365bf551d4b0397e7689fc2bed33b86532f500
Parents: bdbb071
Author: Tyler Hobbs ty...@datastax.com
Authored: Wed Jan 7 14:29:07 2015 -0600
Committer: Tyler Hobbs ty...@datastax.com
Committed: Wed Jan 7 14:29:07 2015 -0600

--
 src/java/org/apache/cassandra/cql3/Cql.g  | 2 +-
 test/unit/org/apache/cassandra/cql3/UseStatementTest.java | 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/96365bf5/src/java/org/apache/cassandra/cql3/Cql.g
--
diff --git a/src/java/org/apache/cassandra/cql3/Cql.g 
b/src/java/org/apache/cassandra/cql3/Cql.g
index 1fcdb71..eda0529 100644
--- a/src/java/org/apache/cassandra/cql3/Cql.g
+++ b/src/java/org/apache/cassandra/cql3/Cql.g
@@ -854,7 +854,7 @@ ksName[KeyspaceElementName name]
 : t=IDENT  { $name.setKeyspace($t.text, false);}
 | t=QUOTED_NAME{ $name.setKeyspace($t.text, true);}
 | k=unreserved_keyword { $name.setKeyspace(k, false);}
-| QMARK {addRecognitionError("Bind variables cannot be used for 
keyspace");}
+| QMARK {addRecognitionError("Bind variables cannot be used for keyspace 
names");}
 ;
 
 cfName[CFName name]

http://git-wip-us.apache.org/repos/asf/cassandra/blob/96365bf5/test/unit/org/apache/cassandra/cql3/UseStatementTest.java
--
diff --git a/test/unit/org/apache/cassandra/cql3/UseStatementTest.java 
b/test/unit/org/apache/cassandra/cql3/UseStatementTest.java
index 77ac8a7..66b4b42 100644
--- a/test/unit/org/apache/cassandra/cql3/UseStatementTest.java
+++ b/test/unit/org/apache/cassandra/cql3/UseStatementTest.java
@@ -24,6 +24,6 @@ public class UseStatementTest extends CQLTester
 @Test
 public void testUseStatementWithBindVariable() throws Throwable
 {
-assertInvalidSyntaxMessage("Bind variables cannot be used for keyspace 
or table names", "USE ?");
+assertInvalidSyntaxMessage("Bind variables cannot be used for keyspace 
names", "USE ?");
 }
 }



[1/2] cassandra git commit: Ninja: fix minor unit test regression from CASSANDRA-8365

2015-01-07 Thread tylerhobbs
Repository: cassandra
Updated Branches:
  refs/heads/trunk 65ffc39b1 -> 2b7522f66


Ninja: fix minor unit test regression from CASSANDRA-8365


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/96365bf5
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/96365bf5
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/96365bf5

Branch: refs/heads/trunk
Commit: 96365bf551d4b0397e7689fc2bed33b86532f500
Parents: bdbb071
Author: Tyler Hobbs ty...@datastax.com
Authored: Wed Jan 7 14:29:07 2015 -0600
Committer: Tyler Hobbs ty...@datastax.com
Committed: Wed Jan 7 14:29:07 2015 -0600

--
 src/java/org/apache/cassandra/cql3/Cql.g  | 2 +-
 test/unit/org/apache/cassandra/cql3/UseStatementTest.java | 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/96365bf5/src/java/org/apache/cassandra/cql3/Cql.g
--
diff --git a/src/java/org/apache/cassandra/cql3/Cql.g 
b/src/java/org/apache/cassandra/cql3/Cql.g
index 1fcdb71..eda0529 100644
--- a/src/java/org/apache/cassandra/cql3/Cql.g
+++ b/src/java/org/apache/cassandra/cql3/Cql.g
@@ -854,7 +854,7 @@ ksName[KeyspaceElementName name]
 : t=IDENT  { $name.setKeyspace($t.text, false);}
 | t=QUOTED_NAME{ $name.setKeyspace($t.text, true);}
 | k=unreserved_keyword { $name.setKeyspace(k, false);}
-| QMARK {addRecognitionError("Bind variables cannot be used for 
keyspace");}
+| QMARK {addRecognitionError("Bind variables cannot be used for keyspace 
names");}
 ;
 
 cfName[CFName name]

http://git-wip-us.apache.org/repos/asf/cassandra/blob/96365bf5/test/unit/org/apache/cassandra/cql3/UseStatementTest.java
--
diff --git a/test/unit/org/apache/cassandra/cql3/UseStatementTest.java 
b/test/unit/org/apache/cassandra/cql3/UseStatementTest.java
index 77ac8a7..66b4b42 100644
--- a/test/unit/org/apache/cassandra/cql3/UseStatementTest.java
+++ b/test/unit/org/apache/cassandra/cql3/UseStatementTest.java
@@ -24,6 +24,6 @@ public class UseStatementTest extends CQLTester
 @Test
 public void testUseStatementWithBindVariable() throws Throwable
 {
-assertInvalidSyntaxMessage("Bind variables cannot be used for keyspace 
or table names", "USE ?");
+assertInvalidSyntaxMessage("Bind variables cannot be used for keyspace 
names", "USE ?");
 }
 }



[jira] [Commented] (CASSANDRA-8390) The process cannot access the file because it is being used by another process

2015-01-07 Thread Joshua McKenzie (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8390?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14268189#comment-14268189
 ] 

Joshua McKenzie commented on CASSANDRA-8390:


Side note - for a list of file system drivers running on your machine, 
Start->Run->msinfo32, Software Environment->System Drivers, sort by Type and 
then State (control+click it).  Anything listed as Started=Yes and 
State=Running is active.

 The process cannot access the file because it is being used by another process
 --

 Key: CASSANDRA-8390
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8390
 Project: Cassandra
  Issue Type: Bug
Reporter: Ilya Komolkin
Assignee: Joshua McKenzie
 Fix For: 2.1.3

 Attachments: NoHostAvailableLogs.zip


 {code}21:46:27.810 [NonPeriodicTasks:1] ERROR o.a.c.service.CassandraDaemon - 
 Exception in thread Thread[NonPeriodicTasks:1,5,main]
 org.apache.cassandra.io.FSWriteError: java.nio.file.FileSystemException: 
 E:\Upsource_12391\data\cassandra\data\kernel\filechangehistory_t-a277b560764611e48c8e4915424c75fe\kernel-filechangehistory_t-ka-33-Index.db:
  The process cannot access the file because it is being used by another 
 process.
  
 at 
 org.apache.cassandra.io.util.FileUtils.deleteWithConfirm(FileUtils.java:135) 
 ~[cassandra-all-2.1.1.jar:2.1.1]
 at 
 org.apache.cassandra.io.util.FileUtils.deleteWithConfirm(FileUtils.java:121) 
 ~[cassandra-all-2.1.1.jar:2.1.1]
 at 
 org.apache.cassandra.io.sstable.SSTable.delete(SSTable.java:113) 
 ~[cassandra-all-2.1.1.jar:2.1.1]
 at 
 org.apache.cassandra.io.sstable.SSTableDeletingTask.run(SSTableDeletingTask.java:94)
  ~[cassandra-all-2.1.1.jar:2.1.1]
 at 
 org.apache.cassandra.io.sstable.SSTableReader$6.run(SSTableReader.java:664) 
 ~[cassandra-all-2.1.1.jar:2.1.1]
 at 
 java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471) 
 ~[na:1.7.0_71]
 at java.util.concurrent.FutureTask.run(FutureTask.java:262) 
 ~[na:1.7.0_71]
 at 
 java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:178)
  ~[na:1.7.0_71]
 at 
 java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:292)
  ~[na:1.7.0_71]
 at 
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
  ~[na:1.7.0_71]
 at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
  [na:1.7.0_71]
 at java.lang.Thread.run(Thread.java:745) [na:1.7.0_71]
 Caused by: java.nio.file.FileSystemException: 
 E:\Upsource_12391\data\cassandra\data\kernel\filechangehistory_t-a277b560764611e48c8e4915424c75fe\kernel-filechangehistory_t-ka-33-Index.db:
  The process cannot access the file because it is being used by another 
 process.
  
 at 
 sun.nio.fs.WindowsException.translateToIOException(WindowsException.java:86) 
 ~[na:1.7.0_71]
 at 
 sun.nio.fs.WindowsException.rethrowAsIOException(WindowsException.java:97) 
 ~[na:1.7.0_71]
 at 
 sun.nio.fs.WindowsException.rethrowAsIOException(WindowsException.java:102) 
 ~[na:1.7.0_71]
 at 
 sun.nio.fs.WindowsFileSystemProvider.implDelete(WindowsFileSystemProvider.java:269)
  ~[na:1.7.0_71]
 at 
 sun.nio.fs.AbstractFileSystemProvider.delete(AbstractFileSystemProvider.java:103)
  ~[na:1.7.0_71]
 at java.nio.file.Files.delete(Files.java:1079) ~[na:1.7.0_71]
 at 
 org.apache.cassandra.io.util.FileUtils.deleteWithConfirm(FileUtils.java:131) 
 ~[cassandra-all-2.1.1.jar:2.1.1]
 ... 11 common frames omitted{code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (CASSANDRA-8457) nio MessagingService

2015-01-07 Thread Ariel Weisberg (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8457?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14268221#comment-14268221
 ] 

Ariel Weisberg edited comment on CASSANDRA-8457 at 1/7/15 9:11 PM:
---

In GCE, coalescing provided a 12% increase in throughput in this specific 
message-heavy, high-concurrency workload. The penalty is that at low concurrency 
there is an immediate loss of performance with any coalescing, and a large 
window has a greater impact at low concurrency, so there is tension there. The 
larger the window, the better the performance bump.

RightScale does not offer instances with enhanced networking. To find out 
whether coalescing provides real benefits in EC2/Xen, or milder GCE-like 
benefits, I will have to get my hands on some.

I wanted to account for the impact of coalescing at low concurrency. Low 
concurrency is not a recipe for great performance, but it is part of the out of 
the box experience and people do compare different databases at low concurrency.

Testing with 3 client threads each running on a dedicated client instance (3 
threads total). This is in GCE.

With TCP_NODELAY on and coalescing:
||Coalesce window (microseconds)||Throughput||
|0| 2191|
|6| 1910|
|12| 1873|
|25| 1867|
|50| 1779|
|100| 1667|
|150| 1566|
|200| 1491|

I also tried disabling coalescing when using HSHA and it didn't seem to make a 
difference. Surprising considering the impact of 25 microseconds of coalescing 
intra-cluster.

I also experimented with some other things: binding interrupts to cores 0 and 8 
and using taskset to keep C* off of those cores. I didn't see a big impact.

I did see a small positive impact using 3 clients and 8 servers, which means the 
measurements with 2 clients might be a little suspect. With 3 clients and 200 
microseconds of coalescing it peaked at 165k in GCE.

I also found out that banned CPUs in irqbalance is broken and has no effect and 
this has been the case for some time.
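The coalescing being measured can be sketched as a drain-with-deadline loop: take the first message, then keep draining for up to the window before flushing the batch in one write. This is an illustrative standalone version; the names and queue type are assumptions, not Cassandra's actual MessagingService code.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.TimeUnit;

public class CoalescingSketch {
    // Block for the first message, then coalesce anything that arrives
    // within `windowMicros` into the same batch.
    public static <T> List<T> coalesce(BlockingQueue<T> queue, long windowMicros) {
        List<T> batch = new ArrayList<>();
        try {
            batch.add(queue.take());          // wait for at least one message
            long deadline = System.nanoTime()
                    + TimeUnit.MICROSECONDS.toNanos(windowMicros);
            while (true) {
                long remaining = deadline - System.nanoTime();
                if (remaining <= 0)
                    break;                    // window expired, flush what we have
                T next = queue.poll(remaining, TimeUnit.NANOSECONDS);
                if (next == null)
                    break;                    // nothing more arrived in the window
                batch.add(next);
            }
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt(); // give up coalescing, flush partial batch
        }
        return batch;                         // caller writes the whole batch at once
    }

    public static void main(String[] args) {
        BlockingQueue<String> q = new LinkedBlockingQueue<>(List.of("m1", "m2", "m3"));
        List<String> batch = coalesce(q, 10_000);
        System.out.println("flushed " + batch.size() + " messages in one write");
    }
}
```

The tension above falls out directly: a bigger window amortizes more messages per syscall under load, but at low concurrency it mostly just adds the window as latency.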



was (Author: aweisberg):
I wanted to account for the impact of coalescing at low concurrency. Low 
concurrency is not a recipe for great performance, but it is part of the out of 
the box experience and people do compare different databases at low concurrency.

In GCE coalescing provided a 12% increase in throughput in this specific 
message heavy high concurrency workload. The penalty is that at low concurrency 
there is an immediate loss of performance with any coalescing and a large 
window has a greater impact at low concurrency so there is tension there. The 
larger the window the better the performance bump.

Testing with 3 client threads each running on a dedicated client instance (3 
threads total). This is in GCE.

With TCP no delay on and coalescing
||Coalesce window (microseconds)||Throughput||
|0| 2191|
|6| 1910|
|12| 1873|
|25| 1867|
|50| 1779|
|100| 1667|
|150| 1566|
|200| 1491|

I also tried disabling coalescing when using HSHA and it didn't seem to make a 
difference. Surprising considering the impact of 25 microseconds of coalescing 
intra-cluster.

I also experimented with some other things. Binding interrupts cores 0 and 8 
and task setting C* off of those cores. I didn't see a big impact.

I did see a small positive impact using 3 clients 8 servers which means the 
measurements with 2 clients might be a little suspect. With 3 clients and 200 
microseconds of coalescing it peaked at 165k in GCE.

I also found out that banned CPUs in irqbalance is broken and has no effect and 
this has been the case for some time.

Right scale does not offer instances with enhanced networking. To find out 
whether coalescing provides real benefits in EC2/Xen or milder GCE like 
benefits I will have to get my hands on some.


 nio MessagingService
 

 Key: CASSANDRA-8457
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8457
 Project: Cassandra
  Issue Type: New Feature
  Components: Core
Reporter: Jonathan Ellis
Assignee: Ariel Weisberg
  Labels: performance
 Fix For: 3.0


 Thread-per-peer (actually two each incoming and outbound) is a big 
 contributor to context switching, especially for larger clusters.  Let's look 
 at switching to nio, possibly via Netty.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-7287) Pig CqlStorage test fails with IAE

2015-01-07 Thread Oksana Danylyshyn (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7287?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14267720#comment-14267720
 ] 

Oksana Danylyshyn commented on CASSANDRA-7287:
--

[~slebresne],[~brandon.williams]
I am having issues with loading set types into Pig; after reverting the changes 
from this ticket, it works as expected.

Values of set types are not loading correctly from Cassandra (cql3 table, 
Native protocol v3) into Pig using CqlNativeStorage.  
When using Cassandra version 2.1.0 only empty values are loaded, and for newer 
versions (2.1.1 and 2.1.2) the following error is received: 
org.apache.cassandra.serializers.MarshalException: Unexpected extraneous bytes 
after set value
at 
org.apache.cassandra.serializers.SetSerializer.deserializeForNativeProtocol(SetSerializer.java:94)

Note that it works correctly with version 2.1.0-rc4 and CqlStorage.
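The "Unexpected extraneous bytes after set value" error is the deserializer's trailing-bytes check firing, which is what happens when collection lengths are decoded at the wrong protocol-version width. A standalone sketch of that mechanism (not the real SetSerializer; the v2-vs-v3 length widths are assumptions taken from the native protocol spec):

```java
import java.nio.ByteBuffer;
import java.util.ArrayList;
import java.util.List;

public class SetDecodeSketch {
    // Native protocol v3 encodes counts/sizes as 4-byte ints; v2 used 2-byte shorts.
    public static List<byte[]> deserialize(ByteBuffer in, int version) {
        int count = version >= 3 ? in.getInt() : in.getShort();
        List<byte[]> elements = new ArrayList<>();
        for (int i = 0; i < count; i++) {
            int size = version >= 3 ? in.getInt() : in.getShort();
            byte[] element = new byte[size];
            in.get(element);
            elements.add(element);
        }
        if (in.hasRemaining())  // mirrors the trailing-bytes check in SetSerializer
            throw new IllegalStateException("Unexpected extraneous bytes after set value");
        return elements;
    }

    public static void main(String[] args) {
        // A one-element set {0x2a} encoded with v3 (int) lengths decodes cleanly at v3;
        // decoding the same bytes at v2 widths leaves bytes over and throws.
        ByteBuffer buf = ByteBuffer.allocate(9).putInt(1).putInt(1).put((byte) 0x2a);
        buf.flip();
        System.out.println(deserialize(buf, 3).size());
    }
}
```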

 Pig CqlStorage test fails with IAE
 --

 Key: CASSANDRA-7287
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7287
 Project: Cassandra
  Issue Type: Bug
  Components: Hadoop, Tests
Reporter: Brandon Williams
Assignee: Sylvain Lebresne
 Fix For: 2.1 rc1

 Attachments: 7287.txt


 {noformat}
 [junit] java.lang.IllegalArgumentException
 [junit] at java.nio.Buffer.limit(Buffer.java:267)
 [junit] at 
 org.apache.cassandra.utils.ByteBufferUtil.readBytes(ByteBufferUtil.java:542)
 [junit] at 
 org.apache.cassandra.serializers.CollectionSerializer.readValue(CollectionSerializer.java:117)
 [junit] at 
 org.apache.cassandra.serializers.MapSerializer.deserializeForNativeProtocol(MapSerializer.java:97)
 [junit] at 
 org.apache.cassandra.serializers.MapSerializer.deserializeForNativeProtocol(MapSerializer.java:28)
 [junit] at 
 org.apache.cassandra.serializers.CollectionSerializer.deserialize(CollectionSerializer.java:48)
 [junit] at 
 org.apache.cassandra.db.marshal.AbstractType.compose(AbstractType.java:66)
 [junit] at 
 org.apache.cassandra.hadoop.pig.AbstractCassandraStorage.cassandraToObj(AbstractCassandraStorage.java:792)
 [junit] at 
 org.apache.cassandra.hadoop.pig.CqlStorage.cqlColumnToObj(CqlStorage.java:195)
 [junit] at 
 org.apache.cassandra.hadoop.pig.CqlStorage.getNext(CqlStorage.java:118)
 {noformat}
 I'm guessing this is caused by CqlStorage passing an empty BB to BBU, but I 
 don't know if it's pig that's broken or if it's a deeper issue.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-6983) DirectoriesTest fails when run as root

2015-01-07 Thread Alan Boudreault (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6983?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alan Boudreault updated CASSANDRA-6983:
---
Attachment: 6983-v2.patch

[~yukim] Sure. Here is another version of the patch.

 DirectoriesTest fails when run as root
 --

 Key: CASSANDRA-6983
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6983
 Project: Cassandra
  Issue Type: Bug
  Components: Tests
Reporter: Brandon Williams
Assignee: Alan Boudreault
Priority: Minor
 Fix For: 2.0.12

 Attachments: 6983-v2.patch


 When you run the DirectoriesTest as a normal user, it passes because it fails 
 to create the 'bad' directory:
 {noformat}
 [junit] - Standard Error -
 [junit] ERROR 16:16:18,111 Failed to create 
 /tmp/cassandra4119802552776680052unittest/ks/bad directory
 [junit]  WARN 16:16:18,112 Blacklisting 
 /tmp/cassandra4119802552776680052unittest/ks/bad for writes
 [junit] -  ---
 {noformat}
 But when you run the test as root, it succeeds in making the directory, 
 causing an assertion failure that it's unwritable:
 {noformat}
 [junit] Testcase: 
 testDiskFailurePolicy_best_effort(org.apache.cassandra.db.DirectoriesTest):   
 FAILED
 [junit] 
 [junit] junit.framework.AssertionFailedError: 
 [junit] at 
 org.apache.cassandra.db.DirectoriesTest.testDiskFailurePolicy_best_effort(DirectoriesTest.java:199)
 {noformat}
 It seems to me that we shouldn't be relying on failing to make the 
 directory.  If we're just going to test a nonexistent dir, why try to make 
 one at all?  And if that is supposed to succeed, then we have a problem with 
 either the test or blacklisting.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-7768) Error when creating multiple CQLSSTableWriters for more than one column family in the same keyspace

2015-01-07 Thread Benjamin Lerer (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7768?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14267753#comment-14267753
 ] 

Benjamin Lerer commented on CASSANDRA-7768:
---

The problem comes from a conflict between {{Keyspace.open}} and 
{{Config.setClientMode(true)}}. 
 When {{Config.setClientMode(true)}} is used, only a minimal configuration is 
loaded. That configuration is missing some data, which triggers the 
{{NullPointerException}}. 

The patch for CASSANDRA-8280 adds a {{Keyspace.open}} call within 
{{UpdateStatement}}. Because of that, on the latest 2.1, {{CQLSSTableWriter.addRow}} 
will throw a {{NullPointerException}} even if only one {{CQLSSTableWriter}} is 
used.

To avoid that problem you must not call {{Config.setClientMode(true)}} before 
using {{CQLSSTableWriter}}.
Unfortunately, you will then face CASSANDRA-8281.


 Error when creating multiple CQLSSTableWriters for more than one column 
 family in the same keyspace
 ---

 Key: CASSANDRA-7768
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7768
 Project: Cassandra
  Issue Type: Bug
  Components: Hadoop
Reporter: Paul Pak
Assignee: Benjamin Lerer
Priority: Minor
  Labels: cql3, hadoop
 Attachments: trunk-7768-v1.txt


 The reason why this occurs is if the keyspace has already been loaded (due to 
 another column family being previously loaded in the same keyspace), 
 CQLSSTableWriter builder only loads the column family via 
 Schema.load(CFMetaData). However, Schema.load(CFMetaData) only adds to the 
 Schema.cfIdMap without making the proper addition to the CFMetaData map 
 belonging to the KSMetaData map.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-7438) Serializing Row cache alternative (Fully off heap)

2015-01-07 Thread Robert Stupp (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7438?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14267558#comment-14267558
 ] 

Robert Stupp commented on CASSANDRA-7438:
-

BTW: Is there any single-node-cluster test that has been used to test the 'old' 
row cache, or a test that runs against a single-node cluster and verifies the 
data being written during a long run, i.e. several hours?

 Serializing Row cache alternative (Fully off heap)
 --

 Key: CASSANDRA-7438
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7438
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
 Environment: Linux
Reporter: Vijay
Assignee: Robert Stupp
  Labels: performance
 Fix For: 3.0

 Attachments: 0001-CASSANDRA-7438.patch, tests.zip


 Currently SerializingCache is partially off heap; keys are still stored in 
 the JVM heap as ByteBuffers. 
 * There are higher GC costs for a reasonably big cache.
 * Some users have used the row cache efficiently in production for better 
 results, but this requires careful tuning.
 * Memory overhead for the cache entries is relatively high.
 So the proposal for this ticket is to move the LRU cache logic completely off 
 heap and use JNI to interact with the cache. We might want to ensure that the new 
 implementation matches the existing APIs ({{ICache}}), and the implementation 
 needs to have safe memory access, low memory overhead and fewer memcpys 
 (as much as possible).
 We might also want to make this cache configurable.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (CASSANDRA-8573) Lack of compaction tooling for LeveledCompactionStrategy

2015-01-07 Thread Jens Rantil (JIRA)
Jens Rantil created CASSANDRA-8573:
--

 Summary: Lack of compaction tooling for LeveledCompactionStrategy
 Key: CASSANDRA-8573
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8573
 Project: Cassandra
  Issue Type: Bug
Reporter: Jens Rantil


This is a highly frustration-driven ticket. Apologies for the rough tone ;-)

*Background:* I happen to have a partition key with lots of tombstones. Sadly, 
I happen to run LeveledCompactionStrategy (LCS). Yes, it's probably my mistake 
to have put them there, but running into tombstone issues seems to be common for 
Cassandra, so I don't think this ticket can be discarded as simply user error. 
In fact, I believe this could happen to the best of us. And when it does, there 
should be a quick way of correcting it.

*Problem:* How does one handle this? Well, for STCS one could issue a 
compaction using `nodetool compact`, or one could use the 
forceUserDefinedCompaction MBean. Neither of these works for LCS (shall I also 
say DTCS?).

*Workaround:* The only options AFAIK are

 1. to lower gc_grace_seconds and wait it out until the Cassandra node(s) 
have garbage-collected the sstables. This can take days.
 2. to lower `tombstone_threshold` to something tiny, optionally also lowering 
`tombstone_compaction_interval` (for recent deletes). This has the implication 
that nodes might start garbage-collecting a ton of unrelated stuff.
 3. variations of deleting some or all of your sstables and running a full repair. 
Takes ages.

*Proposed solution:* Make forceUserDefinedCompaction support LCS, or create a 
similar endpoint that does something similar.

*Additional comments:* I read somewhere that someone proposed making LCS the 
default compaction strategy. Until this ticket is fixed, I don't see that as 
an option.

Let me know what you think (or close if not relevant).
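Workaround 1 hinges on the gc_grace_seconds rule: a tombstone only becomes purgeable by compaction once its local deletion time plus gc_grace_seconds has passed. A tiny illustrative sketch of that condition (not Cassandra's actual compaction code; names are made up):

```java
public class GcGraceSketch {
    // A tombstone may be dropped by compaction only after its deletion time
    // plus the table's gc_grace_seconds has elapsed.
    public static boolean purgeable(long deletionTimeSec, int gcGraceSeconds, long nowSec) {
        return deletionTimeSec + gcGraceSeconds < nowSec;
    }

    public static void main(String[] args) {
        long deletedAt = 1_000_000L;
        int defaultGrace = 864_000; // 10 days, the default gc_grace_seconds
        // One hour after deletion with the default grace: still not purgeable.
        System.out.println(purgeable(deletedAt, defaultGrace, deletedAt + 3_600));
        // Lowering the grace to 1 hour makes the same tombstone purgeable after 2 hours.
        System.out.println(purgeable(deletedAt, 3_600, deletedAt + 7_200));
    }
}
```

This is why the workaround "can take days" at the default setting, and why lowering gc_grace_seconds has to be weighed against the risk of deleted data resurrecting on unrepaired replicas.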



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-8573) Lack of compaction tooling for LeveledCompactionStrategy

2015-01-07 Thread Jens Rantil (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8573?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jens Rantil updated CASSANDRA-8573:
---
Description: 
This is a highly frustration-driven ticket. Apologies for the rough tone ;-)

*Background:* I happen to have a partition key with lots of tombstones. Sadly, 
I happen to run LeveledCompactionStrategy (LCS). Yes, it's probably my mistake 
to have put them there, but running into tombstone issues seems to be common for 
Cassandra, so I don't think this ticket can be discarded as simply user error. 
In fact, I believe this could happen to the best of us. And when it does, there 
should be a quick way of correcting it.

*Problem:* How does one handle this? Well, for STCS one could issue a 
compaction using `nodetool compact`, or one could use the 
forceUserDefinedCompaction MBean. Neither of these works for LCS (shall I also 
say DTCS?).

*Workaround:* The only options AFAIK are

 1. to lower gc_grace_seconds and wait it out until the Cassandra node(s) 
have garbage-collected the sstables. This can take days.
 2. to lower `tombstone_threshold` to something tiny, optionally also lowering 
`tombstone_compaction_interval` (for recent deletes). This has the implication 
that nodes might start garbage-collecting a ton of unrelated stuff.
 3. variations of deleting some or all of your sstables and running a full repair. 
Takes ages.

*Proposed solution:* Either
 - make forceUserDefinedCompaction support LCS, or create a similar endpoint 
that does something similar, or
 - make something similar to `nodetool compact` work with LCS.

*Additional comments:* I read somewhere that someone proposed making LCS the 
default compaction strategy. Until this ticket is fixed, I don't see that as 
an option.

Let me know what you think (or close if not relevant).

  was:
This is a highly frustration-driven ticket. Apologize for roughness in tone ;-)

*Background:* I happen to have a partition key with lots of tombstones. Sadly, 
I happen to run LeveledCompactionStrategy (LCS). Yes, it's probably my mistake 
to have put them there but running into tombstone issues seem to be common for 
Cassandra, so I don't think this ticket can be discarded as simply user error. 
In fact, I believe this could happen to the best of us. And when it does, there 
should be a quick way of correcting this.

*Problem:* How does one handle this? Well, for DTCS one could issue a 
compaction using `nodetool compact`, or one could use the 
forceUserDefinedCompaction MBean. Neither of these work for LCS (shall I also 
say DTCS?).

*Workaround:* The only options AFAIK are

 1. to lower gc_grace_seconds and wait it out until the Cassandra node(s) 
has garbage collected the sstables. This can take days.
 2. possibly lower `tombstone_threshold` to something tiny, optionally lowering 
`tombstone_compaction_interval ` (for recent deletes). This has the implication 
that nodes might start garbage collecting a ton of unrelated stuff.
 3. variations of delete some or all your sstables and run a full repair. 
Takes ages.

*Proposed solution:* Make forceUserDefinedCompaction support LCS, or create a 
similar endpoint that does something similar.

*Additional comments:* I read somewhere where someone proposed making LCS 
default compaction strategy. Before this ticket is fixed, I don't see that as 
an option.

Let me know what you think (or close if not relevant).


 Lack of compaction tooling for LeveledCompactionStrategy
 

 Key: CASSANDRA-8573
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8573
 Project: Cassandra
  Issue Type: Bug
Reporter: Jens Rantil

 This is a highly frustration-driven ticket. Apologize for roughness in tone 
 ;-)
 *Background:* I happen to have a partition key with lots of tombstones. 
 Sadly, I happen to run LeveledCompactionStrategy (LCS). Yes, it's probably my 
 mistake to have put them there but running into tombstone issues seem to be 
 common for Cassandra, so I don't think this ticket can be discarded as simply 
 user error. In fact, I believe this could happen to the best of us. And when 
 it does, there should be a quick way of correcting this.
 *Problem:* How does one handle this? Well, for DTCS one could issue a 
 compaction using `nodetool compact`, or one could use the 
 forceUserDefinedCompaction MBean. Neither of these work for LCS (shall I also 
 say DTCS?).
 *Workaround:* The only options AFAIK are
  1. to lower gc_grace_seconds and wait it out until the Cassandra node(s) 
 has garbage collected the sstables. This can take days.
  2. possibly lower `tombstone_threshold` to something tiny, optionally 
 lowering `tombstone_compaction_interval ` (for recent deletes). This has the 
 implication that nodes might start garbage collecting a ton of unrelated 
 stuff.
  3. variations of delete some or 

[jira] [Updated] (CASSANDRA-8546) RangeTombstoneList becoming bottleneck on tombstone heavy tasks

2015-01-07 Thread Michael Shuler (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8546?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Shuler updated CASSANDRA-8546:
--
Assignee: Joshua McKenzie

 RangeTombstoneList becoming bottleneck on tombstone heavy tasks
 ---

 Key: CASSANDRA-8546
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8546
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
 Environment: 2.0.11 / 2.1
Reporter: Dominic Letz
Assignee: Joshua McKenzie
 Fix For: 2.1.3

 Attachments: cassandra-2.0.11-8546.txt, cassandra-2.1-8546.txt, 
 rangetombstonelist_compaction.png, rangetombstonelist_mutation.png, 
 rangetombstonelist_read.png, tombstone_test.tgz


 I would like to propose changing the data structure used in 
 RangeTombstoneList to store and insert tombstone ranges to something with at 
 least O(log N) insert in the middle and near O(1) insert at the start AND end. 
 Here is why:
 With tombstone-heavy workloads, the current implementation of 
 RangeTombstoneList becomes a bottleneck with slice queries.
 Scanning the number of tombstones up to the default maximum (100k) can take 
 up to 3 minutes because of how addInternal() scales on insertion of middle and 
 start elements.
 The attached test shows that with 50k deletes from both sides of a range.
 INSERT 1...11
 flush()
 DELETE 1...5
 DELETE 11...6
 While one direction performs ok (~400ms on my notebook):
 {code}
 SELECT * FROM timeseries WHERE name = 'a' ORDER BY timestamp DESC LIMIT 1
 {code}
 The other direction underperforms (~7 seconds on my notebook):
 {code}
 SELECT * FROM timeseries WHERE name = 'a' ORDER BY timestamp ASC LIMIT 1
 {code}
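The complexity argument above can be illustrated with stock JDK collections. This is not Cassandra's RangeTombstoneList, just a sketch of why an array-backed sorted structure pays O(N) per front insert (shifting every element) while a tree-backed one pays O(log N) regardless of position:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.TreeMap;

public class TombstoneInsertSketch {
    // Array-backed: keep range starts sorted; inserting at the front shifts
    // everything, so N front inserts cost O(N^2) total.
    public static List<Integer> frontInserts(int n) {
        List<Integer> starts = new ArrayList<>();
        for (int i = n; i >= 1; i--)
            starts.add(0, i);   // worst case: each insert shifts the whole array
        return starts;
    }

    // Tree-backed alternative: O(log N) per insert wherever the range lands.
    public static TreeMap<Integer, Integer> treeInserts(int n) {
        TreeMap<Integer, Integer> ranges = new TreeMap<>(); // start -> end
        for (int i = n; i >= 1; i--)
            ranges.put(i, i);   // balanced-tree insert, position-independent
        return ranges;
    }

    public static void main(String[] args) {
        // Both end up sorted; only the insertion cost differs.
        System.out.println(frontInserts(5));
        System.out.println(treeInserts(5).keySet());
    }
}
```

This mirrors the asymmetry in the two SELECTs: inserts that land at one end of the array-backed list are cheap, while inserts at the other end degrade toward quadratic behavior.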



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-7970) JSON support for CQL

2015-01-07 Thread Robert Stupp (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7970?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14268341#comment-14268341
 ] 

Robert Stupp commented on CASSANDRA-7970:
-

[~thobbs] I was a bit confused by the discussion because I thought there 
were plans to allow string-encoded numbers everywhere in CQL. Guess now I got 
it :)

TL;DR - I'm okay with accepting numbers in strings in JSON.
I think that we should only use string representation where the JSON spec gives 
us no other option (that is, on 'object names' / 'map keys'). OK, it's a 
different thing whether we _accept_ numbers in strings and whether we _produce_ 
them. IMO we should produce numbers (and not numbers as strings) for JSON object 
values. Unfortunately I have no better argument than "follow the spec". But OTOH 
it does not matter whether number parsing fails in antlr or in a 
Float.parseFloat, and the JSON spec does not force us not to encode numbers 
in strings.
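The accept-strings/produce-numbers position can be sketched as follows; this is a hypothetical helper to illustrate the behavior being argued for, not Cassandra's actual from_json/to_json code:

```java
public class JsonNumberSketch {
    // Accepting: when mapping JSON to a typed CQL column such as `int`,
    // take either a native JSON number or a string-encoded number.
    public static int toInt(Object jsonValue) {
        if (jsonValue instanceof Number)
            return ((Number) jsonValue).intValue();      // native JSON number
        if (jsonValue instanceof String)
            return Integer.parseInt((String) jsonValue); // "78747" also accepted
        throw new IllegalArgumentException("expected int, got " + jsonValue);
    }

    // Producing: always emit a bare JSON number, never a quoted string.
    public static String produce(int value) {
        return Integer.toString(value);
    }

    public static void main(String[] args) {
        System.out.println(toInt(78747));    // accepted as a number
        System.out.println(toInt("78747"));  // accepted as a string
        System.out.println(produce(78747));  // produced unquoted
    }
}
```

Either way the parse failure surfaces at the same point; the only real constraint from the spec is that object keys must be strings.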

 JSON support for CQL
 

 Key: CASSANDRA-7970
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7970
 Project: Cassandra
  Issue Type: New Feature
  Components: API
Reporter: Jonathan Ellis
Assignee: Tyler Hobbs
 Fix For: 3.0


 JSON is popular enough that not supporting it is becoming a competitive 
 weakness.  We can add JSON support in a way that is compatible with our 
 performance goals by *mapping* JSON to an existing schema: one JSON document 
 maps to one CQL row.
 Thus, it is NOT a goal to support schemaless documents, which is a misfeature 
 [1] [2] [3].  Rather, it is to allow a convenient way to easily turn a JSON 
 document from a service or a user into a CQL row, with all the validation 
 that entails.
 Since we are not looking to support schemaless documents, we will not be 
 adding a JSON data type (CASSANDRA-6833) a la postgresql.  Rather, we will 
 map the JSON to UDTs, collections, and primitive CQL types.
 Here's how this might look:
 {code}
 CREATE TYPE address (
   street text,
   city text,
   zip_code int,
   phones set<text>
 );
 CREATE TABLE users (
   id uuid PRIMARY KEY,
   name text,
   addresses map<text, address>
 );
 INSERT INTO users JSON
 {"id": 4b856557-7153,
  "name": "jbellis",
  "address": {"home": {"street": "123 Cassandra Dr",
                       "city": "Austin",
                       "zip_code": 78747,
                       "phones": [2101234567]}}};
 SELECT JSON id, address FROM users;
 {code}
 (We would also want to_json and from_json functions to allow mapping a single 
 column's worth of data.  These would not require extra syntax.)
 [1] http://rustyrazorblade.com/2014/07/the-myth-of-schema-less/
 [2] https://blog.compose.io/schema-less-is-usually-a-lie/
 [3] http://dl.acm.org/citation.cfm?id=2481247



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

