[jira] [Commented] (CASSANDRA-8313) cassandra-stress: No way to pass in data center hint for DCAwareRoundRobinPolicy

2014-11-14 Thread Bob Nilsen (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8313?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14213321#comment-14213321
 ] 

Bob Nilsen commented on CASSANDRA-8313:
---

You guys were right, it works just fine.

From the way the help text is written, I thought that it was supposed to be 
used like this:

-node host1,host2

instead of:

-node whitelist host1,host2

[whitelist] looked like an optional placeholder to me, if you get my meaning.

I wonder how many other people get confused by the non-conventional command 
line options for cassandra-stress.

Anyway, thanks a lot, sorry for the confusion, you can close this as "not a 
bug".

> cassandra-stress: No way to pass in data center hint for 
> DCAwareRoundRobinPolicy
> 
>
> Key: CASSANDRA-8313
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8313
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Tools
>Reporter: Bob Nilsen
>Assignee: T Jake Luciani
>
> When using cassandra-stress in a multiple datacenter configuration, we need 
> to be able to behave like the applications do and send traffic to nodes 
> co-located in the same data center.
> I can't for the life of me figure out how to pass in such a hint into the new 
> cassandra-stress.
> And passing in a local node into "-node" doesn't help.  Apparently, 
> cassandra-stress will *guess* the data center based on the order of the list 
> that it receives from the cluster.
> In my case, it seems to always pick 'DC2', no matter what I do.
> INFO  22:17:06 Using data-center name 'DC2' for DCAwareRoundRobinPolicy (if 
> this is incorrect, please provide the correct datacenter name with 
> DCAwareRoundRobinPolicy constructor)
> Could someone please add the ability to configure the DCAwareRoundRobinPolicy 
> with a data center hint?
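
Outside the stress tool, this is the kind of hint an application gives the Java Driver; a minimal sketch, assuming the 2.x driver API in which {{DCAwareRoundRobinPolicy}} takes the local data center name as a constructor argument (the contact point and "DC1" are placeholders, not values from this ticket):
{code}
import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.Session;
import com.datastax.driver.core.policies.DCAwareRoundRobinPolicy;

public class LocalDcClient
{
    public static void main(String[] args)
    {
        // Pin the load balancing policy to the local data center so only
        // replicas in DC1 are treated as local coordinators.
        Cluster cluster = Cluster.builder()
                                 .addContactPoint("host1")
                                 .withLoadBalancingPolicy(new DCAwareRoundRobinPolicy("DC1"))
                                 .build();
        Session session = cluster.connect();
        System.out.println("Connected to " + cluster.getClusterName());
        cluster.close();
    }
}
{code}
The ticket asks for an equivalent knob in cassandra-stress, so the tool can direct traffic to co-located nodes instead of guessing the data center from the node list.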



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-8213) Grant Permission fails if permission had been revoked previously

2014-11-14 Thread Philip Thompson (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8213?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Philip Thompson updated CASSANDRA-8213:
---
Labels: qa-resolved  (was: )

> Grant Permission fails if permission had been revoked previously
> 
>
> Key: CASSANDRA-8213
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8213
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Philip Thompson
>Assignee: Aleksey Yeschenko
>  Labels: qa-resolved
> Fix For: 2.1.2
>
>
> The dtest auth_test.py:TestAuth.alter_cf_auth_test is failing. 
> {code}
> cassandra.execute("GRANT ALTER ON ks.cf TO cathy")
> cathy.execute("ALTER TABLE ks.cf ADD val int")
> cassandra.execute("REVOKE ALTER ON ks.cf FROM cathy")
> self.assertUnauthorized("User cathy has no ALTER permission on <table ks.cf> or any of its parents",
> cathy, "CREATE INDEX ON ks.cf(val)")
> cassandra.execute("GRANT ALTER ON ks.cf TO cathy")
> cathy.execute("CREATE INDEX ON ks.cf(val)")
> {code}
> In this section of code, the user cathy is granted "ALTER" permissions on 
> 'ks.cf', then they are revoked, then granted again. Monitoring 
> system_auth.permissions during this section of code show that the permission 
> is added with the initial grant, and revoked properly, but the table remains 
> empty after the second grant.
> When the cathy user attempts to create an index, the following exception is 
> thrown:
> {code}
> Unauthorized: code=2100 [Unauthorized] message="User cathy has no ALTER 
> permission on <table ks.cf> or any of its parents"
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-8216) Select Count with Limit returns wrong value

2014-11-14 Thread Philip Thompson (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8216?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Philip Thompson updated CASSANDRA-8216:
---
Labels: qa-resolved  (was: )

> Select Count with Limit returns wrong value
> ---
>
> Key: CASSANDRA-8216
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8216
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Philip Thompson
>Assignee: Benjamin Lerer
>  Labels: qa-resolved
> Fix For: 3.0
>
>
> The dtest cql_tests.py:TestCQL.select_count_paging_test is failing on trunk 
> HEAD but not 2.1-HEAD.
> The query {code} select count(*) from test where field3 = false limit 1; 
> {code} is returning 2, where obviously it should only return 1 because of the 
> limit. This may end up having the same root cause as #8214; I will be 
> bisecting them both soon.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-7563) UserType, TupleType and collections in UDFs

2014-11-14 Thread Tyler Hobbs (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7563?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14213105#comment-14213105
 ] 

Tyler Hobbs commented on CASSANDRA-7563:


bq. Will try to improve readability of all tests in UFTest - some support in 
CQLTester would be nice - especially after CASSANDRA-7813.

Yes, feel free to add some utility functions, options, etc to CQLTester if 
those would make the tests clearer.

bq. isn't that problem a bit more complex? UDFs can get parameters from CQL 
statements as 'constants', from CQL bound variables (I think these depend on 
the protocol version) and from tables (guess these are always 
ProtocolVersion.NEWEST_SUPPORTED).

Literals/constants will end up as Lists.Value, Sets.Value, etc.  In the Value 
classes, {{getWithProtocolVersion()}} will use the current connection's 
protocol version to serialize the list as a ByteBuffer.  Something similar 
happens for bound variables.

When we fetch them from tables, they get serialized based on the current 
connection's protocol version _before_ functions are applied.  So the function 
needs to deserialize them using the same protocol version.
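
To illustrate why the version matters (a self-contained sketch, not Cassandra's internal API; the class and method names below are hypothetical): native protocol v3 widened collection element counts and element lengths from 16-bit to 32-bit, so a collection value encoded under one version cannot safely be decoded under another.
{code}
import java.nio.ByteBuffer;
import java.util.ArrayList;
import java.util.List;

final class CollectionArg
{
    final int protocolVersion;  // version of the client connection, not NEWEST_SUPPORTED
    final ByteBuffer bytes;     // a list<int> argument, still in wire format

    CollectionArg(int protocolVersion, ByteBuffer bytes)
    {
        this.protocolVersion = protocolVersion;
        this.bytes = bytes;
    }

    // Decode with the same version that was used to encode:
    // v3+ uses 4-byte counts/lengths, v1/v2 used 2-byte ones.
    List<Integer> decodeIntList()
    {
        ByteBuffer in = bytes.duplicate();
        int count = protocolVersion >= 3 ? in.getInt() : in.getShort();
        List<Integer> result = new ArrayList<>(count);
        for (int i = 0; i < count; i++)
        {
            int length = protocolVersion >= 3 ? in.getInt() : in.getShort();
            byte[] element = new byte[length];
            in.get(element);
            result.add(ByteBuffer.wrap(element).getInt());
        }
        return result;
    }
}
{code}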

> UserType, TupleType and collections in UDFs
> ---
>
> Key: CASSANDRA-7563
> URL: https://issues.apache.org/jira/browse/CASSANDRA-7563
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Robert Stupp
>Assignee: Robert Stupp
> Fix For: 3.0
>
> Attachments: 7563-7740.txt, 7563.txt, 7563v2.txt, 7563v3.txt
>
>
> * is Java Driver as a dependency required ?
> * is it possible to extract parts of the Java Driver for UDT/TT/coll support ?
> * CQL {{DROP TYPE}} must check UDFs
> * must check keyspace access permissions (if those exist)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (CASSANDRA-7563) UserType, TupleType and collections in UDFs

2014-11-14 Thread Robert Stupp (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7563?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14213070#comment-14213070
 ] 

Robert Stupp edited comment on CASSANDRA-7563 at 11/14/14 11:33 PM:


I've created the ticket (and updated JAVA-502 for that).
Will try to improve readability of all tests in {{UFTest}} - some support in 
CQLTester would be nice - especially after CASSANDRA-7813.
Another unit test is no problem :)

Regarding that protocol version issue. Maybe I'm a bit too tired, but isn't 
that problem a bit more complex? UDFs can get parameters from CQL statements as 
'constants', from CQL bound variables (I think these depend on the protocol 
version) and from tables (guess these are always 
{{ProtocolVersion.NEWEST_SUPPORTED}}).

Edit: Good catch with that protocol version thing :)


was (Author: snazy):
I've created the ticket (and updated JAVA-502 for that).
Will try to improve readability of all tests in {{UFTest}} - some support in 
CQLTester would be nice - especially after CASSANDRA-7813.
Another unit test is no problem :)

Regarding that protocol version issue. Maybe I'm a bit too tired, but isn't 
that problem a bit more complex? UDFs can get parameters from CQL statements as 
'constants', from CQL bound variables (I think these depend on the protocol 
version) and from tables (guess these are always 
{{ProtocolVersion.NEWEST_SUPPORTED}}).

> UserType, TupleType and collections in UDFs
> ---
>
> Key: CASSANDRA-7563
> URL: https://issues.apache.org/jira/browse/CASSANDRA-7563
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Robert Stupp
>Assignee: Robert Stupp
> Fix For: 3.0
>
> Attachments: 7563-7740.txt, 7563.txt, 7563v2.txt, 7563v3.txt
>
>
> * is Java Driver as a dependency required ?
> * is it possible to extract parts of the Java Driver for UDT/TT/coll support ?
> * CQL {{DROP TYPE}} must check UDFs
> * must check keyspace access permissions (if those exist)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-7563) UserType, TupleType and collections in UDFs

2014-11-14 Thread Robert Stupp (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7563?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14213070#comment-14213070
 ] 

Robert Stupp commented on CASSANDRA-7563:
-

I've created the ticket (and updated JAVA-502 for that).
Will try to improve readability of all tests in {{UFTest}} - some support in 
CQLTester would be nice - especially after CASSANDRA-7813.
Another unit test is no problem :)

Regarding that protocol version issue. Maybe I'm a bit too tired, but isn't 
that problem a bit more complex? UDFs can get parameters from CQL statements as 
'constants', from CQL bound variables (I think these depend on the protocol 
version) and from tables (guess these are always 
{{ProtocolVersion.NEWEST_SUPPORTED}}).

> UserType, TupleType and collections in UDFs
> ---
>
> Key: CASSANDRA-7563
> URL: https://issues.apache.org/jira/browse/CASSANDRA-7563
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Robert Stupp
>Assignee: Robert Stupp
> Fix For: 3.0
>
> Attachments: 7563-7740.txt, 7563.txt, 7563v2.txt, 7563v3.txt
>
>
> * is Java Driver as a dependency required ?
> * is it possible to extract parts of the Java Driver for UDT/TT/coll support ?
> * CQL {{DROP TYPE}} must check UDFs
> * must check keyspace access permissions (if those exist)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8323) Adapt UDF code after JAVA-502

2014-11-14 Thread Robert Stupp (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8323?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14213056#comment-14213056
 ] 

Robert Stupp commented on CASSANDRA-8323:
-

Class {{org.apache.cassandra.cql3.functions.UDFunction}} needs to be changed.

> Adapt UDF code after JAVA-502
> -
>
> Key: CASSANDRA-8323
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8323
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Robert Stupp
>Assignee: Robert Stupp
> Fix For: 3.0
>
>
> In CASSANDRA-7563 support for user-types, tuple-types and collections is 
> added to C* using the Java Driver.
> The code in C* requires access to some functionality which is currently 
> performed using reflection/invoke-dynamic.
> This ticket is about providing better/direct access to that functionality.
> I'll provide patches for Java Driver + C*.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (CASSANDRA-8323) Adapt UDF code after JAVA-502

2014-11-14 Thread Robert Stupp (JIRA)
Robert Stupp created CASSANDRA-8323:
---

 Summary: Adapt UDF code after JAVA-502
 Key: CASSANDRA-8323
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8323
 Project: Cassandra
  Issue Type: Improvement
Reporter: Robert Stupp
Assignee: Robert Stupp
 Fix For: 3.0


In CASSANDRA-7563 support for user-types, tuple-types and collections is added 
to C* using the Java Driver.

The code in C* requires access to some functionality which is currently 
performed using reflection/invoke-dynamic.

This ticket is about providing better/direct access to that functionality.
I'll provide patches for Java Driver + C*.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-7563) UserType, TupleType and collections in UDFs

2014-11-14 Thread Tyler Hobbs (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7563?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14213046#comment-14213046
 ] 

Tyler Hobbs commented on CASSANDRA-7563:


Good work so far!  Here's my preliminary feedback:
* Using {{ProtocolVersion.NEWEST_SUPPORTED}} for serializing and deserializing 
is not correct.  (I noticed a similar issue when working on json functions.)  
The protocol version used by the current connection needs to be passed into 
function executions for proper serialization and deserialization.
* Can you add test cases for calling a UDF on a collection type with a null 
value (empty cell)?  The code looks like it should handle this, but it's good 
to test.
* The new unit tests are really hard to parse.  Do you have any ideas for 
making them a little more readable?
*  Do you already have a C* ticket open for replacing the reflection calls 
after JAVA-502 is done?  If not, can you go ahead and make that?

> UserType, TupleType and collections in UDFs
> ---
>
> Key: CASSANDRA-7563
> URL: https://issues.apache.org/jira/browse/CASSANDRA-7563
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Robert Stupp
>Assignee: Robert Stupp
> Fix For: 3.0
>
> Attachments: 7563-7740.txt, 7563.txt, 7563v2.txt, 7563v3.txt
>
>
> * is Java Driver as a dependency required ?
> * is it possible to extract parts of the Java Driver for UDT/TT/coll support ?
> * CQL {{DROP TYPE}} must check UDFs
> * must check keyspace access permissions (if those exist)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-7386) JBOD threshold to prevent unbalanced disk utilization

2014-11-14 Thread Robert Stupp (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7386?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14213030#comment-14213030
 ] 

Robert Stupp commented on CASSANDRA-7386:
-

IMO that simple "choose disk by free disk space" algorithm is manageable for 
2.0, 2.1 and trunk.
Going to implement it over the weekend.

> JBOD threshold to prevent unbalanced disk utilization
> -
>
> Key: CASSANDRA-7386
> URL: https://issues.apache.org/jira/browse/CASSANDRA-7386
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Core
>Reporter: Chris Lohfink
>Assignee: Robert Stupp
>Priority: Minor
> Fix For: 2.1.3
>
> Attachments: 7386-v1.patch, 7386v2.diff, Mappe1.ods, 
> mean-writevalue-7disks.png, patch_2_1_branch_proto.diff, 
> sstable-count-second-run.png
>
>
> Currently the disks are picked first by number of current tasks, then by 
> free space.  This helps with performance but can lead to large differences 
> in utilization in some (unlikely but possible) scenarios.  I've seen 55% to 
> 10% and heard reports of 90% to 10% on IRC.  This happens with both LCS and 
> STCS (although my suspicion is that STCS makes it worse since it is harder 
> to keep balanced).
> I propose the algorithm change a little to have some maximum range of 
> utilization where it will pick by free space over load (acknowledging it can 
> be slower).  So if disk A is 30% full and disk B is 5% full it will never 
> pick A over B until it balances out.
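
As an illustration of the proposed heuristic (a sketch only, not the attached patches; the class, field and threshold names are made up): pick the least-loaded disk as today, but fall back to the emptiest disk whenever utilization drifts apart by more than a fixed spread.
{code}
import java.util.Comparator;
import java.util.List;

final class DataDirectoryPicker
{
    // Hypothetical threshold: if the fullest and emptiest disks differ by more
    // than this fraction, balance on free space instead of pending task count.
    static final double MAX_UTILIZATION_SPREAD = 0.10;

    static final class Disk
    {
        final String path;
        final int pendingTasks;   // current flush/compaction tasks on this disk
        final long usedBytes;
        final long totalBytes;

        Disk(String path, int pendingTasks, long usedBytes, long totalBytes)
        {
            this.path = path;
            this.pendingTasks = pendingTasks;
            this.usedBytes = usedBytes;
            this.totalBytes = totalBytes;
        }

        double utilization()
        {
            return (double) usedBytes / totalBytes;
        }
    }

    static Disk pick(List<Disk> disks)
    {
        double min = disks.stream().mapToDouble(Disk::utilization).min().orElse(0);
        double max = disks.stream().mapToDouble(Disk::utilization).max().orElse(0);

        // Default behaviour: fewest pending tasks first, free space as tie-breaker.
        Comparator<Disk> byLoad = Comparator.comparingInt((Disk d) -> d.pendingTasks)
                                            .thenComparingDouble(Disk::utilization);
        // Balancing behaviour: emptiest disk first, accepted to be slower.
        Comparator<Disk> byFreeSpace = Comparator.comparingDouble(Disk::utilization);

        boolean unbalanced = max - min > MAX_UTILIZATION_SPREAD;
        return disks.stream().min(unbalanced ? byFreeSpace : byLoad).orElse(null);
    }
}
{code}
With a 10% spread, the 30% vs 5% example above would always route new writes to disk B until the two converge.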



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (CASSANDRA-8289) Allow users to debug/test UDF

2014-11-14 Thread Robert Stupp (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8289?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14212994#comment-14212994
 ] 

Robert Stupp edited comment on CASSANDRA-8289 at 11/14/14 10:44 PM:


Also added some functionality to ensure that a UDF
* is thread-safe (execute one UDF concurrently)
* is deterministic (execute one UDF several times)

As a side effect people can measure timings of UDF invocations (using metrics 
with {{System.nanoTime}} - not a micro benchmark).

Has anyone time and is in the mood to take a look at it? Would really 
appreciate any feedback about it. Code is on 
[github|https://github.com/snazy/cassandra/tree/8289-udftest] at 
{{tools/udftest}}.

In the end people should be able to build a JUnit/TestNG test that does this 
test automatically. (JUnit/TestNG is not required by the code - but can be used 
for assertions.) A unit test for this one is {{UDFTestTest}} with some code 
examples in it.

Technically the implementation takes C* as a dependency, uses the CQL parser 
code to produce a {{CreateFunctionStatement}} instance and lets it return an 
instance of {{UDFunction}}.


was (Author: snazy):
Also added some functionality to ensure that a UDF
* is thread-safe (execute one UDF concurrently)
* is deterministic (execute one UDF several times)
As a side effect people can measure timings of UDF invocations (using metrics 
with {{System.nanoTime}} - not a micro benchmark).

Has anyone time and is in the mood to take a look at it? Would really 
appreciate any feedback about it. Code is on 
[github|https://github.com/snazy/cassandra/tree/8289-udftest] at 
{{tools/udftest}}.

In the end people should be able to build a JUnit/TestNG test that does this 
test automatically. (JUnit/TestNG is not required by the code - but can be used 
for assertions.) A unit test for this one is {{UDFTestTest}} with some code 
examples in it.

Technically the implementation takes C* as a dependency, uses the CQL parser 
code to produce a {{CreateFunctionStatement}} instance and lets it return an 
instance of {{UDFunction}}.

> Allow users to debug/test UDF
> -
>
> Key: CASSANDRA-8289
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8289
> Project: Cassandra
>  Issue Type: New Feature
>Reporter: Robert Stupp
>Assignee: Robert Stupp
>  Labels: udf
> Fix For: 3.0
>
>
> Currently it's not possible to execute unit tests against UDFs nor is it 
> possible to debug them.
> Idea is to provide some kind of minimalistic "framework" to execute at least 
> scalar UDFs from a unit test.
> Basically that UDF-executor would take the information that 'CREATE FUNCTION' 
> takes, compiles that UDF and allows the user to call it using plain java 
> calls.
> In case of the Java language it could also generate Java source files to 
> enable users to set breakpoints.
> It could also check for timeouts to identify e.g. "endless loop" scenarios or 
> do some byte code analysis to check for "evil" package usage.
> For example:
> {code}
> import org.apache.cassandra.udfexec.*;
> public class MyUnitTest {
>   @Test
>   public void testIt() {
> UDFExec sinExec = UDFExec.compile("sin", "java",
>   Double.class, // return type
>   Double.class  // argument type(s)
> );
> sinExec.call(2.0d);
> sinExec.call(null);
>   }
> }
> {code}
> Note: this one is not intended to do some "magic" to start a debugger on a C* 
> node and debug it there.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8289) Allow users to debug/test UDF

2014-11-14 Thread Robert Stupp (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8289?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14212994#comment-14212994
 ] 

Robert Stupp commented on CASSANDRA-8289:
-

Also added some functionality to ensure that a UDF
* is thread-safe (execute one UDF concurrently)
* is deterministic (execute one UDF several times)
As a side effect people can measure timings of UDF invocations (using metrics 
with {{System.nanoTime}} - not a micro benchmark).

Has anyone time and is in the mood to take a look at it? Would really 
appreciate any feedback about it. Code is on 
[github|https://github.com/snazy/cassandra/tree/8289-udftest] at 
{{tools/udftest}}.

In the end people should be able to build a JUnit/TestNG test that does this 
test automatically. (JUnit/TestNG is not required by the code - but can be used 
for assertions.) A unit test for this one is {{UDFTestTest}} with some code 
examples in it.

Technically the implementation takes C* as a dependency, uses the CQL parser 
code to produce a {{CreateFunctionStatement}} instance and lets it return an 
instance of {{UDFunction}}.

> Allow users to debug/test UDF
> -
>
> Key: CASSANDRA-8289
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8289
> Project: Cassandra
>  Issue Type: New Feature
>Reporter: Robert Stupp
>Assignee: Robert Stupp
>  Labels: udf
> Fix For: 3.0
>
>
> Currently it's not possible to execute unit tests against UDFs nor is it 
> possible to debug them.
> Idea is to provide some kind of minimalistic "framework" to execute at least 
> scalar UDFs from a unit test.
> Basically that UDF-executor would take the information that 'CREATE FUNCTION' 
> takes, compiles that UDF and allows the user to call it using plain java 
> calls.
> In case of the Java language it could also generate Java source files to 
> enable users to set breakpoints.
> It could also check for timeouts to identify e.g. "endless loop" scenarios or 
> do some byte code analysis to check for "evil" package usage.
> For example:
> {code}
> import org.apache.cassandra.udfexec.*;
> public class MyUnitTest {
>   @Test
>   public void testIt() {
> UDFExec sinExec = UDFExec.compile("sin", "java",
>   Double.class, // return type
>   Double.class  // argument type(s)
> );
> sinExec.call(2.0d);
> sinExec.call(null);
>   }
> }
> {code}
> Note: this one is not intended to do some "magic" to start a debugger on a C* 
> node and debug it there.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-8312) Use live sstables in snapshot repair if possible

2014-11-14 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8312?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jimmy Mårdell updated CASSANDRA-8312:
-
Attachment: cassandra-2.0-8312-1.txt

> Use live sstables in snapshot repair if possible
> 
>
> Key: CASSANDRA-8312
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8312
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Jimmy Mårdell
>Priority: Minor
> Attachments: cassandra-2.0-8312-1.txt
>
>
> Snapshot repair can be very much slower than parallel repairs because of the 
> overhead of opening the SSTables in the snapshot. This is particularly true 
> when using LCS, as you typically have many smaller SSTables then.
> I compared parallel and sequential repair on a small range on one of our 
> clusters (2*3 replicas). With parallel repair, this took 22 seconds. With 
> sequential repair (default in 2.0), the same range took 330 seconds! This is 
> an overhead of 330-22*6 = 198 seconds, just opening SSTables (there were 
> 1000+ sstables). Also, opening 1000 sstables for many smaller ranges surely 
> causes lots of memory churning.
> The idea would be to list the sstables in the snapshot, but use the 
> corresponding sstables in the live set if it's still available. For almost 
> all sstables, the original one should still exist.
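
A minimal sketch of the substitution idea (hypothetical types only, not the attached cassandra-2.0-8312-1.txt patch): key the already-open live readers by file name and pay the open() cost only for sstables that have since been compacted away.
{code}
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

final class SnapshotReaderSelector
{
    interface Reader { }                     // stand-in for an open sstable reader

    interface ReaderOpener
    {
        Reader open(String snapshotFile);    // expensive: open from the snapshot directory
    }

    // liveByName: readers already open in the live set, keyed by file name
    static List<Reader> select(List<String> snapshotFiles,
                               Map<String, Reader> liveByName,
                               ReaderOpener opener)
    {
        List<Reader> result = new ArrayList<>();
        for (String file : snapshotFiles)
        {
            Reader live = liveByName.get(file);
            // Reuse the live reader when the original sstable still exists.
            result.add(live != null ? live : opener.open(file));
        }
        return result;
    }
}
{code}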



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-8053) Support for user defined aggregate functions

2014-11-14 Thread Robert Stupp (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8053?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Stupp updated CASSANDRA-8053:

Attachment: 8053v1.txt

Attached a patch for this one. Does anyone have time for a review?

> Support for user defined aggregate functions
> 
>
> Key: CASSANDRA-8053
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8053
> Project: Cassandra
>  Issue Type: New Feature
>Reporter: Robert Stupp
>Assignee: Robert Stupp
>  Labels: cql, udf
> Fix For: 3.0
>
> Attachments: 8053v1.txt
>
>
> CASSANDRA-4914 introduces aggregate functions.
> This ticket is about deciding how we can support "user defined aggregate 
> functions". UD aggregate functions should be supported for all UDF flavors 
> (class, java, jsr223).
> Things to consider:
> * Special implementations for each scripting language should be omitted
> * No exposure of internal APIs (e.g. {{AggregateFunction}} interface)
> * No need for users to deal with serializers / codecs



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-7563) UserType, TupleType and collections in UDFs

2014-11-14 Thread Robert Stupp (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-7563?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Stupp updated CASSANDRA-7563:

Attachment: 7563v3.txt

:)
Attached v3 of the patch (rebased against current trunk).

> UserType, TupleType and collections in UDFs
> ---
>
> Key: CASSANDRA-7563
> URL: https://issues.apache.org/jira/browse/CASSANDRA-7563
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Robert Stupp
>Assignee: Robert Stupp
> Fix For: 3.0
>
> Attachments: 7563-7740.txt, 7563.txt, 7563v2.txt, 7563v3.txt
>
>
> * is Java Driver as a dependency required ?
> * is it possible to extract parts of the Java Driver for UDT/TT/coll support ?
> * CQL {{DROP TYPE}} must check UDFs
> * must check keyspace access permissions (if those exist)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (CASSANDRA-8228) Log malfunctioning host on prepareForRepair

2014-11-14 Thread Yuki Morishita (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8228?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14212924#comment-14212924
 ] 

Yuki Morishita edited comment on CASSANDRA-8228 at 11/14/14 10:03 PM:
--

[~rnamboodiri] I suggest building the message for the failed nodes inside the 
{{onFailure}} method of IAsyncCallbackWithFailure (in a thread-safe way), so 
you can tell precisely which nodes failed.


was (Author: yukim):
[~rnamboodiri] I suggest to build message for failed node inside ((onFailure}} 
method of IAsyncCallbackWithFailure (in thread-safe way).
So you can tell precisely which nodes are failed.

> Log malfunctioning host on prepareForRepair
> ---
>
> Key: CASSANDRA-8228
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8228
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Core
>Reporter: Juho Mäkinen
>Assignee: Rajanarayanan Thottuvaikkatumana
>Priority: Trivial
>  Labels: lhf
> Attachments: cassandra-trunk-8228.txt
>
>
> Repair startup goes through ActiveRepairService.prepareForRepair(), which might 
> result in a "Repair failed with error Did not get positive replies from all 
> endpoints." error, but there's no other logging regarding this error.
> It seems that it would be trivial to modify prepareForRepair() to log the 
> host address which caused the error, thus easing the debugging effort.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8228) Log malfunctioning host on prepareForRepair

2014-11-14 Thread Yuki Morishita (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8228?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14212924#comment-14212924
 ] 

Yuki Morishita commented on CASSANDRA-8228:
---

[~rnamboodiri] I suggest building the message for the failed nodes inside the 
{{onFailure}} method of IAsyncCallbackWithFailure (in a thread-safe way), so 
you can tell precisely which nodes failed.

> Log malfunctioning host on prepareForRepair
> ---
>
> Key: CASSANDRA-8228
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8228
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Core
>Reporter: Juho Mäkinen
>Assignee: Rajanarayanan Thottuvaikkatumana
>Priority: Trivial
>  Labels: lhf
> Attachments: cassandra-trunk-8228.txt
>
>
> Repair startup goes through ActiveRepairService.prepareForRepair(), which might 
> result in a "Repair failed with error Did not get positive replies from all 
> endpoints." error, but there's no other logging regarding this error.
> It seems that it would be trivial to modify prepareForRepair() to log the 
> host address which caused the error, thus easing the debugging effort.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-7563) UserType, TupleType and collections in UDFs

2014-11-14 Thread Tyler Hobbs (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7563?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14212882#comment-14212882
 ] 

Tyler Hobbs commented on CASSANDRA-7563:


Sorry for the slow response, I have some time to review your patch now.

> UserType, TupleType and collections in UDFs
> ---
>
> Key: CASSANDRA-7563
> URL: https://issues.apache.org/jira/browse/CASSANDRA-7563
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Robert Stupp
>Assignee: Robert Stupp
> Fix For: 3.0
>
> Attachments: 7563-7740.txt, 7563.txt, 7563v2.txt
>
>
> * is Java Driver as a dependency required ?
> * is it possible to extract parts of the Java Driver for UDT/TT/coll support ?
> * CQL {{DROP TYPE}} must check UDFs
> * must check keyspace access permissions (if those exist)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-8302) Filtering for CONTAINS (KEY) on frozen collection clustering columns within a partition does not work

2014-11-14 Thread Tyler Hobbs (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8302?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tyler Hobbs updated CASSANDRA-8302:
---
Priority: Minor  (was: Major)

> Filtering for CONTAINS (KEY) on frozen collection clustering columns within a 
> partition does not work
> -
>
> Key: CASSANDRA-8302
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8302
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
>Reporter: Tyler Hobbs
>Assignee: Tyler Hobbs
>Priority: Minor
> Fix For: 2.1.3
>
> Attachments: 8302.txt
>
>
> Create a table like this:
> {noformat}
> CREATE TABLE foo (
> a int,
> b int,
> c frozen<set<int>>,
> d int,
> PRIMARY KEY (a, b, c, d)
> )
> {noformat}
> and add an index on it:
> {noformat}
> CREATE INDEX ON foo(b)
> {noformat}
> A query across all partitions will work correctly:
> {noformat}
> cqlsh:ks1> insert into foo (a, b, c, d) VALUES (0, 0, {1, 2}, 0);
> cqlsh:ks1> SELECT * FROM foo WHERE b=0 AND c CONTAINS 2 and d=0 ALLOW 
> FILTERING;
>  a | b | c  | d
> ---+---+--------+---
>  0 | 0 | {1, 2} | 0
> (1 rows)
> {noformat}
> But if the query is restricted to a single partition, it is considered 
> invalid (and the error message isn't great):
> {noformat}
> cqlsh:ks1> SELECT * FROM foo WHERE a=0 AND b=0 AND c CONTAINS 2 and d=0 ALLOW 
> FILTERING;
> code=2200 [Invalid query] message="No secondary indexes on the restricted 
> columns support the provided operators: "
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-8302) Filtering for CONTAINS (KEY) on frozen collection clustering columns within a partition does not work

2014-11-14 Thread Tyler Hobbs (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8302?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tyler Hobbs updated CASSANDRA-8302:
---
Attachment: 8302.txt

The attached patch correctly handles CONTAINS relations on clustering columns 
(requiring a secondary index plus filtering for their use) and adds a unit test.

> Filtering for CONTAINS (KEY) on frozen collection clustering columns within a 
> partition does not work
> -
>
> Key: CASSANDRA-8302
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8302
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
>Reporter: Tyler Hobbs
>Assignee: Tyler Hobbs
> Fix For: 2.1.3
>
> Attachments: 8302.txt
>
>
> Create a table like this:
> {noformat}
> CREATE TABLE foo (
> a int,
> b int,
> c frozen<set<int>>,
> d int,
> PRIMARY KEY (a, b, c, d)
> )
> {noformat}
> and add an index on it:
> {noformat}
> CREATE INDEX ON foo(b)
> {noformat}
> A query across all partitions will work correctly:
> {noformat}
> cqlsh:ks1> insert into foo (a, b, c, d) VALUES (0, 0, {1, 2}, 0);
> cqlsh:ks1> SELECT * FROM foo WHERE b=0 AND c CONTAINS 2 and d=0 ALLOW 
> FILTERING;
>  a | b | c  | d
> ---+---+--------+---
>  0 | 0 | {1, 2} | 0
> (1 rows)
> {noformat}
> But if the query is restricted to a single partition, it is considered 
> invalid (and the error message isn't great):
> {noformat}
> cqlsh:ks1> SELECT * FROM foo WHERE a=0 AND b=0 AND c CONTAINS 2 and d=0 ALLOW 
> FILTERING;
> code=2200 [Invalid query] message="No secondary indexes on the restricted 
> columns support the provided operators: "
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-8302) Filtering for CONTAINS (KEY) on frozen collection clustering columns within a partition does not work

2014-11-14 Thread Tyler Hobbs (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8302?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tyler Hobbs updated CASSANDRA-8302:
---
Reviewer: Benjamin Lerer

> Filtering for CONTAINS (KEY) on frozen collection clustering columns within a 
> partition does not work
> -
>
> Key: CASSANDRA-8302
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8302
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
>Reporter: Tyler Hobbs
>Assignee: Tyler Hobbs
> Fix For: 2.1.3
>
>
> Create a table like this:
> {noformat}
> CREATE TABLE foo (
> a int,
> b int,
> c frozen<set<int>>,
> d int,
> PRIMARY KEY (a, b, c, d)
> )
> {noformat}
> and add an index on it:
> {noformat}
> CREATE INDEX ON foo(b)
> {noformat}
> A query across all partitions will work correctly:
> {noformat}
> cqlsh:ks1> insert into foo (a, b, c, d) VALUES (0, 0, {1, 2}, 0);
> cqlsh:ks1> SELECT * FROM foo WHERE b=0 AND c CONTAINS 2 and d=0 ALLOW 
> FILTERING;
>  a | b | c  | d
> ---+---+--------+---
>  0 | 0 | {1, 2} | 0
> (1 rows)
> {noformat}
> But if the query is restricted to a single partition, it is considered 
> invalid (and the error message isn't great):
> {noformat}
> cqlsh:ks1> SELECT * FROM foo WHERE a=0 AND b=0 AND c CONTAINS 2 and d=0 ALLOW 
> FILTERING;
> code=2200 [Invalid query] message="No secondary indexes on the restricted 
> columns support the provided operators: "
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


cassandra git commit: fix test throw unknown keyspace error

2014-11-14 Thread yukim
Repository: cassandra
Updated Branches:
  refs/heads/trunk 3ad82c730 -> 105360cb1


fix test throw unknown keyspace error


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/105360cb
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/105360cb
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/105360cb

Branch: refs/heads/trunk
Commit: 105360cb14513ee77b1d9145d7367690a3ad2dc4
Parents: 3ad82c7
Author: Yuki Morishita 
Authored: Fri Nov 14 15:36:04 2014 -0600
Committer: Yuki Morishita 
Committed: Fri Nov 14 15:36:04 2014 -0600

--
 test/unit/org/apache/cassandra/io/sstable/SSTableReaderTest.java | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/105360cb/test/unit/org/apache/cassandra/io/sstable/SSTableReaderTest.java
--
diff --git a/test/unit/org/apache/cassandra/io/sstable/SSTableReaderTest.java 
b/test/unit/org/apache/cassandra/io/sstable/SSTableReaderTest.java
index 8de2e75..cd85d78 100644
--- a/test/unit/org/apache/cassandra/io/sstable/SSTableReaderTest.java
+++ b/test/unit/org/apache/cassandra/io/sstable/SSTableReaderTest.java
@@ -215,13 +215,13 @@ public class SSTableReaderTest
 public void testReadRateTracking()
 {
 // try to make sure CASSANDRA-8239 never happens again
-Keyspace keyspace = Keyspace.open("Keyspace1");
+Keyspace keyspace = Keyspace.open(KEYSPACE1);
 ColumnFamilyStore store = keyspace.getColumnFamilyStore("Standard1");
 
 for (int j = 0; j < 10; j++)
 {
 ByteBuffer key = ByteBufferUtil.bytes(String.valueOf(j));
-Mutation rm = new Mutation("Keyspace1", key);
+Mutation rm = new Mutation(KEYSPACE1, key);
 rm.add("Standard1", cellname("0"), 
ByteBufferUtil.EMPTY_BYTE_BUFFER, j);
 rm.apply();
 }



[1/2] cassandra git commit: Fix CFMetaData compaction-related warnings

2014-11-14 Thread aleksey
Repository: cassandra
Updated Branches:
  refs/heads/trunk cbbc1191c -> 3ad82c730


Fix CFMetaData compaction-related warnings


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/1b21aef8
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/1b21aef8
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/1b21aef8

Branch: refs/heads/trunk
Commit: 1b21aef8152d96a180e75f2fcc5afad9ded6c595
Parents: abbcfc5
Author: Aleksey Yeschenko 
Authored: Sat Nov 15 00:06:59 2014 +0300
Committer: Aleksey Yeschenko 
Committed: Sat Nov 15 00:06:59 2014 +0300

--
 src/java/org/apache/cassandra/config/CFMetaData.java | 8 +++-
 1 file changed, 3 insertions(+), 5 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/1b21aef8/src/java/org/apache/cassandra/config/CFMetaData.java
--
diff --git a/src/java/org/apache/cassandra/config/CFMetaData.java 
b/src/java/org/apache/cassandra/config/CFMetaData.java
index 0fff7d0..57f5757 100644
--- a/src/java/org/apache/cassandra/config/CFMetaData.java
+++ b/src/java/org/apache/cassandra/config/CFMetaData.java
@@ -1250,7 +1250,7 @@ public final class CFMetaData
 {
 className = className.contains(".") ? className : 
"org.apache.cassandra.db.compaction." + className;
 Class strategyClass = 
FBUtilities.classForName(className, "compaction strategy");
-if (strategyClass.equals(WrappingCompactionStrategy.class))
+if (className.equals(WrappingCompactionStrategy.class.getName()))
 throw new ConfigurationException("You can't set 
WrappingCompactionStrategy as the compaction strategy!");
 if (!AbstractCompactionStrategy.class.isAssignableFrom(strategyClass))
 throw new ConfigurationException(String.format("Specified 
compaction strategy class (%s) is not derived from 
AbstractReplicationStrategy", className));
@@ -1262,10 +1262,8 @@ public final class CFMetaData
 {
 try
 {
-Constructor constructor = 
compactionStrategyClass.getConstructor(new Class[] {
-ColumnFamilyStore.class,
-Map.class // options
-});
+Constructor constructor =
+
compactionStrategyClass.getConstructor(ColumnFamilyStore.class, Map.class);
 return constructor.newInstance(cfs, compactionStrategyOptions);
 }
 catch (NoSuchMethodException | IllegalAccessException | 
InvocationTargetException | InstantiationException e)



[2/2] cassandra git commit: Merge branch 'cassandra-2.1' into trunk

2014-11-14 Thread aleksey
Merge branch 'cassandra-2.1' into trunk


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/3ad82c73
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/3ad82c73
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/3ad82c73

Branch: refs/heads/trunk
Commit: 3ad82c730e468e32a2de9fe1eeec80fbf54326c4
Parents: cbbc119 1b21aef
Author: Aleksey Yeschenko 
Authored: Sat Nov 15 00:07:16 2014 +0300
Committer: Aleksey Yeschenko 
Committed: Sat Nov 15 00:07:16 2014 +0300

--
 src/java/org/apache/cassandra/config/CFMetaData.java | 8 +++-
 1 file changed, 3 insertions(+), 5 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/3ad82c73/src/java/org/apache/cassandra/config/CFMetaData.java
--



cassandra git commit: Fix CFMetaData compaction-related warnings

2014-11-14 Thread aleksey
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-2.1 abbcfc5f7 -> 1b21aef81


Fix CFMetaData compaction-related warnings


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/1b21aef8
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/1b21aef8
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/1b21aef8

Branch: refs/heads/cassandra-2.1
Commit: 1b21aef8152d96a180e75f2fcc5afad9ded6c595
Parents: abbcfc5
Author: Aleksey Yeschenko 
Authored: Sat Nov 15 00:06:59 2014 +0300
Committer: Aleksey Yeschenko 
Committed: Sat Nov 15 00:06:59 2014 +0300

--
 src/java/org/apache/cassandra/config/CFMetaData.java | 8 +++-
 1 file changed, 3 insertions(+), 5 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/1b21aef8/src/java/org/apache/cassandra/config/CFMetaData.java
--
diff --git a/src/java/org/apache/cassandra/config/CFMetaData.java 
b/src/java/org/apache/cassandra/config/CFMetaData.java
index 0fff7d0..57f5757 100644
--- a/src/java/org/apache/cassandra/config/CFMetaData.java
+++ b/src/java/org/apache/cassandra/config/CFMetaData.java
@@ -1250,7 +1250,7 @@ public final class CFMetaData
 {
 className = className.contains(".") ? className : 
"org.apache.cassandra.db.compaction." + className;
 Class strategyClass = 
FBUtilities.classForName(className, "compaction strategy");
-if (strategyClass.equals(WrappingCompactionStrategy.class))
+if (className.equals(WrappingCompactionStrategy.class.getName()))
 throw new ConfigurationException("You can't set 
WrappingCompactionStrategy as the compaction strategy!");
 if (!AbstractCompactionStrategy.class.isAssignableFrom(strategyClass))
 throw new ConfigurationException(String.format("Specified 
compaction strategy class (%s) is not derived from 
AbstractReplicationStrategy", className));
@@ -1262,10 +1262,8 @@ public final class CFMetaData
 {
 try
 {
-Constructor constructor = 
compactionStrategyClass.getConstructor(new Class[] {
-ColumnFamilyStore.class,
-Map.class // options
-});
+Constructor constructor =
+
compactionStrategyClass.getConstructor(ColumnFamilyStore.class, Map.class);
 return constructor.newInstance(cfs, compactionStrategyOptions);
 }
 catch (NoSuchMethodException | IllegalAccessException | 
InvocationTargetException | InstantiationException e)



[jira] [Commented] (CASSANDRA-8213) Grant Permission fails if permission had been revoked previously

2014-11-14 Thread Philip Thompson (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8213?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14212814#comment-14212814
 ] 

Philip Thompson commented on CASSANDRA-8213:


Examining jenkins logs, this appears to have been caused by CASSANDRA-8139 and 
fixed by CASSANDRA-8246.

> Grant Permission fails if permission had been revoked previously
> 
>
> Key: CASSANDRA-8213
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8213
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Philip Thompson
>Assignee: Aleksey Yeschenko
> Fix For: 2.1.2
>
>
> The dtest auth_test.py:TestAuth.alter_cf_auth_test is failing. 
> {code}
> cassandra.execute("GRANT ALTER ON ks.cf TO cathy")
> cathy.execute("ALTER TABLE ks.cf ADD val int")
> cassandra.execute("REVOKE ALTER ON ks.cf FROM cathy")
> self.assertUnauthorized("User cathy has no ALTER permission on <table ks.cf> or any of its parents",
> cathy, "CREATE INDEX ON ks.cf(val)")
> cassandra.execute("GRANT ALTER ON ks.cf TO cathy")
> cathy.execute("CREATE INDEX ON ks.cf(val)")
> {code}
> In this section of code, the user cathy is granted "ALTER" permissions on 
> 'ks.cf', then they are revoked, then granted again. Monitoring 
> system_auth.permissions during this section of code show that the permission 
> is added with the initial grant, and revoked properly, but the table remains 
> empty after the second grant.
> When the cathy user attempts to create an index, the following exception is 
> thrown:
> {code}
> Unauthorized: code=2100 [Unauthorized] message="User cathy has no ALTER 
> permission on <table ks.cf> or any of its parents"
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-8213) Grant Permission fails if permission had been revoked previously

2014-11-14 Thread Philip Thompson (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8213?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Philip Thompson updated CASSANDRA-8213:
---
Fix Version/s: (was: 2.1.3)
   2.1.2

> Grant Permission fails if permission had been revoked previously
> 
>
> Key: CASSANDRA-8213
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8213
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Philip Thompson
>Assignee: Aleksey Yeschenko
> Fix For: 2.1.2
>
>
> The dtest auth_test.py:TestAuth.alter_cf_auth_test is failing. 
> {code}
> cassandra.execute("GRANT ALTER ON ks.cf TO cathy")
> cathy.execute("ALTER TABLE ks.cf ADD val int")
> cassandra.execute("REVOKE ALTER ON ks.cf FROM cathy")
> self.assertUnauthorized("User cathy has no ALTER permission on <table ks.cf> or any of its parents",
> cathy, "CREATE INDEX ON ks.cf(val)")
> cassandra.execute("GRANT ALTER ON ks.cf TO cathy")
> cathy.execute("CREATE INDEX ON ks.cf(val)")
> {code}
> In this section of code, the user cathy is granted "ALTER" permissions on 
> 'ks.cf', then they are revoked, then granted again. Monitoring 
> system_auth.permissions during this section of code show that the permission 
> is added with the initial grant, and revoked properly, but the table remains 
> empty after the second grant.
> When the cathy user attempts to create an index, the following exception is 
> thrown:
> {code}
> Unauthorized: code=2100 [Unauthorized] message="User cathy has no ALTER 
> permission on <table ks.cf> or any of its parents"
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (CASSANDRA-8213) Grant Permission fails if permission had been revoked previously

2014-11-14 Thread Philip Thompson (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8213?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Philip Thompson resolved CASSANDRA-8213.

Resolution: Fixed

> Grant Permission fails if permission had been revoked previously
> 
>
> Key: CASSANDRA-8213
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8213
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Philip Thompson
>Assignee: Aleksey Yeschenko
> Fix For: 2.1.3
>
>
> The dtest auth_test.py:TestAuth.alter_cf_auth_test is failing. 
> {code}
> cassandra.execute("GRANT ALTER ON ks.cf TO cathy")
> cathy.execute("ALTER TABLE ks.cf ADD val int")
> cassandra.execute("REVOKE ALTER ON ks.cf FROM cathy")
> self.assertUnauthorized("User cathy has no ALTER permission on <table ks.cf> or any of its parents",
> cathy, "CREATE INDEX ON ks.cf(val)")
> cassandra.execute("GRANT ALTER ON ks.cf TO cathy")
> cathy.execute("CREATE INDEX ON ks.cf(val)")
> {code}
> In this section of code, the user cathy is granted "ALTER" permissions on 
> 'ks.cf', then they are revoked, then granted again. Monitoring 
> system_auth.permissions during this section of code show that the permission 
> is added with the initial grant, and revoked properly, but the table remains 
> empty after the second grant.
> When the cathy user attempts to create an index, the following exception is 
> thrown:
> {code}
> Unauthorized: code=2100 [Unauthorized] message="User cathy has no ALTER 
> permission on <table ks.cf> or any of its parents"
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-8212) Archive Commitlog Test Failing

2014-11-14 Thread Philip Thompson (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8212?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Philip Thompson updated CASSANDRA-8212:
---
Fix Version/s: 2.0.12

> Archive Commitlog Test Failing
> --
>
> Key: CASSANDRA-8212
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8212
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Philip Thompson
> Fix For: 2.0.12
>
>
> The test snapshot_test.TestArchiveCommitlog.test_archive_commitlog is failing 
> on 2.0.11, but not 2.1.1. We attempt to replay 65000 rows, but in 2.0.11 only 
> 63000 rows succeed. URL for test output:
> http://cassci.datastax.com/job/cassandra-2.0_dtest/lastCompletedBuild/testReport/snapshot_test/TestArchiveCommitlog/test_archive_commitlog/



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (CASSANDRA-8322) Use nodetool live size calculation in simple_bootstrap_test

2014-11-14 Thread Philip Thompson (JIRA)
Philip Thompson created CASSANDRA-8322:
--

 Summary: Use nodetool live size calculation in 
simple_bootstrap_test
 Key: CASSANDRA-8322
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8322
 Project: Cassandra
  Issue Type: Test
Reporter: Philip Thompson
Assignee: Philip Thompson
Priority: Minor


simple_bootstrap_test is calculating data size for each node by summing file 
sizes of data on disk. We should use the nodetool calculation.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-5483) Repair tracing

2014-11-14 Thread Ben Chan (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-5483?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14212771#comment-14212771
 ] 

Ben Chan commented on CASSANDRA-5483:
-

Updated https://github.com/usrbincc/cassandra/tree/5483-review (currently at 
commit 85d974753dc7f).

So a few changes to tracing got merged in on Nov 12, making a cut-and-dried 
manual rebase impossible.

- Use a default ttl on the system_traces.* CFs. Use that to clean up the code 
in Tracing, since you no longer have to specify ttls.
- Move a lot of row-insertion code out of Tracing into a new TraceKeyspace 
class.
- Since there is no more need to specify ttl, use CFRowAdder (which doesn't do 
ttl) for convenience.

Since the repair tracing patch requires a user-configurable ttl (or at the very 
least, a different ttl for repair tracing and query tracing), I needed to 
re-add ttl specification. Since it wasn't much more code, I decided to only use 
explicitly-expiring cells if the ttl didn't match the default ttl.  
  
This appears to save two native ints per column from a quick skim of the source 
code. Not sure if that's really enough to care about (especially since repair 
tracing, which is likely to insert more rows, has a ttl different from the 
default), but it was a simple enough change.



Merge conflicts are a real pain point with git. I attempted to make reviewing 
the merge conflict resolution changes easier by inserting an intermediate 
commit that includes the conflict markers from git, unmodified. Feel free to 
hide all the messiness with a {{git merge --squash}}.

- Fixed a race condition with repair trace sessions sometimes not being 
properly stopped (see commit 801b6fbf56771).
- Possible code no-no to note: I made two private fields (SESSIONS_TABLE and 
EVENTS_TABLE) in TraceKeyspace public. I only use EVENTS_TABLE, but made them 
both public for symmetry (good? bad? don't know). They're final fields, if 
that's any consolation.


> Repair tracing
> --
>
> Key: CASSANDRA-5483
> URL: https://issues.apache.org/jira/browse/CASSANDRA-5483
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Tools
>Reporter: Yuki Morishita
>Assignee: Ben Chan
>Priority: Minor
>  Labels: repair
> Fix For: 3.0
>
> Attachments: 5483-full-trunk.txt, 
> 5483-v06-04-Allow-tracing-ttl-to-be-configured.patch, 
> 5483-v06-05-Add-a-command-column-to-system_traces.events.patch, 
> 5483-v06-06-Fix-interruption-in-tracestate-propagation.patch, 
> 5483-v07-07-Better-constructor-parameters-for-DebuggableThreadPoolExecutor.patch,
>  5483-v07-08-Fix-brace-style.patch, 
> 5483-v07-09-Add-trace-option-to-a-more-complete-set-of-repair-functions.patch,
>  5483-v07-10-Correct-name-of-boolean-repairedAt-to-fullRepair.patch, 
> 5483-v08-11-Shorten-trace-messages.-Use-Tracing-begin.patch, 
> 5483-v08-12-Trace-streaming-in-Differencer-StreamingRepairTask.patch, 
> 5483-v08-13-sendNotification-of-local-traces-back-to-nodetool.patch, 
> 5483-v08-14-Poll-system_traces.events.patch, 
> 5483-v08-15-Limit-trace-notifications.-Add-exponential-backoff.patch, 
> 5483-v09-16-Fix-hang-caused-by-incorrect-exit-code.patch, 
> 5483-v10-17-minor-bugfixes-and-changes.patch, 
> 5483-v10-rebased-and-squashed-471f5cc.patch, 5483-v11-01-squashed.patch, 
> 5483-v11-squashed-nits.patch, 5483-v12-02-cassandra-yaml-ttl-doc.patch, 
> 5483-v13-608fb03-May-14-trace-formatting-changes.patch, 
> 5483-v14-01-squashed.patch, 
> 5483-v15-02-Hook-up-exponential-backoff-functionality.patch, 
> 5483-v15-03-Exact-doubling-for-exponential-backoff.patch, 
> 5483-v15-04-Re-add-old-StorageService-JMX-signatures.patch, 
> 5483-v15-05-Move-command-column-to-system_traces.sessions.patch, 
> 5483-v15.patch, 5483-v17-00.patch, 5483-v17-01.patch, 5483-v17.patch, 
> ccm-repair-test, cqlsh-left-justify-text-columns.patch, 
> prerepair-vs-postbuggedrepair.diff, test-5483-system_traces-events.txt, 
> trunk@4620823-5483-v02-0001-Trace-filtering-and-tracestate-propagation.patch, 
> trunk@4620823-5483-v02-0002-Put-a-few-traces-parallel-to-the-repair-logging.patch,
>  tr...@8ebeee1-5483-v01-001-trace-filtering-and-tracestate-propagation.txt, 
> tr...@8ebeee1-5483-v01-002-simple-repair-tracing.txt, 
> v02p02-5483-v03-0003-Make-repair-tracing-controllable-via-nodetool.patch, 
> v02p02-5483-v04-0003-This-time-use-an-EnumSet-to-pass-boolean-repair-options.patch,
>  v02p02-5483-v05-0003-Use-long-instead-of-EnumSet-to-work-with-JMX.patch
>
>
> I think it would be nice to log repair stats and results like query tracing 
> stores traces to system keyspace. With it, you don't have to lookup each log 
> file to see what was the status and how it performed the repair you invoked. 
> Instead, you can query the repair log with session ID to see the stat

[jira] [Updated] (CASSANDRA-7983) nodetool repair triggers OOM

2014-11-14 Thread Jose Martinez Poblete (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-7983?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jose Martinez Poblete updated CASSANDRA-7983:
-
Environment: 
 DSE version: 4.5.0 Cassandra 2.0.8


  was:
 
{noformat}
 INFO [main] 2014-09-16 14:23:14,621 DseDaemon.java (line 368) DSE version: 
4.5.0
 INFO [main] 2014-09-16 14:23:14,622 DseDaemon.java (line 369) Hadoop version: 
1.0.4.13
 INFO [main] 2014-09-16 14:23:14,627 DseDaemon.java (line 370) Hive version: 
0.12.0.3
 INFO [main] 2014-09-16 14:23:14,628 DseDaemon.java (line 371) Pig version: 
0.10.1
 INFO [main] 2014-09-16 14:23:14,629 DseDaemon.java (line 372) Solr version: 
4.6.0.2.4
 INFO [main] 2014-09-16 14:23:14,630 DseDaemon.java (line 373) Sqoop version: 
1.4.4.14.1
 INFO [main] 2014-09-16 14:23:14,630 DseDaemon.java (line 374) Mahout version: 
0.8
 INFO [main] 2014-09-16 14:23:14,631 DseDaemon.java (line 375) Appender 
version: 3.0.2
 INFO [main] 2014-09-16 14:23:14,632 DseDaemon.java (line 376) Spark version: 
0.9.1
 INFO [main] 2014-09-16 14:23:14,632 DseDaemon.java (line 377) Shark version: 
0.9.1.1
 INFO [main] 2014-09-16 14:23:20,270 CassandraDaemon.java (line 160) JVM 
vendor/version: Java HotSpot(TM) 64-Bit Server VM/1.7.0_51
 INFO [main] 2014-09-16 14:23:20,270 CassandraDaemon.java (line 188) Heap size: 
6316621824/6316621824
{noformat}


> nodetool repair triggers OOM
> 
>
> Key: CASSANDRA-7983
> URL: https://issues.apache.org/jira/browse/CASSANDRA-7983
> Project: Cassandra
>  Issue Type: Bug
>  Components: Tools
> Environment:  DSE version: 4.5.0 Cassandra 2.0.8
>Reporter: Jose Martinez Poblete
>Assignee: Jimmy Mårdell
> Fix For: 2.0.11, 2.1.1
>
> Attachments: 7983-v1.patch, gc.log.0, nbcqa-chc-a01_systemlog.tar.Z, 
> nbcqa-chc-a03_systemlog.tar.Z, system.log
>
>
> Customer has a 3 node cluster with 500Mb data on each node
> {noformat}
> [cassandra@nbcqa-chc-a02 ~]$ nodetool status
> Note: Ownership information does not include topology; for complete 
> information, specify a keyspace
> Datacenter: CH2
> ===
> Status=Up/Down
> |/ State=Normal/Leaving/Joining/Moving
> --  AddressLoad   Tokens  Owns   Host ID  
>  Rack
> UN  162.150.4.234  255.26 MB  256 33.2%  
> 4ad1b6a8-8759-4920-b54a-f059126900df  RAC1
> UN  162.150.4.235  318.37 MB  256 32.6%  
> 3eb0ec58-4b81-442e-bee5-4c91da447f38  RAC1
> UN  162.150.4.167  243.7 MB   256 34.2%  
> 5b2c1900-bf03-41c1-bb4e-82df1655b8d8  RAC1
> [cassandra@nbcqa-chc-a02 ~]$
> {noformat}
> After we run repair command, system runs into OOM after some 45 minutes
> Nothing else is running
> {noformat}
> [cassandra@nbcqa-chc-a02 ~]$ date
> Fri Sep 19 15:55:33 UTC 2014
> [cassandra@nbcqa-chc-a02 ~]$ nodetool repair -st -9220354588320251877 -et 
> -9220354588320251873
> Sep 19, 2014 4:06:08 PM ClientCommunicatorAdmin Checker-run
> WARNING: Failed to check the connection: java.net.SocketTimeoutException: 
> Read timed out
> {noformat}
> Here is when we run OOM
> {noformat}
> ERROR [ReadStage:28914] 2014-09-19 16:34:50,381 CassandraDaemon.java (line 
> 199) Exception in thread Thread[ReadStage:28914,5,main]
> java.lang.OutOfMemoryError: Java heap space
> at 
> org.apache.cassandra.io.util.RandomAccessReader.<init>(RandomAccessReader.java:69)
> at 
> org.apache.cassandra.io.compress.CompressedRandomAccessReader.<init>(CompressedRandomAccessReader.java:76)
> at 
> org.apache.cassandra.io.compress.CompressedRandomAccessReader.open(CompressedRandomAccessReader.java:43)
> at 
> org.apache.cassandra.io.util.CompressedPoolingSegmentedFile.createReader(CompressedPoolingSegmentedFile.java:48)
> at 
> org.apache.cassandra.io.util.PoolingSegmentedFile.getSegment(PoolingSegmentedFile.java:39)
> at 
> org.apache.cassandra.io.sstable.SSTableReader.getFileDataInput(SSTableReader.java:1195)
> at 
> org.apache.cassandra.db.columniterator.SimpleSliceReader.<init>(SimpleSliceReader.java:57)
> at 
> org.apache.cassandra.db.columniterator.SSTableSliceIterator.createReader(SSTableSliceIterator.java:65)
> at 
> org.apache.cassandra.db.columniterator.SSTableSliceIterator.<init>(SSTableSliceIterator.java:42)
> at 
> org.apache.cassandra.db.filter.SliceQueryFilter.getSSTableColumnIterator(SliceQueryFilter.java:167)
> at 
> org.apache.cassandra.db.filter.QueryFilter.getSSTableColumnIterator(QueryFilter.java:62)
> at 
> org.apache.cassandra.db.CollationController.collectAllData(CollationController.java:250)
> at 
> org.apache.cassandra.db.CollationController.getTopLevelColumns(CollationController.java:53)
> at 
> org.apache.cassandra.db.ColumnFamilyStore.getTopLevelColumns(ColumnFamilyStore.java:1547)
> at 
> org.apache.cassandra.db.ColumnFamilyStore.getColum

[jira] [Comment Edited] (CASSANDRA-8318) Unable to replace a node

2014-11-14 Thread Jens Rantil (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8318?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14212751#comment-14212751
 ] 

Jens Rantil edited comment on CASSANDRA-8318 at 11/14/14 8:04 PM:
--

bq. What actual token does .51 have?

It has a token of -2.

{noformat}
$ nodetool ring
Note: Ownership information does not include topology; for complete 
information, specify a keyspace

Datacenter: Analytics
==
Address     Rack    Status  State    Load        Owns    Token
                                                         614891469123651
X.X.X.52    rack1   Up      Normal   18.34 GB    0.02%   -922337203685477
X.X.X.50    rack1   Up      Normal   18.36 GB    0.24%   -614891469123651
X.X.X.55    rack1   Up      Normal   18.51 GB    0.19%   -307445734561825
X.X.X.51    rack1   Down    Normal   195.67 KB   0.02%   -2
X.X.X.54    rack1   Up      Normal   19.09 GB    0.04%   3074457345618258600
X.X.X.53    rack1   Up      Normal   18.5 GB     0.07%   614891469123651

Datacenter: Cassandra
==
Address     Rack    Status  State    Load        Owns    Token
                                                         9219239585832170071
...

  Warning: "nodetool ring" is used to output all the tokens of a node.
  To view status related info of a node use "nodetool status" instead.
{noformat}


was (Author: ztyx):
> What actual token does .51 have?

It has a token of -2.

{noformat}
$ nodetool ring
Note: Ownership information does not include topology; for complete 
information, specify a keyspace

Datacenter: Analytics
==
Address     Rack    Status  State    Load        Owns    Token
                                                         614891469123651
X.X.X.52    rack1   Up      Normal   18.34 GB    0.02%   -922337203685477
X.X.X.50    rack1   Up      Normal   18.36 GB    0.24%   -614891469123651
X.X.X.55    rack1   Up      Normal   18.51 GB    0.19%   -307445734561825
X.X.X.51    rack1   Down    Normal   195.67 KB   0.02%   -2
X.X.X.54    rack1   Up      Normal   19.09 GB    0.04%   3074457345618258600
X.X.X.53    rack1   Up      Normal   18.5 GB     0.07%   614891469123651

Datacenter: Cassandra
==
Address     Rack    Status  State    Load        Owns    Token
                                                         9219239585832170071
...

  Warning: "nodetool ring" is used to output all the tokens of a node.
  To view status related info of a node use "nodetool status" instead.
{noformat}

> Unable to replace a node
> 
>
> Key: CASSANDRA-8318
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8318
> Project: Cassandra
>  Issue Type: Bug
> Environment: 2.0.8.39 (Datastax DSE 4.5.3)
>Reporter: Jens Rantil
> Attachments: X.X.X.56.log
>
>
> Had a hardware failure of a node. I followed the Datastax documentation[1] on 
> how to replace the node X.X.X.51 using a brand new node with the same IP. 
> Since it didn't come up after waiting for ~5 minutes or so, I decided to 
> replace X.X.X.51 with a brand new unused IP X.X.X.56 instead. It now seems 
> like my gossip is some weird state. When I start the replacement node I see 
> line like
> {noformat}
>  INFO [GossipStage:1] 2014-11-14 14:57:03,025 Gossiper.java (line 901) 
> InetAddress /X.X.X.51 is now DOWN
>  INFO [GossipStage:1] 2014-11-14 14:57:03,042 Gossiper.java (line 901) 
> InetAddress /X.X.X.56 is now DOWN
> {noformat}
> . The latter is somewhat surprising since that is the IP of the actual 
> replacement node. It doesn't surprise me it can't talk to itself if it hasn't 
> started!
> Eventually the replacement node shuts down with
> {noformat}
> ERROR [main] 2014-11-14 14:58:06,031 CassandraDaemon.java (line 513) 
> Exception encountered during startup
> java.lang.UnsupportedOperationException: Cannot replace token -2 which does 
> not exist!
>   at 
> org.apache.cassandra.service.StorageService.joinTokenRing(StorageService.java:782)
>   at 
> org.apache.cassandra.service.StorageService.initServer(StorageService.java:614)
>   at 
> org.apache.cassandra.service.StorageService.initServer(StorageService.java:503)
>   at 
> org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:378)
>   at com.datastax.bdp.server.DseDaemon.setup(DseDaemon.java:374)
>   at 
> org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:496)
>   at com.datastax.bdp.server.DseDaemon.main

[jira] [Commented] (CASSANDRA-8318) Unable to replace a node

2014-11-14 Thread Jens Rantil (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8318?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14212751#comment-14212751
 ] 

Jens Rantil commented on CASSANDRA-8318:


> What actual token does .51 have?

It has a token of -2.

{noformat}
$ nodetool ring
Note: Ownership information does not include topology; for complete 
information, specify a keyspace

Datacenter: Analytics
==
Address     Rack    Status  State    Load        Owns    Token
                                                         614891469123651
X.X.X.52    rack1   Up      Normal   18.34 GB    0.02%   -922337203685477
X.X.X.50    rack1   Up      Normal   18.36 GB    0.24%   -614891469123651
X.X.X.55    rack1   Up      Normal   18.51 GB    0.19%   -307445734561825
X.X.X.51    rack1   Down    Normal   195.67 KB   0.02%   -2
X.X.X.54    rack1   Up      Normal   19.09 GB    0.04%   3074457345618258600
X.X.X.53    rack1   Up      Normal   18.5 GB     0.07%   614891469123651

Datacenter: Cassandra
==
Address     Rack    Status  State    Load        Owns    Token
                                                         9219239585832170071
...

  Warning: "nodetool ring" is used to output all the tokens of a node.
  To view status related info of a node use "nodetool status" instead.
{noformat}

> Unable to replace a node
> 
>
> Key: CASSANDRA-8318
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8318
> Project: Cassandra
>  Issue Type: Bug
> Environment: 2.0.8.39 (Datastax DSE 4.5.3)
>Reporter: Jens Rantil
> Attachments: X.X.X.56.log
>
>
> Had a hardware failure of a node. I followed the Datastax documentation[1] on 
> how to replace the node X.X.X.51 using a brand new node with the same IP. 
> Since it didn't come up after waiting for ~5 minutes or so, I decided to 
> replace X.X.X.51 with a brand new unused IP X.X.X.56 instead. It now seems 
> like my gossip is in some weird state. When I start the replacement node I see 
> lines like
> {noformat}
>  INFO [GossipStage:1] 2014-11-14 14:57:03,025 Gossiper.java (line 901) 
> InetAddress /X.X.X.51 is now DOWN
>  INFO [GossipStage:1] 2014-11-14 14:57:03,042 Gossiper.java (line 901) 
> InetAddress /X.X.X.56 is now DOWN
> {noformat}
> . The latter is somewhat surprising since that is the IP of the actual 
> replacement node. It doesn't surprise me it can't talk to itself if it hasn't 
> started!
> Eventually the replacement node shuts down with
> {noformat}
> ERROR [main] 2014-11-14 14:58:06,031 CassandraDaemon.java (line 513) 
> Exception encountered during startup
> java.lang.UnsupportedOperationException: Cannot replace token -2 which does 
> not exist!
>   at 
> org.apache.cassandra.service.StorageService.joinTokenRing(StorageService.java:782)
>   at 
> org.apache.cassandra.service.StorageService.initServer(StorageService.java:614)
>   at 
> org.apache.cassandra.service.StorageService.initServer(StorageService.java:503)
>   at 
> org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:378)
>   at com.datastax.bdp.server.DseDaemon.setup(DseDaemon.java:374)
>   at 
> org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:496)
>   at com.datastax.bdp.server.DseDaemon.main(DseDaemon.java:615)
>  INFO [Thread-2] 2014-11-14 14:58:06,035 DseDaemon.java (line 461) DSE 
> shutting down...
>  INFO [StorageServiceShutdownHook] 2014-11-14 14:58:06,037 Gossiper.java 
> (line 1307) Announcing shutdown
>  INFO [Thread-2] 2014-11-14 14:58:06,046 PluginManager.java (line 355) All 
> plugins are stopped.
>  INFO [Thread-2] 2014-11-14 14:58:06,047 CassandraDaemon.java (line 463) 
> Cassandra shutting down...
> ERROR [Thread-2] 2014-11-14 14:58:06,047 CassandraDaemon.java (line 199) 
> Exception in thread Thread[Thread-2,5,main]
> java.lang.NullPointerException
>   at 
> org.apache.cassandra.service.CassandraDaemon.stop(CassandraDaemon.java:464)
>   at com.datastax.bdp.server.DseDaemon.stop(DseDaemon.java:464)
>   at com.datastax.bdp.server.DseDaemon$1.run(DseDaemon.java:364){noformat}
> All nodes are showing
> {noformat}
> root@machine-2:~# nodetool status company
> Datacenter: Analytics
> =
> Status=Up/Down
> |/ State=Normal/Leaving/Joining/Moving
> --  Address Load   Tokens  Owns (effective)  Host ID  
>  Rack
> UN  X.X.X.50  18.35 GB   1   16.7% 
> 25efdbcd-14d3-4e9c-803a-3db5795d4efa  rack1
> DN  X.X.X.51  195.67 KB  1   16.7% 
> d97cf86f-bfaf-4488-b716-26d71635a8fc  rack1
> UN  X.X.X.52  18.7 GB1   16.7% 
> caa32f68-5a6b-4d87-80

[jira] [Created] (CASSANDRA-8321) SStablesplit behavior changed

2014-11-14 Thread Philip Thompson (JIRA)
Philip Thompson created CASSANDRA-8321:
--

 Summary: SStablesplit behavior changed
 Key: CASSANDRA-8321
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8321
 Project: Cassandra
  Issue Type: Bug
Reporter: Philip Thompson
Assignee: Marcus Eriksson
Priority: Minor
 Fix For: 2.1.3


The dtest sstablesplit_test.py has begun failing due to an incorrect number of 
sstables being created after running sstablesplit.

http://cassci.datastax.com/job/cassandra-2.1_dtest/559/changes#detail1
is the run where the failure began.

In 2.1.x, the test expects 7 sstables to be created after split, but instead 12 
are being created. All of the data is there, and the sstables add up to the 
expected size, so this may simply be a change in default behavior. The test 
runs sstablesplit without the --size argument, and the default has not changed, 
so it is unexpected that the behavior would change in a minor point release.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8245) Cassandra nodes periodically die in 2-DC configuration

2014-11-14 Thread Brandon Williams (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8245?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14212700#comment-14212700
 ] 

Brandon Williams commented on CASSANDRA-8245:
-

All of these dumps show the nodes blocked on flushing the system table as a 
result of processing a new node joining, which means that a lot of nodes must 
be joining.  The interesting message here is:

{noformat}
DEBUG [GossipStage:1] 2014-08-12 11:33:18,801 FailureDetector.java (line 338) 
Ignoring interval time of 2085963047
{noformat}

This means it didn't see a heartbeat from that node for *241 days*, which 
almost certainly points to a system clock problem of some sort.  I strongly 
suspect this is environmental, not a bug in Cassandra.

> Cassandra nodes periodically die in 2-DC configuration
> --
>
> Key: CASSANDRA-8245
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8245
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
> Environment: Scientific Linux release 6.5
> java version "1.7.0_51"
> Cassandra 2.0.9
>Reporter: Oleg Poleshuk
>Assignee: Brandon Williams
>Priority: Minor
> Attachments: stack1.txt, stack2.txt, stack3.txt, stack4.txt, 
> stack5.txt
>
>
> We have 2 DCs with 3 nodes in each.
> Second DC periodically has 1-2 nodes down.
> Looks like it loses connectivity with the other nodes and then Gossiper starts 
> to accumulate tasks until Cassandra dies with OOM.
> WARN [MemoryMeter:1] 2014-08-12 14:34:59,803 Memtable.java (line 470) setting 
> live ratio to maximum of 64.0 instead of Infinity
>  WARN [GossipTasks:1] 2014-08-12 14:44:34,866 Gossiper.java (line 637) Gossip 
> stage has 1 pending tasks; skipping status check (no nodes will be marked 
> down)
>  WARN [GossipTasks:1] 2014-08-12 14:44:35,968 Gossiper.java (line 637) Gossip 
> stage has 4 pending tasks; skipping status check (no nodes will be marked 
> down)
>  WARN [GossipTasks:1] 2014-08-12 14:44:37,070 Gossiper.java (line 637) Gossip 
> stage has 8 pending tasks; skipping status check (no nodes will be marked 
> down)
>  WARN [GossipTasks:1] 2014-08-12 14:44:38,171 Gossiper.java (line 637) Gossip 
> stage has 11 pending tasks; skipping status check (no nodes will be marked 
> down)
> ...
> WARN [GossipTasks:1] 2014-10-06 21:42:51,575 Gossiper.java (line 637) Gossip 
> stage has 1014764 pending tasks; skipping status check (no nodes will be 
> marked down)
>  WARN [New I/O worker #13] 2014-10-06 21:54:27,010 Slf4JLogger.java (line 76) 
> Unexpected exception in the selector loop.
> java.lang.OutOfMemoryError: Java heap space
> Also those lines but not sure it is relevant:
> DEBUG [GossipStage:1] 2014-08-12 11:33:18,801 FailureDetector.java (line 338) 
> Ignoring interval time of 2085963047



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-8306) exception in nodetool enablebinary

2014-11-14 Thread Russ Hatch (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8306?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Russ Hatch updated CASSANDRA-8306:
--
Assignee: (was: Russ Hatch)

> exception in nodetool enablebinary
> --
>
> Key: CASSANDRA-8306
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8306
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Rafał Furmański
> Attachments: system.log.zip
>
>
> I was trying to add a new node (db4) to an existing cluster - with no luck. I 
> can't see any errors in system.log. nodetool status shows that the node is 
> joining the cluster (for many hours). Attaching error and cluster info:
> {code}
> root@db4:~# nodetool enablebinary
> error: Error starting native transport: null
> -- StackTrace --
> java.lang.RuntimeException: Error starting native transport: null
>   at 
> org.apache.cassandra.service.StorageService.startNativeTransport(StorageService.java:350)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at sun.reflect.misc.Trampoline.invoke(MethodUtil.java:75)
>   at sun.reflect.GeneratedMethodAccessor2.invoke(Unknown Source)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at sun.reflect.misc.MethodUtil.invoke(MethodUtil.java:279)
>   at 
> com.sun.jmx.mbeanserver.StandardMBeanIntrospector.invokeM2(StandardMBeanIntrospector.java:112)
>   at 
> com.sun.jmx.mbeanserver.StandardMBeanIntrospector.invokeM2(StandardMBeanIntrospector.java:46)
>   at 
> com.sun.jmx.mbeanserver.MBeanIntrospector.invokeM(MBeanIntrospector.java:237)
>   at com.sun.jmx.mbeanserver.PerInterface.invoke(PerInterface.java:138)
>   at com.sun.jmx.mbeanserver.MBeanSupport.invoke(MBeanSupport.java:252)
>   at 
> com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.invoke(DefaultMBeanServerInterceptor.java:819)
>   at 
> com.sun.jmx.mbeanserver.JmxMBeanServer.invoke(JmxMBeanServer.java:801)
>   at 
> javax.management.remote.rmi.RMIConnectionImpl.doOperation(RMIConnectionImpl.java:1487)
>   at 
> javax.management.remote.rmi.RMIConnectionImpl.access$300(RMIConnectionImpl.java:97)
>   at 
> javax.management.remote.rmi.RMIConnectionImpl$PrivilegedOperation.run(RMIConnectionImpl.java:1328)
>   at 
> javax.management.remote.rmi.RMIConnectionImpl.doPrivilegedOperation(RMIConnectionImpl.java:1420)
>   at 
> javax.management.remote.rmi.RMIConnectionImpl.invoke(RMIConnectionImpl.java:848)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at sun.rmi.server.UnicastServerRef.dispatch(UnicastServerRef.java:322)
>   at sun.rmi.transport.Transport$1.run(Transport.java:177)
>   at sun.rmi.transport.Transport$1.run(Transport.java:174)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at sun.rmi.transport.Transport.serviceCall(Transport.java:173)
>   at 
> sun.rmi.transport.tcp.TCPTransport.handleMessages(TCPTransport.java:556)
>   at 
> sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run0(TCPTransport.java:811)
>   at 
> sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run(TCPTransport.java:670)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>   at java.lang.Thread.run(Thread.java:745)
> {code}
> {code}
> root@db4:~# nodetool describecluster
> Cluster Information:
>   Name: Production Cluster
>   Snitch: org.apache.cassandra.locator.DynamicEndpointSnitch
>   Partitioner: org.apache.cassandra.dht.Murmur3Partitioner
>   Schema versions:
>   b7e98bb9-717f-3f59-bac4-84bc19544e90: [10.195.15.163, 
> 10.195.15.162, 10.195.15.167, 10.195.15.166]
> {code}
> {code}
> root@db4:~# nodetool status
> Datacenter: Ashburn
> ===
> Status=Up/Down
> |/ State=Normal/Leaving/Joining/Moving
> --  AddressLoad   Tokens  OwnsHost ID 
>   Rack
> UN  10.195.15.163  12.05 GB   256 ?   
> 0a9f478c-80b5-4c15-8b2e-e27df6684c69  RAC1
> UN  10.195.15.162  12.8 GB256 ?   
> c18d2218-ef84-4165-9c3a-05f592f512e9  RAC1
> UJ  10.195.15.167  18.61 GB   256 ?   

[jira] [Commented] (CASSANDRA-8306) exception in nodetool enablebinary

2014-11-14 Thread Michael Shuler (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8306?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14212694#comment-14212694
 ] 

Michael Shuler commented on CASSANDRA-8306:
---

Could you explain exactly how you went about adding this node to your 
cluster?  The provided log is 171480 lines of tombstone processing and only 54 
lines of normal INFO/WARN log entries.

> exception in nodetool enablebinary
> --
>
> Key: CASSANDRA-8306
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8306
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Rafał Furmański
>Assignee: Russ Hatch
> Attachments: system.log.zip
>
>
> I was trying to add a new node (db4) to an existing cluster - with no luck. I 
> can't see any errors in system.log. nodetool status shows that the node is 
> joining the cluster (for many hours). Attaching error and cluster info:
> {code}
> root@db4:~# nodetool enablebinary
> error: Error starting native transport: null
> -- StackTrace --
> java.lang.RuntimeException: Error starting native transport: null
>   at 
> org.apache.cassandra.service.StorageService.startNativeTransport(StorageService.java:350)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at sun.reflect.misc.Trampoline.invoke(MethodUtil.java:75)
>   at sun.reflect.GeneratedMethodAccessor2.invoke(Unknown Source)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at sun.reflect.misc.MethodUtil.invoke(MethodUtil.java:279)
>   at 
> com.sun.jmx.mbeanserver.StandardMBeanIntrospector.invokeM2(StandardMBeanIntrospector.java:112)
>   at 
> com.sun.jmx.mbeanserver.StandardMBeanIntrospector.invokeM2(StandardMBeanIntrospector.java:46)
>   at 
> com.sun.jmx.mbeanserver.MBeanIntrospector.invokeM(MBeanIntrospector.java:237)
>   at com.sun.jmx.mbeanserver.PerInterface.invoke(PerInterface.java:138)
>   at com.sun.jmx.mbeanserver.MBeanSupport.invoke(MBeanSupport.java:252)
>   at 
> com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.invoke(DefaultMBeanServerInterceptor.java:819)
>   at 
> com.sun.jmx.mbeanserver.JmxMBeanServer.invoke(JmxMBeanServer.java:801)
>   at 
> javax.management.remote.rmi.RMIConnectionImpl.doOperation(RMIConnectionImpl.java:1487)
>   at 
> javax.management.remote.rmi.RMIConnectionImpl.access$300(RMIConnectionImpl.java:97)
>   at 
> javax.management.remote.rmi.RMIConnectionImpl$PrivilegedOperation.run(RMIConnectionImpl.java:1328)
>   at 
> javax.management.remote.rmi.RMIConnectionImpl.doPrivilegedOperation(RMIConnectionImpl.java:1420)
>   at 
> javax.management.remote.rmi.RMIConnectionImpl.invoke(RMIConnectionImpl.java:848)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at sun.rmi.server.UnicastServerRef.dispatch(UnicastServerRef.java:322)
>   at sun.rmi.transport.Transport$1.run(Transport.java:177)
>   at sun.rmi.transport.Transport$1.run(Transport.java:174)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at sun.rmi.transport.Transport.serviceCall(Transport.java:173)
>   at 
> sun.rmi.transport.tcp.TCPTransport.handleMessages(TCPTransport.java:556)
>   at 
> sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run0(TCPTransport.java:811)
>   at 
> sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run(TCPTransport.java:670)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>   at java.lang.Thread.run(Thread.java:745)
> {code}
> {code}
> root@db4:~# nodetool describecluster
> Cluster Information:
>   Name: Production Cluster
>   Snitch: org.apache.cassandra.locator.DynamicEndpointSnitch
>   Partitioner: org.apache.cassandra.dht.Murmur3Partitioner
>   Schema versions:
>   b7e98bb9-717f-3f59-bac4-84bc19544e90: [10.195.15.163, 
> 10.195.15.162, 10.195.15.167, 10.195.15.166]
> {code}
> {code}
> root@db4:~# nodetool status
> Datacenter: Ashburn
> ===
> Status=Up/Down
> |/ State=Normal/Leaving/Joining/Moving
> --  AddressLoad   Tokens  OwnsHost ID   

[jira] [Commented] (CASSANDRA-8318) Unable to replace a node

2014-11-14 Thread Brandon Williams (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8318?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14212627#comment-14212627
 ] 

Brandon Williams commented on CASSANDRA-8318:
-

bq. The latter is somewhat surprising since that is the IP of the actual 
replacement node. It doesn't surprise me it can't talk to itself if it hasn't 
started!

That's a perfectly normal message to see during replacement, since we consider 
the replacee to be 'down' while it's performing the replace.

bq. Should I expect it to be there?

No, but it shouldn't hurt, either.  Numerous bugs around this have been fixed 
since 2.0.8.  What actualy token does .51 have?

> Unable to replace a node
> 
>
> Key: CASSANDRA-8318
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8318
> Project: Cassandra
>  Issue Type: Bug
> Environment: 2.0.8.39 (Datastax DSE 4.5.3)
>Reporter: Jens Rantil
> Attachments: X.X.X.56.log
>
>
> Had a hardware failure of a node. I followed the Datastax documentation[1] on 
> how to replace the node X.X.X.51 using a brand new node with the same IP. 
> Since it didn't come up after waiting for ~5 minutes or so, I decided to 
> replace X.X.X.51 with a brand new unused IP X.X.X.56 instead. It now seems 
> like my gossip is in some weird state. When I start the replacement node I see 
> lines like
> {noformat}
>  INFO [GossipStage:1] 2014-11-14 14:57:03,025 Gossiper.java (line 901) 
> InetAddress /X.X.X.51 is now DOWN
>  INFO [GossipStage:1] 2014-11-14 14:57:03,042 Gossiper.java (line 901) 
> InetAddress /X.X.X.56 is now DOWN
> {noformat}
> . The latter is somewhat surprising since that is the IP of the actual 
> replacement node. It doesn't surprise me it can't talk to itself if it hasn't 
> started!
> Eventually the replacement node shuts down with
> {noformat}
> ERROR [main] 2014-11-14 14:58:06,031 CassandraDaemon.java (line 513) 
> Exception encountered during startup
> java.lang.UnsupportedOperationException: Cannot replace token -2 which does 
> not exist!
>   at 
> org.apache.cassandra.service.StorageService.joinTokenRing(StorageService.java:782)
>   at 
> org.apache.cassandra.service.StorageService.initServer(StorageService.java:614)
>   at 
> org.apache.cassandra.service.StorageService.initServer(StorageService.java:503)
>   at 
> org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:378)
>   at com.datastax.bdp.server.DseDaemon.setup(DseDaemon.java:374)
>   at 
> org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:496)
>   at com.datastax.bdp.server.DseDaemon.main(DseDaemon.java:615)
>  INFO [Thread-2] 2014-11-14 14:58:06,035 DseDaemon.java (line 461) DSE 
> shutting down...
>  INFO [StorageServiceShutdownHook] 2014-11-14 14:58:06,037 Gossiper.java 
> (line 1307) Announcing shutdown
>  INFO [Thread-2] 2014-11-14 14:58:06,046 PluginManager.java (line 355) All 
> plugins are stopped.
>  INFO [Thread-2] 2014-11-14 14:58:06,047 CassandraDaemon.java (line 463) 
> Cassandra shutting down...
> ERROR [Thread-2] 2014-11-14 14:58:06,047 CassandraDaemon.java (line 199) 
> Exception in thread Thread[Thread-2,5,main]
> java.lang.NullPointerException
>   at 
> org.apache.cassandra.service.CassandraDaemon.stop(CassandraDaemon.java:464)
>   at com.datastax.bdp.server.DseDaemon.stop(DseDaemon.java:464)
>   at com.datastax.bdp.server.DseDaemon$1.run(DseDaemon.java:364){noformat}
> All nodes are showing
> {noformat}
> root@machine-2:~# nodetool status company
> Datacenter: Analytics
> =
> Status=Up/Down
> |/ State=Normal/Leaving/Joining/Moving
> --  Address Load   Tokens  Owns (effective)  Host ID  
>  Rack
> UN  X.X.X.50  18.35 GB   1   16.7% 
> 25efdbcd-14d3-4e9c-803a-3db5795d4efa  rack1
> DN  X.X.X.51  195.67 KB  1   16.7% 
> d97cf86f-bfaf-4488-b716-26d71635a8fc  rack1
> UN  X.X.X.52  18.7 GB1   16.7% 
> caa32f68-5a6b-4d87-80bd-baa66a9b61ce  rack1
> UN  X.X.X.53  18.56 GB   1   16.7% 
> e219321e-a6d5-48c4-9bad-d2e25429b1d2  rack1
> UN  X.X.X.54  19.69 GB   1   16.7% 
> 3cd36895-ee47-41c1-a5f5-41cb0f8526a6  rack1
> UN  X.X.X.55  18.88 GB   1   16.7% 
> 7d3f73c4-724e-45a6-bcf9-d3072dfc157f  rack1
> Datacenter: Cassandra
> =
> Status=Up/Down
> |/ State=Normal/Leaving/Joining/Moving
> --  Address Load   Tokens  Owns (effective)  Host ID  
>  Rack
> UN  X.X.X.33  128.95 GB  256 100.0%
> 871968c9-1d6b-4f06-ba90-8b3a8d92dcf0  rack1
> UN  X.X.X.32  115.3 GB   256 100.0%
> d7cacd89-8613-4de5-8a5e-a2c53c41ea45  rack1
> UN  X.X.X.31  130.45 GB  256 100.0%
> 48cb0782-6c9a-4805-9330-38e192b6b680  rack1
> {nofo

[jira] [Comment Edited] (CASSANDRA-8318) Unable to replace a node

2014-11-14 Thread Brandon Williams (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8318?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14212627#comment-14212627
 ] 

Brandon Williams edited comment on CASSANDRA-8318 at 11/14/14 6:55 PM:
---

bq. The latter is somewhat surprising since that is the IP of the actual 
replacement node. It doesn't surprise me it can't talk to itself if it hasn't 
started!

That's a perfectly normal message to see during replacement, since we consider 
the replacee to be 'down' while it's performing the replace.

bq. Should I expect it to be there?

No, but it shouldn't hurt, either.  Numerous bugs around this have been fixed 
since 2.0.8.  What actual token does .51 have?


was (Author: brandon.williams):
bq. The latter is somewhat surprising since that is the IP of the actual 
replacement node. It doesn't surprise me it can't talk to itself if it hasn't 
started!

That's a perfectly normal message to see during replacement, since we consider 
the replacee to be 'down' while it's performing the replace.

bq. Should I expect it to be there?

No, but it shouldn't hurt, either.  Numerous bugs around this have been fixed 
since 2.0.8.  What actualy token does .51 have?

> Unable to replace a node
> 
>
> Key: CASSANDRA-8318
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8318
> Project: Cassandra
>  Issue Type: Bug
> Environment: 2.0.8.39 (Datastax DSE 4.5.3)
>Reporter: Jens Rantil
> Attachments: X.X.X.56.log
>
>
> Had a hardware failure of a node. I followed the Datastax documentation[1] on 
> how to replace the node X.X.X.51 using a brand new node with the same IP. 
> Since it didn't come up after waiting for ~5 minutes or so, I decided to 
> replace X.X.X.51 with a brand new unused IP X.X.X.56 instead. It now seems 
> like my gossip is in some weird state. When I start the replacement node I see 
> lines like
> {noformat}
>  INFO [GossipStage:1] 2014-11-14 14:57:03,025 Gossiper.java (line 901) 
> InetAddress /X.X.X.51 is now DOWN
>  INFO [GossipStage:1] 2014-11-14 14:57:03,042 Gossiper.java (line 901) 
> InetAddress /X.X.X.56 is now DOWN
> {noformat}
> . The latter is somewhat surprising since that is the IP of the actual 
> replacement node. It doesn't surprise me it can't talk to itself if it hasn't 
> started!
> Eventually the replacement node shuts down with
> {noformat}
> ERROR [main] 2014-11-14 14:58:06,031 CassandraDaemon.java (line 513) 
> Exception encountered during startup
> java.lang.UnsupportedOperationException: Cannot replace token -2 which does 
> not exist!
>   at 
> org.apache.cassandra.service.StorageService.joinTokenRing(StorageService.java:782)
>   at 
> org.apache.cassandra.service.StorageService.initServer(StorageService.java:614)
>   at 
> org.apache.cassandra.service.StorageService.initServer(StorageService.java:503)
>   at 
> org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:378)
>   at com.datastax.bdp.server.DseDaemon.setup(DseDaemon.java:374)
>   at 
> org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:496)
>   at com.datastax.bdp.server.DseDaemon.main(DseDaemon.java:615)
>  INFO [Thread-2] 2014-11-14 14:58:06,035 DseDaemon.java (line 461) DSE 
> shutting down...
>  INFO [StorageServiceShutdownHook] 2014-11-14 14:58:06,037 Gossiper.java 
> (line 1307) Announcing shutdown
>  INFO [Thread-2] 2014-11-14 14:58:06,046 PluginManager.java (line 355) All 
> plugins are stopped.
>  INFO [Thread-2] 2014-11-14 14:58:06,047 CassandraDaemon.java (line 463) 
> Cassandra shutting down...
> ERROR [Thread-2] 2014-11-14 14:58:06,047 CassandraDaemon.java (line 199) 
> Exception in thread Thread[Thread-2,5,main]
> java.lang.NullPointerException
>   at 
> org.apache.cassandra.service.CassandraDaemon.stop(CassandraDaemon.java:464)
>   at com.datastax.bdp.server.DseDaemon.stop(DseDaemon.java:464)
>   at com.datastax.bdp.server.DseDaemon$1.run(DseDaemon.java:364){noformat}
> All nodes are showing
> {noformat}
> root@machine-2:~# nodetool status company
> Datacenter: Analytics
> =
> Status=Up/Down
> |/ State=Normal/Leaving/Joining/Moving
> --  Address Load   Tokens  Owns (effective)  Host ID  
>  Rack
> UN  X.X.X.50  18.35 GB   1   16.7% 
> 25efdbcd-14d3-4e9c-803a-3db5795d4efa  rack1
> DN  X.X.X.51  195.67 KB  1   16.7% 
> d97cf86f-bfaf-4488-b716-26d71635a8fc  rack1
> UN  X.X.X.52  18.7 GB1   16.7% 
> caa32f68-5a6b-4d87-80bd-baa66a9b61ce  rack1
> UN  X.X.X.53  18.56 GB   1   16.7% 
> e219321e-a6d5-48c4-9bad-d2e25429b1d2  rack1
> UN  X.X.X.54  19.69 GB   1   16.7% 
> 3cd36895-ee47-41c1-a5f5-41cb0f8526a6  rack1
> UN  X.X.X.55  18.88 GB   1   16.7% 
> 7d3f7

[jira] [Updated] (CASSANDRA-8313) cassandra-stress: No way to pass in data center hint for DCAwareRoundRobinPolicy

2014-11-14 Thread Michael Shuler (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8313?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Shuler updated CASSANDRA-8313:
--
Assignee: T Jake Luciani

[~tjake] could you take a stab at this?

> cassandra-stress: No way to pass in data center hint for 
> DCAwareRoundRobinPolicy
> 
>
> Key: CASSANDRA-8313
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8313
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Tools
>Reporter: Bob Nilsen
>Assignee: T Jake Luciani
>
> When using cassandra-stress in a multiple datacenter configuration, we need 
> to be able to behave like the applications do and send traffic to nodes 
> co-located in the same data center.
> I can't for the life of me figure out how to pass in such a hint into the new 
> cassandra-stress.
> And passing in a local node into "-node" doesn't help.  Apparently, 
> cassandra-stress will *guess* the data center based on the order of the list 
> that it receives from the cluster.
> In my case, it seems to always pick 'DC2', no matter what I do.
> INFO  22:17:06 Using data-center name 'DC2' for DCAwareRoundRobinPolicy (if 
> this is incorrect, please provide the correct datacenter name with 
> DCAwareRoundRobinPolicy constructor)
> Could someone please add the ability to configure the DCAwareRoundRobinPolicy 
> with a data center hint?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8305) add check of the system wall clock time at startup

2014-11-14 Thread Brandon Williams (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8305?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14212619#comment-14212619
 ] 

Brandon Williams commented on CASSANDRA-8305:
-

Well, it's a sanity check; if it's going to be wrong, it's going to be *really* 
wrong, like CASSANDRA-8296 :)
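
For context, here is a minimal, self-contained sketch of the kind of startup sanity 
check being discussed. It is only an illustration under assumptions: the cutoff date 
and the class/constant names are invented for this sketch, and it is not the contents 
of the attached v1.txt patch.

{code}
public class ClockSanityCheck
{
    // Assumption for illustration: any wall-clock reading before mid-2008 is
    // treated as obviously bogus.
    private static final long EARLIEST_PLAUSIBLE_MILLIS = 1215820800000L; // 2008-07-12 UTC

    // Refuse to start if the clock is earlier than the cutoff, so a bad gossip
    // generation is never derived from it.
    public static void checkWallClock()
    {
        long now = System.currentTimeMillis();
        if (now < EARLIEST_PLAUSIBLE_MILLIS)
            throw new IllegalStateException("System clock reads epoch millis " + now
                    + ", earlier than " + EARLIEST_PLAUSIBLE_MILLIS
                    + "; refusing to start and derive a bad gossip generation from it");
    }

    public static void main(String[] args)
    {
        checkWallClock();
        System.out.println("Wall clock looks sane: epoch millis " + System.currentTimeMillis());
    }
}
{code}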

> add check of the system wall clock time at startup
> --
>
> Key: CASSANDRA-8305
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8305
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Core
>Reporter: Jason Brown
>Assignee: Jason Brown
>Priority: Minor
>  Labels: clock, gossip
> Fix For: 3.0, 2.1.3
>
> Attachments: v1.txt
>
>
> Related to CASSANDRA-8296, we should add a check of the system wall clock at 
> startup to make sure it 'looks' reasonable. This check will prevent a node 
> from starting with a bad generation in its gossip metadata (and causing many 
> problems downstream of that).
> Note that this is intended as a simple check of the clock at startup and not 
> a comprehensive, ongoing check of clocks during the running of the process. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (CASSANDRA-8305) add check of the system wall clock time at startup

2014-11-14 Thread sankalp kohli (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8305?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14212614#comment-14212614
 ] 

sankalp kohli edited comment on CASSANDRA-8305 at 11/14/14 6:47 PM:


" Date roughly taken from 
http://perspectives.mvdirona.com/2008/07/12/FacebookReleasesCassandraAsOpenSource.aspx";

[~jasobrown]
lol. We can take yesterday's date, as this code was checked in yesterday :)  

You cannot run code before it was written :)


was (Author: kohlisankalp):
" Date roughly taken from 
http://perspectives.mvdirona.com/2008/07/12/FacebookReleasesCassandraAsOpenSource.aspx";

[~jasobrown]
lol. We can take yesterday's date, as this code was checked in yesterday :)  

> add check of the system wall clock time at startup
> --
>
> Key: CASSANDRA-8305
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8305
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Core
>Reporter: Jason Brown
>Assignee: Jason Brown
>Priority: Minor
>  Labels: clock, gossip
> Fix For: 3.0, 2.1.3
>
> Attachments: v1.txt
>
>
> Related to CASSANDRA-8296, we should add a check of the system wall clock at 
> startup to make sure it 'looks' reasonable. This check will prevent a node 
> from starting with a bad generation in its gossip metadata (and causing many 
> problems downstream of that).
> Note that this is intended as a simple check of the clock at startup and not 
> a comprehensive, ongoing check of clocks during the running of the process. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8313) cassandra-stress: No way to pass in data center hint for DCAwareRoundRobinPolicy

2014-11-14 Thread Brandon Williams (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8313?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14212615#comment-14212615
 ] 

Brandon Williams commented on CASSANDRA-8313:
-

I think what you want is the 'whitelist' option with only the DC1 nodes 
specified.
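
In case a concrete picture helps, below is a hedged sketch of what such a data-center 
hint looks like at the driver level, using the DataStax Java driver's 
DCAwareRoundRobinPolicy with an explicit local DC name. The contact point and the 
"DC1" name are placeholders, and this only illustrates the requested behaviour; it is 
not a change to cassandra-stress itself.

{code}
import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.Session;
import com.datastax.driver.core.policies.DCAwareRoundRobinPolicy;

public class LocalDcHintExample
{
    public static void main(String[] args)
    {
        // Pin the load balancing policy to a specific data center instead of
        // letting the driver guess from the nodes it happens to see first.
        Cluster cluster = Cluster.builder()
                .addContactPoint("10.0.0.1") // placeholder contact point
                .withLoadBalancingPolicy(new DCAwareRoundRobinPolicy("DC1"))
                .build();

        Session session = cluster.connect();
        System.out.println("Connected with local DC hint 'DC1'");
        cluster.close();
    }
}
{code}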

> cassandra-stress: No way to pass in data center hint for 
> DCAwareRoundRobinPolicy
> 
>
> Key: CASSANDRA-8313
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8313
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Tools
>Reporter: Bob Nilsen
>
> When using cassandra-stress in a multiple datacenter configuration, we need 
> to be able to behave like the applications do and send traffic to nodes 
> co-located in the same data center.
> I can't for the life of me figure out how to pass in such a hint into the new 
> cassandra-stress.
> And passing in a local node into "-node" doesn't help.  Apparently, 
> cassandra-stress will *guess* the data center based on the order of the list 
> that it receives from the cluster.
> In my case, it seems to always pick 'DC2', no matter what I do.
> INFO  22:17:06 Using data-center name 'DC2' for DCAwareRoundRobinPolicy (if 
> this is incorrect, please provide the correct datacenter name with 
> DCAwareRoundRobinPolicy constructor)
> Could someone please add the ability to configure the DCAwareRoundRobinPolicy 
> with a data center hint?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8305) add check of the system wall clock time at startup

2014-11-14 Thread sankalp kohli (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8305?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14212614#comment-14212614
 ] 

sankalp kohli commented on CASSANDRA-8305:
--

" Date roughly taken from 
http://perspectives.mvdirona.com/2008/07/12/FacebookReleasesCassandraAsOpenSource.aspx";

[~jasobrown]
lol. We can take yesterday's date, as this code was checked in yesterday :)  

> add check of the system wall clock time at startup
> --
>
> Key: CASSANDRA-8305
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8305
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Core
>Reporter: Jason Brown
>Assignee: Jason Brown
>Priority: Minor
>  Labels: clock, gossip
> Fix For: 3.0, 2.1.3
>
> Attachments: v1.txt
>
>
> Related to CASSANDRA-8296, we should add a check of the system wall clock at 
> startup to make sure it 'looks' reasonable. This check will prevent a node 
> from starting with a bad generation in its gossip metadata (and causing many 
> problems downstream of that).
> Note that this is intended as a simple check of the clock at startup and not 
> a comprehensive, ongoing check of clocks during the running of the process. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8306) exception in nodetool enablebinary

2014-11-14 Thread Brandon Williams (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8306?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14212613#comment-14212613
 ] 

Brandon Williams commented on CASSANDRA-8306:
-

bq. I'd still be interested to know if setting start_native_transport to true 
helps the node join in this case.

I'm 100% certain that won't affect joining.

> exception in nodetool enablebinary
> --
>
> Key: CASSANDRA-8306
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8306
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Rafał Furmański
>Assignee: Russ Hatch
> Attachments: system.log.zip
>
>
> I was trying to add a new node (db4) to an existing cluster - with no luck. I 
> can't see any errors in system.log. nodetool status shows that the node is 
> joining the cluster (for many hours). Attaching error and cluster info:
> {code}
> root@db4:~# nodetool enablebinary
> error: Error starting native transport: null
> -- StackTrace --
> java.lang.RuntimeException: Error starting native transport: null
>   at 
> org.apache.cassandra.service.StorageService.startNativeTransport(StorageService.java:350)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at sun.reflect.misc.Trampoline.invoke(MethodUtil.java:75)
>   at sun.reflect.GeneratedMethodAccessor2.invoke(Unknown Source)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at sun.reflect.misc.MethodUtil.invoke(MethodUtil.java:279)
>   at 
> com.sun.jmx.mbeanserver.StandardMBeanIntrospector.invokeM2(StandardMBeanIntrospector.java:112)
>   at 
> com.sun.jmx.mbeanserver.StandardMBeanIntrospector.invokeM2(StandardMBeanIntrospector.java:46)
>   at 
> com.sun.jmx.mbeanserver.MBeanIntrospector.invokeM(MBeanIntrospector.java:237)
>   at com.sun.jmx.mbeanserver.PerInterface.invoke(PerInterface.java:138)
>   at com.sun.jmx.mbeanserver.MBeanSupport.invoke(MBeanSupport.java:252)
>   at 
> com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.invoke(DefaultMBeanServerInterceptor.java:819)
>   at 
> com.sun.jmx.mbeanserver.JmxMBeanServer.invoke(JmxMBeanServer.java:801)
>   at 
> javax.management.remote.rmi.RMIConnectionImpl.doOperation(RMIConnectionImpl.java:1487)
>   at 
> javax.management.remote.rmi.RMIConnectionImpl.access$300(RMIConnectionImpl.java:97)
>   at 
> javax.management.remote.rmi.RMIConnectionImpl$PrivilegedOperation.run(RMIConnectionImpl.java:1328)
>   at 
> javax.management.remote.rmi.RMIConnectionImpl.doPrivilegedOperation(RMIConnectionImpl.java:1420)
>   at 
> javax.management.remote.rmi.RMIConnectionImpl.invoke(RMIConnectionImpl.java:848)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at sun.rmi.server.UnicastServerRef.dispatch(UnicastServerRef.java:322)
>   at sun.rmi.transport.Transport$1.run(Transport.java:177)
>   at sun.rmi.transport.Transport$1.run(Transport.java:174)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at sun.rmi.transport.Transport.serviceCall(Transport.java:173)
>   at 
> sun.rmi.transport.tcp.TCPTransport.handleMessages(TCPTransport.java:556)
>   at 
> sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run0(TCPTransport.java:811)
>   at 
> sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run(TCPTransport.java:670)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>   at java.lang.Thread.run(Thread.java:745)
> {code}
> {code}
> root@db4:~# nodetool describecluster
> Cluster Information:
>   Name: Production Cluster
>   Snitch: org.apache.cassandra.locator.DynamicEndpointSnitch
>   Partitioner: org.apache.cassandra.dht.Murmur3Partitioner
>   Schema versions:
>   b7e98bb9-717f-3f59-bac4-84bc19544e90: [10.195.15.163, 
> 10.195.15.162, 10.195.15.167, 10.195.15.166]
> {code}
> {code}
> root@db4:~# nodetool status
> Datacenter: Ashburn
> ===
> Status=Up/Down
> |/ State=Normal/Leaving/Joining/Moving
> --  AddressLoad   Tokens  OwnsHost ID 
>   Rack
> UN  

[jira] [Updated] (CASSANDRA-8251) CQLSSTableWriter.builder() throws ex when more than one table

2014-11-14 Thread Carl Yeksigian (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8251?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Carl Yeksigian updated CASSANDRA-8251:
--
Attachment: 8251-2.0.txt

> CQLSSTableWriter.builder() throws ex when more than one table
> -
>
> Key: CASSANDRA-8251
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8251
> Project: Cassandra
>  Issue Type: Bug
> Environment: ubuntu x64
>Reporter: Pierre
>Assignee: Carl Yeksigian
> Fix For: 2.0.12
>
> Attachments: 8251-2.0.txt
>
>
> Tested with latest trunk (from github).
> This is not the same bug as in 2.1.1, where you can't use two different tables 
> because they weren't added to KSMetaData, but it is related because it occurs under 
> the same condition (more than one table) at CQLSSTableWriter.java#L360.
> Static code in Keyspace calls DatabaseDescriptor.createAllDirectories() 
> because StorageService.instance is not in clientMode, and throws an exception 
> because of a NullPointerException.
> To reproduce the bug: 
> {noformat}
> Config.setClientMode(true);
> CQLSSTableWriter.builder()
> .inDirectory("/var/tmp/kspc/t1")
> .forTable("create table kspc.t1 ( id  int, primary key 
> (id));")
> .using("INSERT INTO kspc.t1 (id) VALUES ( ? );")
> .build();
> CQLSSTableWriter.builder()
> .inDirectory("/var/tmp/kspc/t2")
> .forTable("create table kspc.t2 ( id  int, primary key 
> (id));")
> .using("INSERT INTO kspc.t2 (id) VALUES ( ? );")
> .build();
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-8264) Problems with multicolumn relations and COMPACT STORAGE

2014-11-14 Thread Tyler Hobbs (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8264?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tyler Hobbs updated CASSANDRA-8264:
---
Reviewer: Benjamin Lerer

> Problems with multicolumn relations and COMPACT STORAGE
> ---
>
> Key: CASSANDRA-8264
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8264
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
>Reporter: Tyler Hobbs
>Assignee: Tyler Hobbs
> Fix For: 2.0.12, 2.1.3
>
> Attachments: 8264-2.0.txt, 8264-2.1.txt
>
>
> As discovered in CASSANDRA-7859, there are a few issues with multi-column 
> relations and {{COMPACT STORAGE}}.
> The first issue is that IN clauses on multiple columns aren't handled 
> correctly.  There appear to be other issues as well, but I haven't been able 
> to dig into them yet.  To reproduce the issues, run each of the tests in 
> {{MultiColumnRelationTest}} with a {{COMPACT STORAGE}} version of the table.  
> (Changing the tests to do that automatically will be part of the ticket.)
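
For concreteness, here is a hedged sketch of the kind of statement involved: a 
multi-column IN relation against a COMPACT STORAGE table, driven from the Java 
driver. The keyspace, table, and values are made up for illustration; the real cases 
live in MultiColumnRelationTest.

{code}
import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.Session;

public class MultiColumnInExample
{
    public static void main(String[] args)
    {
        Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
        Session session = cluster.connect();

        // Illustrative schema: a COMPACT STORAGE table with two clustering columns.
        session.execute("CREATE KEYSPACE IF NOT EXISTS ks WITH replication = "
                + "{'class': 'SimpleStrategy', 'replication_factor': 1}");
        session.execute("CREATE TABLE IF NOT EXISTS ks.t (k int, c1 int, c2 int, v int, "
                + "PRIMARY KEY (k, c1, c2)) WITH COMPACT STORAGE");

        // The kind of relation the ticket is about: an IN clause over multiple
        // clustering columns.
        session.execute("SELECT * FROM ks.t WHERE k = 0 AND (c1, c2) IN ((0, 0), (1, 1))");

        cluster.close();
    }
}
{code}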



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-8264) Problems with multicolumn relations and COMPACT STORAGE

2014-11-14 Thread Tyler Hobbs (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8264?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tyler Hobbs updated CASSANDRA-8264:
---
Attachment: 8264-2.1.txt
8264-2.0.txt

> Problems with multicolumn relations and COMPACT STORAGE
> ---
>
> Key: CASSANDRA-8264
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8264
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
>Reporter: Tyler Hobbs
>Assignee: Tyler Hobbs
> Fix For: 2.0.12, 2.1.3
>
> Attachments: 8264-2.0.txt, 8264-2.1.txt
>
>
> As discovered in CASSANDRA-7859, there are a few issues with multi-column 
> relations and {{COMPACT STORAGE}}.
> The first issue is that IN clauses on multiple columns aren't handled 
> correctly.  There appear to be other issues as well, but I haven't been able 
> to dig into them yet.  To reproduce the issues, run each of the tests in 
> {{MultiColumnRelationTest}} with a {{COMPACT STORAGE}} version of the table.  
> (Changing the tests to do that automatically will be part of the ticket.)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


cassandra git commit: Remove post-2.1 dead schema migrations and columns

2014-11-14 Thread aleksey
Repository: cassandra
Updated Branches:
  refs/heads/trunk 58fdc6b5c -> cbbc1191c


Remove post-2.1 dead schema migrations and columns


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/cbbc1191
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/cbbc1191
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/cbbc1191

Branch: refs/heads/trunk
Commit: cbbc1191ce1656a92354a4fa3859626cb10083e5
Parents: 58fdc6b
Author: Aleksey Yeschenko 
Authored: Fri Nov 14 20:58:33 2014 +0300
Committer: Aleksey Yeschenko 
Committed: Fri Nov 14 20:58:33 2014 +0300

--
 .../apache/cassandra/cache/CachingOptions.java  |  9 +--
 .../org/apache/cassandra/config/CFMetaData.java | 45 +-
 .../org/apache/cassandra/db/SystemKeyspace.java | 62 +---
 3 files changed, 5 insertions(+), 111 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/cbbc1191/src/java/org/apache/cassandra/cache/CachingOptions.java
--
diff --git a/src/java/org/apache/cassandra/cache/CachingOptions.java 
b/src/java/org/apache/cassandra/cache/CachingOptions.java
index 6eeaa37..f9c7e64 100644
--- a/src/java/org/apache/cassandra/cache/CachingOptions.java
+++ b/src/java/org/apache/cassandra/cache/CachingOptions.java
@@ -130,11 +130,7 @@ public class CachingOptions
 return result;
 }
 
-public static boolean isLegacy(String CachingOptions)
-{
-return legacyOptions.contains(CachingOptions.toUpperCase());
-}
-
+// FIXME: move to ThriftConversion
 public static CachingOptions fromThrift(String caching, String 
cellsPerRow) throws ConfigurationException
 {
 
@@ -153,6 +149,7 @@ public class CachingOptions
 return new CachingOptions(kc, rc);
 }
 
+// FIXME: move to ThriftConversion
 public String toThriftCaching()
 {
 if (rowCache.isEnabled() && keyCache.isEnabled())
@@ -164,6 +161,7 @@ public class CachingOptions
 return "NONE";
 }
 
+// FIXME: move to ThriftConversion
 public String toThriftCellsPerRow()
 {
 if (rowCache.cacheFullPartitions())
@@ -171,7 +169,6 @@ public class CachingOptions
 return String.valueOf(rowCache.rowsToCache);
 }
 
-
 public static class KeyCache
 {
 public final Type type;

http://git-wip-us.apache.org/repos/asf/cassandra/blob/cbbc1191/src/java/org/apache/cassandra/config/CFMetaData.java
--
diff --git a/src/java/org/apache/cassandra/config/CFMetaData.java 
b/src/java/org/apache/cassandra/config/CFMetaData.java
index 14271c0..2ed4a95 100644
--- a/src/java/org/apache/cassandra/config/CFMetaData.java
+++ b/src/java/org/apache/cassandra/config/CFMetaData.java
@@ -54,7 +54,6 @@ import org.apache.cassandra.utils.ByteBufferUtil;
 import org.apache.cassandra.utils.FBUtilities;
 import org.apache.cassandra.utils.UUIDGen;
 
-import static org.apache.cassandra.utils.FBUtilities.fromJsonList;
 import static org.apache.cassandra.utils.FBUtilities.fromJsonMap;
 import static org.apache.cassandra.utils.FBUtilities.json;
 
@@ -1251,7 +1250,6 @@ public final class CFMetaData
 adder.add("min_compaction_threshold", minCompactionThreshold);
 adder.add("max_compaction_threshold", maxCompactionThreshold);
 adder.add("bloom_filter_fp_chance", getBloomFilterFpChance());
-
 adder.add("memtable_flush_period_in_ms", memtableFlushPeriod);
 adder.add("caching", caching.toString());
 adder.add("default_time_to_live", defaultTimeToLive);
@@ -1260,18 +1258,12 @@ public final class CFMetaData
 adder.add("compaction_strategy_options", 
json(compactionStrategyOptions));
 adder.add("min_index_interval", minIndexInterval);
 adder.add("max_index_interval", maxIndexInterval);
-adder.add("index_interval", null);
 adder.add("speculative_retry", speculativeRetry.toString());
 
 for (Map.Entry entry : 
droppedColumns.entrySet())
 adder.addMapEntry("dropped_columns", entry.getKey().toString(), 
entry.getValue());
 
 adder.add("is_dense", isDense);
-
-// Save the CQL3 metadata "the old way" for compatibility sake
-adder.add("key_aliases", aliasesToJson(partitionKeyColumns));
-adder.add("column_aliases", aliasesToJson(clusteringColumns));
-adder.add("value_alias", compactValueColumn == null ? null : 
compactValueColumn.name.toString());
 }
 
 @VisibleForTesting
@@ -1328,11 +1320,9 @@ public final class CFMetaData
 
cfm.compressionParameters(CompressionParameters.create(fromJsonMap(result.getString("compression_parameters";
 
cfm.compactionStrategyOptions(fromJsonMap(result.ge

[jira] [Commented] (CASSANDRA-8193) Multi-DC parallel snapshot repair

2014-11-14 Thread JIRA

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8193?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14212539#comment-14212539
 ] 

Jimmy Mårdell commented on CASSANDRA-8193:
--

New patch added. I've added an enum for specifying the degree of parallelism. 
This cascaded up in the code path a bit. Backward compatibility should be 
maintained, at the expense of adding a few more forceRepair methods in 
StorageService.

As a side note, can't we remove many of the forceRepair methods in 
StorageServiceMBean in 2.1? It's getting quite ugly. nodetool only uses two of 
them (one with range and one without range).
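
For readers skimming the patch, a rough sketch of the kind of enum being described is 
below. The names and the mapping from the old boolean flags are assumptions made for 
this illustration; see cassandra-2.0-8193-2.txt for the actual change.

{code}
public enum RepairParallelismSketch
{
    /** One replica at a time computes its merkle tree (classic snapshot repair). */
    SEQUENTIAL,
    /** One replica per data center computes its merkle tree at a time. */
    DATACENTER_AWARE,
    /** All replicas compute their merkle trees simultaneously. */
    PARALLEL;

    // Backward-compatible mapping from the pre-existing boolean flags to the enum,
    // so older callers of the JMX repair methods keep working.
    public static RepairParallelismSketch fromFlags(boolean isSequential, boolean dcParallel)
    {
        if (!isSequential)
            return PARALLEL;
        return dcParallel ? DATACENTER_AWARE : SEQUENTIAL;
    }
}
{code}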


> Multi-DC parallel snapshot repair
> -
>
> Key: CASSANDRA-8193
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8193
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Jimmy Mårdell
>Assignee: Jimmy Mårdell
>Priority: Minor
> Fix For: 2.0.12
>
> Attachments: cassandra-2.0-8193-1.txt, cassandra-2.0-8193-2.txt
>
>
> The current behaviour of snapshot repair is to let one node at a time 
> calculate a merkle tree. This is to ensure only one node at a time is doing 
> the expensive calculation. The drawback is that it takes even longer to 
> do the merkle tree calculation.
> In a multi-DC setup, I think it would make more sense to have one node in 
> each DC calculate the merkle tree at the same time. This would yield a 
> significant improvement when you have many data centers.
> I'm not sure how relevant this is in 2.1, but I don't see us upgrading to 2.1 
> any time soon. Unless there is an obvious drawback that I'm missing, I'd like 
> to implement this in the 2.0 branch.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-8193) Multi-DC parallel snapshot repair

2014-11-14 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8193?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jimmy Mårdell updated CASSANDRA-8193:
-
Attachment: cassandra-2.0-8193-2.txt

> Multi-DC parallel snapshot repair
> -
>
> Key: CASSANDRA-8193
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8193
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Jimmy Mårdell
>Assignee: Jimmy Mårdell
>Priority: Minor
> Fix For: 2.0.12
>
> Attachments: cassandra-2.0-8193-1.txt, cassandra-2.0-8193-2.txt
>
>
> The current behaviour of snapshot repair is to let one node at a time 
> calculate a merkle tree. This is to ensure only one node at a time is doing 
> the expensive calculation. The drawback is that it takes even longer to 
> do the merkle tree calculation.
> In a multi-DC setup, I think it would make more sense to have one node in 
> each DC calculate the merkle tree at the same time. This would yield a 
> significant improvement when you have many data centers.
> I'm not sure how relevant this is in 2.1, but I don't see us upgrading to 2.1 
> any time soon. Unless there is an obvious drawback that I'm missing, I'd like 
> to implement this in the 2.0 branch.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-8306) exception in nodetool enablebinary

2014-11-14 Thread Russ Hatch (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8306?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Russ Hatch updated CASSANDRA-8306:
--
Assignee: Russ Hatch

> exception in nodetool enablebinary
> --
>
> Key: CASSANDRA-8306
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8306
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Rafał Furmański
>Assignee: Russ Hatch
> Attachments: system.log.zip
>
>
> I was trying to add a new node (db4) to an existing cluster - with no luck. I 
> can't see any errors in system.log. nodetool status shows that the node has 
> been joining the cluster for many hours. Attaching error and cluster info:
> {code}
> root@db4:~# nodetool enablebinary
> error: Error starting native transport: null
> -- StackTrace --
> java.lang.RuntimeException: Error starting native transport: null
>   at 
> org.apache.cassandra.service.StorageService.startNativeTransport(StorageService.java:350)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at sun.reflect.misc.Trampoline.invoke(MethodUtil.java:75)
>   at sun.reflect.GeneratedMethodAccessor2.invoke(Unknown Source)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at sun.reflect.misc.MethodUtil.invoke(MethodUtil.java:279)
>   at 
> com.sun.jmx.mbeanserver.StandardMBeanIntrospector.invokeM2(StandardMBeanIntrospector.java:112)
>   at 
> com.sun.jmx.mbeanserver.StandardMBeanIntrospector.invokeM2(StandardMBeanIntrospector.java:46)
>   at 
> com.sun.jmx.mbeanserver.MBeanIntrospector.invokeM(MBeanIntrospector.java:237)
>   at com.sun.jmx.mbeanserver.PerInterface.invoke(PerInterface.java:138)
>   at com.sun.jmx.mbeanserver.MBeanSupport.invoke(MBeanSupport.java:252)
>   at 
> com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.invoke(DefaultMBeanServerInterceptor.java:819)
>   at 
> com.sun.jmx.mbeanserver.JmxMBeanServer.invoke(JmxMBeanServer.java:801)
>   at 
> javax.management.remote.rmi.RMIConnectionImpl.doOperation(RMIConnectionImpl.java:1487)
>   at 
> javax.management.remote.rmi.RMIConnectionImpl.access$300(RMIConnectionImpl.java:97)
>   at 
> javax.management.remote.rmi.RMIConnectionImpl$PrivilegedOperation.run(RMIConnectionImpl.java:1328)
>   at 
> javax.management.remote.rmi.RMIConnectionImpl.doPrivilegedOperation(RMIConnectionImpl.java:1420)
>   at 
> javax.management.remote.rmi.RMIConnectionImpl.invoke(RMIConnectionImpl.java:848)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at sun.rmi.server.UnicastServerRef.dispatch(UnicastServerRef.java:322)
>   at sun.rmi.transport.Transport$1.run(Transport.java:177)
>   at sun.rmi.transport.Transport$1.run(Transport.java:174)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at sun.rmi.transport.Transport.serviceCall(Transport.java:173)
>   at 
> sun.rmi.transport.tcp.TCPTransport.handleMessages(TCPTransport.java:556)
>   at 
> sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run0(TCPTransport.java:811)
>   at 
> sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run(TCPTransport.java:670)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>   at java.lang.Thread.run(Thread.java:745)
> {code}
> {code}
> root@db4:~# nodetool describecluster
> Cluster Information:
>   Name: Production Cluster
>   Snitch: org.apache.cassandra.locator.DynamicEndpointSnitch
>   Partitioner: org.apache.cassandra.dht.Murmur3Partitioner
>   Schema versions:
>   b7e98bb9-717f-3f59-bac4-84bc19544e90: [10.195.15.163, 
> 10.195.15.162, 10.195.15.167, 10.195.15.166]
> {code}
> {code}
> root@db4:~# nodetool status
> Datacenter: Ashburn
> ===
> Status=Up/Down
> |/ State=Normal/Leaving/Joining/Moving
> --  Address        Load       Tokens  Owns    Host ID 
>   Rack
> UN  10.195.15.163  12.05 GB   256 ?   
> 0a9f478c-80b5-4c15-8b2e-e27df6684c69  RAC1
> UN  10.195.15.162  12.8 GB256 ?   
> c18d2218-ef84-4165-9c3a-05f592f512e9  RAC1
> UJ  10.195.15.167  

[jira] [Commented] (CASSANDRA-4210) Support for variadic parameters list for "in clause" in prepared cql query

2014-11-14 Thread Tyler Hobbs (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-4210?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14212483#comment-14212483
 ] 

Tyler Hobbs commented on CASSANDRA-4210:


bq. Do you know what the best place is to reach the team in charge of this 
driver ?

The JIRA for the nodejs driver can be found here: 
https://datastax-oss.atlassian.net/browse/NODEJS
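
As an aside for readers of this thread, binding a whole list to a single IN marker with the DataStax Java driver looks roughly like this (a minimal sketch; the contact point, keyspace and schema are assumptions):

{code}
import java.util.Arrays;

import com.datastax.driver.core.BoundStatement;
import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.PreparedStatement;
import com.datastax.driver.core.ResultSet;
import com.datastax.driver.core.Row;
import com.datastax.driver.core.Session;

public class InClauseExample
{
    public static void main(String[] args)
    {
        Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build(); // hypothetical node
        Session session = cluster.connect("my_ks");                               // hypothetical keyspace

        // One marker for the whole IN list; the bound value is a collection.
        PreparedStatement ps = session.prepare("SELECT * FROM Town WHERE key IN ?");
        BoundStatement bound = ps.bind(Arrays.asList("Paris", "London"));

        ResultSet rs = session.execute(bound);
        for (Row row : rs)
            System.out.println(row);

        cluster.close();
    }
}
{code}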

> Support for variadic parameters list for "in clause" in prepared cql query
> --
>
> Key: CASSANDRA-4210
> URL: https://issues.apache.org/jira/browse/CASSANDRA-4210
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Core
>Affects Versions: 1.1.0
> Environment: prepared cql queries
>Reporter: Pierre Chalamet
>Assignee: Sylvain Lebresne
>Priority: Minor
> Fix For: 2.0.1
>
> Attachments: 4210.txt
>
>
> This query
> {code}
> select * from Town where key in (?)
> {code}
> only allows one parameter for '?'.
> This means querying for 'Paris' and 'London' can't be executed in one step 
> with this prepared statement.
> Current workarounds are:
> * either execute the prepared query 2 times with 'Paris' then 'London'
> * or prepare a new query {{select * from Town where key in (?, ?)}} and bind 
> the 2 parameters
> Having support for a variadic parameter list with the IN clause could improve 
> performance:
> * single hop to get the data
> * // fetching server side



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


cassandra git commit: Allow snapshot-based repair on Windows for 3.0

2014-11-14 Thread jmckenzie
Repository: cassandra
Updated Branches:
  refs/heads/trunk b4d7f3bed -> 58fdc6b5c


Allow snapshot-based repair on Windows for 3.0

Patch by jmckenzie; reviewed by yukim for CASSANDRA-8309


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/58fdc6b5
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/58fdc6b5
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/58fdc6b5

Branch: refs/heads/trunk
Commit: 58fdc6b5c027215edad4802b2ced453f3ebba2cc
Parents: b4d7f3b
Author: Joshua McKenzie 
Authored: Fri Nov 14 10:52:28 2014 -0600
Committer: Joshua McKenzie 
Committed: Fri Nov 14 10:52:28 2014 -0600

--
 .../apache/cassandra/repair/messages/RepairOption.java| 10 +-
 1 file changed, 1 insertion(+), 9 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/58fdc6b5/src/java/org/apache/cassandra/repair/messages/RepairOption.java
--
diff --git a/src/java/org/apache/cassandra/repair/messages/RepairOption.java 
b/src/java/org/apache/cassandra/repair/messages/RepairOption.java
index fd1d6e7..3f59d7b 100644
--- a/src/java/org/apache/cassandra/repair/messages/RepairOption.java
+++ b/src/java/org/apache/cassandra/repair/messages/RepairOption.java
@@ -211,15 +211,7 @@ public class RepairOption
 
 public RepairOption(boolean sequential, boolean primaryRange, boolean 
incremental, int jobThreads, Collection> ranges)
 {
-if (!FBUtilities.isUnix() && sequential)
-{
-logger.warn("Snapshot-based repair is not yet supported on 
Windows.  Reverting to parallel repair.");
-this.sequential = false;
-}
-else
-{
-this.sequential = sequential;
-}
+this.sequential = sequential;
 this.primaryRange = primaryRange;
 this.incremental = incremental;
 this.jobThreads = jobThreads;



[jira] [Commented] (CASSANDRA-8306) exception in nodetool enablebinary

2014-11-14 Thread Russ Hatch (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8306?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14212468#comment-14212468
 ] 

Russ Hatch commented on CASSANDRA-8306:
---

It seems my last comment may be off the mark a bit, since this is what the log 
instructs users to do when starting with native transport disabled:
{noformat}
INFO  [main] 2014-11-14 09:46:36,822 CassandraDaemon.java:392 - Not starting 
native transport as requested. Use JMX (StorageService->startNativeTransport()) 
or nodetool (enablebinary) to start it.
{noformat}

I'd still be interested to know if setting start_native_transport to true helps 
the node join in this case.
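
For what it's worth, the JMX route mentioned in that log line looks roughly like this (a minimal sketch; the host, port and error handling are assumptions):

{code}
import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

public class StartNativeTransport
{
    public static void main(String[] args) throws Exception
    {
        // Connect to the node's JMX port (7199 by default).
        JMXServiceURL url = new JMXServiceURL(
                "service:jmx:rmi:///jndi/rmi://127.0.0.1:7199/jmxrmi");
        JMXConnector connector = JMXConnectorFactory.connect(url);
        try
        {
            MBeanServerConnection mbs = connector.getMBeanServerConnection();
            ObjectName storageService =
                    new ObjectName("org.apache.cassandra.db:type=StorageService");
            // The same operation nodetool enablebinary ends up invoking.
            mbs.invoke(storageService, "startNativeTransport", new Object[0], new String[0]);
        }
        finally
        {
            connector.close();
        }
    }
}
{code}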

> exception in nodetool enablebinary
> --
>
> Key: CASSANDRA-8306
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8306
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Rafał Furmański
> Attachments: system.log.zip
>
>
> I was trying to add a new node (db4) to an existing cluster - with no luck. I 
> can't see any errors in system.log. nodetool status shows that the node has 
> been joining the cluster for many hours. Attaching error and cluster info:
> {code}
> root@db4:~# nodetool enablebinary
> error: Error starting native transport: null
> -- StackTrace --
> java.lang.RuntimeException: Error starting native transport: null
>   at 
> org.apache.cassandra.service.StorageService.startNativeTransport(StorageService.java:350)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at sun.reflect.misc.Trampoline.invoke(MethodUtil.java:75)
>   at sun.reflect.GeneratedMethodAccessor2.invoke(Unknown Source)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at sun.reflect.misc.MethodUtil.invoke(MethodUtil.java:279)
>   at 
> com.sun.jmx.mbeanserver.StandardMBeanIntrospector.invokeM2(StandardMBeanIntrospector.java:112)
>   at 
> com.sun.jmx.mbeanserver.StandardMBeanIntrospector.invokeM2(StandardMBeanIntrospector.java:46)
>   at 
> com.sun.jmx.mbeanserver.MBeanIntrospector.invokeM(MBeanIntrospector.java:237)
>   at com.sun.jmx.mbeanserver.PerInterface.invoke(PerInterface.java:138)
>   at com.sun.jmx.mbeanserver.MBeanSupport.invoke(MBeanSupport.java:252)
>   at 
> com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.invoke(DefaultMBeanServerInterceptor.java:819)
>   at 
> com.sun.jmx.mbeanserver.JmxMBeanServer.invoke(JmxMBeanServer.java:801)
>   at 
> javax.management.remote.rmi.RMIConnectionImpl.doOperation(RMIConnectionImpl.java:1487)
>   at 
> javax.management.remote.rmi.RMIConnectionImpl.access$300(RMIConnectionImpl.java:97)
>   at 
> javax.management.remote.rmi.RMIConnectionImpl$PrivilegedOperation.run(RMIConnectionImpl.java:1328)
>   at 
> javax.management.remote.rmi.RMIConnectionImpl.doPrivilegedOperation(RMIConnectionImpl.java:1420)
>   at 
> javax.management.remote.rmi.RMIConnectionImpl.invoke(RMIConnectionImpl.java:848)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at sun.rmi.server.UnicastServerRef.dispatch(UnicastServerRef.java:322)
>   at sun.rmi.transport.Transport$1.run(Transport.java:177)
>   at sun.rmi.transport.Transport$1.run(Transport.java:174)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at sun.rmi.transport.Transport.serviceCall(Transport.java:173)
>   at 
> sun.rmi.transport.tcp.TCPTransport.handleMessages(TCPTransport.java:556)
>   at 
> sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run0(TCPTransport.java:811)
>   at 
> sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run(TCPTransport.java:670)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>   at java.lang.Thread.run(Thread.java:745)
> {code}
> {code}
> root@db4:~# nodetool describecluster
> Cluster Information:
>   Name: Production Cluster
>   Snitch: org.apache.cassandra.locator.DynamicEndpointSnitch
>   Partitioner: org.apache.cassandra.dht.Murmur3Partitioner
>   Schema versions:
>   b7e98bb9-717f-3f59-bac4-84bc19544e90: [10.195.15.163, 
> 10.195.15.162, 10.195.15

[jira] [Commented] (CASSANDRA-8306) exception in nodetool enablebinary

2014-11-14 Thread Russ Hatch (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8306?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14212451#comment-14212451
 ] 

Russ Hatch commented on CASSANDRA-8306:
---

I think this might just be a side effect of starting Cassandra with 
start_native_transport set to false.

[~rfurmanski] can you please double-check your setting in cassandra.yaml and 
set start_native_transport to true if it is not already? With this setting the 
node should have binary transport enabled automatically on startup and you 
won't have any need to run enablebinary.

If this resolves your problem, please let us know, because I still think a 
change might be useful: warning when someone attempts enablebinary on a node 
with native transport disabled.

> exception in nodetool enablebinary
> --
>
> Key: CASSANDRA-8306
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8306
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Rafał Furmański
> Attachments: system.log.zip
>
>
> I was trying to add a new node (db4) to an existing cluster - with no luck. I 
> can't see any errors in system.log. nodetool status shows that the node has 
> been joining the cluster for many hours. Attaching error and cluster info:
> {code}
> root@db4:~# nodetool enablebinary
> error: Error starting native transport: null
> -- StackTrace --
> java.lang.RuntimeException: Error starting native transport: null
>   at 
> org.apache.cassandra.service.StorageService.startNativeTransport(StorageService.java:350)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at sun.reflect.misc.Trampoline.invoke(MethodUtil.java:75)
>   at sun.reflect.GeneratedMethodAccessor2.invoke(Unknown Source)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at sun.reflect.misc.MethodUtil.invoke(MethodUtil.java:279)
>   at 
> com.sun.jmx.mbeanserver.StandardMBeanIntrospector.invokeM2(StandardMBeanIntrospector.java:112)
>   at 
> com.sun.jmx.mbeanserver.StandardMBeanIntrospector.invokeM2(StandardMBeanIntrospector.java:46)
>   at 
> com.sun.jmx.mbeanserver.MBeanIntrospector.invokeM(MBeanIntrospector.java:237)
>   at com.sun.jmx.mbeanserver.PerInterface.invoke(PerInterface.java:138)
>   at com.sun.jmx.mbeanserver.MBeanSupport.invoke(MBeanSupport.java:252)
>   at 
> com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.invoke(DefaultMBeanServerInterceptor.java:819)
>   at 
> com.sun.jmx.mbeanserver.JmxMBeanServer.invoke(JmxMBeanServer.java:801)
>   at 
> javax.management.remote.rmi.RMIConnectionImpl.doOperation(RMIConnectionImpl.java:1487)
>   at 
> javax.management.remote.rmi.RMIConnectionImpl.access$300(RMIConnectionImpl.java:97)
>   at 
> javax.management.remote.rmi.RMIConnectionImpl$PrivilegedOperation.run(RMIConnectionImpl.java:1328)
>   at 
> javax.management.remote.rmi.RMIConnectionImpl.doPrivilegedOperation(RMIConnectionImpl.java:1420)
>   at 
> javax.management.remote.rmi.RMIConnectionImpl.invoke(RMIConnectionImpl.java:848)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at sun.rmi.server.UnicastServerRef.dispatch(UnicastServerRef.java:322)
>   at sun.rmi.transport.Transport$1.run(Transport.java:177)
>   at sun.rmi.transport.Transport$1.run(Transport.java:174)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at sun.rmi.transport.Transport.serviceCall(Transport.java:173)
>   at 
> sun.rmi.transport.tcp.TCPTransport.handleMessages(TCPTransport.java:556)
>   at 
> sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run0(TCPTransport.java:811)
>   at 
> sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run(TCPTransport.java:670)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>   at java.lang.Thread.run(Thread.java:745)
> {code}
> {code}
> root@db4:~# nodetool describecluster
> Cluster Information:
>   Name: Production Cluster
>   Snitch: org.apache.cassandra.locator.DynamicEndpointSnitch
>   Partitioner: org.apache.cassandra.dht.Murmur3Partitioner
>   Schema versions:
>  

[jira] [Commented] (CASSANDRA-8295) Cassandra runs OOM @ java.util.concurrent.ConcurrentSkipListMap$HeadIndex

2014-11-14 Thread Jose Martinez Poblete (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8295?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14212436#comment-14212436
 ] 

Jose Martinez Poblete commented on CASSANDRA-8295:
--

OK, set write_request_timeout_in_ms: 2000.
Also changed the disks to RAID0, and we are able to drop a 10 GB file in 19 
seconds (~1.8 TB/hr).
Asked him to use executeAsync on his loads.
Still we are seeing these:

{noformat}
 INFO [ScheduledTasks:1] 2014-11-13 13:51:29,696 MessagingService.java (line 
875) 949 MUTATION messages dropped in last 5000ms
 INFO [ScheduledTasks:1] 2014-11-13 13:52:49,939 MessagingService.java (line 
875) 1378 MUTATION messages dropped in last 5000ms
 INFO [ScheduledTasks:1] 2014-11-13 13:53:17,215 MessagingService.java (line 
875) 2 MUTATION messages dropped in last 5000ms
 INFO [ScheduledTasks:1] 2014-11-13 13:54:10,277 MessagingService.java (line 
875) 1 MUTATION messages dropped in last 5000ms 
{noformat}

Perhaps raise memtable_flush_queue_size and un-throttle compaction to force 
faster disk flushes?
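
To make the executeAsync suggestion above concrete, a throttled async load with the DataStax Java driver looks roughly like this (a sketch only; the contact point, keyspace, table and the in-flight limit are assumptions):

{code}
import java.util.concurrent.Semaphore;

import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.PreparedStatement;
import com.datastax.driver.core.ResultSet;
import com.datastax.driver.core.ResultSetFuture;
import com.datastax.driver.core.Session;
import com.google.common.util.concurrent.FutureCallback;
import com.google.common.util.concurrent.Futures;

public class AsyncBulkLoad
{
    public static void main(String[] args) throws InterruptedException
    {
        Cluster cluster = Cluster.builder().addContactPoint("10.0.0.1").build();   // hypothetical node
        Session session = cluster.connect("load_ks");                              // hypothetical keyspace
        PreparedStatement insert =
                session.prepare("INSERT INTO events (id, payload) VALUES (?, ?)"); // hypothetical table

        // Cap in-flight writes so the async load doesn't swamp the cluster;
        // unbounded async writes are a common cause of dropped MUTATION messages.
        final Semaphore inFlight = new Semaphore(128);

        for (int i = 0; i < 100000; i++)
        {
            inFlight.acquire();
            ResultSetFuture future = session.executeAsync(insert.bind(i, "payload-" + i));
            Futures.addCallback(future, new FutureCallback<ResultSet>()
            {
                public void onSuccess(ResultSet rs) { inFlight.release(); }
                public void onFailure(Throwable t) { inFlight.release(); t.printStackTrace(); }
            });
        }

        inFlight.acquire(128); // crude wait for the remaining in-flight writes
        cluster.close();
    }
}
{code}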

> Cassandra runs OOM @ java.util.concurrent.ConcurrentSkipListMap$HeadIndex
> -
>
> Key: CASSANDRA-8295
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8295
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
> Environment: DSE 4.5.3 Cassandra 2.0.11.82
>Reporter: Jose Martinez Poblete
> Attachments: alln01-ats-cas3.cassandra.yaml, output.tgz, system.tgz, 
> system.tgz.1, system.tgz.2, system.tgz.3
>
>
> Customer runs a 3-node cluster.
> Their dataset is less than 1 TB, and during data load one of the nodes enters a 
> GC death spiral:
> {noformat}
>  INFO [ScheduledTasks:1] 2014-11-07 23:31:08,094 GCInspector.java (line 116) 
> GC for ConcurrentMarkSweep: 3348 ms for 2 collections, 1658268944 used; max 
> is 8375238656
>  INFO [ScheduledTasks:1] 2014-11-07 23:40:58,486 GCInspector.java (line 116) 
> GC for ParNew: 442 ms for 2 collections, 6079570032 used; max is 8375238656
>  INFO [ScheduledTasks:1] 2014-11-07 23:40:58,487 GCInspector.java (line 116) 
> GC for ConcurrentMarkSweep: 7351 ms for 2 collections, 6084678280 used; max 
> is 8375238656
>  INFO [ScheduledTasks:1] 2014-11-07 23:41:01,836 GCInspector.java (line 116) 
> GC for ConcurrentMarkSweep: 603 ms for 1 collections, 7132546096 used; max is 
> 8375238656
>  INFO [ScheduledTasks:1] 2014-11-07 23:41:09,626 GCInspector.java (line 116) 
> GC for ConcurrentMarkSweep: 761 ms for 1 collections, 7286946984 used; max is 
> 8375238656
>  INFO [ScheduledTasks:1] 2014-11-07 23:41:15,265 GCInspector.java (line 116) 
> GC for ConcurrentMarkSweep: 703 ms for 1 collections, 7251213520 used; max is 
> 8375238656
>  INFO [ScheduledTasks:1] 2014-11-07 23:41:25,027 GCInspector.java (line 116) 
> GC for ConcurrentMarkSweep: 1205 ms for 1 collections, 6507586104 used; max 
> is 8375238656
>  INFO [ScheduledTasks:1] 2014-11-07 23:41:41,374 GCInspector.java (line 116) 
> GC for ConcurrentMarkSweep: 13835 ms for 3 collections, 6514187192 used; max 
> is 8375238656
>  INFO [ScheduledTasks:1] 2014-11-07 23:41:54,137 GCInspector.java (line 116) 
> GC for ConcurrentMarkSweep: 6834 ms for 2 collections, 6521656200 used; max 
> is 8375238656
> ...
>  INFO [ScheduledTasks:1] 2014-11-08 12:13:11,086 GCInspector.java (line 116) 
> GC for ConcurrentMarkSweep: 43967 ms for 2 collections, 8368777672 used; max 
> is 8375238656
>  INFO [ScheduledTasks:1] 2014-11-08 12:14:14,151 GCInspector.java (line 116) 
> GC for ConcurrentMarkSweep: 63968 ms for 3 collections, 8369623824 used; max 
> is 8375238656
>  INFO [ScheduledTasks:1] 2014-11-08 12:14:55,643 GCInspector.java (line 116) 
> GC for ConcurrentMarkSweep: 41307 ms for 2 collections, 8370115376 used; max 
> is 8375238656
>  INFO [ScheduledTasks:1] 2014-11-08 12:20:06,197 GCInspector.java (line 116) 
> GC for ConcurrentMarkSweep: 309634 ms for 15 collections, 8374994928 used; 
> max is 8375238656
>  INFO [ScheduledTasks:1] 2014-11-08 13:07:33,617 GCInspector.java (line 116) 
> GC for ConcurrentMarkSweep: 2681100 ms for 143 collections, 8347631560 used; 
> max is 8375238656
> {noformat} 
> Their application waits 1 minute before a retry when a timeout is returned
> This is what we find on their heapdumps:
> {noformat}
> Class Name | Shallow Heap | Retained Heap | Percentage
> ---

[jira] [Assigned] (CASSANDRA-8320) 2.1.2: NullPointerException in SSTableWriter

2014-11-14 Thread Marcus Eriksson (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8320?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marcus Eriksson reassigned CASSANDRA-8320:
--

Assignee: Marcus Eriksson

> 2.1.2: NullPointerException in SSTableWriter
> 
>
> Key: CASSANDRA-8320
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8320
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Evgeny Pasynkov
>Assignee: Marcus Eriksson
>
> After upgrading to 2.1.2, I've got tons of these exceptions in the log:
> java.lang.NullPointerException: null
>   at 
> org.apache.cassandra.io.sstable.SSTableWriter.openEarly(SSTableWriter.java:381)
>  ~[cassandra-all-2.1.2.jar:2.1.2]
>   at 
> org.apache.cassandra.io.sstable.SSTableRewriter.switchWriter(SSTableRewriter.java:295)
>  ~[cassandra-all-2.1.2.jar:2.1.2]
>   at 
> org.apache.cassandra.io.sstable.SSTableRewriter.abort(SSTableRewriter.java:186)
>  ~[cassandra-all-2.1.2.jar:2.1.2]
>   at 
> org.apache.cassandra.db.compaction.CompactionTask.runWith(CompactionTask.java:204)
>  ~[cassandra-all-2.1.2.jar:2.1.2]
>   at 
> org.apache.cassandra.io.util.DiskAwareRunnable.runMayThrow(DiskAwareRunnable.java:48)
>  ~[cassandra-all-2.1.2.jar:2.1.2]
>   at 
> org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28) 
> ~[cassandra-all-2.1.2.jar:2.1.2]
>   at 
> org.apache.cassandra.db.compaction.CompactionTask.executeInternal(CompactionTask.java:75)
>  ~[cassandra-all-2.1.2.jar:2.1.2]
>   at 
> org.apache.cassandra.db.compaction.AbstractCompactionTask.execute(AbstractCompactionTask.java:59)
>  ~[cassandra-all-2.1.2.jar:2.1.2]
>   at 
> org.apache.cassandra.db.compaction.CompactionManager$BackgroundCompactionTask.run(CompactionManager.java:232)
>  ~[cassandra-all-2.1.2.jar:2.1.2]
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471) 
> ~[na:1.7.0_60]
>   at java.util.concurrent.FutureTask.run(FutureTask.java:262) 
> ~[na:1.7.0_60]
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>  ~[na:1.7.0_60]
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>  [na:1.7.0_60]
>   at java.lang.Thread.run(Thread.java:745) [na:1.7.0_60]
> This error is preceded by the following problems:
> 12:59:59.632 [NonPeriodicTasks:1] ERROR o.a.c.io.sstable.SSTableDeletingTask 
> - Unable to delete 
> E:\Upsource_11959\data\cassandra\data\system\schema_keyspaces-b0f2235744583cdb9631c43e59ce3676\system-schema_keyspaces-ka-4-Data.db
>  (it will be removed on server restart; we'll also retry after GC)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8320) 2.1.2: NullPointerException in SSTableWriter

2014-11-14 Thread Marcus Eriksson (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8320?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14212413#comment-14212413
 ] 

Marcus Eriksson commented on CASSANDRA-8320:


OK, this happens when we call abort() on the SSTableWriter after calling 
finish() on it (typically finish() throws an exception and we then call 
abort() on the writer).
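
Stripped of the compaction machinery, the failure mode described above has roughly this shape (a standalone illustration only; the Writer class here is hypothetical and just mimics the state problem):

{code}
// Standalone illustration of the "abort() after finish()" pattern described above.
public class AbortAfterFinish
{
    static class Writer // hypothetical stand-in for the sstable writer
    {
        private Object indexState = new Object();

        void finish()
        {
            indexState = null;                                    // finish() tears state down...
            throw new RuntimeException("failure while finishing");
        }

        void abort()
        {
            System.out.println(indexState.toString());            // ...and abort() assumes it is still there: NPE
        }
    }

    public static void main(String[] args)
    {
        Writer writer = new Writer();
        try
        {
            writer.finish();
        }
        catch (RuntimeException e)
        {
            writer.abort(); // throws NullPointerException, masking the original failure
        }
    }
}
{code}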

> 2.1.2: NullPointerException in SSTableWriter
> 
>
> Key: CASSANDRA-8320
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8320
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Evgeny Pasynkov
>
> After upgrading to 2.1.2, I've got tons of these exceptions in the log:
> java.lang.NullPointerException: null
>   at 
> org.apache.cassandra.io.sstable.SSTableWriter.openEarly(SSTableWriter.java:381)
>  ~[cassandra-all-2.1.2.jar:2.1.2]
>   at 
> org.apache.cassandra.io.sstable.SSTableRewriter.switchWriter(SSTableRewriter.java:295)
>  ~[cassandra-all-2.1.2.jar:2.1.2]
>   at 
> org.apache.cassandra.io.sstable.SSTableRewriter.abort(SSTableRewriter.java:186)
>  ~[cassandra-all-2.1.2.jar:2.1.2]
>   at 
> org.apache.cassandra.db.compaction.CompactionTask.runWith(CompactionTask.java:204)
>  ~[cassandra-all-2.1.2.jar:2.1.2]
>   at 
> org.apache.cassandra.io.util.DiskAwareRunnable.runMayThrow(DiskAwareRunnable.java:48)
>  ~[cassandra-all-2.1.2.jar:2.1.2]
>   at 
> org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28) 
> ~[cassandra-all-2.1.2.jar:2.1.2]
>   at 
> org.apache.cassandra.db.compaction.CompactionTask.executeInternal(CompactionTask.java:75)
>  ~[cassandra-all-2.1.2.jar:2.1.2]
>   at 
> org.apache.cassandra.db.compaction.AbstractCompactionTask.execute(AbstractCompactionTask.java:59)
>  ~[cassandra-all-2.1.2.jar:2.1.2]
>   at 
> org.apache.cassandra.db.compaction.CompactionManager$BackgroundCompactionTask.run(CompactionManager.java:232)
>  ~[cassandra-all-2.1.2.jar:2.1.2]
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471) 
> ~[na:1.7.0_60]
>   at java.util.concurrent.FutureTask.run(FutureTask.java:262) 
> ~[na:1.7.0_60]
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>  ~[na:1.7.0_60]
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>  [na:1.7.0_60]
>   at java.lang.Thread.run(Thread.java:745) [na:1.7.0_60]
> This error is preceded by the following problems:
> 12:59:59.632 [NonPeriodicTasks:1] ERROR o.a.c.io.sstable.SSTableDeletingTask 
> - Unable to delete 
> E:\Upsource_11959\data\cassandra\data\system\schema_keyspaces-b0f2235744583cdb9631c43e59ce3676\system-schema_keyspaces-ka-4-Data.db
>  (it will be removed on server restart; we'll also retry after GC)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-8316) "Did not get positive replies from all endpoints" error on incremental repair

2014-11-14 Thread Russ Hatch (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8316?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Russ Hatch updated CASSANDRA-8316:
--
Assignee: Russ Hatch  (was: Ryan McGuire)

>  "Did not get positive replies from all endpoints" error on incremental repair
> --
>
> Key: CASSANDRA-8316
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8316
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
> Environment: cassandra 2.1.2
>Reporter: Loic Lambiel
>Assignee: Russ Hatch
>
> Hi,
> I've got an issue with incremental repairs on our production 15-node 2.1.2 
> cluster (new, not yet loaded, RF=3).
> After having successfully performed an incremental repair (-par -inc) on 3 
> nodes, I started receiving "Repair failed with error Did not get positive 
> replies from all endpoints." from nodetool on all remaining nodes:
> [2014-11-14 09:12:36,488] Starting repair command #3, repairing 108 ranges 
> for keyspace  (seq=false, full=false)
> [2014-11-14 09:12:47,919] Repair failed with error Did not get positive 
> replies from all endpoints.
> All the nodes are up and running and the local system log shows that the 
> repair commands got started and that's it.
> I've also noticed that soon after the repair, several nodes started showing 
> higher CPU load indefinitely without any particular reason (no tasks / queries, 
> nothing in the logs). I then restarted C* on these nodes and retried the 
> repair on several nodes; those repairs were successful until the issue appeared again.
> I tried to reproduce this on our 3-node preproduction cluster without success.
> It looks like I'm not the only one having this issue: 
> http://www.mail-archive.com/user%40cassandra.apache.org/msg39145.html
> Any idea?
> Thanks
> Loic



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (CASSANDRA-8319) Allow LWT DELETE with column comparison

2014-11-14 Thread Aleksey Yeschenko (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8319?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksey Yeschenko resolved CASSANDRA-8319.
--
Resolution: Invalid

Apparently this already works. Since 2.0.
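
For readers of the archive, the kind of statement the ticket asks about, a conditional (LWT) DELETE with a column comparison, looks roughly like this with the Java driver (a sketch only; the contact point, keyspace and values are assumptions, and the table follows the chatroom example quoted below):

{code}
import java.util.UUID;

import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.ResultSet;
import com.datastax.driver.core.Row;
import com.datastax.driver.core.Session;

public class ConditionalDeleteExample
{
    public static void main(String[] args)
    {
        Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build(); // hypothetical node
        Session session = cluster.connect("chat_ks");                             // hypothetical keyspace

        UUID roomId = UUID.randomUUID(); // hypothetical room id

        // Conditional DELETE: the partition is removed only if the condition
        // still holds at write time (a Paxos round under the hood).
        ResultSet rs = session.execute(
                "DELETE FROM chatroom WHERE room_id = ? IF name = ?", roomId, "lobby");

        // LWT statements return an [applied] column saying whether the condition held.
        Row result = rs.one();
        System.out.println("applied: " + result.getBool("[applied]"));

        cluster.close();
    }
}
{code}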

> Allow LWT DELETE with column comparison
> ---
>
> Key: CASSANDRA-8319
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8319
> Project: Cassandra
>  Issue Type: Improvement
>  Components: API, Core
>Reporter: DOAN DuyHai
>Priority: Minor
> Fix For: 2.1.2
>
>
> Right now, the only way to use LWT with DELETE is to rely on the IF NOT 
> EXISTS keyword
> There may be some scenarios where using IF column=xxx with DELETE is relevant.
>  I am preparing a hands-on with a chat application using C*. A chatroom is 
> defined as:
> {code:sql}
> CREATE TABLE chatroom (
> room_id uuid PRIMARY KEY,
> name text,
> participants list> // person is a UDT representing a 
> subset of the users table);
> {code}
>  Right now, upon removing a participant from the room, I need to:
> * count remaining participants in the room (read the participants list)
> * remove the room (the whole partition) if there isn't anyone inside
>  This is a read-before-write pattern, but even this does not prevent race 
> conditions. Indeed, the last participant may leave the room at the same time 
> a new one enters
>  So using LWT with "DELETE FROM chatroom WHERE room_id = ... IF participants = []" 
> may help make the removal safe
>  With this design, room creation/deletion as well as participant 
> addition/removal should go through LWT to be consistent. It's slow, but 
> participants join and leave infrequently enough compared to message posting 
> that the trade-off is not too expensive in general.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8320) 2.1.2: NullPointerException in SSTableWriter

2014-11-14 Thread Evgeny Pasynkov (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8320?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14212375#comment-14212375
 ] 

Evgeny Pasynkov commented on CASSANDRA-8320:


From 2.1.0 (so this exception didn't occur in 2.1.0).
The database was re-created from scratch using 2.1.2.

> 2.1.2: NullPointerException in SSTableWriter
> 
>
> Key: CASSANDRA-8320
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8320
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Evgeny Pasynkov
>
> After upgrading to 2.1.2, I've got tons of these exceptions in the log:
> java.lang.NullPointerException: null
>   at 
> org.apache.cassandra.io.sstable.SSTableWriter.openEarly(SSTableWriter.java:381)
>  ~[cassandra-all-2.1.2.jar:2.1.2]
>   at 
> org.apache.cassandra.io.sstable.SSTableRewriter.switchWriter(SSTableRewriter.java:295)
>  ~[cassandra-all-2.1.2.jar:2.1.2]
>   at 
> org.apache.cassandra.io.sstable.SSTableRewriter.abort(SSTableRewriter.java:186)
>  ~[cassandra-all-2.1.2.jar:2.1.2]
>   at 
> org.apache.cassandra.db.compaction.CompactionTask.runWith(CompactionTask.java:204)
>  ~[cassandra-all-2.1.2.jar:2.1.2]
>   at 
> org.apache.cassandra.io.util.DiskAwareRunnable.runMayThrow(DiskAwareRunnable.java:48)
>  ~[cassandra-all-2.1.2.jar:2.1.2]
>   at 
> org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28) 
> ~[cassandra-all-2.1.2.jar:2.1.2]
>   at 
> org.apache.cassandra.db.compaction.CompactionTask.executeInternal(CompactionTask.java:75)
>  ~[cassandra-all-2.1.2.jar:2.1.2]
>   at 
> org.apache.cassandra.db.compaction.AbstractCompactionTask.execute(AbstractCompactionTask.java:59)
>  ~[cassandra-all-2.1.2.jar:2.1.2]
>   at 
> org.apache.cassandra.db.compaction.CompactionManager$BackgroundCompactionTask.run(CompactionManager.java:232)
>  ~[cassandra-all-2.1.2.jar:2.1.2]
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471) 
> ~[na:1.7.0_60]
>   at java.util.concurrent.FutureTask.run(FutureTask.java:262) 
> ~[na:1.7.0_60]
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>  ~[na:1.7.0_60]
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>  [na:1.7.0_60]
>   at java.lang.Thread.run(Thread.java:745) [na:1.7.0_60]
> This error is preceded by the following problems:
> 12:59:59.632 [NonPeriodicTasks:1] ERROR o.a.c.io.sstable.SSTableDeletingTask 
> - Unable to delete 
> E:\Upsource_11959\data\cassandra\data\system\schema_keyspaces-b0f2235744583cdb9631c43e59ce3676\system-schema_keyspaces-ka-4-Data.db
>  (it will be removed on server restart; we'll also retry after GC)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8313) cassandra-stress: No way to pass in data center hint for DCAwareRoundRobinPolicy

2014-11-14 Thread Bob Nilsen (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8313?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14212373#comment-14212373
 ] 

Bob Nilsen commented on CASSANDRA-8313:
---

Thanks for the help, Michael.

I believe the problem is that DCAwareRoundRobinPolicy() needs a 'DC1' hint in 
the constructor, otherwise it'll pick a DC based on the order of the list 
returned by the cluster.

It would seem best (in my opinion) to let the user select the 
LoadBalancingPolicy (and any arguments to pass to it) via command line options.

I'll do some more testing with the -node option and let you know what I find.
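
In the meantime, the driver-level equivalent of that hint looks roughly like this (a minimal Java driver sketch; the contact point and DC name are assumptions):

{code}
import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.Session;
import com.datastax.driver.core.policies.DCAwareRoundRobinPolicy;

public class LocalDcClient
{
    public static void main(String[] args)
    {
        // Passing the local DC name to the constructor stops the driver from
        // guessing the data center from the order of the contact-point list.
        Cluster cluster = Cluster.builder()
                .addContactPoint("10.0.0.1")                                 // hypothetical contact point
                .withLoadBalancingPolicy(new DCAwareRoundRobinPolicy("DC1"))
                .build();
        Session session = cluster.connect();
        System.out.println("Local DC forced to DC1");
        cluster.close();
    }
}
{code}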

> cassandra-stress: No way to pass in data center hint for 
> DCAwareRoundRobinPolicy
> 
>
> Key: CASSANDRA-8313
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8313
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Tools
>Reporter: Bob Nilsen
>
> When using cassandra-stress in a multiple datacenter configuration, we need 
> to be able to behave like the applications do and send traffic to nodes 
> co-located in the same data center.
> I can't for the life of me figure out how to pass in such a hint into the new 
> cassandra-stress.
> And passing in a local node into "-node" doesn't help.  Apparently, 
> cassandra-stress will *guess* the data center based on the order of the list 
> that it receives from the cluster.
> In my case, it seems to always pick 'DC2', no matter what I do.
> INFO  22:17:06 Using data-center name 'DC2' for DCAwareRoundRobinPolicy (if 
> this is incorrect, please provide the correct datacenter name with 
> DCAwareRoundRobinPolicy constructor)
> Could someone please add the ability to configure the DCAwareRoundRobinPolicy 
> with a data center hint?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-7813) Decide how to deal with conflict between native and user-defined functions

2014-11-14 Thread Aleksey Yeschenko (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7813?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14212372#comment-14212372
 ] 

Aleksey Yeschenko commented on CASSANDRA-7813:
--

Committed - thanks everyone.

> Decide how to deal with conflict between native and user-defined functions
> --
>
> Key: CASSANDRA-7813
> URL: https://issues.apache.org/jira/browse/CASSANDRA-7813
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Sylvain Lebresne
>Assignee: Robert Stupp
>  Labels: cql
> Fix For: 3.0
>
> Attachments: 7813.txt, 7813v2.txt, 7813v3.txt, 7813v4.txt, 
> 7813v5.txt, 7813v6.txt, 7813v7.txt, 7813v8.txt
>
>
> We have a bunch of native/hardcoded functions (now(), dateOf(), ...) and in 
> 3.0, users will be able to define new functions. Now, there is a very high 
> chance that we will provide more native functions over time (to be clear, I'm 
> not particularly for adding native functions for allthethings just because we 
> can, but it's clear that we should ultimately provide more than what we 
> have). Which begs the question: how do we want to deal with the problem of 
> adding a native function potentially breaking a previously defined 
> user-defined function?
> A priori I see the following options (maybe there are more?):
> # don't do anything specific, hoping that it won't happen often and consider 
> it a user problem if it does.
> # reserve a big number of names that we're hoping will cover all future needs.
> # make native functions and user-defined functions syntactically distinct so it 
> cannot happen.
> I'm not a huge fan of solution 1). Solution 2) is actually what we did for 
> UDTs, but I think it's somewhat less practical here: there are only so many 
> types that it makes sense to provide natively, so it wasn't too hard to come 
> up with a reasonably small list of type names to reserve just in case. This 
> feels a lot harder for functions to me.
> Which leaves solution 3). Since we already have the concept of namespaces for 
> functions, a simple idea would be to force user functions to have a namespace. 
> We could even allow that namespace to be empty as long as we force the 
> namespace separator (so we'd allow {{bar::foo}} and {{::foo}} for user 
> functions, but *not* {{foo}}, which would be reserved for native functions).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[2/2] cassandra git commit: Deal with conflicts between system functions and UDFs

2014-11-14 Thread aleksey
Deal with conflicts between system functions and UDFs

patch by Robert Stupp; reviewed by Benjamin Lerer for CASSANDRA-7813


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/b4d7f3be
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/b4d7f3be
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/b4d7f3be

Branch: refs/heads/trunk
Commit: b4d7f3bed0687b449f6a275d9dd675e25d794aeb
Parents: 41a35ec
Author: Robert Stupp 
Authored: Fri Nov 14 18:18:38 2014 +0300
Committer: Aleksey Yeschenko 
Committed: Fri Nov 14 18:20:29 2014 +0300

--
 CHANGES.txt |   4 +-
 build.xml   |   2 +
 pylib/cqlshlib/cql3handling.py  |   2 +-
 src/java/org/apache/cassandra/auth/Auth.java|   6 +-
 .../org/apache/cassandra/config/KSMetaData.java |   1 +
 .../org/apache/cassandra/cql3/Attributes.java   |   6 +
 .../org/apache/cassandra/cql3/CQLStatement.java |   2 +
 .../apache/cassandra/cql3/ColumnCondition.java  |  16 +-
 src/java/org/apache/cassandra/cql3/Cql.g|   8 +-
 .../org/apache/cassandra/cql3/Operation.java|   6 +-
 .../apache/cassandra/cql3/QueryProcessor.java   |  29 +-
 src/java/org/apache/cassandra/cql3/Term.java|  12 +
 .../org/apache/cassandra/cql3/UserTypes.java|   9 +
 .../cassandra/cql3/functions/FunctionCall.java  |  13 +-
 .../cassandra/cql3/functions/FunctionName.java  |  36 +-
 .../cassandra/cql3/functions/Functions.java |  27 +-
 .../cql3/functions/NativeFunction.java  |   8 +-
 .../cassandra/cql3/functions/UDFunction.java|  21 +-
 .../selection/AbstractFunctionSelector.java |   7 +-
 .../cassandra/cql3/selection/Selection.java |  10 +
 .../cassandra/cql3/selection/Selector.java  |   5 +
 .../cql3/selection/SelectorFactories.java   |   8 +
 .../cql3/statements/BatchStatement.java |  10 +
 .../statements/CreateFunctionStatement.java |  28 +-
 .../cql3/statements/DropFunctionStatement.java  |  28 +-
 .../cql3/statements/ModificationStatement.java  |  20 +-
 .../cql3/statements/ParsedStatement.java|   5 +
 .../cassandra/cql3/statements/Restriction.java  |   2 +
 .../cql3/statements/SelectStatement.java|  22 +-
 .../statements/SingleColumnRestriction.java |  40 ++
 .../org/apache/cassandra/db/DefsTables.java |   5 +-
 .../org/apache/cassandra/db/SystemKeyspace.java |   7 +-
 .../cassandra/service/IMigrationListener.java   |   6 +-
 .../cassandra/service/MigrationManager.java |  26 +-
 .../org/apache/cassandra/transport/Event.java   |  24 +-
 .../org/apache/cassandra/transport/Server.java  |   8 +-
 .../apache/cassandra/cql3/AggregationTest.java  |  10 +-
 .../org/apache/cassandra/cql3/PgStringTest.java |   4 +-
 test/unit/org/apache/cassandra/cql3/UFTest.java | 378 +--
 39 files changed, 637 insertions(+), 224 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/b4d7f3be/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index f250edc..ff255d8 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,7 +1,7 @@
 3.0
  * Fix aggregate fn results on empty selection, result column name,
and cqlsh parsing (CASSANDRA-8229)
- * Mark sstables as repaired after full repair (CASSANDRA-7586) 
+ * Mark sstables as repaired after full repair (CASSANDRA-7586)
  * Extend Descriptor to include a format value and refactor reader/writer apis 
(CASSANDRA-7443)
  * Integrate JMH for microbenchmarks (CASSANDRA-8151)
  * Keep sstable levels when bootstrapping (CASSANDRA-7460)
@@ -15,7 +15,7 @@
  * Remove YamlFileNetworkTopologySnitch (CASSANDRA-7917)
  * Do anticompaction in groups (CASSANDRA-6851)
  * Support pure user-defined functions (CASSANDRA-7395, 7526, 7562, 7740, 
7781, 7929,
-   7924, 7812, 8063)
+   7924, 7812, 8063, 7813)
  * Permit configurable timestamps with cassandra-stress (CASSANDRA-7416)
  * Move sstable RandomAccessReader to nio2, which allows using the
FILE_SHARE_DELETE flag on Windows (CASSANDRA-4050)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/b4d7f3be/build.xml
--
diff --git a/build.xml b/build.xml
index c4e27a7..c7aa83e 100644
--- a/build.xml
+++ b/build.xml
@@ -212,6 +212,8 @@
  
  
  
+ 
+  
   
 
 

http://git-wip-us.apache.org/repos/asf/cassandra/blob/b4d7f3be/pylib/cqlshlib/cql3handling.py
--
diff --git a/pylib/cqlshlib/cql3handling.py b/pylib/cqlshlib/cql3handling.py
index 261161c..f8a3069 100644
--- a/pylib/cqlshlib/cql3handling.py
+++ b/pylib/cqlshlib/cql3handling.py
@@ -209,7 +209,7 @@ JUNK ::= /([ 
\t\r\f

[1/2] cassandra git commit: Deal with conflicts between system functions and UDFs

2014-11-14 Thread aleksey
Repository: cassandra
Updated Branches:
  refs/heads/trunk 41a35ec74 -> b4d7f3bed


http://git-wip-us.apache.org/repos/asf/cassandra/blob/b4d7f3be/test/unit/org/apache/cassandra/cql3/UFTest.java
--
diff --git a/test/unit/org/apache/cassandra/cql3/UFTest.java 
b/test/unit/org/apache/cassandra/cql3/UFTest.java
index 95bede4..ce850b7 100644
--- a/test/unit/org/apache/cassandra/cql3/UFTest.java
+++ b/test/unit/org/apache/cassandra/cql3/UFTest.java
@@ -19,14 +19,82 @@ package org.apache.cassandra.cql3;
 
 import java.math.BigDecimal;
 import java.math.BigInteger;
+import java.util.Date;
 
+import org.junit.After;
 import org.junit.Assert;
+import org.junit.Before;
 import org.junit.Test;
 
+import org.apache.cassandra.cql3.functions.FunctionName;
+import org.apache.cassandra.cql3.functions.Functions;
 import org.apache.cassandra.exceptions.InvalidRequestException;
+import org.apache.cassandra.service.ClientState;
+import org.apache.cassandra.transport.messages.ResultMessage;
 
 public class UFTest extends CQLTester
 {
+private static final String KS_FOO = "cqltest_foo";
+
+@Before
+public void createKsFoo() throws Throwable
+{
+execute("CREATE KEYSPACE IF NOT EXISTS "+KS_FOO+" WITH replication = 
{'class': 'SimpleStrategy', 'replication_factor': 3};");
+}
+
+@After
+public void dropKsFoo() throws Throwable
+{
+execute("DROP KEYSPACE IF EXISTS "+KS_FOO+";");
+}
+
+@Test
+public void testFunctionDropOnKeyspaceDrop() throws Throwable
+{
+execute("CREATE FUNCTION " + KS_FOO + ".sin ( input double ) RETURNS 
double LANGUAGE java AS 'return 
Double.valueOf(Math.sin(input.doubleValue()));'");
+
+Assert.assertEquals(1, Functions.find(new FunctionName(KS_FOO, 
"sin")).size());
+
+assertRows(execute("SELECT function_name, language FROM 
system.schema_functions WHERE keyspace_name=?", KS_FOO),
+   row("sin", "java"));
+
+execute("DROP KEYSPACE "+KS_FOO+";");
+
+assertRows(execute("SELECT function_name, language FROM 
system.schema_functions WHERE keyspace_name=?", KS_FOO));
+
+Assert.assertEquals(0, Functions.find(new FunctionName(KS_FOO, 
"sin")).size());
+}
+
+@Test
+public void testFunctionDropPreparedStatement() throws Throwable
+{
+createTable("CREATE TABLE %s (key int PRIMARY KEY, d double)");
+
+execute("CREATE FUNCTION " + KS_FOO + ".sin ( input double ) RETURNS 
double LANGUAGE java AS 'return 
Double.valueOf(Math.sin(input.doubleValue()));'");
+
+Assert.assertEquals(1, Functions.find(new FunctionName(KS_FOO, 
"sin")).size());
+
+ResultMessage.Prepared prepared = QueryProcessor.prepare("SELECT key, 
"+KS_FOO+".sin(d) FROM "+KEYSPACE+'.'+currentTable(), 
ClientState.forInternalCalls(), false);
+
Assert.assertNotNull(QueryProcessor.instance.getPrepared(prepared.statementId));
+
+execute("DROP FUNCTION " + KS_FOO + ".sin(double);");
+
+
Assert.assertNull(QueryProcessor.instance.getPrepared(prepared.statementId));
+
+//
+
+execute("CREATE FUNCTION " + KS_FOO + ".sin ( input double ) RETURNS 
double LANGUAGE java AS 'return 
Double.valueOf(Math.sin(input.doubleValue()));'");
+
+Assert.assertEquals(1, Functions.find(new FunctionName(KS_FOO, 
"sin")).size());
+
+prepared = QueryProcessor.prepare("SELECT key, "+KS_FOO+".sin(d) FROM 
"+KEYSPACE+'.'+currentTable(), ClientState.forInternalCalls(), false);
+
Assert.assertNotNull(QueryProcessor.instance.getPrepared(prepared.statementId));
+
+execute("DROP KEYSPACE " + KS_FOO + ";");
+
+
Assert.assertNull(QueryProcessor.instance.getPrepared(prepared.statementId));
+}
+
 @Test
 public void testFunctionCreationAndDrop() throws Throwable
 {
@@ -37,45 +105,47 @@ public class UFTest extends CQLTester
 execute("INSERT INTO %s(key, d) VALUES (?, ?)", 3, 3d);
 
 // simple creation
-execute("CREATE FUNCTION foo::sin ( input double ) RETURNS double 
LANGUAGE java AS 'return Double.valueOf(Math.sin(input.doubleValue()));'");
+execute("CREATE FUNCTION "+KS_FOO+".sin ( input double ) RETURNS 
double LANGUAGE java AS 'return 
Double.valueOf(Math.sin(input.doubleValue()));'");
 // check we can't recreate the same function
-assertInvalid("CREATE FUNCTION foo::sin ( input double ) RETURNS 
double LANGUAGE java AS 'return 
Double.valueOf(Math.sin(input.doubleValue()));'");
+assertInvalid("CREATE FUNCTION "+KS_FOO+".sin ( input double ) RETURNS 
double LANGUAGE java AS 'return 
Double.valueOf(Math.sin(input.doubleValue()));'");
 // but that it doesn't complay with "IF NOT EXISTS"
-execute("CREATE FUNCTION IF NOT EXISTS foo::sin ( input double ) 
RETURNS double LANGUAGE java AS 'return 
Double.valueOf(Math.sin(input.doubleValue()));'");
+execute("CREATE FUNCTION IF NOT EXISTS "+KS_FOO

[jira] [Commented] (CASSANDRA-8320) 2.1.2: NullPointerException in SSTableWriter

2014-11-14 Thread Marcus Eriksson (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8320?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14212367#comment-14212367
 ] 

Marcus Eriksson commented on CASSANDRA-8320:


upgrading to 2.1.2 from what?

> 2.1.2: NullPointerException in SSTableWriter
> 
>
> Key: CASSANDRA-8320
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8320
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Evgeny Pasynkov
>
> After upgrading to 2.1.2, I've got tons of these exceptions in the log:
> java.lang.NullPointerException: null
>   at 
> org.apache.cassandra.io.sstable.SSTableWriter.openEarly(SSTableWriter.java:381)
>  ~[cassandra-all-2.1.2.jar:2.1.2]
>   at 
> org.apache.cassandra.io.sstable.SSTableRewriter.switchWriter(SSTableRewriter.java:295)
>  ~[cassandra-all-2.1.2.jar:2.1.2]
>   at 
> org.apache.cassandra.io.sstable.SSTableRewriter.abort(SSTableRewriter.java:186)
>  ~[cassandra-all-2.1.2.jar:2.1.2]
>   at 
> org.apache.cassandra.db.compaction.CompactionTask.runWith(CompactionTask.java:204)
>  ~[cassandra-all-2.1.2.jar:2.1.2]
>   at 
> org.apache.cassandra.io.util.DiskAwareRunnable.runMayThrow(DiskAwareRunnable.java:48)
>  ~[cassandra-all-2.1.2.jar:2.1.2]
>   at 
> org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28) 
> ~[cassandra-all-2.1.2.jar:2.1.2]
>   at 
> org.apache.cassandra.db.compaction.CompactionTask.executeInternal(CompactionTask.java:75)
>  ~[cassandra-all-2.1.2.jar:2.1.2]
>   at 
> org.apache.cassandra.db.compaction.AbstractCompactionTask.execute(AbstractCompactionTask.java:59)
>  ~[cassandra-all-2.1.2.jar:2.1.2]
>   at 
> org.apache.cassandra.db.compaction.CompactionManager$BackgroundCompactionTask.run(CompactionManager.java:232)
>  ~[cassandra-all-2.1.2.jar:2.1.2]
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471) 
> ~[na:1.7.0_60]
>   at java.util.concurrent.FutureTask.run(FutureTask.java:262) 
> ~[na:1.7.0_60]
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>  ~[na:1.7.0_60]
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>  [na:1.7.0_60]
>   at java.lang.Thread.run(Thread.java:745) [na:1.7.0_60]
> This error is preceded by the following problems:
> 12:59:59.632 [NonPeriodicTasks:1] ERROR o.a.c.io.sstable.SSTableDeletingTask 
> - Unable to delete 
> E:\Upsource_11959\data\cassandra\data\system\schema_keyspaces-b0f2235744583cdb9631c43e59ce3676\system-schema_keyspaces-ka-4-Data.db
>  (it will be removed on server restart; we'll also retry after GC)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (CASSANDRA-8318) Unable to replace a node

2014-11-14 Thread Jens Rantil (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8318?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14212344#comment-14212344
 ] 

Jens Rantil edited comment on CASSANDRA-8318 at 11/14/14 3:16 PM:
--

Just opened up a healthy node and listed peers. Interestingly, X.X.X.56 is in 
the listing:

{noformat}
cqlsh> SELECT peer, data_center, host_id FROM system.peers;

 peer   | data_center | host_id
+-+--
 X.X.X.33 |   Cassandra | 871968c9-1d6b-4f06-ba90-8b3a8d92dcf0
 X.X.X.54 |   Analytics | 3cd36895-ee47-41c1-a5f5-41cb0f8526a6
 X.X.X.51 |   Analytics | d97cf86f-bfaf-4488-b716-26d71635a8fc
 X.X.X.52 |   Analytics | caa32f68-5a6b-4d87-80bd-baa66a9b61ce
 X.X.X.55 |   Analytics | 7d3f73c4-724e-45a6-bcf9-d3072dfc157f
 X.X.X.50 |   Analytics | 25efdbcd-14d3-4e9c-803a-3db5795d4efa
 X.X.X.31 |   Cassandra | 48cb0782-6c9a-4805-9330-38e192b6b680
 X.X.X.56 |   Analytics | null
 X.X.X.53 |   Analytics | e219321e-a6d5-48c4-9bad-d2e25429b1d2

(9 rows)

cqlsh> SELECT * FROM system.peers WHERE peer='X.X.X.56';

 peer   | data_center | host_id | preferred_ip | rack | release_version | 
rpc_address | schema_version | tokens | workload
+-+-+--+--+-+-+++---
 X.X.X.56 |   Analytics |null | null | null |null | 
   null |   null |   null | Analytics

(1 rows)

cqlsh> SELECT * FROM system.peers WHERE peer='X.X.X.51';

 peer   | data_center | host_id  | preferred_ip 
| rack  | release_version | rpc_address | schema_version   
| tokens | workload
+-+--+--+---+-+-+--++---
 X.X.X.51 |   Analytics | d97cf86f-bfaf-4488-b716-26d71635a8fc | null | 
rack1 |   2.0.10.71 |  X.X.X.51 | cc6357e2-db00-3f93-8dab-17036d4f6ff7 | 
{'-2'} | Analytics

(1 rows)
{noformat}

Should I expect it to be there?


was (Author: ztyx):
Just opened up a healthy node and listed peers. Interestingly, X.X.X.56 is in 
the listing:

{noformat}
cqlsh> SELECT peer, data_center, host_id FROM system.peers;

 peer   | data_center | host_id
+-+--
 X.X.X.33 |   Cassandra | 871968c9-1d6b-4f06-ba90-8b3a8d92dcf0
 X.X.X.54 |   Analytics | 3cd36895-ee47-41c1-a5f5-41cb0f8526a6
 X.X.X.51 |   Analytics | d97cf86f-bfaf-4488-b716-26d71635a8fc
 X.X.X.52 |   Analytics | caa32f68-5a6b-4d87-80bd-baa66a9b61ce
 X.X.X.55 |   Analytics | 7d3f73c4-724e-45a6-bcf9-d3072dfc157f
 X.X.X.50 |   Analytics | 25efdbcd-14d3-4e9c-803a-3db5795d4efa
 X.X.X.31 |   Cassandra | 48cb0782-6c9a-4805-9330-38e192b6b680
 X.X.X.56 |   Analytics | null
 X.X.X.53 |   Analytics | e219321e-a6d5-48c4-9bad-d2e25429b1d2

(9 rows)

cqlsh> SELECT * FROM system.peers WHERE peer='X.X.X.56';

 peer   | data_center | host_id | preferred_ip | rack | release_version | 
rpc_address | schema_version | tokens | workload
+-+-+--+--+-+-+++---
 X.X.X.56 |   Analytics |null | null | null |null | 
   null |   null |   null | Analytics

(1 rows)

cqlsh> SELECT * FROM system.peers WHERE peer='X.X.X.51';

 peer   | data_center | host_id  | preferred_ip 
| rack  | release_version | rpc_address | schema_version   
| tokens | workload
+-+--+--+---+-+-+--++---
 X.X.X.51 |   Analytics | d97cf86f-bfaf-4488-b716-26d71635a8fc | null | 
rack1 |   2.0.10.71 |  X.X.X.51 | cc6357e2-db00-3f93-8dab-17036d4f6ff7 | 
{'-2'} | Analytics

(1 rows)
{noformat}

> Unable to replace a node
> 
>
> Key: CASSANDRA-8318
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8318
> Project: Cassandra
>  Issue Type: Bug
> Environment: 2.0.8.39 (Datastax DSE 4.5.3)
>Reporter: Jens Rantil
> Attachments: X.X.X.56.log
>
>
> Had a hardware failure of a node. I followed the Datastax documentation[1] on 
> how to replace the node X.X.X.51 using a brand new node with the same IP. 
> Since it didn't come up after waiting for ~5 minutes or so, I decided to 
> replace X.X.X.51 with a brand new unused IP X.X.X.56 instead. It now seems 
> like my gossip is in some weird state. When I start the replacement node I see 
> line like
> {noformat}
>  INFO [GossipStage:1

[jira] [Created] (CASSANDRA-8320) 2.1.2: NullPointerException in SSTableWriter

2014-11-14 Thread Evgeny Pasynkov (JIRA)
Evgeny Pasynkov created CASSANDRA-8320:
--

 Summary: 2.1.2: NullPointerException in SSTableWriter
 Key: CASSANDRA-8320
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8320
 Project: Cassandra
  Issue Type: Bug
Reporter: Evgeny Pasynkov


After upgrading to 2.1.2, I've got tons of these exceptions in the log:

java.lang.NullPointerException: null
at 
org.apache.cassandra.io.sstable.SSTableWriter.openEarly(SSTableWriter.java:381) 
~[cassandra-all-2.1.2.jar:2.1.2]
at 
org.apache.cassandra.io.sstable.SSTableRewriter.switchWriter(SSTableRewriter.java:295)
 ~[cassandra-all-2.1.2.jar:2.1.2]
at 
org.apache.cassandra.io.sstable.SSTableRewriter.abort(SSTableRewriter.java:186) 
~[cassandra-all-2.1.2.jar:2.1.2]
at 
org.apache.cassandra.db.compaction.CompactionTask.runWith(CompactionTask.java:204)
 ~[cassandra-all-2.1.2.jar:2.1.2]
at 
org.apache.cassandra.io.util.DiskAwareRunnable.runMayThrow(DiskAwareRunnable.java:48)
 ~[cassandra-all-2.1.2.jar:2.1.2]
at 
org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28) 
~[cassandra-all-2.1.2.jar:2.1.2]
at 
org.apache.cassandra.db.compaction.CompactionTask.executeInternal(CompactionTask.java:75)
 ~[cassandra-all-2.1.2.jar:2.1.2]
at 
org.apache.cassandra.db.compaction.AbstractCompactionTask.execute(AbstractCompactionTask.java:59)
 ~[cassandra-all-2.1.2.jar:2.1.2]
at 
org.apache.cassandra.db.compaction.CompactionManager$BackgroundCompactionTask.run(CompactionManager.java:232)
 ~[cassandra-all-2.1.2.jar:2.1.2]
at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471) 
~[na:1.7.0_60]
at java.util.concurrent.FutureTask.run(FutureTask.java:262) 
~[na:1.7.0_60]
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) 
~[na:1.7.0_60]
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) 
[na:1.7.0_60]
at java.lang.Thread.run(Thread.java:745) [na:1.7.0_60]

This error is preceded by the following problems:

12:59:59.632 [NonPeriodicTasks:1] ERROR o.a.c.io.sstable.SSTableDeletingTask - 
Unable to delete 
E:\Upsource_11959\data\cassandra\data\system\schema_keyspaces-b0f2235744583cdb9631c43e59ce3676\system-schema_keyspaces-ka-4-Data.db
 (it will be removed on server restart; we'll also retry after GC)




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8318) Unable to replace a node

2014-11-14 Thread Jens Rantil (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8318?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14212344#comment-14212344
 ] 

Jens Rantil commented on CASSANDRA-8318:


Just opened up a healthy node and listed peers. Interestingly, X.X.X.56 is in 
the listing:

{noformat}
cqlsh> SELECT peer, data_center, host_id FROM system.peers;

 peer   | data_center | host_id
+-+--
 X.X.X.33 |   Cassandra | 871968c9-1d6b-4f06-ba90-8b3a8d92dcf0
 X.X.X.54 |   Analytics | 3cd36895-ee47-41c1-a5f5-41cb0f8526a6
 X.X.X.51 |   Analytics | d97cf86f-bfaf-4488-b716-26d71635a8fc
 X.X.X.52 |   Analytics | caa32f68-5a6b-4d87-80bd-baa66a9b61ce
 X.X.X.55 |   Analytics | 7d3f73c4-724e-45a6-bcf9-d3072dfc157f
 X.X.X.50 |   Analytics | 25efdbcd-14d3-4e9c-803a-3db5795d4efa
 X.X.X.31 |   Cassandra | 48cb0782-6c9a-4805-9330-38e192b6b680
 X.X.X.56 |   Analytics | null
 X.X.X.53 |   Analytics | e219321e-a6d5-48c4-9bad-d2e25429b1d2

(9 rows)

cqlsh> SELECT * FROM system.peers WHERE peer='X.X.X.56';

 peer   | data_center | host_id | preferred_ip | rack | release_version | 
rpc_address | schema_version | tokens | workload
+-+-+--+--+-+-+++---
 X.X.X.56 |   Analytics |null | null | null |null | 
   null |   null |   null | Analytics

(1 rows)

cqlsh> SELECT * FROM system.peers WHERE peer='X.X.X.51';

 peer   | data_center | host_id  | preferred_ip 
| rack  | release_version | rpc_address | schema_version   
| tokens | workload
+-+--+--+---+-+-+--++---
 X.X.X.51 |   Analytics | d97cf86f-bfaf-4488-b716-26d71635a8fc | null | 
rack1 |   2.0.10.71 |  X.X.X.51 | cc6357e2-db00-3f93-8dab-17036d4f6ff7 | 
{'-2'} | Analytics

(1 rows)
{noformat}

> Unable to replace a node
> 
>
> Key: CASSANDRA-8318
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8318
> Project: Cassandra
>  Issue Type: Bug
> Environment: 2.0.8.39 (Datastax DSE 4.5.3)
>Reporter: Jens Rantil
> Attachments: X.X.X.56.log
>
>
> Had a hardware failure of a node. I followed the Datastax documentation[1] on 
> how to replace the node X.X.X.51 using a brand new node with the same IP. 
> Since it didn't come up after waiting for ~5 minutes or so, I decided to 
> replace X.X.X.51 with a brand new unused IP X.X.X.56 instead. It now seems 
> like my gossip is in some weird state. When I start the replacement node I see 
> lines like
> {noformat}
>  INFO [GossipStage:1] 2014-11-14 14:57:03,025 Gossiper.java (line 901) 
> InetAddress /X.X.X.51 is now DOWN
>  INFO [GossipStage:1] 2014-11-14 14:57:03,042 Gossiper.java (line 901) 
> InetAddress /X.X.X.56 is now DOWN
> {noformat}
> . The latter is somewhat surprising since that is the IP of the actual 
> replacement node. It doesn't surprise me it can't talk to itself if it hasn't 
> started!
> Eventually the replacement node shuts down with
> {noformat}
> ERROR [main] 2014-11-14 14:58:06,031 CassandraDaemon.java (line 513) 
> Exception encountered during startup
> java.lang.UnsupportedOperationException: Cannot replace token -2 which does 
> not exist!
>   at 
> org.apache.cassandra.service.StorageService.joinTokenRing(StorageService.java:782)
>   at 
> org.apache.cassandra.service.StorageService.initServer(StorageService.java:614)
>   at 
> org.apache.cassandra.service.StorageService.initServer(StorageService.java:503)
>   at 
> org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:378)
>   at com.datastax.bdp.server.DseDaemon.setup(DseDaemon.java:374)
>   at 
> org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:496)
>   at com.datastax.bdp.server.DseDaemon.main(DseDaemon.java:615)
>  INFO [Thread-2] 2014-11-14 14:58:06,035 DseDaemon.java (line 461) DSE 
> shutting down...
>  INFO [StorageServiceShutdownHook] 2014-11-14 14:58:06,037 Gossiper.java 
> (line 1307) Announcing shutdown
>  INFO [Thread-2] 2014-11-14 14:58:06,046 PluginManager.java (line 355) All 
> plugins are stopped.
>  INFO [Thread-2] 2014-11-14 14:58:06,047 CassandraDaemon.java (line 463) 
> Cassandra shutting down...
> ERROR [Thread-2] 2014-11-14 14:58:06,047 CassandraDaemon.java (line 199) 
> Exception in thread Thread[Thread-2,5,main]
> java.lang.NullPointerException
>   at 
> org.apache.cassandra.service.CassandraDaemon.stop(CassandraDaemon.java:464)
>   at com.datastax.bdp.server.DseDaemon.stop(DseDaemon.java:464)
>   at com.datastax.bdp.server.DseDaemon$

[jira] [Updated] (CASSANDRA-8319) Allow LWT DELETE with column comparison

2014-11-14 Thread Jonathan Ellis (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8319?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis updated CASSANDRA-8319:
--
Summary: Allow LWT DELETE with column comparison  (was: Allow CAS DELETE 
with column comparison)

> Allow LWT DELETE with column comparison
> ---
>
> Key: CASSANDRA-8319
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8319
> Project: Cassandra
>  Issue Type: Improvement
>  Components: API, Core
>Reporter: DOAN DuyHai
>Priority: Minor
> Fix For: 2.1.2
>
>
> Right now, the only way to use LWT with DELETE is to rely on the IF NOT 
> EXISTS keyword
> There may be some scenarios where using IF column=xxx with DELETE is relevant.
>  I am preparing a hands-on with a chat application using C*. A chatroom is 
> defined as:
> {code:sql}
> CREATE TABLE chatroom (
> room_id uuid PRIMARY KEY,
> name text,
> participants list<frozen<person>> // person is a UDT representing a subset of the users table
> );
> {code}
>  Right now, upon removing a participant from the room, I need to:
> * count remaining participants in the room (read the participants list)
> * remove the room (the whole partition) if there isn't anyone inside
>  This is a read-before-write pattern, but even this does not prevent race 
> conditions. Indeed, the last participant may leave the room at the same time 
> a new one enters
>  So using LWT with "DELETE FROM chatroom IF participants = [] WHERE room_id= 
> ..." may help making the removal safe
>  With this design, room creation/deletion as well as participants 
> addition/removal should go through LWT to be consistent. It's slow but 
> participant joining and leaving event frequency is low enough compared to 
> people posting messages to make the trade off not too expensive in general



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-8319) Allow CAS DELETE with column comparison

2014-11-14 Thread DOAN DuyHai (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8319?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

DOAN DuyHai updated CASSANDRA-8319:
---
Description: 
Right now, the only way to use LWT with DELETE is to rely on the IF NOT EXISTS 
keyword

There may be some scenarios where using IF column=xxx with DELETE is relevant.

 I am preparing a hands-on with a chat application using C*. A chatroom is 
defined as:

{code:sql}
CREATE TABLE chatroom (
room_id uuid PRIMARY KEY,
name text,
participants list<frozen<person>> // person is a UDT representing a subset of the users table
);
{code}

 Right now, upon removing a participant from the room, I need to:

* count remaining participants in the room (read the participants list)
* remove the room (the whole partition) if there isn't anyone inside

 This is a read-before-write pattern, but even this does not prevent race 
conditions. Indeed, the last participant may leave the room at the same time a 
new one enters

 So using LWT with "DELETE FROM chatroom IF participants = [] WHERE room_id= 
..." may help making the removal safe

 With this design, room creation/deletion as well as participants 
addition/removal should go through LWT to be consistent. It's slow but 
participant joining and leaving event frequency is low enough compared to 
people posting messages to make the trade off not too expensive in general

  was:
Right now, the only way to use LWT with DELETE is to rely on the IF NOT EXISTS 
keyword

There may be some scenarios where using IF column=xxx with DELETE is relevant.

 I am preparing a hands-on with a chat application using C*. A chatroom is 
defined as:

{code:sql}
CREATE TABLE chatroom (
room_id uuid PRIMARY KEY,
name text,
participants list<frozen<person>> // person is a UDT representing a subset of the users table
);
{code}

 Right now, upon removing a participant from the room, I need to:

* count remaining participants in the room
* remove the room (the whole partition) if there isn't anyone inside

 This is a read-before-write pattern, but even this does not prevent race 
conditions. Indeed, the last participant may leave the room at the same time a 
new one enters

 So using LWT with "DELETE FROM chatroom IF participants = [] WHERE room_id= 
..." may help making the removal safe


> Allow CAS DELETE with column comparison
> ---
>
> Key: CASSANDRA-8319
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8319
> Project: Cassandra
>  Issue Type: Improvement
>  Components: API, Core
>Reporter: DOAN DuyHai
>Priority: Minor
> Fix For: 2.1.2
>
>
> Right now, the only way to use LWT with DELETE is to rely on the IF NOT 
> EXISTS keyword
> There may be some scenarios where using IF column=xxx with DELETE is relevant.
>  I am preparing a hands-on with a chat application using C*. A chatroom is 
> defined as:
> {code:sql}
> CREATE TABLE chatroom (
> room_id uuid PRIMARY KEY,
> name text,
> participants list<frozen<person>> // person is a UDT representing a subset of the users table
> );
> {code}
>  Right now, upon removing a participant from the room, I need to:
> * count remaining participants in the room (read the participants list)
> * remove the room (the whole partition) if there isn't anyone inside
>  This is a read-before-write pattern, but even this does not prevent race 
> conditions. Indeed, the last participant may leave the room at the same time 
> a new one enters
>  So using LWT with "DELETE FROM chatroom IF participants = [] WHERE room_id= 
> ..." may help making the removal safe
>  With this design, room creation/deletion as well as participants 
> addition/removal should go through LWT to be consistent. It's slow but 
> participant joining and leaving event frequency is low enough compared to 
> people posting messages to make the trade off not too expensive in general



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (CASSANDRA-8319) Allow CAS DELETE with column comparison

2014-11-14 Thread DOAN DuyHai (JIRA)
DOAN DuyHai created CASSANDRA-8319:
--

 Summary: Allow CAS DELETE with column comparison
 Key: CASSANDRA-8319
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8319
 Project: Cassandra
  Issue Type: Improvement
  Components: API, Core
Reporter: DOAN DuyHai
Priority: Minor
 Fix For: 2.1.2


Right now, the only way to use LWT with DELETE is to rely on the IF NOT EXISTS 
keyword

There may be some scenarios where using IF column=xxx with DELETE is relevant.

 I am preparing a hands-on with a chat application using C*. A chatroom is 
defined as:

{code:sql}
CREATE TABLE chatroom (
room_id uuid PRIMARY KEY,
name text,
participants list<frozen<person>> // person is a UDT representing a subset of the users table
);
{code}

 Right now, upon removing a participant from the room, I need to:

* count remaining participants in the room
* remove the room (the whole partition) if there isn't anyone inside

 This is a read-before-write pattern, but even this does not prevent race 
conditions. Indeed, the last participant may leave the room at the same time a 
new one enters

 So using LWT with "DELETE FROM chatroom IF participants = [] WHERE room_id= 
..." may help making the removal safe



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (CASSANDRA-6411) Issue with reading from sstable

2014-11-14 Thread Sebastian Estevez (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6411?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14207090#comment-14207090
 ] 

Sebastian Estevez edited comment on CASSANDRA-6411 at 11/14/14 2:49 PM:


Found what looks to be this same issue via a Google alert here:

https://bugs.launchpad.net/opencontrail/+bug/1389663

I reached out to Ed; they were running 2.0.6.


was (Author: sebastian.este...@datastax.com):
Found what looks to be this same issue via google alert here:

https://bugs.launchpad.net/opencontrail/+bug/1389663

Not sure what version they are running but thought I would post it here as an 
FYI.

> Issue with reading from sstable
> ---
>
> Key: CASSANDRA-6411
> URL: https://issues.apache.org/jira/browse/CASSANDRA-6411
> Project: Cassandra
>  Issue Type: Bug
>  Components: API
>Reporter: Mike Konobeevskiy
>Assignee: Yuki Morishita
> Attachments: 6411-log.zip, 6411-sstables.zip
>
>
> With Cassandra 1.2.5 this happens almost every week. 
> {noformat}
> java.lang.RuntimeException: 
> org.apache.cassandra.io.sstable.CorruptSSTableException: 
> java.io.EOFException: EOF after 5105 bytes out of 19815
>   at 
> org.apache.cassandra.service.StorageProxy$DroppableRunnable.run(StorageProxy.java:1582)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>   at java.lang.Thread.run(Thread.java:724)
> Caused by: org.apache.cassandra.io.sstable.CorruptSSTableException: 
> java.io.EOFException: EOF after 5105 bytes out of 19815
>   at 
> org.apache.cassandra.db.columniterator.SimpleSliceReader.(SimpleSliceReader.java:91)
>   at 
> org.apache.cassandra.db.columniterator.SSTableSliceIterator.createReader(SSTableSliceIterator.java:68)
>   at 
> org.apache.cassandra.db.columniterator.SSTableSliceIterator.(SSTableSliceIterator.java:44)
>   at 
> org.apache.cassandra.db.filter.SliceQueryFilter.getSSTableColumnIterator(SliceQueryFilter.java:101)
>   at 
> org.apache.cassandra.db.filter.QueryFilter.getSSTableColumnIterator(QueryFilter.java:68)
>   at 
> org.apache.cassandra.db.CollationController.collectAllData(CollationController.java:274)
>   at 
> org.apache.cassandra.db.CollationController.getTopLevelColumns(CollationController.java:65)
>   at 
> org.apache.cassandra.db.ColumnFamilyStore.getTopLevelColumns(ColumnFamilyStore.java:1357)
>   at 
> org.apache.cassandra.db.ColumnFamilyStore.getColumnFamily(ColumnFamilyStore.java:1214)
>   at 
> org.apache.cassandra.db.ColumnFamilyStore.getColumnFamily(ColumnFamilyStore.java:1126)
>   at org.apache.cassandra.db.Table.getRow(Table.java:347)
>   at 
> org.apache.cassandra.db.SliceFromReadCommand.getRow(SliceFromReadCommand.java:70)
>   at 
> org.apache.cassandra.service.StorageProxy$LocalReadRunnable.runMayThrow(StorageProxy.java:1052)
>   at 
> org.apache.cassandra.service.StorageProxy$DroppableRunnable.run(StorageProxy.java:1578)
>   ... 3 more
> Caused by: java.io.EOFException: EOF after 5105 bytes out of 19815
>   at 
> org.apache.cassandra.io.util.FileUtils.skipBytesFully(FileUtils.java:350)
>   at 
> org.apache.cassandra.utils.ByteBufferUtil.skipShortLength(ByteBufferUtil.java:382)
>   at 
> org.apache.cassandra.db.columniterator.SimpleSliceReader.(SimpleSliceReader.java:72)
>   ... 16 more
> {noformat}
> This is occurring roughly weekly with quite minimal usage.
> Recreation of CF does not help.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-7813) Decide how to deal with conflict between native and user-defined functions

2014-11-14 Thread Benjamin Lerer (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7813?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14212313#comment-14212313
 ] 

Benjamin Lerer commented on CASSANDRA-7813:
---

+1

> Decide how to deal with conflict between native and user-defined functions
> --
>
> Key: CASSANDRA-7813
> URL: https://issues.apache.org/jira/browse/CASSANDRA-7813
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Sylvain Lebresne
>Assignee: Robert Stupp
>  Labels: cql
> Fix For: 3.0
>
> Attachments: 7813.txt, 7813v2.txt, 7813v3.txt, 7813v4.txt, 
> 7813v5.txt, 7813v6.txt, 7813v7.txt, 7813v8.txt
>
>
> We have a bunch of native/hardcoded functions (now(), dateOf(), ...) and in 
> 3.0, users will be able to define new functions. Now, there is a very high 
> chance that we will provide more native functions over time (to be clear, I'm 
> not particularly for adding native functions for allthethings just because we 
> can, but it's clear that we should ultimately provide more than what we 
> have). Which begs the question: how do we want to deal with the problem of 
> adding a native function potentially breaking a previously defined 
> user-defined function?
> A priori I see the following options (maybe there are more?):
> # don't do anything specific, hoping that it won't happen often and consider 
> it a user problem if it does.
> # reserve a big number of names that we're hoping will cover all future needs.
> # make native function and user-defined function syntactically distinct so it 
> cannot happen.
> I'm not a huge fan of solution 1). Solution 2) is actually what we did for 
> UDT but I think it's somewhat less practical here: there are only so many types 
> that it makes sense to provide natively, so it wasn't too hard to come up 
> with a reasonably small list of type names to reserve just in case. This 
> feels a lot harder for functions to me.
> Which leaves solution 3). Since we already have the concept of namespaces for 
> functions, a simple idea would be to force user functions to have a namespace. 
> We could even allow that namespace to be empty as long as we force the 
> namespace separator (so we'd allow {{bar::foo}} and {{::foo}} for user 
> functions, but *not* {{foo}} which would be reserved for native functions).
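
For illustration, hypothetical syntax sketching solution 3) from the quoted 
description (none of this is implemented; the table and function names are 
placeholders):

{code:sql}
-- Bare names would stay reserved for native functions
SELECT dateOf(id) FROM events;

-- User-defined functions would always carry the namespace separator,
-- either with an explicit namespace or with an empty one
SELECT stats::my_avg(value) FROM events;
SELECT ::my_fmt(value) FROM events;
{code}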



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-8318) Unable to replace a node

2014-11-14 Thread Jens Rantil (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8318?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jens Rantil updated CASSANDRA-8318:
---
Description: 
Had a hardware failure of a node. I followed the Datastax documentation[1] on 
how to replace the node X.X.X.51 using a brand new node with the same IP. Since 
it didn't come up after waiting for ~5 minutes or so, I decided to replace 
X.X.X.51 with a brand new unused IP X.X.X.56 instead. It now seems like my 
gossip is in some weird state. When I start the replacement node I see lines like

{noformat}
 INFO [GossipStage:1] 2014-11-14 14:57:03,025 Gossiper.java (line 901) 
InetAddress /X.X.X.51 is now DOWN
 INFO [GossipStage:1] 2014-11-14 14:57:03,042 Gossiper.java (line 901) 
InetAddress /X.X.X.56 is now DOWN
{noformat}
. The latter is somewhat surprising since that is the IP of the actual 
replacement node. It doesn't surprise me it can't talk to itself if it hasn't 
started!

Eventually the replacement node shuts down with
{noformat}
ERROR [main] 2014-11-14 14:58:06,031 CassandraDaemon.java (line 513) Exception 
encountered during startup
java.lang.UnsupportedOperationException: Cannot replace token -2 which does not 
exist!
at 
org.apache.cassandra.service.StorageService.joinTokenRing(StorageService.java:782)
at 
org.apache.cassandra.service.StorageService.initServer(StorageService.java:614)
at 
org.apache.cassandra.service.StorageService.initServer(StorageService.java:503)
at 
org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:378)
at com.datastax.bdp.server.DseDaemon.setup(DseDaemon.java:374)
at 
org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:496)
at com.datastax.bdp.server.DseDaemon.main(DseDaemon.java:615)
 INFO [Thread-2] 2014-11-14 14:58:06,035 DseDaemon.java (line 461) DSE shutting 
down...
 INFO [StorageServiceShutdownHook] 2014-11-14 14:58:06,037 Gossiper.java (line 
1307) Announcing shutdown
 INFO [Thread-2] 2014-11-14 14:58:06,046 PluginManager.java (line 355) All 
plugins are stopped.
 INFO [Thread-2] 2014-11-14 14:58:06,047 CassandraDaemon.java (line 463) 
Cassandra shutting down...
ERROR [Thread-2] 2014-11-14 14:58:06,047 CassandraDaemon.java (line 199) 
Exception in thread Thread[Thread-2,5,main]
java.lang.NullPointerException
at 
org.apache.cassandra.service.CassandraDaemon.stop(CassandraDaemon.java:464)
at com.datastax.bdp.server.DseDaemon.stop(DseDaemon.java:464)
at com.datastax.bdp.server.DseDaemon$1.run(DseDaemon.java:364){noformat}

All nodes are showing
{noformat}
root@machine-2:~# nodetool status company
Datacenter: Analytics
=
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
--  Address Load   Tokens  Owns (effective)  Host ID
   Rack
UN  X.X.X.50  18.35 GB   1   16.7% 
25efdbcd-14d3-4e9c-803a-3db5795d4efa  rack1
DN  X.X.X.51  195.67 KB  1   16.7% 
d97cf86f-bfaf-4488-b716-26d71635a8fc  rack1
UN  X.X.X.52  18.7 GB1   16.7% 
caa32f68-5a6b-4d87-80bd-baa66a9b61ce  rack1
UN  X.X.X.53  18.56 GB   1   16.7% 
e219321e-a6d5-48c4-9bad-d2e25429b1d2  rack1
UN  X.X.X.54  19.69 GB   1   16.7% 
3cd36895-ee47-41c1-a5f5-41cb0f8526a6  rack1
UN  X.X.X.55  18.88 GB   1   16.7% 
7d3f73c4-724e-45a6-bcf9-d3072dfc157f  rack1
Datacenter: Cassandra
=
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
--  Address Load   Tokens  Owns (effective)  Host ID
   Rack
UN  X.X.X.33  128.95 GB  256 100.0%
871968c9-1d6b-4f06-ba90-8b3a8d92dcf0  rack1
UN  X.X.X.32  115.3 GB   256 100.0%
d7cacd89-8613-4de5-8a5e-a2c53c41ea45  rack1
UN  X.X.X.31  130.45 GB  256 100.0%
48cb0782-6c9a-4805-9330-38e192b6b680  rack1
{noformat}
, but when X.X.X.56 is starting it shows
{noformat}
root@machine-1:/var/lib/cassandra# nodetool status
Note: Ownership information does not include topology; for complete 
information, specify a keyspace
Datacenter: Analytics
=
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
--  Address Load   Tokens  Owns   Host ID   
Rack
UN  X.X.X.50  18.41 GB   1   0.2%   25efdbcd-14d3-4e9c-803a-3db5795d4efa  
rack1
UN  X.X.X.52  19.07 GB   1   0.0%   caa32f68-5a6b-4d87-80bd-baa66a9b61ce  
rack1
UN  X.X.X.53  18.65 GB   1   0.1%   e219321e-a6d5-48c4-9bad-d2e25429b1d2  
rack1
UN  X.X.X.54  19.69 GB   1   0.0%   3cd36895-ee47-41c1-a5f5-41cb0f8526a6  
rack1
UN  X.X.X.55  18.97 GB   1   0.2%   7d3f73c4-724e-45a6-bcf9-d3072dfc157f  
rack1
Datacenter: Cassandra
=
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
--  Address Load   Tokens  Owns   Host ID   
Rack
UN  X.X.X.33  129

[jira] [Updated] (CASSANDRA-8318) Unable to replace a node

2014-11-14 Thread Jens Rantil (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8318?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jens Rantil updated CASSANDRA-8318:
---
Description: 
Had a hardware failure of a node. I followed the Datastax documentation[1] on 
how to replace the node X.X.X.51 using a brand new node with the same IP. Since 
it didn't come up after waiting for ~5 minutes or so, I decided to replace 
X.X.X.51 with a brand new unused IP X.X.X.56 instead. It now seems like my 
gossip is in some weird state. When I start the replacement node I see lines like

{noformat}
 INFO [GossipStage:1] 2014-11-14 14:57:03,025 Gossiper.java (line 901) 
InetAddress /X.X.X.51 is now DOWN
 INFO [GossipStage:1] 2014-11-14 14:57:03,042 Gossiper.java (line 901) 
InetAddress /X.X.X.56 is now DOWN
{noformat}
. The latter is somewhat surprising since that is the IP of the actual 
replacement node. It doesn't surprise me it can't talk to itself if it hasn't 
started!

Eventually the replacement node shuts down with
{noformat}
ERROR [main] 2014-11-14 14:58:06,031 CassandraDaemon.java (line 513) Exception 
encountered during startup
java.lang.UnsupportedOperationException: Cannot replace token -2 which does not 
exist!
at 
org.apache.cassandra.service.StorageService.joinTokenRing(StorageService.java:782)
at 
org.apache.cassandra.service.StorageService.initServer(StorageService.java:614)
at 
org.apache.cassandra.service.StorageService.initServer(StorageService.java:503)
at 
org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:378)
at com.datastax.bdp.server.DseDaemon.setup(DseDaemon.java:374)
at 
org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:496)
at com.datastax.bdp.server.DseDaemon.main(DseDaemon.java:615)
 INFO [Thread-2] 2014-11-14 14:58:06,035 DseDaemon.java (line 461) DSE shutting 
down...
 INFO [StorageServiceShutdownHook] 2014-11-14 14:58:06,037 Gossiper.java (line 
1307) Announcing shutdown
 INFO [Thread-2] 2014-11-14 14:58:06,046 PluginManager.java (line 355) All 
plugins are stopped.
 INFO [Thread-2] 2014-11-14 14:58:06,047 CassandraDaemon.java (line 463) 
Cassandra shutting down...
ERROR [Thread-2] 2014-11-14 14:58:06,047 CassandraDaemon.java (line 199) 
Exception in thread Thread[Thread-2,5,main]
java.lang.NullPointerException
at 
org.apache.cassandra.service.CassandraDaemon.stop(CassandraDaemon.java:464)
at com.datastax.bdp.server.DseDaemon.stop(DseDaemon.java:464)
at com.datastax.bdp.server.DseDaemon$1.run(DseDaemon.java:364){noformat}

All nodes are showing
{noformat}
jrantil@machine-2:~$ nodetool status company
Datacenter: Analytics
=
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
--  Address Load   Tokens  Owns (effective)  Host ID
   Rack
UN  X.X.X.50  18.35 GB   1   16.7% 
25efdbcd-14d3-4e9c-803a-3db5795d4efa  rack1
DN  X.X.X.51  195.67 KB  1   16.7% 
d97cf86f-bfaf-4488-b716-26d71635a8fc  rack1
UN  X.X.X.52  18.7 GB1   16.7% 
caa32f68-5a6b-4d87-80bd-baa66a9b61ce  rack1
UN  X.X.X.53  18.56 GB   1   16.7% 
e219321e-a6d5-48c4-9bad-d2e25429b1d2  rack1
UN  X.X.X.54  19.69 GB   1   16.7% 
3cd36895-ee47-41c1-a5f5-41cb0f8526a6  rack1
UN  X.X.X.55  18.88 GB   1   16.7% 
7d3f73c4-724e-45a6-bcf9-d3072dfc157f  rack1
Datacenter: Cassandra
=
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
--  Address Load   Tokens  Owns (effective)  Host ID
   Rack
UN  X.X.X.33  128.95 GB  256 100.0%
871968c9-1d6b-4f06-ba90-8b3a8d92dcf0  rack1
UN  X.X.X.32  115.3 GB   256 100.0%
d7cacd89-8613-4de5-8a5e-a2c53c41ea45  rack1
UN  X.X.X.31  130.45 GB  256 100.0%
48cb0782-6c9a-4805-9330-38e192b6b680  rack1
{noformat}
, but when X.X.X.56 is starting it shows
{noformat}
root@machine-1:/var/lib/cassandra# nodetool status
Note: Ownership information does not include topology; for complete 
information, specify a keyspace
Datacenter: Analytics
=
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
--  Address Load   Tokens  Owns   Host ID   
Rack
UN  X.X.X.50  18.41 GB   1   0.2%   25efdbcd-14d3-4e9c-803a-3db5795d4efa  
rack1
UN  X.X.X.52  19.07 GB   1   0.0%   caa32f68-5a6b-4d87-80bd-baa66a9b61ce  
rack1
UN  X.X.X.53  18.65 GB   1   0.1%   e219321e-a6d5-48c4-9bad-d2e25429b1d2  
rack1
UN  X.X.X.54  19.69 GB   1   0.0%   3cd36895-ee47-41c1-a5f5-41cb0f8526a6  
rack1
UN  X.X.X.55  18.97 GB   1   0.2%   7d3f73c4-724e-45a6-bcf9-d3072dfc157f  
rack1
Datacenter: Cassandra
=
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
--  Address Load   Tokens  Owns   Host ID   
Rack
UN  X.X.X.33  

[jira] [Created] (CASSANDRA-8318) Unable to replace a node

2014-11-14 Thread Jens Rantil (JIRA)
Jens Rantil created CASSANDRA-8318:
--

 Summary: Unable to replace a node
 Key: CASSANDRA-8318
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8318
 Project: Cassandra
  Issue Type: Bug
 Environment: 2.0.8.39 (Datastax DSE 4.5.3)
Reporter: Jens Rantil
 Attachments: X.X.X.56.log

Had a hardware failure of a node. I followed the Datastax documentation[1] on 
how to replace the node X.X.X.51 using a brand new node with the same IP. Since 
it didn't come up after waiting for ~5 minutes or so, I decided to replace 
X.X.X.51 with a brand new unused IP X.X.X.56 instead. It now seems like my 
gossip is in some weird state. When I start the replacement node I see lines like

{noformat}
 INFO [GossipStage:1] 2014-11-14 14:57:03,025 Gossiper.java (line 901) 
InetAddress /X.X.X.51 is now DOWN
 INFO [GossipStage:1] 2014-11-14 14:57:03,042 Gossiper.java (line 901) 
InetAddress /X.X.X.56 is now DOWN
{noformat}
. The latter is somewhat surprising since that is the IP of the actual 
replacement node. It doesn't surprise me it can't talk to itself if it hasn't 
started!

Eventually the replacement node shuts down with
{noformat}
ERROR [main] 2014-11-14 14:58:06,031 CassandraDaemon.java (line 513) Exception 
encountered during startup
java.lang.UnsupportedOperationException: Cannot replace token -2 which does not 
exist!
at 
org.apache.cassandra.service.StorageService.joinTokenRing(StorageService.java:782)
at 
org.apache.cassandra.service.StorageService.initServer(StorageService.java:614)
at 
org.apache.cassandra.service.StorageService.initServer(StorageService.java:503)
at 
org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:378)
at com.datastax.bdp.server.DseDaemon.setup(DseDaemon.java:374)
at 
org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:496)
at com.datastax.bdp.server.DseDaemon.main(DseDaemon.java:615)
 INFO [Thread-2] 2014-11-14 14:58:06,035 DseDaemon.java (line 461) DSE shutting 
down...
 INFO [StorageServiceShutdownHook] 2014-11-14 14:58:06,037 Gossiper.java (line 
1307) Announcing shutdown
 INFO [Thread-2] 2014-11-14 14:58:06,046 PluginManager.java (line 355) All 
plugins are stopped.
 INFO [Thread-2] 2014-11-14 14:58:06,047 CassandraDaemon.java (line 463) 
Cassandra shutting down...
ERROR [Thread-2] 2014-11-14 14:58:06,047 CassandraDaemon.java (line 199) 
Exception in thread Thread[Thread-2,5,main]
java.lang.NullPointerException
at 
org.apache.cassandra.service.CassandraDaemon.stop(CassandraDaemon.java:464)
at com.datastax.bdp.server.DseDaemon.stop(DseDaemon.java:464)
at com.datastax.bdp.server.DseDaemon$1.run(DseDaemon.java:364){noformat}

All nodes are showing
{noformat}
jrantil@analytics-2:~$ nodetool status company
Datacenter: Analytics
=
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
--  Address Load   Tokens  Owns (effective)  Host ID
   Rack
UN  X.X.X.50  18.35 GB   1   16.7% 
25efdbcd-14d3-4e9c-803a-3db5795d4efa  rack1
DN  X.X.X.51  195.67 KB  1   16.7% 
d97cf86f-bfaf-4488-b716-26d71635a8fc  rack1
UN  X.X.X.52  18.7 GB1   16.7% 
caa32f68-5a6b-4d87-80bd-baa66a9b61ce  rack1
UN  X.X.X.53  18.56 GB   1   16.7% 
e219321e-a6d5-48c4-9bad-d2e25429b1d2  rack1
UN  X.X.X.54  19.69 GB   1   16.7% 
3cd36895-ee47-41c1-a5f5-41cb0f8526a6  rack1
UN  X.X.X.55  18.88 GB   1   16.7% 
7d3f73c4-724e-45a6-bcf9-d3072dfc157f  rack1
Datacenter: Cassandra
=
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
--  Address Load   Tokens  Owns (effective)  Host ID
   Rack
UN  X.X.X.33  128.95 GB  256 100.0%
871968c9-1d6b-4f06-ba90-8b3a8d92dcf0  rack1
UN  X.X.X.32  115.3 GB   256 100.0%
d7cacd89-8613-4de5-8a5e-a2c53c41ea45  rack1
UN  X.X.X.31  130.45 GB  256 100.0%
48cb0782-6c9a-4805-9330-38e192b6b680  rack1
{noformat}
, but when X.X.X.56 is starting it shows
{noformat}
root@analytics-1:/var/lib/cassandra# nodetool status
Note: Ownership information does not include topology; for complete 
information, specify a keyspace
Datacenter: Analytics
=
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
--  Address Load   Tokens  Owns   Host ID   
Rack
UN  X.X.X.50  18.41 GB   1   0.2%   25efdbcd-14d3-4e9c-803a-3db5795d4efa  
rack1
UN  X.X.X.52  19.07 GB   1   0.0%   caa32f68-5a6b-4d87-80bd-baa66a9b61ce  
rack1
UN  X.X.X.53  18.65 GB   1   0.1%   e219321e-a6d5-48c4-9bad-d2e25429b1d2  
rack1
UN  X.X.X.54  19.69 GB   1   0.0%   3cd36895-ee47-41c1-a5f5-41cb0f8526a6  
rack1
UN  X.X.X.55  18.97 GB   1   0.2%   7d3f73c4-724e-45a6-bcf9-d3072dfc157f  
rack1
D

[jira] [Updated] (CASSANDRA-6993) Windows: remove mmap'ed I/O for index files and force standard file access

2014-11-14 Thread Joshua McKenzie (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6993?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joshua McKenzie updated CASSANDRA-6993:
---
Attachment: 6993_2.1_v1.txt

Rebased patch attached.

> Windows: remove mmap'ed I/O for index files and force standard file access
> --
>
> Key: CASSANDRA-6993
> URL: https://issues.apache.org/jira/browse/CASSANDRA-6993
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Joshua McKenzie
>Assignee: Joshua McKenzie
>Priority: Minor
>  Labels: Windows
> Fix For: 3.0, 2.1.3
>
> Attachments: 6993_2.1_v1.txt, 6993_v1.txt, 6993_v2.txt
>
>
> Memory-mapped I/O on Windows causes issues with hard-links; we're unable to 
> delete hard-links to open files with memory-mapped segments even using nio.  
> We'll need to push for close to performance parity between mmap'ed I/O and 
> buffered going forward as the buffered / compressed path offers other 
> benefits.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-6993) Windows: remove mmap'ed I/O for index files and force standard file access

2014-11-14 Thread Joshua McKenzie (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6993?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joshua McKenzie updated CASSANDRA-6993:
---
Fix Version/s: 2.1.3

> Windows: remove mmap'ed I/O for index files and force standard file access
> --
>
> Key: CASSANDRA-6993
> URL: https://issues.apache.org/jira/browse/CASSANDRA-6993
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Joshua McKenzie
>Assignee: Joshua McKenzie
>Priority: Minor
>  Labels: Windows
> Fix For: 3.0, 2.1.3
>
> Attachments: 6993_2.1_v1.txt, 6993_v1.txt, 6993_v2.txt
>
>
> Memory-mapped I/O on Windows causes issues with hard-links; we're unable to 
> delete hard-links to open files with memory-mapped segments even using nio.  
> We'll need to push for close to performance parity between mmap'ed I/O and 
> buffered going forward as the buffered / compressed path offers other 
> benefits.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Reopened] (CASSANDRA-6993) Windows: remove mmap'ed I/O for index files and force standard file access

2014-11-14 Thread Joshua McKenzie (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6993?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joshua McKenzie reopened CASSANDRA-6993:


We should probably backport this for the 2.1.3 release, as memory-mapped I/O is 
going to cause the same problems there and people are beta-testing the software 
on Windows.

> Windows: remove mmap'ed I/O for index files and force standard file access
> --
>
> Key: CASSANDRA-6993
> URL: https://issues.apache.org/jira/browse/CASSANDRA-6993
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Joshua McKenzie
>Assignee: Joshua McKenzie
>Priority: Minor
>  Labels: Windows
> Fix For: 3.0
>
> Attachments: 6993_v1.txt, 6993_v2.txt
>
>
> Memory-mapped I/O on Windows causes issues with hard-links; we're unable to 
> delete hard-links to open files with memory-mapped segments even using nio.  
> We'll need to push for close to performance parity between mmap'ed I/O and 
> buffered going forward as the buffered / compressed path offers other 
> benefits.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-6283) Windows 7 data files kept open / can't be deleted after compaction.

2014-11-14 Thread Joshua McKenzie (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6283?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joshua McKenzie updated CASSANDRA-6283:
---
Fix Version/s: 2.0.7

> Windows 7 data files kept open / can't be deleted after compaction.
> ---
>
> Key: CASSANDRA-6283
> URL: https://issues.apache.org/jira/browse/CASSANDRA-6283
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
> Environment: Windows 7 (32) / Java 1.7.0.45
>Reporter: Andreas Schnitzerling
>Assignee: Joshua McKenzie
>  Labels: Windows, compaction
> Fix For: 2.0.7, 2.1.0, 3.0
>
> Attachments: 6283_StreamWriter_patch.txt, leakdetect.patch, 
> neighbor-log.zip, root-log.zip, screenshot-1.jpg, system.log
>
>
> Files cannot be deleted; patch CASSANDRA-5383 (Win7 deleting problem) doesn't 
> help on Win-7 on Cassandra 2.0.2. Even the 2.1 snapshot is not running. The 
> cause is: opened file handles seem to be lost and not closed properly. Win 7 
> complains that another process is still using the file (but it's obviously 
> Cassandra). Only a restart of the server gets the files deleted. But after 
> heavy use of (changes to) tables, there are about 24K files in the data folder 
> (instead of 35 after every restart) and Cassandra crashes. I experimented and 
> found out that a finalizer fixes the problem. So after GC the files will 
> be deleted (not optimal, but working fine). It has now run for 2 days 
> continuously without problems. Possible fix/test:
> I wrote the following finalizer at the end of class 
> org.apache.cassandra.io.util.RandomAccessReader:
> {code:title=RandomAccessReader.java|borderStyle=solid}
> @Override
> protected void finalize() throws Throwable {
>   deallocate();
>   super.finalize();
> }
> {code}
> Can somebody test / develop / patch it? Thx.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-6283) Windows 7 data files kept open / can't be deleted after compaction.

2014-11-14 Thread Joshua McKenzie (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6283?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joshua McKenzie updated CASSANDRA-6283:
---
Fix Version/s: 2.1.0

> Windows 7 data files kept open / can't be deleted after compaction.
> ---
>
> Key: CASSANDRA-6283
> URL: https://issues.apache.org/jira/browse/CASSANDRA-6283
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
> Environment: Windows 7 (32) / Java 1.7.0.45
>Reporter: Andreas Schnitzerling
>Assignee: Joshua McKenzie
>  Labels: Windows, compaction
> Fix For: 2.1.0, 3.0
>
> Attachments: 6283_StreamWriter_patch.txt, leakdetect.patch, 
> neighbor-log.zip, root-log.zip, screenshot-1.jpg, system.log
>
>
> Files cannot be deleted; patch CASSANDRA-5383 (Win7 deleting problem) doesn't 
> help on Win-7 on Cassandra 2.0.2. Even the 2.1 snapshot is not running. The 
> cause is: opened file handles seem to be lost and not closed properly. Win 7 
> complains that another process is still using the file (but it's obviously 
> Cassandra). Only a restart of the server gets the files deleted. But after 
> heavy use of (changes to) tables, there are about 24K files in the data folder 
> (instead of 35 after every restart) and Cassandra crashes. I experimented and 
> found out that a finalizer fixes the problem. So after GC the files will 
> be deleted (not optimal, but working fine). It has now run for 2 days 
> continuously without problems. Possible fix/test:
> I wrote the following finalizer at the end of class 
> org.apache.cassandra.io.util.RandomAccessReader:
> {code:title=RandomAccessReader.java|borderStyle=solid}
> @Override
> protected void finalize() throws Throwable {
>   deallocate();
>   super.finalize();
> }
> {code}
> Can somebody test / develop / patch it? Thx.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (CASSANDRA-4210) Support for variadic parameters list for "in clause" in prepared cql query

2014-11-14 Thread Alexandre Gaudencio (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-4210?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14212243#comment-14212243
 ] 

Alexandre Gaudencio edited comment on CASSANDRA-4210 at 11/14/14 1:32 PM:
--

Thanks. I'm quite convinced that this is related to the nodejs driver so I'll 
just report this on github.

[edit] Sorry for the edit, but there's no place on the github repo to open 
issues for the nodejs driver. Do you know what the best place is to reach the 
team in charge of this driver?


was (Author: shahor):
Thanks. I'm quite convinced that this is related to the nodejs driver so I'll 
just report this on github.

> Support for variadic parameters list for "in clause" in prepared cql query
> --
>
> Key: CASSANDRA-4210
> URL: https://issues.apache.org/jira/browse/CASSANDRA-4210
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Core
>Affects Versions: 1.1.0
> Environment: prepared cql queries
>Reporter: Pierre Chalamet
>Assignee: Sylvain Lebresne
>Priority: Minor
> Fix For: 2.0.1
>
> Attachments: 4210.txt
>
>
> This query
> {code}
> select * from Town where key in (?)
> {code}
> only allows one parameter for '?'.
> This means querying for 'Paris' and 'London' can't be executed in one step 
> with this prepared statement.
> Current workarounds are:
> * either execute the prepared query 2 times with 'Paris' then 'London'
> * or prepare a new query {{select * from Town where key in (?, ?)}} and bind 
> the 2 parameters
> Having a support for variadic parameters list with in clause could improve 
> performance:
> * single hop to get the data
> * // fetching server side



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-4210) Support for variadic parameters list for "in clause" in prepared cql query

2014-11-14 Thread Alexandre Gaudencio (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-4210?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14212243#comment-14212243
 ] 

Alexandre Gaudencio commented on CASSANDRA-4210:


Thanks. I'm quite convinced that this is related to the nodejs driver so I'll 
just report this on github.

> Support for variadic parameters list for "in clause" in prepared cql query
> --
>
> Key: CASSANDRA-4210
> URL: https://issues.apache.org/jira/browse/CASSANDRA-4210
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Core
>Affects Versions: 1.1.0
> Environment: prepared cql queries
>Reporter: Pierre Chalamet
>Assignee: Sylvain Lebresne
>Priority: Minor
> Fix For: 2.0.1
>
> Attachments: 4210.txt
>
>
> This query
> {code}
> select * from Town where key in (?)
> {code}
> only allows one parameter for '?'.
> This means querying for 'Paris' and 'London' can't be executed in one step 
> with this prepared statement.
> Current workarounds are:
> * either execute the prepared query 2 times with 'Paris' then 'London'
> * or prepare a new query {{select * from Town where key in (?, ?)}} and bind 
> the 2 parameters
> Having a support for variadic parameters list with in clause could improve 
> performance:
> * single hop to get the data
> * // fetching server side



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-6411) Issue with reading from sstable

2014-11-14 Thread Roni Balthazar (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6411?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14212225#comment-14212225
 ] 

Roni Balthazar commented on CASSANDRA-6411:
---

Getting this in a 2.1.2 production cluster. 2 DCs.

ERROR [SSTableBatchOpen:1] 2014-11-14 01:32:03,333 CassandraDaemon.java:153 - 
Exception in thread Thread[SSTableBatchOpen:1,5,main]
org.apache.cassandra.io.sstable.CorruptSSTableException: java.io.EOFException
at 
org.apache.cassandra.io.compress.CompressionMetadata.(CompressionMetadata.java:129)
 ~[apache-cassandra-2.1.2.jar:2.1.2]
at 
org.apache.cassandra.io.compress.CompressionMetadata.create(CompressionMetadata.java:83)
 ~[apache-cassandra-2.1.2.jar:2.1.2]
at 
org.apache.cassandra.io.util.CompressedSegmentedFile$Builder.metadata(CompressedSegmentedFile.java:50)
 ~[apache-cassandra-2.1.2.jar:2.1.2]
at 
org.apache.cassandra.io.util.CompressedPoolingSegmentedFile$Builder.complete(CompressedPoolingSegmentedFile.java:48)
 ~[apache-cassandra-2.1.2.jar:2.1.2]
at 
org.apache.cassandra.io.sstable.SSTableReader.load(SSTableReader.java:767) 
~[apache-cassandra-2.1.2.jar:2.1.2]
at 
org.apache.cassandra.io.sstable.SSTableReader.load(SSTableReader.java:726) 
~[apache-cassandra-2.1.2.jar:2.1.2]
at 
org.apache.cassandra.io.sstable.SSTableReader.open(SSTableReader.java:403) 
~[apache-cassandra-2.1.2.jar:2.1.2]
at 
org.apache.cassandra.io.sstable.SSTableReader.open(SSTableReader.java:303) 
~[apache-cassandra-2.1.2.jar:2.1.2]
at 
org.apache.cassandra.io.sstable.SSTableReader$4.run(SSTableReader.java:439) 
~[apache-cassandra-2.1.2.jar:2.1.2]
at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
~[na:1.8.0_25]
at java.util.concurrent.FutureTask.run(FutureTask.java:266) 
~[na:1.8.0_25]
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) 
~[na:1.8.0_25]
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) 
[na:1.8.0_25]
at java.lang.Thread.run(Thread.java:745) [na:1.8.0_25]
Caused by: java.io.EOFException: null
at java.io.DataInputStream.readUnsignedShort(DataInputStream.java:340) 
~[na:1.8.0_25]
at java.io.DataInputStream.readUTF(DataInputStream.java:589) 
~[na:1.8.0_25]
at java.io.DataInputStream.readUTF(DataInputStream.java:564) 
~[na:1.8.0_25]
at 
org.apache.cassandra.io.compress.CompressionMetadata.(CompressionMetadata.java:104)
 ~[apache-cassandra-2.1.2.jar:2.1.2]
... 13 common frames omitted


> Issue with reading from sstable
> ---
>
> Key: CASSANDRA-6411
> URL: https://issues.apache.org/jira/browse/CASSANDRA-6411
> Project: Cassandra
>  Issue Type: Bug
>  Components: API
>Reporter: Mike Konobeevskiy
>Assignee: Yuki Morishita
> Attachments: 6411-log.zip, 6411-sstables.zip
>
>
> With Cassandra 1.2.5 this happens almost every week. 
> {noformat}
> java.lang.RuntimeException: 
> org.apache.cassandra.io.sstable.CorruptSSTableException: 
> java.io.EOFException: EOF after 5105 bytes out of 19815
>   at 
> org.apache.cassandra.service.StorageProxy$DroppableRunnable.run(StorageProxy.java:1582)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>   at java.lang.Thread.run(Thread.java:724)
> Caused by: org.apache.cassandra.io.sstable.CorruptSSTableException: 
> java.io.EOFException: EOF after 5105 bytes out of 19815
>   at 
> org.apache.cassandra.db.columniterator.SimpleSliceReader.(SimpleSliceReader.java:91)
>   at 
> org.apache.cassandra.db.columniterator.SSTableSliceIterator.createReader(SSTableSliceIterator.java:68)
>   at 
> org.apache.cassandra.db.columniterator.SSTableSliceIterator.(SSTableSliceIterator.java:44)
>   at 
> org.apache.cassandra.db.filter.SliceQueryFilter.getSSTableColumnIterator(SliceQueryFilter.java:101)
>   at 
> org.apache.cassandra.db.filter.QueryFilter.getSSTableColumnIterator(QueryFilter.java:68)
>   at 
> org.apache.cassandra.db.CollationController.collectAllData(CollationController.java:274)
>   at 
> org.apache.cassandra.db.CollationController.getTopLevelColumns(CollationController.java:65)
>   at 
> org.apache.cassandra.db.ColumnFamilyStore.getTopLevelColumns(ColumnFamilyStore.java:1357)
>   at 
> org.apache.cassandra.db.ColumnFamilyStore.getColumnFamily(ColumnFamilyStore.java:1214)
>   at 
> org.apache.cassandra.db.ColumnFamilyStore.getColumnFamily(ColumnFamilyStore.java:1126)
>   at org.apache.cassandra.db.Table.getRow(Table.java:347)
>   at 
> org.apache.cassandra.db.SliceFromReadCommand.getRow(SliceFromReadCommand.java:70)
>   at 
>

[jira] [Commented] (CASSANDRA-4210) Support for variadic parameters list for "in clause" in prepared cql query

2014-11-14 Thread Sylvain Lebresne (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-4210?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14212199#comment-14212199
 ] 

Sylvain Lebresne commented on CASSANDRA-4210:
-

I strongly suspect that the first error is actually due to the nodejs driver. 
Most probably, the nodejs driver doesn't have enough type information in the 
non-prepared case to properly serialize your uuid as a uuid. Instead, it 
probably serializes it as a string, which is incorrect. You should report this 
to the nodejs driver authors to figure this out, but the error pretty clearly 
indicates that something wrong has been sent to the server and that's almost 
surely not a server bug.

Regarding the 2nd case, you have not provided any information to assert that 
the query should indeed have returned two rows. Maybe your 2nd UUID actually 
doesn't exist in the DB and the answer is correct. Or, here again, maybe this 
is a nodejs driver bug. We have a fair amount of tests for IN queries, so the 
likelihood of the server not returning the proper data is not very high a 
priori. Overall I suggest that 1) you double-check that your query should 
indeed return 2 rows, 2) if it should, that you first report it to the nodejs 
driver authors to see if it's not a nodejs driver bug, and 3) if it does turn 
out that it's likely a server bug, that you open a new separate ticket with a 
bit more information on your case (at least the Cassandra version in use) and 
full reproduction steps if possible.
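
For illustration, a minimal sketch of the two prepared forms that avoid the 
single-marker pitfall above (Town is the table from the quoted description; 
whether a single marker can be bound to a whole list depends on the Cassandra 
and driver versions in use):

{code:sql}
-- One marker per value: the IN arity is fixed when the statement is prepared
SELECT * FROM Town WHERE key IN (?, ?);  -- bind 'Paris' and 'London' separately

-- One marker for the whole list (where supported): bind a single list value
SELECT * FROM Town WHERE key IN ?;       -- bind the list ['Paris', 'London']
{code}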

> Support for variadic parameters list for "in clause" in prepared cql query
> --
>
> Key: CASSANDRA-4210
> URL: https://issues.apache.org/jira/browse/CASSANDRA-4210
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Core
>Affects Versions: 1.1.0
> Environment: prepared cql queries
>Reporter: Pierre Chalamet
>Assignee: Sylvain Lebresne
>Priority: Minor
> Fix For: 2.0.1
>
> Attachments: 4210.txt
>
>
> This query
> {code}
> select * from Town where key in (?)
> {code}
> only allows one parameter for '?'.
> This means querying for 'Paris' and 'London' can't be executed in one step 
> with this prepared statement.
> Current workarounds are:
> * either execute the prepared query 2 times with 'Paris' then 'London'
> * or prepare a new query {{select * from Town where key in (?, ?)}} and bind 
> the 2 parameters
> Having a support for variadic parameters list with in clause could improve 
> performance:
> * single hop to get the data
> * // fetching server side



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-4210) Support for variadic parameters list for "in clause" in prepared cql query

2014-11-14 Thread Alexandre Gaudencio (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-4210?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14212173#comment-14212173
 ] 

Alexandre Gaudencio commented on CASSANDRA-4210:


I would like to add that using variadic params for IN has different effects 
depending on the prepare option.

I am using the nodejs driver.

With this query:
{code:javascript}
var query = 'SELECT * FROM event WHERE id IN (?)'
var params = [ 
'139a42b0-6bef-11e4-b6d6-0582c8632278,02203b20-6bef-11e4-a5a2-95978cb98ca6']

cql.execute(query, params, function (err, result) {
if (err) {
console.log(err)
} else {
console.log(result);
}
})
{code}

I get the result:
{code:none}
{ [ResponseError: UUID should be 16 or 0 bytes (369)]
  name: 'ResponseError',
  message: 'UUID should be 16 or 0 bytes (369)',
  info: 'Represents an error message from the server',
  code: 8704,
  query: 'SELECT * FROM event WHERE id IN (?)' }
{code}

Whereas with a prepared statement, it goes like this:

{code:javascript}
var query = 'SELECT * FROM event WHERE id IN (?)'
var params = [ 
'139a42b0-6bef-11e4-b6d6-0582c8632278,02203b20-6bef-11e4-a5a2-95978cb98ca6']

cql.execute(query, params, { prepare : true } , function (err, result) {
if (err) {
console.log(err)
} else {
console.log(result);
}
})
{code}

{code:none}
{ rows:
   [ { __columns: [Object],
   id: '139a42b0-6bef-11e4-b6d6-0582c8632278',
   name: someName,
   type: 'someType',
   value: '{"id":"someId"}' } ],
  meta:
   { global_tables_spec: true,
 keyspace: 'someKeyspace',
 table: 'event',
 columns:
  [ [Object],
[Object],
[Object],
[Object],
_col_id: 0,
_col_name: 1,
_col_type: 2,
_col_value: 3 ] },
  _queriedHost: 'xxx.xxx.xxx.xxx' }
{code}

The behavior is inconsistent. 
In the first case I don't get any result but an error instead, stating that (as I 
understand it) the parser only expected one UUID parameter and the received arg 
is too long.
In the second case, the query is run without a problem except for the fact that 
only the first UUID is considered, the rest being ditched. And no error has 
been emitted. (This query should have returned two rows.)

> Support for variadic parameters list for "in clause" in prepared cql query
> --
>
> Key: CASSANDRA-4210
> URL: https://issues.apache.org/jira/browse/CASSANDRA-4210
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Core
>Affects Versions: 1.1.0
> Environment: prepared cql queries
>Reporter: Pierre Chalamet
>Assignee: Sylvain Lebresne
>Priority: Minor
> Fix For: 2.0.1
>
> Attachments: 4210.txt
>
>
> This query
> {code}
> select * from Town where key in (?)
> {code}
> only allows one parameter for '?'.
> This means querying for 'Paris' and 'London' can't be executed in one step 
> with this prepared statement.
> Current workarounds are:
> * either execute the prepared query 2 times with 'Paris' then 'London'
> * or prepare a new query {{select * from Town where key in (?, ?)}} and bind 
> the 2 parameters
> Having a support for variadic parameters list with in clause could improve 
> performance:
> * single hop to get the data
> * // fetching server side



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (CASSANDRA-8317) ExtendedFilter countCQL3Rows should be exposed to isCQLCount()

2014-11-14 Thread Jacques-Henri Berthemet (JIRA)
Jacques-Henri Berthemet created CASSANDRA-8317:
--

 Summary: ExtendedFilter countCQL3Rows should be exposed to 
isCQLCount()
 Key: CASSANDRA-8317
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8317
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: Jacques-Henri Berthemet
Priority: Minor


ExtendedFilter's countCQL3Rows field should be exposed via an isCQLCount() 
method. The goal is that a SecondaryIndexSearcher implementation knows that it 
just needs to count rows, not load them.

Something like:

{code}
public boolean isCQLCount() {
    return countCQL3Rows;
}
{code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8316) "Did not get positive replies from all endpoints" error on incremental repair

2014-11-14 Thread Loic Lambiel (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8316?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14212103#comment-14212103
 ] 

Loic Lambiel commented on CASSANDRA-8316:
-

I forgot to mention that I'm using LCS, in case it matters.

>  "Did not get positive replies from all endpoints" error on incremental repair
> --
>
> Key: CASSANDRA-8316
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8316
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
> Environment: cassandra 2.1.2
>Reporter: Loic Lambiel
>Assignee: Ryan McGuire
>
> Hi,
> I've got an issue with incremental repairs on our production 15-node 2.1.2 
> cluster (new, not yet loaded, RF=3)
> After having successfully performed an incremental repair (-par -inc) on 3 
> nodes, I started receiving "Repair failed with error Did not get positive 
> replies from all endpoints." from nodetool on all remaining nodes:
> [2014-11-14 09:12:36,488] Starting repair command #3, repairing 108 ranges 
> for keyspace  (seq=false, full=false)
> [2014-11-14 09:12:47,919] Repair failed with error Did not get positive 
> replies from all endpoints.
> All the nodes are up and running and the local system log shows that the 
> repair commands got started and that's it.
> I've also noticed that soon after the repair, several nodes started having 
> more cpu load indefinitely without any particular reason (no tasks / queries, 
> nothing in the logs). I then restarted C* on these nodes and retried the 
> repair on several nodes, which were successful until facing the issue again.
> I tried to repro on our 3-node preproduction cluster, without success
> It looks like I'm not the only one having this issue: 
> http://www.mail-archive.com/user%40cassandra.apache.org/msg39145.html
> Any idea?
> Thanks
> Loic



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8192) AssertionError in Memory.java

2014-11-14 Thread Andreas Schnitzerling (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8192?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14212101#comment-14212101
 ] 

Andreas Schnitzerling commented on CASSANDRA-8192:
--

We cannot install more than 3GB of RAM on a 32-bit system (addressing 
limitations). We need the 32-bit version because of old (the only available) 
drivers for our laboratory equipment. With 768MB configured, C* doesn't start 
anymore. One reason for my decision to use C* was to use the available 
resources and not to buy more computers.

> AssertionError in Memory.java
> -
>
> Key: CASSANDRA-8192
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8192
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
> Environment: Windows-7-32 bit, 3GB RAM, Java 1.7.0_67
>Reporter: Andreas Schnitzerling
>Assignee: Joshua McKenzie
> Attachments: cassandra.bat, cassandra.yaml, system.log
>
>
> Since updating 1 of 12 nodes from 2.1.0-rel to 2.1.1-rel, an exception occurs 
> during startup.
> {panel:title=system.log}
> ERROR [SSTableBatchOpen:1] 2014-10-27 09:44:00,079 CassandraDaemon.java:153 - 
> Exception in thread Thread[SSTableBatchOpen:1,5,main]
> java.lang.AssertionError: null
>   at org.apache.cassandra.io.util.Memory.size(Memory.java:307) 
> ~[apache-cassandra-2.1.1.jar:2.1.1]
>   at 
> org.apache.cassandra.io.compress.CompressionMetadata.<init>(CompressionMetadata.java:135)
>  ~[apache-cassandra-2.1.1.jar:2.1.1]
>   at 
> org.apache.cassandra.io.compress.CompressionMetadata.create(CompressionMetadata.java:83)
>  ~[apache-cassandra-2.1.1.jar:2.1.1]
>   at 
> org.apache.cassandra.io.util.CompressedSegmentedFile$Builder.metadata(CompressedSegmentedFile.java:50)
>  ~[apache-cassandra-2.1.1.jar:2.1.1]
>   at 
> org.apache.cassandra.io.util.CompressedPoolingSegmentedFile$Builder.complete(CompressedPoolingSegmentedFile.java:48)
>  ~[apache-cassandra-2.1.1.jar:2.1.1]
>   at 
> org.apache.cassandra.io.sstable.SSTableReader.load(SSTableReader.java:766) 
> ~[apache-cassandra-2.1.1.jar:2.1.1]
>   at 
> org.apache.cassandra.io.sstable.SSTableReader.load(SSTableReader.java:725) 
> ~[apache-cassandra-2.1.1.jar:2.1.1]
>   at 
> org.apache.cassandra.io.sstable.SSTableReader.open(SSTableReader.java:402) 
> ~[apache-cassandra-2.1.1.jar:2.1.1]
>   at 
> org.apache.cassandra.io.sstable.SSTableReader.open(SSTableReader.java:302) 
> ~[apache-cassandra-2.1.1.jar:2.1.1]
>   at 
> org.apache.cassandra.io.sstable.SSTableReader$4.run(SSTableReader.java:438) 
> ~[apache-cassandra-2.1.1.jar:2.1.1]
>   at java.util.concurrent.Executors$RunnableAdapter.call(Unknown Source) 
> ~[na:1.7.0_55]
>   at java.util.concurrent.FutureTask.run(Unknown Source) ~[na:1.7.0_55]
>   at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source) 
> [na:1.7.0_55]
>   at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source) 
> [na:1.7.0_55]
>   at java.lang.Thread.run(Unknown Source) [na:1.7.0_55]
> {panel}
> In the attached log you can still see as well CASSANDRA-8069 and 
> CASSANDRA-6283.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-7813) Decide how to deal with conflict between native and user-defined functions

2014-11-14 Thread Robert Stupp (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-7813?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Stupp updated CASSANDRA-7813:

Attachment: 7813v8.txt

Another version (v8) with merge conflicts resolved.

> Decide how to deal with conflict between native and user-defined functions
> --
>
> Key: CASSANDRA-7813
> URL: https://issues.apache.org/jira/browse/CASSANDRA-7813
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Sylvain Lebresne
>Assignee: Robert Stupp
>  Labels: cql
> Fix For: 3.0
>
> Attachments: 7813.txt, 7813v2.txt, 7813v3.txt, 7813v4.txt, 
> 7813v5.txt, 7813v6.txt, 7813v7.txt, 7813v8.txt
>
>
> We have a bunch of native/hardcoded functions (now(), dateOf(), ...) and in 
> 3.0, users will be able to define new functions. Now, there is a very high 
> chance that we will provide more native functions over time (to be clear, I'm 
> not particularly for adding native functions for allthethings just because we 
> can, but it's clear that we should ultimately provide more than what we 
> have). Which begs the question: how do we want to deal with the problem of 
> adding a native function potentially breaking a previously defined 
> user-defined function?
> A priori I see the following options (maybe there are more?):
> # don't do anything specific, hoping that it won't happen often, and consider 
> it a user problem if it does.
> # reserve a big number of names that we hope will cover all future needs.
> # make native functions and user-defined functions syntactically distinct so 
> it cannot happen.
> I'm not a huge fan of solution 1). Solution 2) is actually what we did for 
> UDTs, but I think it's somewhat less practical here: there are only so many 
> types that it makes sense to provide natively, so it wasn't too hard to come 
> up with a reasonably small list of type names to reserve just in case. This 
> feels a lot harder for functions to me.
> Which leaves solution 3). Since we already have the concept of namespaces for 
> functions, a simple idea would be to force user functions to have a namespace. 
> We could even allow that namespace to be empty as long as we force the 
> namespace separator (so we'd allow {{bar::foo}} and {{::foo}} for user 
> functions, but *not* {{foo}}, which would be reserved for native functions).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[2/2] cassandra git commit: Merge branch 'cassandra-2.1' into trunk

2014-11-14 Thread marcuse
Merge branch 'cassandra-2.1' into trunk

Conflicts:
test/unit/org/apache/cassandra/db/compaction/TTLExpiryTest.java


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/41a35ec7
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/41a35ec7
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/41a35ec7

Branch: refs/heads/trunk
Commit: 41a35ec74e7bc27111cfb994bafb7b9389f94d2b
Parents: 3b6edc6 abbcfc5
Author: Marcus Eriksson 
Authored: Fri Nov 14 11:10:09 2014 +0100
Committer: Marcus Eriksson 
Committed: Fri Nov 14 11:10:09 2014 +0100

--
 CHANGES.txt |  1 +
 .../db/compaction/CompactionController.java | 20 
 .../cassandra/db/compaction/TTLExpiryTest.java  | 50 
 3 files changed, 60 insertions(+), 11 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/41a35ec7/CHANGES.txt
--
diff --cc CHANGES.txt
index 208381f,2476d25..f250edc
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -1,38 -1,5 +1,39 @@@
 +3.0
 + * Fix aggregate fn results on empty selection, result column name,
 +   and cqlsh parsing (CASSANDRA-8229)
 + * Mark sstables as repaired after full repair (CASSANDRA-7586) 
 + * Extend Descriptor to include a format value and refactor reader/writer 
apis (CASSANDRA-7443)
 + * Integrate JMH for microbenchmarks (CASSANDRA-8151)
 + * Keep sstable levels when bootstrapping (CASSANDRA-7460)
 + * Add Sigar library and perform basic OS settings check on startup 
(CASSANDRA-7838)
 + * Support for aggregation functions (CASSANDRA-4914)
 + * Remove cassandra-cli (CASSANDRA-7920)
 + * Accept dollar quoted strings in CQL (CASSANDRA-7769)
 + * Make assassinate a first class command (CASSANDRA-7935)
 + * Support IN clause on any clustering column (CASSANDRA-4762)
 + * Improve compaction logging (CASSANDRA-7818)
 + * Remove YamlFileNetworkTopologySnitch (CASSANDRA-7917)
 + * Do anticompaction in groups (CASSANDRA-6851)
 + * Support pure user-defined functions (CASSANDRA-7395, 7526, 7562, 7740, 
7781, 7929,
 +   7924, 7812, 8063)
 + * Permit configurable timestamps with cassandra-stress (CASSANDRA-7416)
 + * Move sstable RandomAccessReader to nio2, which allows using the
 +   FILE_SHARE_DELETE flag on Windows (CASSANDRA-4050)
 + * Remove CQL2 (CASSANDRA-5918)
 + * Add Thrift get_multi_slice call (CASSANDRA-6757)
 + * Optimize fetching multiple cells by name (CASSANDRA-6933)
 + * Allow compilation in java 8 (CASSANDRA-7028)
 + * Make incremental repair default (CASSANDRA-7250)
 + * Enable code coverage thru JaCoCo (CASSANDRA-7226)
 + * Switch external naming of 'column families' to 'tables' (CASSANDRA-4369) 
 + * Shorten SSTable path (CASSANDRA-6962)
 + * Use unsafe mutations for most unit tests (CASSANDRA-6969)
 + * Fix race condition during calculation of pending ranges (CASSANDRA-7390)
 + * Fail on very large batch sizes (CASSANDRA-8011)
 + * improve concurrency of repair (CASSANDRA-6455, 8208)
 +
  2.1.3
+  * Do more aggressive entire-sstable TTL expiry checks (CASSANDRA-8243)
   * Add more log info if readMeter is null (CASSANDRA-8238)
   * add check of the system wall clock time at startup (CASSANDRA-8305)
   * Support for frozen collections (CASSANDRA-7859)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/41a35ec7/src/java/org/apache/cassandra/db/compaction/CompactionController.java
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/41a35ec7/test/unit/org/apache/cassandra/db/compaction/TTLExpiryTest.java
--
diff --cc test/unit/org/apache/cassandra/db/compaction/TTLExpiryTest.java
index 636370b,4fe5cfb..924c4b5
--- a/test/unit/org/apache/cassandra/db/compaction/TTLExpiryTest.java
+++ b/test/unit/org/apache/cassandra/db/compaction/TTLExpiryTest.java
@@@ -20,43 -20,77 +20,93 @@@ package org.apache.cassandra.db.compact
   * 
   */
  
 +import org.apache.cassandra.io.sstable.format.SSTableReader;
 +import org.junit.BeforeClass;
+ import com.google.common.collect.Sets;
  import org.junit.Test;
  import org.junit.runner.RunWith;
  
  import org.apache.cassandra.OrderedJUnit4ClassRunner;
  import org.apache.cassandra.SchemaLoader;
  import org.apache.cassandra.Util;
 +import org.apache.cassandra.config.KSMetaData;
  import org.apache.cassandra.db.*;
  import org.apache.cassandra.db.columniterator.OnDiskAtomIterator;
 -import org.apache.cassandra.io.sstable.SSTableReader;
 -import org.apache.cassandra.io.sstable.SSTableScanner;
 +import org.apache.cassandra.exceptions.ConfigurationException;
 +import org.apache.cassandra.locator.SimpleStrategy;
  import org.apache.cassandra.utils.ByteBufferUtil;
+ 
+ import

[1/2] cassandra git commit: Do more aggressive ttl expiration checks to be able to drop more sstables

2014-11-14 Thread marcuse
Repository: cassandra
Updated Branches:
  refs/heads/trunk 3b6edc6af -> 41a35ec74


Do more aggressive ttl expiration checks to be able to drop more sstables

Patch by Bjorn Hegerfors; reviewed by marcuse and slebresne for CASSANDRA-8243


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/abbcfc5f
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/abbcfc5f
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/abbcfc5f

Branch: refs/heads/trunk
Commit: abbcfc5f72323d0c6040a3cc02aba8f2c0058d95
Parents: 054beee
Author: Marcus Eriksson 
Authored: Fri Nov 14 10:54:39 2014 +0100
Committer: Marcus Eriksson 
Committed: Fri Nov 14 11:02:45 2014 +0100

--
 CHANGES.txt |  1 +
 .../db/compaction/CompactionController.java | 20 
 .../cassandra/db/compaction/TTLExpiryTest.java  | 50 
 3 files changed, 60 insertions(+), 11 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/abbcfc5f/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 6228893..2476d25 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 2.1.3
+ * Do more aggressive entire-sstable TTL expiry checks (CASSANDRA-8243)
  * Add more log info if readMeter is null (CASSANDRA-8238)
  * add check of the system wall clock time at startup (CASSANDRA-8305)
  * Support for frozen collections (CASSANDRA-7859)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/abbcfc5f/src/java/org/apache/cassandra/db/compaction/CompactionController.java
--
diff --git 
a/src/java/org/apache/cassandra/db/compaction/CompactionController.java 
b/src/java/org/apache/cassandra/db/compaction/CompactionController.java
index ef27805..f23d39a 100644
--- a/src/java/org/apache/cassandra/db/compaction/CompactionController.java
+++ b/src/java/org/apache/cassandra/db/compaction/CompactionController.java
@@ -92,12 +92,11 @@ public class CompactionController implements AutoCloseable
  * Finds expired sstables
  *
  * works something like this;
- * 1. find "global" minTimestamp of overlapping sstables (excluding the 
possibly droppable ones)
- * 2. build a list of candidates to be dropped
- * 3. sort the candidate list, biggest maxTimestamp first in list
- * 4. check if the candidates to be dropped actually can be dropped 
(maxTimestamp < global minTimestamp) and it is included in the compaction
- *- if not droppable, update global minTimestamp and remove from 
candidates
- * 5. return candidates.
+ * 1. find "global" minTimestamp of overlapping sstables and compacting 
sstables containing any non-expired data
+ * 2. build a list of fully expired candidates
+ * 3. check if the candidates to be dropped actually can be dropped 
(maxTimestamp < global minTimestamp)
+ *- if not droppable, remove from candidates
+ * 4. return candidates.
  *
  * @param cfStore
  * @param compacting we take the drop-candidates from this set, it is 
usually the sstables included in the compaction
@@ -127,10 +126,10 @@ public class CompactionController implements AutoCloseable
 minTimestamp = Math.min(minTimestamp, 
candidate.getMinTimestamp());
 }
 
-// we still need to keep candidates that might shadow something in a
-// non-candidate sstable. And if we remove a sstable from the 
candidates, we
-// must take it's timestamp into account (hence the sorting below).
-Collections.sort(candidates, SSTableReader.maxTimestampComparator);
+// At this point, minTimestamp denotes the lowest timestamp of any 
relevant
+// SSTable that contains a constructive value. candidates contains all 
the
+// candidates with no constructive values. The ones out of these that 
have
+// (getMaxTimestamp() < minTimestamp) serve no purpose anymore.
 
 Iterator<SSTableReader> iterator = candidates.iterator();
 while (iterator.hasNext())
@@ -138,7 +137,6 @@ public class CompactionController implements AutoCloseable
 SSTableReader candidate = iterator.next();
 if (candidate.getMaxTimestamp() >= minTimestamp)
 {
-minTimestamp = Math.min(candidate.getMinTimestamp(), 
minTimestamp);
 iterator.remove();
 }
 else

http://git-wip-us.apache.org/repos/asf/cassandra/blob/abbcfc5f/test/unit/org/apache/cassandra/db/compaction/TTLExpiryTest.java
--
diff --git a/test/unit/org/apache/cassandra/db/compaction/TTLExpiryTest.java 
b/test/unit/org/apache/cassandra/db/compaction/TTLExpiryTest.java
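
As an aside for readers skimming the digest, here is a minimal, self-contained sketch of the selection rule described in the updated comment above. It uses a made-up {{SSTableInfo}} stand-in rather than the real {{SSTableReader}} API, so it illustrates the rule, not the actual patch:

{code}
import java.util.Collection;
import java.util.HashSet;
import java.util.Iterator;
import java.util.Set;

// Hypothetical stand-in for the sstable metadata the check looks at;
// this is NOT the real SSTableReader API.
interface SSTableInfo
{
    long minTimestamp();          // oldest cell timestamp in the sstable
    long maxTimestamp();          // newest cell timestamp in the sstable
    int maxLocalDeletionTime();   // latest TTL expiry time (seconds since epoch)
}

final class FullyExpiredSketch
{
    static Set<SSTableInfo> fullyExpired(Collection<SSTableInfo> compacting,
                                         Collection<SSTableInfo> overlapping,
                                         int gcBefore)
    {
        // Steps 1 and 2: candidates are compacting sstables whose data has all
        // expired; every other relevant sstable constrains the "global" minTimestamp.
        Set<SSTableInfo> candidates = new HashSet<>();
        long minTimestamp = Long.MAX_VALUE;
        for (SSTableInfo s : compacting)
        {
            if (s.maxLocalDeletionTime() < gcBefore)
                candidates.add(s);
            else
                minTimestamp = Math.min(minTimestamp, s.minTimestamp());
        }
        for (SSTableInfo s : overlapping)
            minTimestamp = Math.min(minTimestamp, s.minTimestamp());

        // Step 3: a candidate can only be dropped if nothing it might shadow
        // survives, i.e. its newest cell is older than the oldest surviving cell.
        Iterator<SSTableInfo> iterator = candidates.iterator();
        while (iterator.hasNext())
        {
            if (iterator.next().maxTimestamp() >= minTimestamp)
                iterator.remove();
        }

        // Step 4: whatever remains can be dropped as fully expired.
        return candidates;
    }
}
{code}

The main difference from the old version appears to be that sstables still holding non-expired data are excluded from the candidate set up front, so the global minTimestamp no longer has to be recomputed while pruning (hence the removed sort in the diff).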

cassandra git commit: Do more aggressive ttl expiration checks to be able to drop more sstables

2014-11-14 Thread marcuse
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-2.1 054beee45 -> abbcfc5f7


Do more aggressive ttl expiration checks to be able to drop more sstables

Patch by Bjorn Hegerfors; reviewed by marcuse and slebresne for CASSANDRA-8243


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/abbcfc5f
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/abbcfc5f
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/abbcfc5f

Branch: refs/heads/cassandra-2.1
Commit: abbcfc5f72323d0c6040a3cc02aba8f2c0058d95
Parents: 054beee
Author: Marcus Eriksson 
Authored: Fri Nov 14 10:54:39 2014 +0100
Committer: Marcus Eriksson 
Committed: Fri Nov 14 11:02:45 2014 +0100

--
 CHANGES.txt |  1 +
 .../db/compaction/CompactionController.java | 20 
 .../cassandra/db/compaction/TTLExpiryTest.java  | 50 
 3 files changed, 60 insertions(+), 11 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/abbcfc5f/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 6228893..2476d25 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 2.1.3
+ * Do more aggressive entire-sstable TTL expiry checks (CASSANDRA-8243)
  * Add more log info if readMeter is null (CASSANDRA-8238)
  * add check of the system wall clock time at startup (CASSANDRA-8305)
  * Support for frozen collections (CASSANDRA-7859)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/abbcfc5f/src/java/org/apache/cassandra/db/compaction/CompactionController.java
--
diff --git 
a/src/java/org/apache/cassandra/db/compaction/CompactionController.java 
b/src/java/org/apache/cassandra/db/compaction/CompactionController.java
index ef27805..f23d39a 100644
--- a/src/java/org/apache/cassandra/db/compaction/CompactionController.java
+++ b/src/java/org/apache/cassandra/db/compaction/CompactionController.java
@@ -92,12 +92,11 @@ public class CompactionController implements AutoCloseable
  * Finds expired sstables
  *
  * works something like this;
- * 1. find "global" minTimestamp of overlapping sstables (excluding the 
possibly droppable ones)
- * 2. build a list of candidates to be dropped
- * 3. sort the candidate list, biggest maxTimestamp first in list
- * 4. check if the candidates to be dropped actually can be dropped 
(maxTimestamp < global minTimestamp) and it is included in the compaction
- *- if not droppable, update global minTimestamp and remove from 
candidates
- * 5. return candidates.
+ * 1. find "global" minTimestamp of overlapping sstables and compacting 
sstables containing any non-expired data
+ * 2. build a list of fully expired candidates
+ * 3. check if the candidates to be dropped actually can be dropped 
(maxTimestamp < global minTimestamp)
+ *- if not droppable, remove from candidates
+ * 4. return candidates.
  *
  * @param cfStore
  * @param compacting we take the drop-candidates from this set, it is 
usually the sstables included in the compaction
@@ -127,10 +126,10 @@ public class CompactionController implements AutoCloseable
 minTimestamp = Math.min(minTimestamp, 
candidate.getMinTimestamp());
 }
 
-// we still need to keep candidates that might shadow something in a
-// non-candidate sstable. And if we remove a sstable from the 
candidates, we
-// must take it's timestamp into account (hence the sorting below).
-Collections.sort(candidates, SSTableReader.maxTimestampComparator);
+// At this point, minTimestamp denotes the lowest timestamp of any 
relevant
+// SSTable that contains a constructive value. candidates contains all 
the
+// candidates with no constructive values. The ones out of these that 
have
+// (getMaxTimestamp() < minTimestamp) serve no purpose anymore.
 
 Iterator<SSTableReader> iterator = candidates.iterator();
 while (iterator.hasNext())
@@ -138,7 +137,6 @@ public class CompactionController implements AutoCloseable
 SSTableReader candidate = iterator.next();
 if (candidate.getMaxTimestamp() >= minTimestamp)
 {
-minTimestamp = Math.min(candidate.getMinTimestamp(), 
minTimestamp);
 iterator.remove();
 }
 else

http://git-wip-us.apache.org/repos/asf/cassandra/blob/abbcfc5f/test/unit/org/apache/cassandra/db/compaction/TTLExpiryTest.java
--
diff --git a/test/unit/org/apache/cassandra/db/compaction/TTLExpiryTest.java 
b/test/unit/org/apache/cassandra/db/compaction/TT

[jira] [Commented] (CASSANDRA-8316) "Did not get positive replies from all endpoints" error on incremental repair

2014-11-14 Thread Marcus Eriksson (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8316?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14212063#comment-14212063
 ] 

Marcus Eriksson commented on CASSANDRA-8316:


[~enigmacurry] can your team reproduce? Running with -inc and -par on a 
~15-node cluster with vnodes.

>  "Did not get positive replies from all endpoints" error on incremental repair
> --
>
> Key: CASSANDRA-8316
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8316
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
> Environment: cassandra 2.1.2
>Reporter: Loic Lambiel
>
> Hi,
> I've got an issue with incremental repairs on our production 15 nodes 2.1.2 
> (new cluster, not yet loaded, RF=3)
> After having successfully performed an incremental repair (-par -inc) on 3 
> nodes, I started receiving "Repair failed with error Did not get positive 
> replies from all endpoints." from nodetool on all remaining nodes :
> [2014-11-14 09:12:36,488] Starting repair command #3, repairing 108 ranges 
> for keyspace  (seq=false, full=false)
> [2014-11-14 09:12:47,919] Repair failed with error Did not get positive 
> replies from all endpoints.
> All the nodes are up and running and the local system log shows that the 
> repair commands got started and that's it.
> I've also noticed that soon after the repair, several nodes started having 
> more cpu load indefinitely without any particular reason (no tasks / queries, 
> nothing in the logs). I then restarted C* on these nodes and retried the 
> repair on several nodes, which were successful until facing the issue again.
> I tried to repro on our 3 nodes preproduction cluster without success
> It looks like I'm not the only one having this issue: 
> http://www.mail-archive.com/user%40cassandra.apache.org/msg39145.html
> Any idea?
> Thanks
> Loic



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-8316) "Did not get positive replies from all endpoints" error on incremental repair

2014-11-14 Thread Marcus Eriksson (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8316?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marcus Eriksson updated CASSANDRA-8316:
---
Assignee: Ryan McGuire

>  "Did not get positive replies from all endpoints" error on incremental repair
> --
>
> Key: CASSANDRA-8316
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8316
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
> Environment: cassandra 2.1.2
>Reporter: Loic Lambiel
>Assignee: Ryan McGuire
>
> Hi,
> I've got an issue with incremental repairs on our production 15 nodes 2.1.2 
> (new cluster, not yet loaded, RF=3)
> After having successfully performed an incremental repair (-par -inc) on 3 
> nodes, I started receiving "Repair failed with error Did not get positive 
> replies from all endpoints." from nodetool on all remaining nodes :
> [2014-11-14 09:12:36,488] Starting repair command #3, repairing 108 ranges 
> for keyspace  (seq=false, full=false)
> [2014-11-14 09:12:47,919] Repair failed with error Did not get positive 
> replies from all endpoints.
> All the nodes are up and running and the local system log shows that the 
> repair commands got started and that's it.
> I've also noticed that soon after the repair, several nodes started having 
> more cpu load indefinitely without any particular reason (no tasks / queries, 
> nothing in the logs). I then restarted C* on these nodes and retried the 
> repair on several nodes, which were successful until facing the issue again.
> I tried to repro on our 3 nodes preproduction cluster without success
> It looks like I'm not the only one having this issue: 
> http://www.mail-archive.com/user%40cassandra.apache.org/msg39145.html
> Any idea?
> Thanks
> Loic



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8316) "Did not get positive replies from all endpoints" error on incremental repair

2014-11-14 Thread Loic Lambiel (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8316?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14212055#comment-14212055
 ] 

Loic Lambiel commented on CASSANDRA-8316:
-

Nope, nothing special noticed on the other nodes (except load on a few nodes).

>  "Did not get positive replies from all endpoints" error on incremental repair
> --
>
> Key: CASSANDRA-8316
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8316
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
> Environment: cassandra 2.1.2
>Reporter: Loic Lambiel
>
> Hi,
> I've got an issue with incremental repairs on our production 15 nodes 2.1.2 
> (new cluster, not yet loaded, RF=3)
> After having successfully performed an incremental repair (-par -inc) on 3 
> nodes, I started receiving "Repair failed with error Did not get positive 
> replies from all endpoints." from nodetool on all remaining nodes :
> [2014-11-14 09:12:36,488] Starting repair command #3, repairing 108 ranges 
> for keyspace  (seq=false, full=false)
> [2014-11-14 09:12:47,919] Repair failed with error Did not get positive 
> replies from all endpoints.
> All the nodes are up and running and the local system log shows that the 
> repair commands got started and that's it.
> I've also noticed that soon after the repair, several nodes started having 
> more cpu load indefinitely without any particular reason (no tasks / queries, 
> nothing in the logs). I then restarted C* on these nodes and retried the 
> repair on several nodes, which were successful until facing the issue again.
> I tried to repro on our 3 nodes preproduction cluster without success
> It looks like I'm not the only one having this issue: 
> http://www.mail-archive.com/user%40cassandra.apache.org/msg39145.html
> Any idea?
> Thanks
> Loic



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8316) "Did not get positive replies from all endpoints" error on incremental repair

2014-11-14 Thread Marcus Eriksson (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8316?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14212045#comment-14212045
 ] 

Marcus Eriksson commented on CASSANDRA-8316:


any exceptions on any other nodes?

>  "Did not get positive replies from all endpoints" error on incremental repair
> --
>
> Key: CASSANDRA-8316
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8316
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
> Environment: cassandra 2.1.2
>Reporter: Loic Lambiel
>
> Hi,
> I've got an issue with incremental repairs on our production 15 nodes 2.1.2 
> (new cluster, not yet loaded, RF=3)
> After having successfully performed an incremental repair (-par -inc) on 3 
> nodes, I started receiving "Repair failed with error Did not get positive 
> replies from all endpoints." from nodetool on all remaining nodes :
> [2014-11-14 09:12:36,488] Starting repair command #3, repairing 108 ranges 
> for keyspace  (seq=false, full=false)
> [2014-11-14 09:12:47,919] Repair failed with error Did not get positive 
> replies from all endpoints.
> All the nodes are up and running and the local system log shows that the 
> repair commands got started and that's it.
> I've also noticed that soon after the repair, several nodes started having 
> more cpu load indefinitely without any particular reason (no tasks / queries, 
> nothing in the logs). I then restarted C* on these nodes and retried the 
> repair on several nodes, which were successful until facing the issue again.
> I tried to repro on our 3 nodes preproduction cluster without success
> It looks like I'm not the only one having this issue: 
> http://www.mail-archive.com/user%40cassandra.apache.org/msg39145.html
> Any idea?
> Thanks
> Loic



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

