Repairs at scale in Cassandra 2.1.13

2016-09-26 Thread Anubhav Kale
Hello,

We run Cassandra 2.1.13 (we don't have plans to upgrade yet). What is the best
way to run repairs at scale (400 nodes, each holding ~600 GB) that actually works?

I'm considering doing subrange repairs
(https://github.com/BrianGallew/cassandra_range_repair/blob/master/src/range_repair.py),
as I've heard from folks that incremental repairs simply don't work, even in
3.x. (Yes, that's a strong statement, but I heard it from multiple folks at
the Summit.)

Any guidance would be greatly appreciated!

Thanks,
Anubhav
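[For context: subrange repair tools like the one linked above split the token ring into many small ranges and repair each with `nodetool repair -st/-et`, which bounds the work per repair session. A minimal sketch of the splitting logic, assuming the default Murmur3Partitioner token range of -2^63 to 2^63-1 and no vnode awareness (a real tool splits each node's local ranges instead):]

```python
# Sketch: split the full Murmur3Partitioner token ring into equal subranges
# and emit one `nodetool repair -st <start> -et <end>` command per subrange.
# Assumes Murmur3 tokens span [-2**63, 2**63 - 1].

MIN_TOKEN = -(2 ** 63)
MAX_TOKEN = 2 ** 63 - 1

def subranges(steps):
    """Yield (start_token, end_token) pairs covering the full ring."""
    span = (MAX_TOKEN - MIN_TOKEN) // steps
    start = MIN_TOKEN
    for i in range(steps):
        # The last subrange absorbs the integer-division remainder.
        end = MAX_TOKEN if i == steps - 1 else start + span
        yield (start, end)
        start = end

def repair_commands(keyspace, steps):
    """Build the nodetool invocations for each subrange (illustrative)."""
    return [
        "nodetool repair -st {} -et {} {}".format(st, et, keyspace)
        for st, et in subranges(steps)
    ]
```

Repairing many small ranges keeps each Merkle-tree comparison and streaming session small, which is why this approach tends to behave better than full repairs on large, dense nodes.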


Re: How long/how many days 'nodetool gossipinfo' will have decommissioned nodes info

2016-09-26 Thread laxmikanth sadula
Thank you @Joaquin and @DuyHai



-- 
Regards,
Laxmikanth
99621 38051


Re: How long/how many days 'nodetool gossipinfo' will have decommissioned nodes info

2016-09-26 Thread DuyHai Doan
I've read from some sources that the gossip info will stay
around for 72h before being removed.



Re: How long/how many days 'nodetool gossipinfo' will have decommissioned nodes info

2016-09-26 Thread Joaquin Casares
Hello Techpyaasa,

Sometimes old gossip information echoes around for quite a bit longer than
intended. I'm unsure how long the LEFT messages are supposed to be echoed
for, but if you want to force a removed node out of gossip, you can use the
Assassinate Endpoint JMX command. On larger clusters you may need to run
this command synchronously across all machines. Instructions on Assassinate
Endpoint can be found here:

https://gist.github.com/justenwalker/8338334
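
[A hedged sketch of what that JMX invocation looks like, assuming a jmxterm-style CLI client; the 2.x operation name is `unsafeAssassinateEndpoint` on the `org.apache.cassandra.net:type=Gossiper` MBean (later versions rename it `assassinateEndpoint`), so verify both against your version and the gist before use:]

```python
# Hedged sketch: build the jmxterm one-liner used to assassinate a departed
# node from gossip. The MBean and operation name below match Cassandra 2.x;
# verify them against your version. The jar filename is illustrative.

def assassinate_command(host, jmx_port, dead_node_ip,
                        jmxterm_jar="jmxterm.jar"):
    bean = "org.apache.cassandra.net:type=Gossiper"
    op = "unsafeAssassinateEndpoint"  # 'assassinateEndpoint' in 3.x+
    jmx_cmd = "run -b {} {} {}".format(bean, op, dead_node_ip)
    # Pipe the command into jmxterm connected to the node's JMX port.
    return ("echo '{}' | java -jar {} -l {}:{}"
            .format(jmx_cmd, jmxterm_jar, host, jmx_port))
```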

If you're planning on recommissioning the same node, upon bootstrapping the
gossiped message should change to a JOINING message overwriting the LEFT
message.

I've personally never checked `nodetool gossipinfo` before recommissioning
a node and typically only ensure the node does not appear in `nodetool
status`.

Hope that helps,

Joaquin Casares
Consultant
Austin, TX

Apache Cassandra Consulting
http://www.thelastpickle.com

On Sun, Sep 25, 2016 at 2:17 PM, Laxmikanth S  wrote:

> Hi,
>
> Recently we decommissioned nodes from the Cassandra cluster, but even
> after nearly 48 hours 'nodetool gossipinfo' still shows the removed nodes
> (as LEFT).
>
> I want to recommission the same node again, so I wanted to know: will it
> create a problem if I recommission the same node (same IP) again while its
> state is still LEFT in 'nodetool gossipinfo'?
>
>
> Thanks,
> Techpyaasa
>


Using Spring Data Cassandra with Spring Boot Batch

2016-09-26 Thread Amit Trivedi
Hi,

I am wondering if anyone has used or tried using Spring Data Cassandra with
Spring Batch or Spring Boot Batch. I understand that Spring Batch today
only supports an RDBMS to store its metadata, so the only way is to either
provide a relational database as the datasource for Spring Batch or use an
embedded database (like H2 or HSQL).

Is it possible to create the Spring Batch metadata tables in Cassandra and use
those instead? I guess the answer is that to do this the Spring Batch code
needs to be modified to support Cassandra.

Thanks
AT


Re: How to query '%' character using LIKE operator in Cassandra 3.7?

2016-09-26 Thread DuyHai Doan
"In the current implementation (‘%’ could be a wildcard only at the
start/end of a term) I guess it should be ’ENDS with ‘%escape’ ‘."

--> Yes in the current impl, it means ENDS WITH '%escape' but we want SASI
to understand the %% as an escape for % so the goal is that SASI
understands LIKE '%%escape' as EQUALS TO '%escape'. Am I correct ?

"Moreover all terms that contains single ‘%’ somewhere in the middle should
cause an exception."

--> Not necessarily; sometimes people may want to search for a text pattern
containing a literal %. Imagine the text "this year the average income
has increased by 10%". People may want to search for "10%".


"BUT may be it’s better to make escaping more universal to support a future
possible case where a wildcard could be placed in the middle of a term too?"

--> I guess universal escaping for % is the cleaner and better solution.
However, it may involve some complex regular expressions; I'm not sure the
input.replaceAll("%%", "%") trick would work in all cases.

We also need to define when we detect the operation type
(LIKE_PREFIX, LIKE_SUFFIX, LIKE_CONTAINS, EQUAL):

Should we detect the operation type BEFORE escaping or AFTER escaping?
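
[To make the ordering question concrete, here is a sketch, in Python purely to illustrate semantics (this is not SASI code), of the "escape before detection" option. Under this greedy left-to-right reading, LIKE '%%%escape' comes out as EQUALS TO '%%escape', one of the interpretations floated in this thread:]

```python
# Sketch of the "escape first, then detect" ordering discussed above.
# '%%' is treated as an escaped literal '%'; a single '%' is a wildcard
# only at the very start/end of the pattern (the current SASI rule).
# Operation names mirror the types mentioned in the thread.

ESC = "\x00"  # temporary placeholder; assumes the input never contains NUL

def parse_like(pattern):
    """Return (operation, search_term) for a LIKE pattern."""
    # Greedy left-to-right escaping: '%%%escape' becomes ESC + '%escape',
    # so its leading '%' is literal and the whole pattern means EQUAL.
    tmp = pattern.replace("%%", ESC)
    starts = tmp.startswith("%")
    ends = tmp.endswith("%")
    core = tmp[(1 if starts else 0):(len(tmp) - 1 if ends else len(tmp))]
    core = core.replace(ESC, "%")
    if starts and ends:
        return ("LIKE_CONTAINS", core)
    if starts:
        return ("LIKE_SUFFIX", core)   # '%abc' means ENDS WITH 'abc'
    if ends:
        return ("LIKE_PREFIX", core)
    return ("EQUAL", core)
```

The alternative ordering, detecting wildcards before unescaping, would instead read '%%%escape' as a leading wildcard followed by the literal '%escape' (i.e. ENDS WITH '%escape'), which is exactly the ambiguity being debated.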





On Mon, Sep 26, 2016 at 3:54 PM, Mikhail Krupitskiy <
mikhail.krupits...@jetbrains.com> wrote:

> LIKE '%%%escape' --> EQUALS TO '%%escape' ???
>
> In the current implementation (‘%’ could be a wildcard only at the
> start/end of a term) I guess it should be ’ENDS with ‘%escape’ ‘.
> Moreover, all terms that contain a single ‘%’ somewhere in the middle should
> cause an exception.
> BUT may be it’s better to make escaping more universal to support a future
> possible case where a wildcard could be placed in the middle of a term too?
>
> Thanks,
> Mikhail
>
> On 24 Sep 2016, at 21:09, DuyHai Doan  wrote:
>
> Reminder, right now, the % character is only interpreted as wildcard IF
> AND ONLY IF it is the first/last character of the searched term
>
>
> LIKE '%escape' --> ENDS WITH 'escape'
>
> If we use % to escape %,
> LIKE '%%escape' -->  EQUALS TO '%escape'
>
> LIKE '%%%escape' --> EQUALS TO '%%escape' ???
>
>
>
>
> On Fri, Sep 23, 2016 at 5:02 PM, Mikhail Krupitskiy <
> mikhail.krupits...@jetbrains.com> wrote:
>
>> Hi, Jim,
>>
>> What pattern should be used to search “ends with  ‘%escape’ “ with your
>> conception?
>>
>> Thanks,
>> Mikhail
>>
>> On 22 Sep 2016, at 18:51, Jim Ancona  wrote:
>>
>> To answer DuyHai's question without introducing new syntax, I'd suggest:
>>
>> LIKE '%%%escape' means STARTS WITH '%' AND ENDS WITH 'escape'
>>
>> So the first two %'s are translated to a literal, non-wildcard % and the
>> third % is a wildcard because it's not doubled.
>>
>> Jim
>>
>> On Thu, Sep 22, 2016 at 11:40 AM, Mikhail Krupitskiy <
>> mikhail.krupits...@jetbrains.com> wrote:
>>
>>> I guess that it should be similar to how it is done in SQL for LIKE
>>> patterns.
>>>
>>> You can introduce an escape character, e.g. ‘\’.
>>> Examples:
>>> ‘%’ - any string
>>> ‘\%’ - equal to ‘%’ character
>>> ‘\%foo%’ - starts from ‘%foo’
>>> ‘%%%escape’ - ends with ’escape’
>>> ‘\%%’ - starts from ‘%’
>>> ‘\\\%%’ - starts from ‘\%’ .
>>>
>>> What do you think?
>>>
>>> Thanks,
>>> Mikhail
>>>
>>> On 22 Sep 2016, at 16:47, DuyHai Doan  wrote:
>>>
>>> Hello Mikhail
>>>
>>> It's more complicated than it seems
>>>
>>> LIKE '%%escape' means  EQUAL TO '%escape'
>>>
>>> LIKE '%escape' means ENDS WITH 'escape'
>>>
>>> What about LIKE '%%%escape' ?
>>>
>>> How should we treat this case ? Replace %% by % at the beginning of the
>>> searched term ??
>>>
>>>
>>>
>>> On Thu, Sep 22, 2016 at 3:31 PM, Mikhail Krupitskiy <
>>> mikhail.krupits...@jetbrains.com> wrote:
>>>
 Hi!

 We’ve talked about two items:
 1) ‘%’ as a wildcard in the middle of LIKE pattern.
 2) How to escape ‘%’ to be able to find strings with the ‘%’ char with
 help of LIKE.

 Item #1 was resolved as CASSANDRA-12573.

 Regarding item #2, you said the following:

 A possible fix would be:

 1) convert the bytebuffer into plain String (UTF8 or ASCII, depending
 on the column data type)
 2) remove the escape character e.g. before parsing OR use some advanced
 regex to exclude the %% from parsing e.g

 Step 2) is dead easy but step 1) is harder because I don't know if
 converting the bytebuffer into String at this stage of the CQL parser is
 expensive or not (in terms of computation)

 Let me try a patch

 So is there any update on this?

 Thanks,
 Mikhail


 On 20 Sep 2016, at 18:38, Mikhail Krupitskiy <
 mikhail.krupits...@jetbrains.com> wrote:

 Hi!

 Have you had a chance to try your patch or solve the issue in another
 way?

 Thanks,
 Mikhail

 On 15 Sep 2016, at 16:02, DuyHai Doan  wrote:

 Ok so I've found the source of the issue, 

Re: How to query '%' character using LIKE operator in Cassandra 3.7?

2016-09-26 Thread Mikhail Krupitskiy
> LIKE '%%%escape' --> EQUALS TO '%%escape' ???
In the current implementation (‘%’ could be a wildcard only at the start/end of
a term) I guess it should be ‘ENDS WITH ‘%escape’’.
Moreover, all terms that contain a single ‘%’ somewhere in the middle should
cause an exception.
But maybe it’s better to make escaping more universal, to support a future
possible case where a wildcard could be placed in the middle of a term too?

Thanks,
Mikhail 
> On 15 Sep 2016, at 16:02, DuyHai Doan wrote:
> 
> Ok so I've found the source of the issue; it's pretty well hidden because 
> it is NOT in the SASI source code directly.
> 
> Here is the method where C* determines what kind of LIKE expression 
> you're using (LIKE_PREFIX, LIKE_CONTAINS or LIKE_MATCHES):
> 
> https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/cql3/restrictions/SingleColumnRestriction.java#L733-L778
> 
> As you can see, it's pretty simple, maybe too simple. Indeed, they forget 
> to remove the escape character BEFORE doing the matching, so if your search is 
> LIKE '%%esc%', the detected expression is LIKE_CONTAINS.
> 
> A possible fix would be:
> 
> 1) convert the bytebuffer into a plain String (UTF8 or ASCII, depending on 
> the column data type)
> 2) remove the escape character before parsing OR use some advanced 
> regex to exclude the %% from 

Re: Exceptions whenever compaction happens

2016-09-26 Thread Nikhil Sharma
Hi Ben,

Thanks for your help.
We have created an issue here:
https://issues.apache.org/jira/browse/CASSANDRA-12706

Let's see if we can get some comments there.

Regards,

Nikhil Sharma


Re: Exceptions whenever compaction happens

2016-09-26 Thread Ben Slater
Hi Nikhil,

If you haven’t already done so, I would recommend logging a Cassandra project
JIRA for this issue: it looks like a defect to me, and since it occurs during
compaction it will probably be hard to work around. If you can work out which
table is being compacted when the issue occurs and include the schema of
that table, that might help.

Beyond that, the only thing I can think of is running scrub if you haven’t
already done so.

Cheers
Ben


Exceptions whenever compaction happens

2016-09-26 Thread Nikhil Sharma
Hi,

We are not exactly sure what is causing this problem, but after compaction
happens (once the one-week TTL expires) we start getting this exception:

WARN  [SharedPool-Worker-1] 2016-09-26 04:07:19,849
AbstractLocalAwareExecutorService.java:169 - Uncaught exception on thread
Thread[SharedPool-Worker-1,5,main]: {}
java.lang.NullPointerException: null
at
org.apache.cassandra.db.Slices$ArrayBackedSlices$ComponentOfSlice.isEQ(Slices.java:748)
~[apache-cassandra-3.0.9.jar:3.0.9]
at
org.apache.cassandra.db.Slices$ArrayBackedSlices.toCQLString(Slices.java:659)
~[apache-cassandra-3.0.9.jar:3.0.9]
at
org.apache.cassandra.db.filter.ClusteringIndexSliceFilter.toCQLString(ClusteringIndexSliceFilter.java:150)
~[apache-cassandra-3.0.9.jar:3.0.9]
at
org.apache.cassandra.db.SinglePartitionReadCommand.appendCQLWhereClause(SinglePartitionReadCommand.java:911)
~[apache-cassandra-3.0.9.jar:3.0.9]
at
org.apache.cassandra.db.ReadCommand.toCQLString(ReadCommand.java:560)
~[apache-cassandra-3.0.9.jar:3.0.9]
at
org.apache.cassandra.db.ReadCommand$1MetricRecording.onClose(ReadCommand.java:506)
~[apache-cassandra-3.0.9.jar:3.0.9]
at
org.apache.cassandra.db.transform.BasePartitions.runOnClose(BasePartitions.java:70)
~[apache-cassandra-3.0.9.jar:3.0.9]
at
org.apache.cassandra.db.transform.BaseIterator.close(BaseIterator.java:76)
~[apache-cassandra-3.0.9.jar:3.0.9]
at
org.apache.cassandra.db.ReadCommandVerbHandler.doVerb(ReadCommandVerbHandler.java:48)
~[apache-cassandra-3.0.9.jar:3.0.9]
at
org.apache.cassandra.net.MessageDeliveryTask.run(MessageDeliveryTask.java:67)
~[apache-cassandra-3.0.9.jar:3.0.9]
at
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
~[na:1.8.0_102]
at
org.apache.cassandra.concurrent.AbstractLocalAwareExecutorService$FutureTask.run(AbstractLocalAwareExecutorService.java:164)
~[apache-cassandra-3.0.9.jar:3.0.9]
at
org.apache.cassandra.concurrent.AbstractLocalAwareExecutorService$LocalSessionFutureTask.run(AbstractLocalAwareExecutorService.java:136)
[apache-cassandra-3.0.9.jar:3.0.9]
at
org.apache.cassandra.concurrent.SEPWorker.run(SEPWorker.java:105)
[apache-cassandra-3.0.9.jar:3.0.9]
at java.lang.Thread.run(Thread.java:745) [na:1.8.0_102]
WARN  [SharedPool-Worker-2] 2016-09-26 04:07:20,639
AbstractLocalAwareExecutorService.java:169 - Uncaught exception on thread
Thread[SharedPool-Worker-2,5,main]: {}
java.lang.RuntimeException: java.lang.NullPointerException
at
org.apache.cassandra.service.StorageProxy$DroppableRunnable.run(StorageProxy.java:2470)
~[apache-cassandra-3.0.9.jar:3.0.9]
at
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
~[na:1.8.0_102]
at
org.apache.cassandra.concurrent.AbstractLocalAwareExecutorService$FutureTask.run(AbstractLocalAwareExecutorService.java:164)
~[apache-cassandra-3.0.9.jar:3.0.9]
at
org.apache.cassandra.concurrent.AbstractLocalAwareExecutorService$LocalSessionFutureTask.run(AbstractLocalAwareExecutorService.java:136)
[apache-cassandra-3.0.9.jar:3.0.9]
at
org.apache.cassandra.concurrent.SEPWorker.run(SEPWorker.java:105)
[apache-cassandra-3.0.9.jar:3.0.9]
at java.lang.Thread.run(Thread.java:745) [na:1.8.0_102]
Caused by: java.lang.NullPointerException: null
at
org.apache.cassandra.db.Slices$ArrayBackedSlices$ComponentOfSlice.isEQ(Slices.java:748)
~[apache-cassandra-3.0.9.jar:3.0.9]
at
org.apache.cassandra.db.Slices$ArrayBackedSlices.toCQLString(Slices.java:659)
~[apache-cassandra-3.0.9.jar:3.0.9]
at
org.apache.cassandra.db.filter.ClusteringIndexSliceFilter.toCQLString(ClusteringIndexSliceFilter.java:150)
~[apache-cassandra-3.0.9.jar:3.0.9]
at
org.apache.cassandra.db.SinglePartitionReadCommand.appendCQLWhereClause(SinglePartitionReadCommand.java:911)
~[apache-cassandra-3.0.9.jar:3.0.9]
at
org.apache.cassandra.db.ReadCommand.toCQLString(ReadCommand.java:560)
~[apache-cassandra-3.0.9.jar:3.0.9]
at
org.apache.cassandra.db.ReadCommand$1MetricRecording.onClose(ReadCommand.java:506)
~[apache-cassandra-3.0.9.jar:3.0.9]
at
org.apache.cassandra.db.transform.BasePartitions.runOnClose(BasePartitions.java:70)
~[apache-cassandra-3.0.9.jar:3.0.9]
at
org.apache.cassandra.db.transform.BaseIterator.close(BaseIterator.java:76)
~[apache-cassandra-3.0.9.jar:3.0.9]
at
org.apache.cassandra.service.StorageProxy$LocalReadRunnable.runMayThrow(StorageProxy.java:1797)
~[apache-cassandra-3.0.9.jar:3.0.9]
at
org.apache.cassandra.service.StorageProxy$DroppableRunnable.run(StorageProxy.java:2466)
~[apache-cassandra-3.0.9.jar:3.0.9]
... 5 common frames omitted


We have no idea how to solve this. I am losing sleep over this, please help!

Cassandra: 3.0.9 (also happened in 3.0.8)
Java: Oracle jdk "1.8.0_102"
jemalloc enabled


Regards,

Nikhil Sharma