Cassandra OS Patching

2020-01-29 Thread Anshu Vajpayee
Hi Team,
What is the best way to patch the OS on a 1000-node, multi-DC Cassandra cluster
where we cannot suspend application traffic (we can redirect traffic to one
DC)?

Please share any best practices around this.

-- 
Cheers,
Anshu V
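
A common pattern for this is a rolling, one-node-at-a-time patch, done one DC at a time while traffic is redirected away from the DC being patched. A minimal per-node sketch, assuming a systemd-managed install and yum (the commands are illustrative only, not a tested procedure):

  nodetool drain                    # flush memtables and stop accepting writes on this node
  sudo systemctl stop cassandra
  sudo yum -y update                # apply OS patches (use apt/zypper as appropriate)
  sudo reboot                       # only if a kernel or glibc update requires it
  # after the node is back up:
  sudo systemctl start cassandra
  nodetool status                   # wait until the node shows Up/Normal before patching the next one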


Refresh from Prod to Dev

2018-02-08 Thread Anshu Vajpayee
Team,

I want to validate and run a POC against production data. The data in
production is huge. What would be the optimal method to move the data from
the Prod to the Dev environment? I know there are a few solutions, but which
is the most efficient way to refresh the dev environment?

-- 
Cheers,
Anshu V
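
One commonly used approach is to snapshot the production tables and stream the snapshots into the dev cluster with sstableloader. A rough sketch, assuming the dev cluster already has the same schema; keyspace, table, and host names below are placeholders:

  # on each production node: take a named snapshot of the keyspace
  nodetool snapshot -t dev_refresh my_ks

  # copy each table's snapshot directory somewhere the dev cluster can reach, e.g.
  rsync -a /var/lib/cassandra/data/my_ks/my_table-*/snapshots/dev_refresh/ staging:/tmp/my_ks/my_table/

  # from the staging host: stream the sstables into the dev cluster
  sstableloader -d dev-node1,dev-node2 /tmp/my_ks/my_table/

Other options (copying raw sstable directories node-to-node, or a CQL/Spark copy job) trade speed against how closely the two cluster topologies have to match.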


Re: Upgrade using rebuild

2017-12-19 Thread Anshu Vajpayee
Any specific reason why it doesn't work across major versions?

On Fri, Dec 15, 2017 at 12:05 AM, Jon Haddad <j...@jonhaddad.com> wrote:

> Heh, hit send accidentally.
>
> You generally can’t run rebuild to upgrade, because it’s a streaming
> operation.  Streaming isn’t supported between versions, although on 3.x it
> might work.
>
>
> On Dec 14, 2017, at 11:01 AM, Jon Haddad <j...@jonhaddad.com> wrote:
>
> no
>
> On Dec 14, 2017, at 10:59 AM, Anshu Vajpayee <anshu.vajpa...@gmail.com>
> wrote:
>
> Thanks! I am aware with these steps.
>
> I m just thinking , is it possible to do the upgrade using nodetool
> rebuild like  we rebuld new dc ?
>
> Has anyone tried -  upgrade with nodetool rebuild ?
>
>
>
> On Thu, 14 Dec 2017 at 7:08 PM, Hannu Kröger <hkro...@gmail.com> wrote:
>
>> If you want to do a version upgrade, you need to basically do follow node
>> by node:
>>
>> 0) stop repairs
>> 1) make sure your sstables are at the latest version (nodetool
>> upgradesstables can do it)
>> 2) stop cassandra
>> 3) update cassandra software and update cassandra.yaml and
>> cassandra-env.sh files
>> 4) start cassandra
>>
>> After all nodes are up, run “nodetool upgradesstables” on each node to
>> update your sstables to the latest version.
>>
>> Also please note that when you upgrade, you need to upgrade only between
>> compatible versions.
>>
>> E.g. 2.2.x -> 3.0.x  but not 1.2 to 3.11
>>
>> Cheers,
>> Hannu
>>
>> On 14 December 2017 at 12:33:49, Anshu Vajpayee (anshu.vajpa...@gmail.com)
>> wrote:
>>
>> Hi -
>>
>> Is it possible to upgrade a  cluster ( DC wise) using nodetool rebuild ?
>>
>>
>>
>> --
>> *C*heers,*
>> *Anshu V*
>>
>>
>> --
> *C*heers,*
> *Anshu V*
>
>
>
>
>


-- 
Cheers,
Anshu V
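
For reference, a rough sketch of the node-by-node upgrade loop Hannu outlines above, assuming a package-managed install; the host list, package manager, and service commands are placeholders, not a tested procedure:

  # step 0 above: pause repairs cluster-wide before starting
  for host in $(cat cluster_hosts.txt); do          # hypothetical host list
      ssh "$host" nodetool upgradesstables          # make sure sstables are at the current version first
      ssh "$host" nodetool drain                    # flush memtables and stop accepting writes
      ssh "$host" sudo systemctl stop cassandra
      ssh "$host" sudo yum -y install cassandra     # hypothetical package step for the target version
      # merge your cassandra.yaml / cassandra-env.sh changes for the new version here
      ssh "$host" sudo systemctl start cassandra
      ssh "$host" nodetool status                   # wait for the node to come back Up/Normal
  done
  # once every node runs the new version, run "nodetool upgradesstables" on each node again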


Re: Upgrade using rebuild

2017-12-15 Thread Anshu Vajpayee
Thanks Jon.

On Fri, Dec 15, 2017 at 12:05 AM, Jon Haddad <j...@jonhaddad.com> wrote:

> Heh, hit send accidentally.
>
> You generally can’t run rebuild to upgrade, because it’s a streaming
> operation.  Streaming isn’t supported between versions, although on 3.x it
> might work.
>
>
> On Dec 14, 2017, at 11:01 AM, Jon Haddad <j...@jonhaddad.com> wrote:
>
> no
>
> On Dec 14, 2017, at 10:59 AM, Anshu Vajpayee <anshu.vajpa...@gmail.com>
> wrote:
>
> Thanks! I am aware with these steps.
>
> I m just thinking , is it possible to do the upgrade using nodetool
> rebuild like  we rebuld new dc ?
>
> Has anyone tried -  upgrade with nodetool rebuild ?
>
>
>
> On Thu, 14 Dec 2017 at 7:08 PM, Hannu Kröger <hkro...@gmail.com> wrote:
>
>> If you want to do a version upgrade, you need to basically do follow node
>> by node:
>>
>> 0) stop repairs
>> 1) make sure your sstables are at the latest version (nodetool
>> upgradesstables can do it)
>> 2) stop cassandra
>> 3) update cassandra software and update cassandra.yaml and
>> cassandra-env.sh files
>> 4) start cassandra
>>
>> After all nodes are up, run “nodetool upgradesstables” on each node to
>> update your sstables to the latest version.
>>
>> Also please note that when you upgrade, you need to upgrade only between
>> compatible versions.
>>
>> E.g. 2.2.x -> 3.0.x  but not 1.2 to 3.11
>>
>> Cheers,
>> Hannu
>>
>> On 14 December 2017 at 12:33:49, Anshu Vajpayee (anshu.vajpa...@gmail.com)
>> wrote:
>>
>> Hi -
>>
>> Is it possible to upgrade a  cluster ( DC wise) using nodetool rebuild ?
>>
>>
>>
>> --
>> *C*heers,*
>> *Anshu V*
>>
>>
>> --
> *C*heers,*
> *Anshu V*
>
>
>
>
>


-- 
Cheers,
Anshu V


Re: Upgrade using rebuild

2017-12-14 Thread Anshu Vajpayee
Thanks! I am aware of these steps.

I am just wondering: is it possible to do the upgrade using nodetool rebuild,
the way we rebuild a new DC?

Has anyone tried an upgrade with nodetool rebuild?



On Thu, 14 Dec 2017 at 7:08 PM, Hannu Kröger <hkro...@gmail.com> wrote:

> If you want to do a version upgrade, you need to basically do follow node
> by node:
>
> 0) stop repairs
> 1) make sure your sstables are at the latest version (nodetool
> upgradesstables can do it)
> 2) stop cassandra
> 3) update cassandra software and update cassandra.yaml and
> cassandra-env.sh files
> 4) start cassandra
>
> After all nodes are up, run “nodetool upgradesstables” on each node to
> update your sstables to the latest version.
>
> Also please note that when you upgrade, you need to upgrade only between
> compatible versions.
>
> E.g. 2.2.x -> 3.0.x  but not 1.2 to 3.11
>
> Cheers,
> Hannu
>
> On 14 December 2017 at 12:33:49, Anshu Vajpayee (anshu.vajpa...@gmail.com)
> wrote:
>
> Hi -
>
> Is it possible to upgrade a  cluster ( DC wise) using nodetool rebuild ?
>
>
>
> --
> *C*heers,*
> *Anshu V*
>
>
> --
Cheers,
Anshu V


Upgrade using rebuild

2017-12-14 Thread Anshu Vajpayee
Hi -

Is it possible to upgrade a cluster (DC by DC) using nodetool rebuild?



-- 
Cheers,
Anshu V


Re: nodetool rebuild data size

2017-12-14 Thread Anshu Vajpayee
You will need to rebuild each node in the new DC with the nodetool rebuild
command, and in total it would be 60 TB: DC1 holds 60 TB at RF=3, so the
unique data set is about 20 TB, and DC2 at RF=3 must store three replicas of
that 20 TB, i.e. roughly 60 TB streamed overall.

On Thu, Dec 14, 2017 at 11:35 AM, Peng Xiao <2535...@qq.com> wrote:

> Hi there,
>
> if we have a Cassandra DC1 with data size 60T,RF=3,then we rebuild a new
> DC2(RF=3),how much data will stream to DC2?20T or 60T?
>
> Thanks,
> Peng Xiao
>



-- 
Cheers,
Anshu V


Re: How quickly we can bootstrap

2017-11-19 Thread Anshu Vajpayee
Adding more compute power again means vertical scaling. I understand this is
one way to handle increasing demand, but it doesn't match Cassandra's
philosophy of horizontal scaling, and capacity limits are not only about
compute power.

Also, in the case of a node failure, vertical scaling is not going to help;
there we need to bootstrap a new (or the same) node back into the ring as
quickly as possible.



On Sat, Nov 18, 2017 at 4:20 AM, Ben Slater <ben.sla...@instaclustr.com>
wrote:

> Hi Anshu
>
> For quick scaling, we’ve had success with an approach of scaling up the
> compute capacity (attached to EBS) rather than scaling out with more nodes
> in order to provide relatively quick scale up/down capability. The approach
> is implemented as part of our managed service but the concept is generic
> enough to work in any virtualised environment. You can find more detail
> here if interested: https://www.instaclustr.com/instaclustr-
> dynamic-resizing-for-apache-cassandra/
>
> Cheers
> Ben
>
> On Sat, 18 Nov 2017 at 05:02 Anshu Vajpayee <anshu.vajpa...@gmail.com>
> wrote:
>
>> Cassandra supports elastic scalability  - meaning on demand we can
>> increase or decrease #of nodes as per scaling demand from the application.
>>
>> Let's consider we have 5 node cluster and each node has data pressure of
>> about 3 TB.
>>
>> Now as per sudden load, we want to add 1 node in the cluster  as quick as
>> possible.
>>
>> Please suggest what would be the fastest method to add the new node on
>> cluster? Normal bootstrapping will definitely take time because it needs to
>> stream at least 2.5 TB ( 5*3TB/6 nodes) from 5 nodes.  Please
>> consider multi-core machines & 10 Gpbs card .
>>
>> Streaming throughput can help but not much.
>>
>> The similar requirement can come when we want to replace the failed node
>> due to any hardware.
>>
>> Please suggest any best practice or scenarios to deal with above
>> situations.
>>
>> Scaling is good but how quickly we can scale is another thing to
>> consider.
>>
>>
>>
>>
>>
>>
>>
>>
>> --
>> *C*heers,*
>> *Anshu V*
>>
>>
>> --
>
>
> *Ben Slater*
>
> *Chief Product Officer <https://www.instaclustr.com/>*
>
> <https://www.facebook.com/instaclustr>   <https://twitter.com/instaclustr>
><https://www.linkedin.com/company/instaclustr>
>
> Read our latest technical blog posts here
> <https://www.instaclustr.com/blog/>.
>
> This email has been sent on behalf of Instaclustr Pty. Limited (Australia)
> and Instaclustr Inc (USA).
>
> This email and any attachments may contain confidential and legally
> privileged information.  If you are not the intended recipient, do not copy
> or disclose its content, but please reply to this email immediately and
> highlight the error to the sender and then immediately delete the message.
>



-- 
Cheers,
Anshu V


How quickly we can bootstrap

2017-11-17 Thread Anshu Vajpayee
Cassandra supports elastic scalability, meaning we can increase or decrease
the number of nodes on demand as the application's load changes.

Let's say we have a 5-node cluster and each node holds about 3 TB of data.

Now, due to a sudden load increase, we want to add one node to the cluster as
quickly as possible.

What would be the fastest method to add the new node to the cluster? Normal
bootstrapping will definitely take time because it needs to stream about
2.5 TB (5 * 3 TB / 6 nodes) from the 5 existing nodes. Assume multi-core
machines and 10 Gbps NICs.

Raising the streaming throughput helps, but not by much.

A similar requirement arises when we want to replace a node that failed due
to a hardware problem.

Please suggest any best practices for dealing with the above situations.

Scaling is good, but how quickly we can scale is another thing to consider.








-- 
Cheers,
Anshu V
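
When bootstrap streaming itself is the bottleneck, the throttles can be raised temporarily while the node joins. A small sketch; the values are examples, not recommendations:

  # on the existing nodes that will stream to the new node:
  nodetool setstreamthroughput 400       # megabits/sec per node; default 200, 0 removes the throttle
  # on the joining node, optionally unthrottle compaction so it keeps up:
  nodetool setcompactionthroughput 0
  # the permanent equivalents live in cassandra.yaml:
  #   stream_throughput_outbound_megabits_per_sec: 400
  #   compaction_throughput_mb_per_sec: 64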


Re: Reaper 1.0

2017-11-17 Thread Anshu Vajpayee
Sure, I will update this thread.

On Sat, Nov 18, 2017 at 12:26 AM, Jonathan Haddad <j...@jonhaddad.com> wrote:

> It should work with DSE, but we don’t explicitly test it.
>
> Mind testing it and posting your results? If you could include the DSE
> version it would be great.
> On Thu, Nov 16, 2017 at 11:57 PM Anshu Vajpayee <anshu.vajpa...@gmail.com>
> wrote:
>
>> Thanks John for your efforts and nicley putting it on website & youtube .
>>
>> Just quick question - Is  it compactiable with DSE  versions? I know
>> under the hood they have  cassandra only , but just wanted to listen your
>> thoughts.
>>
>> On Thu, Nov 16, 2017 at 1:23 AM, Jon Haddad <j...@jonhaddad.com> wrote:
>>
>>> Apache 2 Licensed, just like Cassandra.  https://github.com/
>>> thelastpickle/cassandra-reaper/blob/master/LICENSE.txt
>>>
>>> Feel free to modify, put in prod, fork or improve.
>>>
>>> Unfortunately I had to re-upload the Getting Started video, we had
>>> accidentally uploaded a first cut.  Correctly link is here:
>>> https://www.youtube.com/watch?v=0dub29BgwPI
>>>
>>> Jon
>>>
>>> On Nov 15, 2017, at 9:14 AM, Harika Vangapelli -T (hvangape - AKRAYA INC
>>> at Cisco) <hvang...@cisco.com> wrote:
>>>
>>> Open source, free to use in production? Any License constraints, Please
>>> let me know.
>>>
>>> I experimented with it yesterday, really liked it.
>>>
>>> 
>>>
>>> *Harika Vangapelli*
>>> Engineer - IT
>>> hvang...@cisco.com
>>> Tel:
>>> *Cisco Systems, Inc.*
>>>
>>>
>>>
>>> United States
>>> cisco.com
>>>
>>> Think before you print.
>>> This email may contain confidential and privileged material for the sole
>>> use of the intended recipient. Any review, use, distribution or disclosure
>>> by others is strictly prohibited. If you are not the intended recipient (or
>>> authorized to receive for the recipient), please contact the sender by
>>> reply email and delete all copies of this message.
>>> Please click here
>>> <http://www.cisco.com/web/about/doing_business/legal/cri/index.html> for
>>> Company Registration Information.
>>>
>>> *From:* Jon Haddad [mailto:jonathan.had...@gmail.com
>>> <jonathan.had...@gmail.com>] *On Behalf Of *Jon Haddad
>>> *Sent:* Tuesday, November 14, 2017 2:18 PM
>>> *To:* user <user@cassandra.apache.org>
>>> *Subject:* Reaper 1.0
>>>
>>> We’re excited to announce the release of the 1.0 version of Reaper for
>>> Apache Cassandra!  We’ve made a lot of improvements to the flexibility of
>>> managing repairs and simplified the UI based on feedback we’ve received.
>>>
>>> We’ve written a blog post discussing the changes in detail here:
>>> http://thelastpickle.com/blog/2017/11/14/reaper-10-announcement.html
>>>
>>> We also have a new YouTube video to help folks get up and running
>>> quickly: https://www.youtube.com/watch?v=YKJRRFa22T4
>>>
>>> The reaper site has all the docs should you have any questions:
>>> http://cassandra-reaper.io/
>>>
>>> Thanks all,
>>> Jon
>>>
>>>
>>>
>>
>>
>> --
>> *C*heers,*
>> *Anshu V*
>>
>>
>>


-- 
Cheers,
Anshu V


Re: Reaper 1.0

2017-11-16 Thread Anshu Vajpayee
Thanks Jon for your efforts and for putting it up nicely on the website and
YouTube.

Just a quick question: is it compatible with DSE versions? I know under the
hood they have Cassandra only, but I just wanted to hear your thoughts.

On Thu, Nov 16, 2017 at 1:23 AM, Jon Haddad  wrote:

> Apache 2 Licensed, just like Cassandra.  https://github.com/
> thelastpickle/cassandra-reaper/blob/master/LICENSE.txt
>
> Feel free to modify, put in prod, fork or improve.
>
> Unfortunately I had to re-upload the Getting Started video, we had
> accidentally uploaded a first cut.  Correctly link is here:
> https://www.youtube.com/watch?v=0dub29BgwPI
>
> Jon
>
> On Nov 15, 2017, at 9:14 AM, Harika Vangapelli -T (hvangape - AKRAYA INC
> at Cisco)  wrote:
>
> Open source, free to use in production? Any License constraints, Please
> let me know.
>
> I experimented with it yesterday, really liked it.
>
> 
>
> *Harika Vangapelli*
> Engineer - IT
> hvang...@cisco.com
> Tel:
> *Cisco Systems, Inc.*
>
>
>
> United States
> cisco.com
>
> Think before you print.
> This email may contain confidential and privileged material for the sole
> use of the intended recipient. Any review, use, distribution or disclosure
> by others is strictly prohibited. If you are not the intended recipient (or
> authorized to receive for the recipient), please contact the sender by
> reply email and delete all copies of this message.
> Please click here
>  for
> Company Registration Information.
>
> *From:* Jon Haddad [mailto:jonathan.had...@gmail.com
> ] *On Behalf Of *Jon Haddad
> *Sent:* Tuesday, November 14, 2017 2:18 PM
> *To:* user 
> *Subject:* Reaper 1.0
>
> We’re excited to announce the release of the 1.0 version of Reaper for
> Apache Cassandra!  We’ve made a lot of improvements to the flexibility of
> managing repairs and simplified the UI based on feedback we’ve received.
>
> We’ve written a blog post discussing the changes in detail here:
> http://thelastpickle.com/blog/2017/11/14/reaper-10-announcement.html
>
> We also have a new YouTube video to help folks get up and running quickly:
> https://www.youtube.com/watch?v=YKJRRFa22T4
>
> The reaper site has all the docs should you have any questions:
> http://cassandra-reaper.io/
>
> Thanks all,
> Jon
>
>
>


-- 
Cheers,
Anshu V


Re: Node Failure Scenario

2017-11-15 Thread Anshu Vajpayee
Thank you Jonathan and all.

On Tue, Nov 14, 2017 at 10:53 PM, Jonathan Haddad <j...@jonhaddad.com> wrote:

> Anthony’s suggestions using replace_address_first_boot lets you avoid that
> requirement, and it’s specifically why it was added in 2.2.
> On Tue, Nov 14, 2017 at 1:02 AM Anshu Vajpayee <anshu.vajpa...@gmail.com>
> wrote:
>
>> ​Thanks  guys ,
>>
>> I thikn better to pass replace_address on command line rather than update
>> the cassndra-env file so that there would not be requirement to  remove it
>> later.
>> ​
>>
>> On Tue, Nov 14, 2017 at 6:32 AM, Anthony Grasso <anthony.gra...@gmail.com
>> > wrote:
>>
>>> Hi Anshu,
>>>
>>> To add to Erick's comment, remember to remove the *replace_address* method
>>> from the *cassandra-env.sh* file once the node has rejoined
>>> successfully. The node will fail the next restart otherwise.
>>>
>>> Alternatively, use the *replace_address_first_boot* method which works
>>> exactly the same way as *replace_address* the only difference is there
>>> is no need to remove it from the *cassandra-env.sh* file.
>>>
>>> Kind regards,
>>> Anthony
>>>
>>> On 13 November 2017 at 14:59, Erick Ramirez <flightc...@gmail.com>
>>> wrote:
>>>
>>>> Use the replace_address method with its own IP address. Make sure you
>>>> delete the contents of the following directories:
>>>> - data/
>>>> - commitlog/
>>>> - saved_caches/
>>>>
>>>> Forget rejoining with repair -- it will just cause more problems.
>>>> Cheers!
>>>>
>>>> On Mon, Nov 13, 2017 at 2:54 PM, Anshu Vajpayee <
>>>> anshu.vajpa...@gmail.com> wrote:
>>>>
>>>>> Hi All ,
>>>>>
>>>>> There was a node failure in one of production cluster due to disk
>>>>> failure.  After h/w recovery that node is noew ready be part of cluster,
>>>>> but it doesn't has any data due to disk crash.
>>>>>
>>>>>
>>>>>
>>>>> I can think of following option :
>>>>>
>>>>>
>>>>>
>>>>> 1. replace the node with same. using replace_address
>>>>>
>>>>> 2. Set bootstrap=false , start the node and run the repair to stream
>>>>> the data.
>>>>>
>>>>>
>>>>>
>>>>> Please suggest if both option are good and which is  best as per your
>>>>> experience. This is live production cluster.
>>>>>
>>>>>
>>>>> Thanks,
>>>>>
>>>>>
>>>>> --
>>>>> *C*heers,*
>>>>> *Anshu V*
>>>>>
>>>>>
>>>>>
>>>>
>>>
>>
>>
>> --
>> *C*heers,*
>> *Anshu V*
>>
>>
>>


-- 
Cheers,
Anshu V


Re: Node Failure Scenario

2017-11-14 Thread Anshu Vajpayee
Thanks guys,

I think it is better to pass replace_address on the command line rather than
updating the cassandra-env.sh file, so there is no need to remove it later.

On Tue, Nov 14, 2017 at 6:32 AM, Anthony Grasso <anthony.gra...@gmail.com>
wrote:

> Hi Anshu,
>
> To add to Erick's comment, remember to remove the *replace_address* method
> from the *cassandra-env.sh* file once the node has rejoined successfully.
> The node will fail the next restart otherwise.
>
> Alternatively, use the *replace_address_first_boot* method which works
> exactly the same way as *replace_address* the only difference is there is
> no need to remove it from the *cassandra-env.sh* file.
>
> Kind regards,
> Anthony
>
> On 13 November 2017 at 14:59, Erick Ramirez <flightc...@gmail.com> wrote:
>
>> Use the replace_address method with its own IP address. Make sure you
>> delete the contents of the following directories:
>> - data/
>> - commitlog/
>> - saved_caches/
>>
>> Forget rejoining with repair -- it will just cause more problems. Cheers!
>>
>> On Mon, Nov 13, 2017 at 2:54 PM, Anshu Vajpayee <anshu.vajpa...@gmail.com
>> > wrote:
>>
>>> Hi All ,
>>>
>>> There was a node failure in one of production cluster due to disk
>>> failure.  After h/w recovery that node is noew ready be part of cluster,
>>> but it doesn't has any data due to disk crash.
>>>
>>>
>>>
>>> I can think of following option :
>>>
>>>
>>>
>>> 1. replace the node with same. using replace_address
>>>
>>> 2. Set bootstrap=false , start the node and run the repair to stream the
>>> data.
>>>
>>>
>>>
>>> Please suggest if both option are good and which is  best as per your
>>> experience. This is live production cluster.
>>>
>>>
>>> Thanks,
>>>
>>>
>>> --
>>> *C*heers,*
>>> *Anshu V*
>>>
>>>
>>>
>>
>


-- 
Cheers,
Anshu V
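
A minimal sketch of the two ways to pass the flag that are discussed above; the IP address is a placeholder, and the node's data, commitlog, and saved_caches directories must be empty first:

  # option 1: pass replace_address directly on the command line, so nothing
  # has to be removed from cassandra-env.sh afterwards
  cassandra -Dcassandra.replace_address=10.0.0.12

  # option 2: replace_address_first_boot is ignored after the first successful
  # start, so it can safely stay in cassandra-env.sh
  JVM_OPTS="$JVM_OPTS -Dcassandra.replace_address_first_boot=10.0.0.12"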


Node Failure Scenario

2017-11-12 Thread Anshu Vajpayee
Hi All ,

There was a node failure in one of our production clusters due to a disk
failure. After the hardware recovery that node is now ready to be part of the
cluster again, but it doesn't have any data because of the disk crash.



I can think of the following options:



1. Replace the node with itself, using replace_address.

2. Set auto_bootstrap=false, start the node, and run a repair to stream the
data.



Please suggest whether both options are viable and which is best in your
experience. This is a live production cluster.


Thanks,


-- 
Cheers,
Anshu V


Re: Best approach to prepare to shutdown a cassandra node

2017-10-25 Thread Anshu Vajpayee
Use nodetool stopdaemon.

On Wed, Oct 25, 2017 at 4:42 AM, Javier Canillas <javier.canil...@gmail.com>
wrote:

> So, just to clarify.. a good approach to shutdown an instance of cassandra
> should be:
>
> # Drain all information from commitlog into sstables
> bin/nodetool stopdaemon
> cassandra_pid=`ps -ef|grep "java.*apache-cassandra"|grep -v "grep"|awk '{print $2}'`
> if [ "$?" -ne 0 ]; then
>     echo "Cassandra stopdaemon fail? Please check logs"
>     if [ ! -z "$cassandra_pid" ] && [ "$cassandra_pid" -ne "1" ]; then
>         echo "Cassandra is still running, killing it gracefully"
>         kill $cassandra_pid
>         echo -n "+ Checking it is down. "
>         counter=10
>         while [ "$counter" -ne 0 -a ! kill -0 $cassandra_pid > /dev/null 2>&1 ]
>         do
>             echo -n ". "
>             ((counter--))
>             sleep 1s
>         done
>         echo ""
>         if ! kill -0 $cassandra_pid > /dev/null 2>&1; then
>             echo "+ Its down."
>         else
>             echo "- Killing forcefully Cassandra."
>             kill -9 $cassandra_pid
>         fi
>     else
>         echo "Care there was a problem finding Cassandra PID, it might be still running"
>         exit 1
>     fi
> else
>     echo "Cassandra stopped"
> fi
>
> 2017-10-20 9:04 GMT-03:00 Lutaya Shafiq Holmes <lutayasha...@gmail.com>:
>
>> Looking at the code in trunk, the stopdemon command invokes the
>> CassandraDaemon.stop() function which does a graceful shutdown by
>> stopping jmxServer and drains the node by the shutdown hook.
>>
>>
>> On 10/20/17, Simon Fontana Oscarsson
>> <simon.fontana.oscars...@ericsson.com> wrote:
>> > Yes, drain will always be run when Cassandra exits normally.
>> >
>> > On 2017-10-20 00:57, Varun Gupta wrote:
>> >> Does, nodetool stopdaemon, implicitly drain too? or we should invoke
>> >> drain and then stopdaemon?
>> >>
>> >> On Mon, Oct 16, 2017 at 4:54 AM, Simon Fontana Oscarsson
>> >> <simon.fontana.oscars...@ericsson.com
>> >> <mailto:simon.fontana.oscars...@ericsson.com>> wrote:
>> >>
>> >> Looking at the code in trunk, the stopdemon command invokes the
>> >> CassandraDaemon.stop() function which does a graceful shutdown by
>> >> stopping jmxServer and drains the node by the shutdown hook.
>> >>
>> >> /Simon
>> >>
>> >>
>> >> On 2017-10-13 20:42, Javier Canillas wrote:
>> >>> As far as I know, the nodetool stopdaemon is doing a "kill -9".
>> >>>
>> >>> Or did it change?
>> >>>
>> >>> 2017-10-12 23:49 GMT-03:00 Anshu Vajpayee
>> >>> <anshu.vajpa...@gmail.com <mailto:anshu.vajpa...@gmail.com>>:
>> >>>
>> >>> Why are you killing when we have nodetool stopdaemon ?
>> >>>
>> >>> On Fri, Oct 13, 2017 at 1:49 AM, Javier Canillas
>> >>> <javier.canil...@gmail.com
>> >>> <mailto:javier.canil...@gmail.com>> wrote:
>> >>>
>> >>> That's what I thought.
>> >>>
>> >>> Thanks!
>> >>>
>> >>> 2017-10-12 14:26 GMT-03:00 Hannu Kröger
>> >>> <hkro...@gmail.com <mailto:hkro...@gmail.com>>:
>> >>>
>> >>> Hi,
>> >>>
>> >>> Drain should be enough.  It stops accepting writes
>> >>> and after that cassandra can be safely shut down.
>> >>>
>> >>> Hannu
>> >>>
>> >>> On 12 October 2017 at 20:24:41, Javier Canillas
>> >>> (javier.canil...@gmail.com
>> >>> <mailto:javier.canil...@gmail.com>) wrote:
>> >>>
>> >>>> Hello everyone,
> >>

Re: Best approach to prepare to shutdown a cassandra node

2017-10-12 Thread Anshu Vajpayee
Why are you killing the process when we have nodetool stopdaemon?

On Fri, Oct 13, 2017 at 1:49 AM, Javier Canillas 
wrote:

> That's what I thought.
>
> Thanks!
>
> 2017-10-12 14:26 GMT-03:00 Hannu Kröger :
>
>> Hi,
>>
>> Drain should be enough.  It stops accepting writes and after that
>> cassandra can be safely shut down.
>>
>> Hannu
>>
>> On 12 October 2017 at 20:24:41, Javier Canillas (
>> javier.canil...@gmail.com) wrote:
>>
>> Hello everyone,
>>
>> I have some time working with Cassandra, but every time I need to
>> shutdown a node (for any reason like upgrading version or moving instance
>> to another host) I see several errors on the client applications (yes, I'm
>> using the official java driver).
>>
>> By the way, I'm starting C* as a stand-alone process
>> ,
>> and C* version is 3.11.0.
>>
>> The way I have implemented the shutdown process is something like the
>> following:
>>
>> # Drain all information from commitlog into sstables
>>
>> bin/nodetool drain
>>
>> cassandra_pid=`ps -ef|grep "java.*apache-cassandra"|grep -v "grep"|awk '{print $2}'`
>> if [ ! -z "$cassandra_pid" ] && [ "$cassandra_pid" -ne "1" ]; then
>>     echo "Asking Cassandra to shutdown (nodetool drain doesn't stop cassandra)"
>>     kill $cassandra_pid
>>
>>     echo -n "+ Checking it is down. "
>>     counter=10
>>     while [ "$counter" -ne 0 -a ! kill -0 $cassandra_pid > /dev/null 2>&1 ]
>>     do
>>         echo -n ". "
>>         ((counter--))
>>         sleep 1s
>>     done
>>     echo ""
>>     if ! kill -0 $cassandra_pid > /dev/null 2>&1; then
>>         echo "+ Its down."
>>     else
>>         echo "- Killing Cassandra."
>>         kill -9 $cassandra_pid
>>     fi
>> else
>>     echo "Care there was a problem finding Cassandra PID"
>> fi
>>
>> Should I add at the beginning the following lines?
>>
>> echo "shutdowing cassandra gracefully with: nodetool disable gossip"
>> $CASSANDRA_HOME/$CASSANDRA_APP/bin/nodetool disablegossip
>> echo "shutdowing cassandra gracefully with: nodetool disable binary
>> protocol"
>> $CASSANDRA_HOME/$CASSANDRA_APP/bin/nodetool disablebinary
>> echo "shutdowing cassandra gracefully with: nodetool thrift"
>> $CASSANDRA_HOME/$CASSANDRA_APP/bin/nodetool disablethrift
>>
>> The shutdown log is the following:
>>
>> WARN  [RMI TCP Connection(10)-127.0.0.1] 2017-10-12 14:20:52,343 StorageService.java:321 - Stopping gossip by operator request
>> INFO  [RMI TCP Connection(10)-127.0.0.1] 2017-10-12 14:20:52,344 Gossiper.java:1532 - Announcing shutdown
>> INFO  [RMI TCP Connection(10)-127.0.0.1] 2017-10-12 14:20:52,355 StorageService.java:2268 - Node /10.254.169.36 state jump to shutdown
>> INFO  [RMI TCP Connection(12)-127.0.0.1] 2017-10-12 14:20:56,141 Server.java:176 - Stop listening for CQL clients
>> INFO  [RMI TCP Connection(16)-127.0.0.1] 2017-10-12 14:20:59,472 StorageService.java:1442 - DRAINING: starting drain process
>> INFO  [RMI TCP Connection(16)-127.0.0.1] 2017-10-12 14:20:59,474 HintsService.java:220 - Paused hints dispatch
>> INFO  [RMI TCP Connection(16)-127.0.0.1] 2017-10-12 14:20:59,477 Gossiper.java:1532 - Announcing shutdown
>> INFO  [RMI TCP Connection(16)-127.0.0.1] 2017-10-12 14:20:59,480 StorageService.java:2268 - Node /127.0.0.1 state jump to shutdown
>> INFO  [RMI TCP Connection(16)-127.0.0.1] 2017-10-12 14:21:01,483 MessagingService.java:984 - Waiting for messaging service to quiesce
>> INFO  [ACCEPT-/192.168.6.174] 2017-10-12 14:21:01,485 MessagingService.java:1338 - MessagingService has terminated the accept() thread
>> INFO  [RMI TCP Connection(16)-127.0.0.1] 2017-10-12 14:21:02,095 HintsService.java:220 - Paused hints dispatch
>> INFO  [RMI TCP Connection(16)-127.0.0.1] 2017-10-12 14:21:02,111 StorageService.java:1442 - DRAINED
>>
>> Disabling Gossip seemed a good idea, but watching the logs, it may use it
>> to gracefully telling the other nodes he is going down, so I don't know if
>> it's good or bad idea.
>>
>> Disabling Thrift and Binary protocol should only avoid new connections,
>> but the one stablished and running should be attempted to finish.
>>
>> Any thoughts or comments?
>>
>> Thanks
>>
>> Javier.
>>
>>
>>
>


-- 
Regards,
Anshu
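
Putting the thread together, a minimal graceful-stop sequence looks something like the following (all standard nodetool commands; disablethrift only matters while the Thrift interface is still in use):

  nodetool disablebinary     # stop accepting new CQL client connections
  nodetool disablethrift     # stop accepting new Thrift client connections
  nodetool drain             # flush memtables; the node stops listening for writes
  nodetool stopdaemon        # stop the daemon (or send a plain SIGTERM to the pid; avoid kill -9)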


Compaction throughput and compaction tasks

2017-09-26 Thread Anshu Vajpayee
Hello -

I have a fairly generic question regarding compaction. How does Cassandra
internally determine the number of compaction tasks, and how is that affected
by the compaction throughput setting?

If we increase the compaction throughput, will the number of compaction tasks
completed per second increase for the same write load?

Finally, what is your recommendation for compaction throughput on SSDs?

-- 
Cheers,
Anshu
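
For reference, the knobs involved; the numbers below are illustrative, not recommendations:

  # cassandra.yaml
  #   compaction_throughput_mb_per_sec: 16    # total throttle across all compaction tasks; 0 disables it
  #   concurrent_compactors: 4                # how many compaction tasks can run in parallel
  # runtime equivalents and visibility:
  nodetool getcompactionthroughput
  nodetool setcompactionthroughput 64         # e.g. a higher throttle on SSD-backed nodes
  nodetool compactionstats                    # pending and active compaction tasks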


Restrict cassandra to number of cpus

2017-04-28 Thread Anshu Vajpayee
Hi All -
Is it possible to restrict Cassandra to a limited number of CPUs/cores on a
given box?

There is one JVM parameter to do that, but not all thread pools respect that
setting:

-Dcassandra.available_processors

Is there any other way to achieve this?
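
Beyond the JVM flag, the process can also be pinned at the OS level, which every thread then respects. A sketch assuming Linux and cores 0-7:

  # JVM-side hint (as noted above, not every pool honours it):
  JVM_OPTS="$JVM_OPTS -Dcassandra.available_processors=8"

  # OS-side pinning with taskset:
  taskset -c 0-7 bin/cassandra -f        # start Cassandra bound to cores 0-7
  taskset -cp 0-7 <cassandra_pid>        # or re-bind an already running process

A cpuset cgroup achieves the same thing in a more permanent way.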


Re: Help

2017-01-15 Thread Anshu Vajpayee
The setup is not on cloud. We have a few nodes in one DC (DC1) and the same
number of nodes in the other DC (DC2). We have a dedicated firewall in front
of the nodes.

Reads and writes happen at LOCAL_QUORUM, so they are not affected, but hints
accumulate from one DC to the other for replication, and hint deliveries are
also sporadically timing out in the logs.

nodetool describecluster didn't show any errors, but in some cases it took a
long time to return.

On Sun, Jan 15, 2017 at 3:01 AM, Aleksandr Ivanov  wrote:

> Could you share a bit your cluster setup? Do you use cloud for your
> deployment or dedicated firewalls in front of nodes?
>
> If gossip shows that everything is up it doesn't mean that all nodes can
> communicate with each other. I have noticed situations when TCP connection
> was killed by firewall and Cassandra didn't reconnect automatically. It can
> be easily detected with nodetool describecluster command.
>
> Aleksandr
>
>  shows - all nodes are up.
>>
>> But when  we perform writes , coordinator stores the hints. It means  -
>> coordinator was not able to deliver the writes to few nodes after meeting
>> consistency requirements.
>>
>> The nodes for which  writes were failing, are in different DC. Those
>> nodes do not have any load.
>>
>> Gossips shows everything is up.  I already set write timeout to 60 sec,
>> but no help.
>>
>> Can anyone encounter this scenario ? Network side everything is fine.
>>
>> Cassandra version is 2.1.13
>>
>> --
>> *Regards,*
>> *Anshu *
>>
>>
>>


-- 
Regards,
Anshu


Help

2017-01-08 Thread Anshu Vajpayee
Gossip shows that all nodes are up.

But when we perform writes, the coordinator stores hints, which means the
coordinator was not able to deliver the writes to some nodes even though the
consistency requirement was met.

The nodes for which writes are failing are in a different DC. Those nodes do
not have any load.

Gossip shows everything is up. I already set the write timeout to 60 seconds,
with no improvement.

Has anyone encountered this scenario? On the network side everything is fine.

The Cassandra version is 2.1.13.

-- 
Regards,
Anshu
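
A few standard checks that usually help narrow this kind of cross-DC hint build-up down (interpreting the output is, of course, situation-specific):

  nodetool describecluster      # schema agreement and whether this node can reach the others
  nodetool tpstats              # dropped MUTATION counts and the HintedHandoff stage backlog

Firewall idle-timeout settings on the inter-DC links are also worth checking, since a silently dropped TCP connection can look exactly like this.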


RE: Growing Hints

2017-01-03 Thread Anshu Vajpayee
Cassandra version 2.1.13
On Jan 4, 2017 12:34 AM, <sean_r_dur...@homedepot.com> wrote:

> Version number may help.
>
>
>
> Sean Durity
>
>
>
> *From:* Anshu Vajpayee [mailto:anshu.vajpa...@gmail.com]
> *Sent:* Tuesday, January 03, 2017 10:09 AM
> *To:* user@cassandra.apache.org
> *Subject:* Re: Growing Hints
>
>
>
> Anyone aware about  issue ?
>
> Hints are still growing although gossip and repair was successfull. Gossip
> is flowing without any issue as all nodes status is UN.
>
> Hints are growing and there is timed out message in log hinted handoff
> delivery.  Hints are not truncating after defined time period.
>
> Please let me know if you have any question. Thanks
>
>
>
>
>
>
> On Dec 29, 2016 10:06 AM, "Anshu Vajpayee" <anshu.vajpa...@gmail.com>
> wrote:
>
> Hello All
>
> We have one unusual issue on our cluster. We are seeing growing hints
> table on  node although all the nodes are up  and coming online with
> notetool status.
>
>
>
> I know  Cassandra  appends the hints in case if there is  write timeout
> for other nodes. In our case  all nodes are up and functional , Gossip is
> also flowing well.  Also write time out value is quite high in our cluster .
>
> Can anyone suggest what could be other possible reason for these growing
> hints ?
>
>
>
>
>
>
>
>
>
>
>
>
> --
>
> The information in this Internet Email is confidential and may be legally
> privileged. It is intended solely for the addressee. Access to this Email
> by anyone else is unauthorized. If you are not the intended recipient, any
> disclosure, copying, distribution or any action taken or omitted to be
> taken in reliance on it, is prohibited and may be unlawful. When addressed
> to our clients any opinions or advice contained in this Email are subject
> to the terms and conditions expressed in any applicable governing The Home
> Depot terms of business or client engagement letter. The Home Depot
> disclaims all responsibility and liability for the accuracy and content of
> this attachment and for any damages or losses arising from any
> inaccuracies, errors, viruses, e.g., worms, trojan horses, etc., or other
> items of a destructive nature, which may be contained in this attachment
> and shall not be liable for direct, indirect, consequential or special
> damages in connection with this e-mail message or its attachment.
>
>


Re: Growing Hints

2017-01-03 Thread Anshu Vajpayee
Is anyone aware of this issue?

Hints are still growing although gossip and repair were successful. Gossip is
flowing without any issue and all nodes show status UN.

Hints keep growing and there are timed-out hinted handoff delivery messages
in the log. Hints are not being truncated after the defined time period.

Please let me know if you have any questions. Thanks.







On Dec 29, 2016 10:06 AM, "Anshu Vajpayee" <anshu.vajpa...@gmail.com> wrote:

> Hello All
> We have one unusual issue on our cluster. We are seeing growing hints
> table on  node although all the nodes are up  and coming online with
> notetool status.
>
> I know  Cassandra  appends the hints in case if there is  write timeout
> for other nodes. In our case  all nodes are up and functional , Gossip is
> also flowing well.  Also write time out value is quite high in our cluster .
> Can anyone suggest what could be other possible reason for these growing
> hints ?
>
>
>
>
>
>


Growing Hints

2016-12-28 Thread Anshu Vajpayee
Hello All,
We have an unusual issue on our cluster. We are seeing a growing hints table
on a node although all the nodes are up and show as online in nodetool
status.

I know Cassandra stores hints when there is a write timeout for other nodes.
In our case all nodes are up and functional, gossip is flowing well, and the
write timeout value is quite high in our cluster.
Can anyone suggest what other possible reasons there could be for these
growing hints?
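
On 2.1.x hints are stored in the system.hints table, so the backlog can at least be inspected directly; a small sketch (dropping hints is only safe if the data will be repaired anyway):

  cqlsh -e "SELECT count(*) FROM system.hints;"   # how many undelivered hints are queued on this node
  nodetool tpstats                                # HintedHandoff stage activity and dropped mutations
  nodetool truncatehints                          # discard stored hints if they are known to be stale

Note that max_hint_window_in_ms only controls how long hints keep being written for a down node; it does not purge hints that are already stored.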


Re: Partition size

2016-09-12 Thread Anshu Vajpayee
Thanks Jeff. I got the answer now.
Is there any way to put a guardrail in place to avoid large partitions from
the Cassandra side? I know it is a modeling problem and Cassandra logs a
warning in system.log for large partitions, but I think there should be a way
to enforce a restriction from the Cassandra side.
On 12 Sep 2016 9:50 p.m., "Jeff Jirsa" <jji...@apache.org> wrote:

> On 2016-09-08 18:53 (-0700), Anshu Vajpayee <anshu.vajpa...@gmail.com>
> wrote:
> > Is there any way to get partition size for a  partition key ?
> >
>
> Anshu,
>
> The simple answer to your question is that it is not currently possible to
> get a partition size for an arbitrary key without quite a lot of work
> (basically you'd have to write a tool that iterated over the data on disk,
> which is nontrivial).
>
> There exists a ticket to expose this: https://issues.apache.org/
> jira/browse/CASSANDRA-12367
>
> It's not clear when that ticket will land, but I expect you'll see an API
> for getting the size of a partition key in the near future.
>
>
>


Partition size

2016-09-08 Thread Anshu Vajpayee
Is there any way to get the partition size for a given partition key?


Re: Read timeouts on primary key queries

2016-09-05 Thread Anshu Vajpayee
We have seen read timeout issues in Cassandra caused by a high droppable
tombstone ratio.

Please check whether the droppable tombstone ratio is high for your table.

On Mon, Sep 5, 2016 at 8:11 PM, Romain Hardouin  wrote:

> Yes dclocal_read_repair_chance will reduce the cross-DC traffic and
> latency, so you can swap the values ( https://issues.apache.org/
> jira/browse/CASSANDRA-7320 ). I guess the sstable_size_in_mb was set to
> 50 because back in the day (C* 1.0) the default size was way too small: 5
> MB. So maybe someone in your company tried "10 * the default" i.e. 50 MB.
> Now the default is 160 MB. I don't say to change the value but just keep in
> mind that you're using a small value here, it could help you someday.
>
> Regarding the cells, the histograms shows an *estimation* of the min, p50,
> ..., p99, max of cells based on SSTables metadata. On your screenshot, the
> Max is 4768. So you have a partition key with ~ 4768 cells. The p99 is
> 1109, so 99% of your partition keys have less than (or equal to) 1109
> cells.
> You can see these data of a given sstable with the tool sstablemetadata.
>
> Best,
>
> Romain
>
>
>
> On Monday, 5 September 2016 at 15:17, Joseph Tech wrote:
>
>
> Thanks, Romain . We will try to enable the DEBUG logging (assuming it
> won't clog the logs much) . Regarding the table configs, read_repair_chance
> must be carried over from older versions - mostly defaults. I think 
> sstable_size_in_mb
> was set to limit the max SSTable size, though i am not sure on the reason
> for the 50 MB value.
>
> Does setting dclocal_read_repair_chance help in reducing cross-DC traffic
> (haven't looked into this parameter, just going by the name).
>
> By the cell count definition : is it incremented based on the number of
> writes for a given name(key?) and value. This table is heavy on reads and
> writes. If so, the value should be much higher?
>
> On Mon, Sep 5, 2016 at 7:35 AM, Romain Hardouin 
> wrote:
>
> Hi,
>
> Try to put org.apache.cassandra.db. ConsistencyLevel at DEBUG level, it
> could help to find a regular pattern. By the way, I see that you have set a
> global read repair chance:
> read_repair_chance = 0.1
> And not the local read repair:
> dclocal_read_repair_chance = 0.0
> Is there any reason to do that or is it just the old (pre 2.0.9) default
> configuration?
>
> The cell count is the number of triplets: (name, value, timestamp)
>
> Also, I see that you have set sstable_size_in_mb at 50 MB. What is the
> rational behind this? (Yes I'm curious :-) ). Anyway your "SSTables per
> read" are good.
>
> Best,
>
> Romain
>
> On Monday, 5 September 2016 at 13:32, Joseph Tech wrote:
>
>
> Hi Ryan,
>
> Attached are the cfhistograms run within few mins of each other. On the
> surface, don't see anything which indicates too much skewing (assuming
> skewing ==keys spread across many SSTables) . Please confirm. Related to
> this, what does the "cell count" metric indicate ; didn't find a clear
> explanation in the documents.
>
> Thanks,
> Joseph
>
>
> On Thu, Sep 1, 2016 at 6:30 PM, Ryan Svihla  wrote:
>
> Have you looked at cfhistograms/tablehistograms your data maybe just
> skewed (most likely explanation is probably the correct one here)
>
> Regard,
>
> Ryan Svihla
>
> _
> From: Joseph Tech 
> Sent: Wednesday, August 31, 2016 11:16 PM
> Subject: Re: Read timeouts on primary key queries
> To: 
>
>
>
> Patrick,
>
> The desc table is below (only col names changed) :
>
> CREATE TABLE db.tbl (
> id1 text,
> id2 text,
> id3 text,
> id4 text,
> f1 text,
> f2 map,
> f3 map,
> created timestamp,
> updated timestamp,
> PRIMARY KEY (id1, id2, id3, id4)
> ) WITH CLUSTERING ORDER BY (id2 ASC, id3 ASC, id4 ASC)
> AND bloom_filter_fp_chance = 0.01
> AND caching = '{"keys":"ALL", "rows_per_partition":"NONE"}'
> AND comment = ''
> AND compaction = {'sstable_size_in_mb': '50', 'class':
> 'org.apache.cassandra.db. compaction. LeveledCompactionStrategy'}
> AND compression = {'sstable_compression': 'org.apache.cassandra.io.
> compress.LZ4Compressor'}
> AND dclocal_read_repair_chance = 0.0
> AND default_time_to_live = 0
> AND gc_grace_seconds = 864000
> AND max_index_interval = 2048
> AND memtable_flush_period_in_ms = 0
> AND min_index_interval = 128
> AND read_repair_chance = 0.1
> AND speculative_retry = '99.0PERCENTILE';
>
> and the query is select * from tbl where id1=? and id2=? and id3=? and
> id4=?
>
> The timeouts happen within ~2s to ~5s, while the successful calls have avg
> of 8ms and p99 of 15s. These times are seen from app side, the actual query
> times would be slightly lower.
>
> Is there a way to capture traces only when queries take longer than a
> specified duration? . We can't 
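
A quick way to check the droppable-tombstone estimate mentioned above is sstablemetadata; the path below is only an example layout for a 2.1-style data directory:

  sstablemetadata /var/lib/cassandra/data/my_ks/my_table-*/my_ks-my_table-ka-1-Data.db | grep -i droppable
  # prints a line like: Estimated droppable tombstones: 0.7

If the ratio is high, the tombstone_threshold compaction sub-property (default 0.2) and gc_grace_seconds determine when single-sstable tombstone compactions can actually purge them.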

Re: Why there is no native shutdown command in cassandra

2016-06-13 Thread Anshu Vajpayee
See https://issues.apache.org/jira/browse/CASSANDRA-12001 for reference.

On Mon, Jun 13, 2016 at 11:55 PM, Jake Luciani <jak...@gmail.com> wrote:

> If that's true it's a bug then. can you open a ticket and include the
> logs? https://issues.apache.org/jira/browse/CASSANDRA
>
> On Mon, Jun 13, 2016 at 2:19 PM, Anshu Vajpayee <anshu.vajpa...@gmail.com>
> wrote:
>
>> I just tested. It doesn't flush memtables like nodetool drain/flush
>> command. Means it only does crash for the node, no graceful shutdown.
>>
>>
>>
>> On Mon, Jun 13, 2016 at 10:51 PM, Jake Luciani <jak...@gmail.com> wrote:
>>
>>> Yeah same as drain.  Just exits at the end.
>>>
>>> On Mon, Jun 13, 2016 at 1:11 PM, Anshu Vajpayee <
>>> anshu.vajpa...@gmail.com> wrote:
>>>
>>>> Thanks for information.
>>>>
>>>> Does stopdaemon also flush memtables  and stop trift and CQL interface
>>>> before shutting down the daemon ?  does node also announce  shutting down
>>>> message  in ring  ?
>>>>
>>>>
>>>> On Mon, Jun 13, 2016 at 10:14 PM, Jake Luciani <jak...@gmail.com>
>>>> wrote:
>>>>
>>>>> If you want to understand why, it's because C* was designed to be
>>>>> crash-only.
>>>>>
>>>>> https://www.usenix.org/conference/hotos-ix/crash-only-software
>>>>>
>>>>> Since this is great for the project but bad for operators experience
>>>>> we have later added this stopdaemon command.
>>>>>
>>>>> On Mon, Jun 13, 2016 at 12:37 PM, Anshu Vajpayee <
>>>>> anshu.vajpa...@gmail.com> wrote:
>>>>>
>>>>>> As per Documentation(pasted as below), It does not stop Daemon . I
>>>>>> tested also.I was looking for graceful shutdown  for Cassandra
>>>>>> Daemon.Description
>>>>>> <https://docs.datastax.com/en/cassandra/2.1/cassandra/tools/toolsDrain.html?scroll=toolsDrain__description_unique_11>
>>>>>>
>>>>>> Flushes all memtables from the node to SSTables on disk. Cassandra
>>>>>> stops listening for connections from the client and other nodes. You need
>>>>>> to restart Cassandra after running nodetool drain. You typically use
>>>>>> this command before upgrading a node to a new version of Cassandra. To
>>>>>> simply flush memtables to disk, use nodetool flush.
>>>>>>
>>>>>> On Mon, Jun 13, 2016 at 10:00 PM, Jeff Jirsa <
>>>>>> jeff.ji...@crowdstrike.com> wrote:
>>>>>>
>>>>>>> `nodetool drain`
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>> *From: *Anshu Vajpayee <anshu.vajpa...@gmail.com>
>>>>>>> *Reply-To: *"user@cassandra.apache.org" <user@cassandra.apache.org>
>>>>>>> *Date: *Monday, June 13, 2016 at 9:28 AM
>>>>>>> *To: *"user@cassandra.apache.org" <user@cassandra.apache.org>
>>>>>>> *Subject: *Why there is no native shutdown command in cassandra
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>> Hi All
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>> Why we dont have native shutdown command in Cassandra ?
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>> Every software provides graceful shutdown command.
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>> ​Regards,
>>>>>>>
>>>>>>> Anshu​
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>> --
>>>>>> *Regards,*
>>>>>> *Anshu *
>>>>>>
>>>>>>
>>>>>>
>>>>>
>>>>>
>>>>> --
>>>>> http://twitter.com/tjake
>>>>>
>>>>
>>>>
>>>>
>>>> --
>>>> *Regards,*
>>>> *Anshu *
>>>>
>>>>
>>>>
>>>
>>>
>>> --
>>> http://twitter.com/tjake
>>>
>>
>>
>>
>> --
>> *Regards,*
>> *Anshu *
>>
>>
>>
>
>
> --
> http://twitter.com/tjake
>



-- 
Regards,
Anshu


Re: Why there is no native shutdown command in cassandra

2016-06-13 Thread Anshu Vajpayee
I just tested it. It doesn't flush memtables the way the nodetool drain/flush
commands do, which means it just crashes the node rather than shutting down
gracefully.



On Mon, Jun 13, 2016 at 10:51 PM, Jake Luciani <jak...@gmail.com> wrote:

> Yeah same as drain.  Just exits at the end.
>
> On Mon, Jun 13, 2016 at 1:11 PM, Anshu Vajpayee <anshu.vajpa...@gmail.com>
> wrote:
>
>> Thanks for information.
>>
>> Does stopdaemon also flush memtables  and stop trift and CQL interface
>> before shutting down the daemon ?  does node also announce  shutting down
>> message  in ring  ?
>>
>>
>> On Mon, Jun 13, 2016 at 10:14 PM, Jake Luciani <jak...@gmail.com> wrote:
>>
>>> If you want to understand why, it's because C* was designed to be
>>> crash-only.
>>>
>>> https://www.usenix.org/conference/hotos-ix/crash-only-software
>>>
>>> Since this is great for the project but bad for operators experience we
>>> have later added this stopdaemon command.
>>>
>>> On Mon, Jun 13, 2016 at 12:37 PM, Anshu Vajpayee <
>>> anshu.vajpa...@gmail.com> wrote:
>>>
>>>> As per Documentation(pasted as below), It does not stop Daemon . I
>>>> tested also.I was looking for graceful shutdown  for Cassandra Daemon.
>>>> Description
>>>> <https://docs.datastax.com/en/cassandra/2.1/cassandra/tools/toolsDrain.html?scroll=toolsDrain__description_unique_11>
>>>>
>>>> Flushes all memtables from the node to SSTables on disk. Cassandra
>>>> stops listening for connections from the client and other nodes. You need
>>>> to restart Cassandra after running nodetool drain. You typically use
>>>> this command before upgrading a node to a new version of Cassandra. To
>>>> simply flush memtables to disk, use nodetool flush.
>>>>
>>>> On Mon, Jun 13, 2016 at 10:00 PM, Jeff Jirsa <
>>>> jeff.ji...@crowdstrike.com> wrote:
>>>>
>>>>> `nodetool drain`
>>>>>
>>>>>
>>>>>
>>>>>
>>>>>
>>>>> *From: *Anshu Vajpayee <anshu.vajpa...@gmail.com>
>>>>> *Reply-To: *"user@cassandra.apache.org" <user@cassandra.apache.org>
>>>>> *Date: *Monday, June 13, 2016 at 9:28 AM
>>>>> *To: *"user@cassandra.apache.org" <user@cassandra.apache.org>
>>>>> *Subject: *Why there is no native shutdown command in cassandra
>>>>>
>>>>>
>>>>>
>>>>> Hi All
>>>>>
>>>>>
>>>>>
>>>>> Why we dont have native shutdown command in Cassandra ?
>>>>>
>>>>>
>>>>>
>>>>> Every software provides graceful shutdown command.
>>>>>
>>>>>
>>>>>
>>>>>
>>>>>
>>>>>
>>>>>
>>>>> ​Regards,
>>>>>
>>>>> Anshu​
>>>>>
>>>>>
>>>>>
>>>>>
>>>>>
>>>>
>>>>
>>>>
>>>> --
>>>> *Regards,*
>>>> *Anshu *
>>>>
>>>>
>>>>
>>>
>>>
>>> --
>>> http://twitter.com/tjake
>>>
>>
>>
>>
>> --
>> *Regards,*
>> *Anshu *
>>
>>
>>
>
>
> --
> http://twitter.com/tjake
>



-- 
Regards,
Anshu


Re: Why there is no native shutdown command in cassandra

2016-06-13 Thread Anshu Vajpayee
Thanks for the information.

Does stopdaemon also flush memtables and stop the Thrift and CQL interfaces
before shutting down the daemon? Does the node also announce a shutting-down
message in the ring?


On Mon, Jun 13, 2016 at 10:14 PM, Jake Luciani <jak...@gmail.com> wrote:

> If you want to understand why, it's because C* was designed to be
> crash-only.
>
> https://www.usenix.org/conference/hotos-ix/crash-only-software
>
> Since this is great for the project but bad for operators experience we
> have later added this stopdaemon command.
>
> On Mon, Jun 13, 2016 at 12:37 PM, Anshu Vajpayee <anshu.vajpa...@gmail.com
> > wrote:
>
>> As per Documentation(pasted as below), It does not stop Daemon . I tested
>> also.I was looking for graceful shutdown  for Cassandra Daemon.
>> Description
>> <https://docs.datastax.com/en/cassandra/2.1/cassandra/tools/toolsDrain.html?scroll=toolsDrain__description_unique_11>
>>
>> Flushes all memtables from the node to SSTables on disk. Cassandra stops
>> listening for connections from the client and other nodes. You need to
>> restart Cassandra after running nodetool drain. You typically use this
>> command before upgrading a node to a new version of Cassandra. To simply
>> flush memtables to disk, use nodetool flush.
>>
>> On Mon, Jun 13, 2016 at 10:00 PM, Jeff Jirsa <jeff.ji...@crowdstrike.com>
>> wrote:
>>
>>> `nodetool drain`
>>>
>>>
>>>
>>>
>>>
>>> *From: *Anshu Vajpayee <anshu.vajpa...@gmail.com>
>>> *Reply-To: *"user@cassandra.apache.org" <user@cassandra.apache.org>
>>> *Date: *Monday, June 13, 2016 at 9:28 AM
>>> *To: *"user@cassandra.apache.org" <user@cassandra.apache.org>
>>> *Subject: *Why there is no native shutdown command in cassandra
>>>
>>>
>>>
>>> Hi All
>>>
>>>
>>>
>>> Why we dont have native shutdown command in Cassandra ?
>>>
>>>
>>>
>>> Every software provides graceful shutdown command.
>>>
>>>
>>>
>>>
>>>
>>>
>>>
>>> ​Regards,
>>>
>>> Anshu​
>>>
>>>
>>>
>>>
>>>
>>
>>
>>
>> --
>> *Regards,*
>> *Anshu *
>>
>>
>>
>
>
> --
> http://twitter.com/tjake
>



-- 
Regards,
Anshu


Re: Why there is no native shutdown command in cassandra

2016-06-13 Thread Anshu Vajpayee
Thanks, I am able to run the stopdaemon option in version 2.1, but at log
level INFO it doesn't print anything about the shutdown in the log.

At log level DEBUG, it prints the shutdown information:

INFO  [RMI TCP Connection(4)-127.0.0.1] 2016-06-13 09:49:29,223
CassandraDaemon.java:410 - Cassandra shutting down...
INFO  [RMI TCP Connection(4)-127.0.0.1] 2016-06-13 09:49:29,223
ThriftServer.java:142 - Stop listening to thrift clients
INFO  [RMI TCP Connection(4)-127.0.0.1] 2016-06-13 09:49:29,235
Server.java:213 - Stop listening for CQL clients


On Mon, Jun 13, 2016 at 10:01 PM, DuyHai Doan <doanduy...@gmail.com> wrote:

> In Cassandra 3.x, I think there is a "nodetool stopdaemon" command
>
> On Mon, Jun 13, 2016 at 6:28 PM, Anshu Vajpayee <anshu.vajpa...@gmail.com>
> wrote:
>
>> Hi All
>>
>> Why we dont have native shutdown command in Cassandra ?
>>
>> Every software provides graceful shutdown command.
>>
>>
>>
>> ​Regards,
>> Anshu​
>>
>>
>>
>


-- 
Regards,
Anshu


Re: select query on entire primary key returning more than one row in result

2016-06-13 Thread Anshu Vajpayee
Were all the rows the same? If not, what was different?

What was the droppable tombstone ratio for that table/CF?

On Mon, Jun 13, 2016 at 6:11 PM, Siddharth Verma <
verma.siddha...@snapdeal.com> wrote:

> Running nodetool compact fixed the issue.
>
> Could someone help out as why it occurred.
>
>
>


-- 
Regards,
Anshu


Re: Why there is no native shutdown command in cassandra

2016-06-13 Thread Anshu Vajpayee
As per the documentation (pasted below), it does not stop the daemon. I
tested it as well. I was looking for a graceful shutdown of the Cassandra
daemon. Description:
<https://docs.datastax.com/en/cassandra/2.1/cassandra/tools/toolsDrain.html?scroll=toolsDrain__description_unique_11>

Flushes all memtables from the node to SSTables on disk. Cassandra stops
listening for connections from the client and other nodes. You need to
restart Cassandra after running nodetool drain. You typically use this
command before upgrading a node to a new version of Cassandra. To simply
flush memtables to disk, use nodetool flush.

On Mon, Jun 13, 2016 at 10:00 PM, Jeff Jirsa <jeff.ji...@crowdstrike.com>
wrote:

> `nodetool drain`
>
>
>
>
>
> *From: *Anshu Vajpayee <anshu.vajpa...@gmail.com>
> *Reply-To: *"user@cassandra.apache.org" <user@cassandra.apache.org>
> *Date: *Monday, June 13, 2016 at 9:28 AM
> *To: *"user@cassandra.apache.org" <user@cassandra.apache.org>
> *Subject: *Why there is no native shutdown command in cassandra
>
>
>
> Hi All
>
>
>
> Why we dont have native shutdown command in Cassandra ?
>
>
>
> Every software provides graceful shutdown command.
>
>
>
>
>
>
>
> ​Regards,
>
> Anshu​
>
>
>
>
>



-- 
Regards,
Anshu


Why there is no native shutdown command in cassandra

2016-06-13 Thread Anshu Vajpayee
Hi All,

Why don't we have a native shutdown command in Cassandra?

Every piece of software provides a graceful shutdown command.



​Regards,
Anshu​


Per node limit for Disk Space

2016-05-27 Thread Anshu Vajpayee
Hi All,
I have a question regarding the maximum disk space limit on a node.

As per DataStax, we can have at most 1 TB of disk space per node on
rotational disks and up to 5 TB on SSDs.

In your experience, what would be a reasonable amount of data per node
without putting too much stress on the node?





Thanks,