Hello team,

Just to add to the discussion: one may run
nodetool disablebinary, followed by nodetool disablethrift, followed by
nodetool drain.
nodetool drain also does the work of nodetool flush, plus announcing to the
cluster that the node is down and no longer accepting traffic.
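
A minimal sketch of that sequence as shell commands (the final "sudo service
cassandra stop" is an assumption based on the init-style service management
used elsewhere in this thread):

    # Stop accepting CQL (native protocol) client connections
    nodetool disablebinary
    # Stop accepting Thrift client connections
    nodetool disablethrift
    # Flush memtables to disk and stop listening for further connections
    nodetool drain
    # Finally, stop the Cassandra process itself
    sudo service cassandra stop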

Thanks



On Mon, 25 Nov, 2019, 12:55 AM Surbhi Gupta, <surbhi.gupt...@gmail.com>
wrote:

> Before shutting Cassandra down, nodetool drain should be executed first. As
> soon as you run nodetool drain, the other nodes will see this node as down
> and no new traffic will come to it.
> I generally give a 10-second gap between nodetool drain and the Cassandra
> stop.
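>
> A rough sketch of that ordering (the service command is an assumption based
> on how the nodes are managed elsewhere in this thread; the 10-second pause
> is a habit, not a hard requirement):
>
>     # Flush memtables and stop accepting traffic
>     nodetool drain
>     # Give the rest of the cluster a moment to register the node as down
>     sleep 10
>     sudo service cassandra stop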
>
> On Sun, Nov 24, 2019 at 9:52 AM Paul Mena <pm...@whoi.edu> wrote:
>
>> Thank you for the replies. I had made no changes to the config before the
>> rolling restart.
>>
>>
>> I can try another restart but was wondering if I should do it
>> differently. I had simply done "service cassandra stop" followed by
>> "service cassandra start".  Since then I've seen some suggestions to
>> precede the shutdown with "nodetool disablegossip" and/or "nodetool drain".
>> Are these commands advisable? Are any other commands recommended either
>> before the shutdown or after the startup?
>>
>>
>> Thanks again!
>>
>>
>> Paul
>> ------------------------------
>> *From:* Naman Gupta <naman.gu...@girnarsoft.com>
>> *Sent:* Sunday, November 24, 2019 11:18:14 AM
>> *To:* user@cassandra.apache.org
>> *Subject:* Re: Cassandra is not showing a node up hours after restart
>>
>> Did you change the datacenter name or make any other config changes before
>> the rolling restart?
>>
>> On Sun, Nov 24, 2019 at 8:49 PM Paul Mena <pm...@whoi.edu> wrote:
>>
>>> I am in the process of doing a rolling restart on a 4-node cluster
>>> running Cassandra 2.1.9. I stopped and started Cassandra on node 1 via
>>> "service cassandra stop/start", and noted nothing unusual in either
>>> system.log or cassandra.log. Doing a "nodetool status" from node 1 shows
>>> all four nodes up:
>>>
>>> user@node001=> nodetool status
>>> Datacenter: datacenter1
>>> =======================
>>> Status=Up/Down
>>> |/ State=Normal/Leaving/Joining/Moving
>>> --  Address          Load       Tokens  Owns    Host ID                               Rack
>>> UN  192.168.187.121  538.95 GB  256     ?       c99cf581-f4ae-4aa9-ab37-1a114ab2429b  rack1
>>> UN  192.168.187.122  630.72 GB  256     ?       bfa07f47-7e37-42b4-9c0b-024b3c02e93f  rack1
>>> UN  192.168.187.123  572.73 GB  256     ?       273df9f3-e496-4c65-a1f2-325ed288a992  rack1
>>> UN  192.168.187.124  625.05 GB  256     ?       b8639cf1-5413-4ece-b882-2161bbb8a9c3  rack1
>>>
>>> But running the same command from any of the other 3 nodes shows node 1
>>> still down.
>>>
>>> user@node002=> nodetool status
>>> Datacenter: datacenter1
>>> =======================
>>> Status=Up/Down
>>> |/ State=Normal/Leaving/Joining/Moving
>>> --  Address          Load       Tokens  Owns    Host ID                               Rack
>>> DN  192.168.187.121  538.94 GB  256     ?       c99cf581-f4ae-4aa9-ab37-1a114ab2429b  rack1
>>> UN  192.168.187.122  630.72 GB  256     ?       bfa07f47-7e37-42b4-9c0b-024b3c02e93f  rack1
>>> UN  192.168.187.123  572.73 GB  256     ?       273df9f3-e496-4c65-a1f2-325ed288a992  rack1
>>> UN  192.168.187.124  625.04 GB  256     ?       b8639cf1-5413-4ece-b882-2161bbb8a9c3  rack1
>>>
>>> Is there something I can do to remedy the current situation, so that I
>>> can continue with the rolling restart?
>>>
>>>
