It sounds silly, but sometimes restarting the node that the other nodes
report as down one more time fixes the issue. This looks like a gossip
issue.
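
For example, on the affected node (a minimal sketch assuming the same
init script Paul used; adjust service names and sudo for your
environment):

user@node001=> sudo service cassandra stop
user@node001=> sudo service cassandra start
user@node001=> nodetool gossipinfo | grep STATUS

Then re-run "nodetool status" from one of the peers once gossip has had
a minute to settle. If the node still shows DN there, running "nodetool
disablegossip" followed by "nodetool enablegossip" on the affected node
sometimes prompts the peers to mark it up again.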

On Sun, Nov 24, 2019 at 7:19 AM Paul Mena <pm...@whoi.edu> wrote:

> I am in the process of doing a rolling restart on a 4-node cluster running
> Cassandra 2.1.9. I stopped and started Cassandra on node 1 via "service
> cassandra stop/start", and noted nothing unusual in either system.log or
> cassandra.log. Doing a "nodetool status" from node 1 shows all four nodes
> up:
>
> user@node001=> nodetool status
> Datacenter: datacenter1
> =======================
> Status=Up/Down
> |/ State=Normal/Leaving/Joining/Moving
> --  Address          Load       Tokens  Owns    Host ID                               Rack
> UN  192.168.187.121  538.95 GB  256     ?       c99cf581-f4ae-4aa9-ab37-1a114ab2429b  rack1
> UN  192.168.187.122  630.72 GB  256     ?       bfa07f47-7e37-42b4-9c0b-024b3c02e93f  rack1
> UN  192.168.187.123  572.73 GB  256     ?       273df9f3-e496-4c65-a1f2-325ed288a992  rack1
> UN  192.168.187.124  625.05 GB  256     ?       b8639cf1-5413-4ece-b882-2161bbb8a9c3  rack1
>
> But running the same command from any of the other 3 nodes shows node 1
> still down.
>
> user@node002=> nodetool status
> Datacenter: datacenter1
> =======================
> Status=Up/Down
> |/ State=Normal/Leaving/Joining/Moving
> --  Address          Load       Tokens  Owns    Host ID                               Rack
> DN  192.168.187.121  538.94 GB  256     ?       c99cf581-f4ae-4aa9-ab37-1a114ab2429b  rack1
> UN  192.168.187.122  630.72 GB  256     ?       bfa07f47-7e37-42b4-9c0b-024b3c02e93f  rack1
> UN  192.168.187.123  572.73 GB  256     ?       273df9f3-e496-4c65-a1f2-325ed288a992  rack1
> UN  192.168.187.124  625.04 GB  256     ?       b8639cf1-5413-4ece-b882-2161bbb8a9c3  rack1
>
> Is there something I can do to remedy the current situation, so that I
> can continue with the rolling restart?
>
>
