[ https://issues.apache.org/jira/browse/CASSANDRA-12405?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15426077#comment-15426077 ]
Benjamin Roth commented on CASSANDRA-12405:
-------------------------------------------
I guess this was due to a "suboptimal" configuration. The issue has not
recurred since we made some tuning efforts.
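
The comment does not say which settings were tuned. A common knob for gossip
flapping of this kind is the failure detector's conviction threshold in
cassandra.yaml; the snippet below is an illustrative guess, not the reporter's
actual change:

    # cassandra.yaml -- illustrative only; the ticket does not state which
    # settings were actually tuned. phi_convict_threshold controls how
    # aggressively the failure detector marks peers down (default 8;
    # higher values such as 10-12 are common on noisy networks or VMs).
    phi_convict_threshold: 12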
> node health status inconsistent
> -------------------------------
>
> Key: CASSANDRA-12405
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12405
> Project: Cassandra
> Issue Type: Bug
> Environment: Cassandra 3.9, Linux Xenial
> Reporter: Benjamin Roth
>
> At the moment we run a 4-node cluster with Cassandra 3.9.
> Due to another issue (hanging repairs) I am forced to restart nodes from time
> to time. Before I restart a node, all nodes are listed as UP by every other
> node.
> When I restart one node in the cluster, the health statuses of other nodes
> are affected as well.
> After having restarted node "cas1", the "nodetool status" output on all nodes
> looks like this during the startup phase of cas1:
> https://gist.github.com/brstgt/9be77470814d2fd160617a1c06579804
> After cas1 is up again, I restart cas2. During the startup phase of cas2 the
> status looks like this:
> https://gist.github.com/brstgt/d27ef540b2389b3a7d2d015ab83af547
> The nodetool output is accompanied by log messages like this:
> 2016-08-08T07:30:06+00:00 cas1 [GossipTasks: 1]
> org.apache.cassandra.gms.Gossiper Convicting /10.23.71.3 with status NORMAL -
> alive false
> 2016-08-08T07:30:06+00:00 cas1 [GossipTasks: 1]
> org.apache.cassandra.gms.Gossiper Convicting /10.23.71.2 with status NORMAL -
> alive false
> In extreme cases, nodes didn't even come up again after a restart, failing
> with an error that there were no seed hosts (sorry, I don't have the error
> message in the current logs), even though the seed host(s) were definitely up
> and running. A reboot fixed that issue; starting the node again and again did
> not help.
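
For context on the "Convicting ... alive false" log lines above:
org.apache.cassandra.gms.FailureDetector uses phi accrual failure detection,
and the Gossiper convicts an endpoint once phi for its heartbeat stream
exceeds the configured phi_convict_threshold. Below is a minimal sketch of
that check, simplified from Cassandra's approach; the class and method names
here are illustrative, not the project's actual code:

    import java.util.ArrayDeque;
    import java.util.Deque;

    public class PhiAccrualSketch {
        // log10(e): Cassandra scales phi this way so that the default
        // threshold of 8 corresponds to sensible conviction delays.
        private static final double PHI_FACTOR = 1.0 / Math.log(10.0);

        private final Deque<Long> intervals = new ArrayDeque<>(); // ms between heartbeats
        private long lastHeartbeatMillis = -1;
        private static final int MAX_SAMPLES = 1000;

        // Record a gossip heartbeat arrival for this endpoint.
        public synchronized void report(long nowMillis) {
            if (lastHeartbeatMillis >= 0) {
                intervals.addLast(nowMillis - lastHeartbeatMillis);
                if (intervals.size() > MAX_SAMPLES)
                    intervals.removeFirst();
            }
            lastHeartbeatMillis = nowMillis;
        }

        // phi grows with the time since the last heartbeat relative to the
        // observed mean interval; conviction happens when it crosses the
        // phi_convict_threshold configured in cassandra.yaml.
        public synchronized boolean shouldConvict(long nowMillis, double phiConvictThreshold) {
            if (lastHeartbeatMillis < 0 || intervals.isEmpty())
                return false;
            double mean = intervals.stream().mapToLong(Long::longValue).average().orElse(1.0);
            double phi = PHI_FACTOR * (nowMillis - lastHeartbeatMillis) / mean;
            return phi > phiConvictThreshold;
        }
    }

If heartbeats are delayed while a peer restarts (or by GC pauses or an
overloaded gossip stage), phi spikes on otherwise-healthy peers and they get
convicted exactly as in the log lines above, which would be consistent with
the reporter's observation that tuning made the issue go away.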