[
https://issues.apache.org/jira/browse/CASSANDRA-10969?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15102244#comment-15102244
]
adil chabaq commented on CASSANDRA-10969:
-----------------------------------------
Hi,
We are running into this problem on Cassandra 2.1.2: after a power outage we
did a full restart of two DCs of 5 nodes each. The nodes had been running fine
for a long time and nodetool status reported all nodes as UN, but since the
full restart nodetool reports inconsistent information about the nodes, and we
are seeing this message in the log: "received an invalid gossip generation for
peer /x.x.x.x; local generation = 1417171692, received generation = 1452847182"
(the check behind this message is sketched below).
We looked at the local and peers system tables but did not find where the
local generation is stored.
Do you know how to solve this? We are thinking about nodetool gossipinfo, but
the documentation is not clear to us.
thanks
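
For context, a minimal sketch of the check that appears to produce this log
line, paraphrasing the roughly one-year cap that CASSANDRA-8113 is described
as adding to Gossiper.java; the constant and method names here are
assumptions, not the exact source:

{code:java}
// Sketch only: a paraphrase of the gossip generation sanity check behind the
// "received an invalid gossip generation" message. Names are assumptions.
public final class GossipGenerationCapSketch
{
    // Generations look like Unix timestamps in seconds; a received value more
    // than about a year ahead of the stored one is treated as unbelievable.
    static final long MAX_GENERATION_DIFFERENCE = 86400L * 365;

    static boolean looksInvalid(long localGeneration, long remoteGeneration)
    {
        return localGeneration != 0
            && remoteGeneration > localGeneration + MAX_GENERATION_DIFFERENCE;
    }

    public static void main(String[] args)
    {
        long local  = 1417171692L; // stored generation from the log line (late Nov 2014)
        long remote = 1452847182L; // received generation from the log line (mid Jan 2016)

        long gapDays = (remote - local) / 86400L; // about 412 days, over the cap
        System.out.printf("gap = %d days, rejected = %b%n",
                          gapDays, looksInvalid(local, remote));
    }
}
{code}

With these values the received generation is roughly 412 days ahead of the
stored one, which is why the message appears even though the clocks are fine.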
> long-running cluster sees bad gossip generation when a node restarts
> --------------------------------------------------------------------
>
> Key: CASSANDRA-10969
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10969
> Project: Cassandra
> Issue Type: Bug
> Components: Coordination
> Environment: 4-node Cassandra 2.1.1 cluster, each node running on a
> Linux 2.6.32-431.20.3.el6.x86_64 VM
> Reporter: T. David Hudson
> Assignee: Joel Knighton
> Priority: Minor
> Fix For: 3.3, 2.1.x, 2.2.x, 3.0.x
>
>
> One of the nodes in a long-running Cassandra 2.1.1 cluster (not under my
> control) restarted. The remaining nodes are logging errors like this:
> "received an invalid gossip generation for peer xxx.xxx.xxx.xxx; local
> generation = 1414613355, received generation = 1450978722"
> The gap between the local and received generation numbers exceeds the
> one-year threshold added for CASSANDRA-8113. The system clocks are
> up-to-date for all nodes.
> If this is a bug, the latest released Gossiper.java code in 2.1.x, 2.2.x, and
> 3.0.x seems not to have changed the behavior that I'm seeing.
> I presume that restarting the remaining nodes will clear up the problem,
> whence the minor priority.
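
To illustrate why a long-running cluster hits this on an ordinary restart,
here is a small sketch; it assumes a node's generation is taken from the wall
clock in seconds at startup (the values quoted above do look like Unix
timestamps), which is an assumption rather than a statement about the exact
Gossiper.java code:

{code:java}
// Sketch only: assumes a gossip generation is the node's start time in Unix
// seconds, so a node that has been up longer than the one-year cap trips the
// check on its peers as soon as it restarts and advertises a fresh generation.
public final class LongUptimeRestartSketch
{
    static final long ONE_YEAR_SECONDS = 86400L * 365;

    public static void main(String[] args)
    {
        long generationPeersStillHold = 1414613355L; // ~late Oct 2014, from this report
        long generationAfterRestart   = 1450978722L; // ~late Dec 2015, from this report

        long gapSeconds = generationAfterRestart - generationPeersStillHold;
        System.out.printf("uptime before restart ~ %d days, exceeds one-year cap = %b%n",
                          gapSeconds / 86400L, gapSeconds > ONE_YEAR_SECONDS);

        // A node restarting right now would advertise roughly this generation:
        long freshGeneration = System.currentTimeMillis() / 1000L;
        System.out.println("generation for a restart right now ~ " + freshGeneration);
    }
}
{code}

Under that reading, the remaining nodes keep the stale stored generation until
they themselves restart, which matches the reporter's expectation that
restarting them clears the problem.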
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)