Sorry, disregard the schema ID. It's too early in the morning here ;)
On Tue, Nov 26, 2019 at 7:58 AM Shalom Sagges
wrote:
Hi Paul,
From the gossipinfo output, it looks like the node's IP address and
rpc_address are different.
/192.168.*187*.121 vs RPC_ADDRESS:192.168.*185*.121
You can also see that there's a schema disagreement between nodes, e.g.
schema_id on node001 is fd2dcb4b-ca62-30df-b8f2-d3fd774f2801 and on
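As an aside, a disagreement like this can be spotted mechanically. The sketch below is a rough, illustrative parser for text shaped like `nodetool gossipinfo` output; the second node address and its schema UUID in the sample are made up for demonstration:

```python
import re
from collections import defaultdict

def schema_versions(gossipinfo_text):
    """Map each schema UUID to the nodes reporting it, given text shaped
    like `nodetool gossipinfo` output. A healthy cluster has one key."""
    versions = defaultdict(list)
    node = None
    for line in gossipinfo_text.splitlines():
        line = line.strip()
        if line.startswith("/"):            # node header, e.g. "/192.168.187.121"
            node = line.lstrip("/")
        # SCHEMA line; some versions prefix the UUID with a generation number
        m = re.match(r"SCHEMA:(?:\d+:)?([0-9a-f-]{36})", line)
        if m and node:
            versions[m.group(1)].append(node)
    return dict(versions)

sample = """\
/192.168.187.121
  SCHEMA:fd2dcb4b-ca62-30df-b8f2-d3fd774f2801
/192.168.187.122
  SCHEMA:aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee
"""
for schema, nodes in schema_versions(sample).items():
    print(schema, "->", nodes)
```

More than one key in the result means the nodes disagree on the schema, which usually resolves itself once gossip is healthy again.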
Hello,
Check and compare the following:
1. The Java version should ideally match across all nodes in the cluster.
2. Check that port 7000 (gossip) is open between the nodes, using telnet or nc.
3. The system logs should contain clues about why gossip is failing.
Please confirm the above.
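For point 2, the probe can also be scripted; a minimal sketch using Python's stdlib socket module (the hostnames are placeholders, substitute your own nodes):

```python
import socket

def port_open(host, port, timeout=3.0):
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:  # covers refused connections, timeouts, DNS failures
        return False

# Probe the Cassandra gossip port (7000) on each peer.
# "node001"/"node002" are placeholder hostnames.
for host in ("node001", "node002"):
    print(f"{host}:7000", "open" if port_open(host, 7000) else "CLOSED")
```

A CLOSED result between two nodes that should gossip usually points at a firewall or security-group rule.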
NTP was restarted on the Cassandra nodes, but unfortunately I’m still getting
the same result: the restarted node does not appear to be rejoining the cluster.
Here’s another data point: “nodetool gossipinfo”, when run from the restarted
node (“node001”), shows a status of “normal”:
Hello,
As part of the final stages of our 2.2 --> 3.11 upgrades, one of our
clusters (on AWS/ 18 nodes/ m4.2xlarge) produced some post-upgrade fits. We
started getting spikes of Cassandra read and write timeouts even though the
overall metrics volumes were unchanged. As part of the upgrade investigation,
I’ve discovered that NTP is not running on any of these Cassandra nodes, and
that the timestamps are all over the map. Could this be causing my issue?
user@remote=> ansible pre-prod-cassandra -a date
node001.intra.myorg.org | CHANGED | rc=0 >>
Mon Nov 25 13:58:17 UTC 2019
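For context on why the skew matters: Cassandra resolves conflicting writes to the same cell by timestamp (last write wins), so a node whose clock runs ahead can make stale data beat genuinely newer writes. A toy illustration of that principle follows; it is not Cassandra code, just the resolution rule in miniature:

```python
# Toy model of timestamp-based conflict resolution (last write wins),
# showing how a fast clock lets stale data overwrite fresh data.
# Illustrative only; not Cassandra's implementation.

def lww_merge(a, b):
    """Keep the (value, timestamp) pair with the larger timestamp."""
    return a if a[1] >= b[1] else b

# Node A's clock is 120 s ahead: it wrote "old" at real time t=0 but
# stamped it t=120. Node B wrote "new" at real time t=60, stamped t=60.
write_from_skewed_node = ("old", 120)
later_correct_write = ("new", 60)

winner = lww_merge(write_from_skewed_node, later_correct_write)
print(winner)  # the stale value wins because of the skewed timestamp
```

So yes, unsynchronized clocks are worth fixing regardless, though they would not by themselves explain a node failing to rejoin the gossip ring.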