It will likely hang around in gossip for 3-15 days but then should
disappear. As long as it's not showing up in the cluster it should be OK.
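A quick way to watch for the entry aging out is to grep the gossip state; a sketch, where GHOST_IP is a placeholder for one of the decommissioned node's addresses:

```shell
# Watch for a decommissioned endpoint to drop out of gossip state.
# GHOST_IP is a placeholder; substitute the removed node's address.
GHOST_IP=172.29.8.8
nodetool gossipinfo | grep -A 3 "$GHOST_IP"
# While it lingers the entry typically carries STATUS:LEFT with an expiry
# timestamp; no output at all means gossip has forgotten the node.
```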
On 1 Nov. 2017 20:25, "Peng Xiao" <2535...@qq.com> wrote:
> Dear All,
>
> We have decommissioned a DC, but from system.log it's still gossiping
> INFO [GossipS
Is the ghost node in your system.peers on any node? Check every node (since
it's local strategy).
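Since system.peers is node-local, the check has to run against every live node; a sketch (the ghost address and host list are placeholders):

```shell
# system.peers is local to each node, so query every one of them.
GHOST_IP=172.29.8.8                          # placeholder ghost address
for host in 10.0.0.1 10.0.0.2 10.0.0.3; do   # your live nodes
  echo "== $host =="
  # peer is the partition key of system.peers, so this is a direct lookup
  cqlsh "$host" -e "SELECT peer, host_id FROM system.peers WHERE peer = '$GHOST_IP';"
done
```

Any row returned means that node still remembers the ghost in its local tables.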
Otherwise the node must still be in the endpoint list in memory and you'll have
to do a rolling restart. Make sure you drain before each restart though, since
it may be in your commit log somewhere too.
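The drain-then-restart sequence above, sketched for a single node (the service name varies by install; repeat node by node, waiting for each to come back before the next):

```shell
# Run on one node at a time.
nodetool drain                    # flush memtables; node stops accepting writes
sudo service cassandra restart    # adjust for your init system / package
nodetool status                   # confirm the node is back UN before moving on
```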
Solved. Finally found that there was one node with 172.29.8.8 in gossipinfo
and appearing incomplete in status:
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
--  Address     Load      Tokens  Owns  Host ID  Rack
UL  172.29.8.8  13.28 GB  256     9.5%  null     1
I'm thinking I'll have to find something similar for 2.0. I just don't
understand where the node is coming from!
Jeff
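For what it's worth, before `nodetool assassinate` existed (it arrived in 2.2), the usual last resort on 2.0 was calling the Gossiper's unsafeAssassinateEndpoint operation over JMX. A sketch using jmxterm (the jar path and the ghost IP are examples; this force-removes the endpoint from gossip without streaming any data, so use it only after removenode/decommission have failed):

```shell
# unsafeAssassinateEndpoint force-drops an endpoint from gossip state.
# jmxterm jar name/path and the target IP are placeholders.
java -jar jmxterm.jar -l localhost:7199 <<'EOF'
bean org.apache.cassandra.net:type=Gossiper
run unsafeAssassinateEndpoint 172.29.8.8
EOF
```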
On 1 July 2015 at 10:21, Vitalii Skakun wrote:
Hi,
just a guess, there was a possibility to purge gossip state on a node, at
least in version 1.2
http://docs.datastax.com/en/cassandra/1.2/cassandra/architecture/architectureGossipPurge_t.html
the trick was to add -Dcassandra.load_ring_state=false somehow to the jvm
parameters
I'm not sure if
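One way to pass that flag is through cassandra-env.sh (the path below is the common package location; adjust for your install, and remove the line again after the restart so the node resumes loading saved ring state):

```shell
# Skip loading the saved ring state on the next start only.
# /etc/cassandra is the Debian/RHEL package location; adjust as needed.
echo 'JVM_OPTS="$JVM_OPTS -Dcassandra.load_ring_state=false"' | \
  sudo tee -a /etc/cassandra/cassandra-env.sh
sudo service cassandra restart
```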
Thanks for the tip Aiman, but this node is not in the seed list anywhere.
Jeff
On 30 June 2015 at 18:16, Aiman Parvaiz wrote:
I was having exactly the same issue with the same version, check your seed list
and make sure it contains only the live nodes, I know that seeds are only read
when cassandra starts but updating the seed list to live nodes and then doing a
rolling restart fixed this issue for me.
I hope this hel
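Updating the seed list means editing cassandra.yaml on every node before the rolling restart Aiman describes; a sketch (the IPs and config path are examples, and the sed pattern assumes the stock quoted seeds line):

```shell
# Point seeds at live nodes only; run on every node, then rolling-restart.
# IPs and path are examples; the pattern matches the default quoted form.
sudo sed -i 's/- seeds: ".*"/- seeds: "10.0.0.1,10.0.0.2"/' /etc/cassandra/cassandra.yaml
grep 'seeds:' /etc/cassandra/cassandra.yaml   # sanity-check the change
```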