In terms of turning it into Ansible, it's going to depend a lot on how you
manage the physical layer as well as replication/consistency. Currently, I
just use groups per "rack". If you have an API-accessible CMDB you could
probably pull the physical location from there and translate that to
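A minimal sketch of the "groups per rack" approach, as a hypothetical static INI inventory (hostnames, rack names, and the cassandra_rack variable are all made up for illustration):

    # Hypothetical Ansible inventory: one group per physical rack.
    [rack_a1]
    cassandra-01.example.com
    cassandra-02.example.com

    [rack_b2]
    cassandra-03.example.com
    cassandra-04.example.com

    [cassandra:children]
    rack_a1
    rack_b2

    [rack_a1:vars]
    cassandra_rack=a1

    [rack_b2:vars]
    cassandra_rack=b2

Each group's cassandra_rack variable could then be templated into cassandra-rackdc.properties; with a CMDB, the same inventory could be generated by a dynamic inventory script instead of being maintained by hand.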
Will do.
On Tue, Jun 7, 2022 at 6:12 PM Jeff Jirsa wrote:
> This deserves a JIRA ticket please.
>
> (I assume the sending host is randomly choosing the bad IP and blocking on
> it for some period of time, causing other tasks to pile up, but it should
> be investigated as a regression).
This deserves a JIRA ticket please.
(I assume the sending host is randomly choosing the bad IP and blocking on
it for some period of time, causing other tasks to pile up, but it should
be investigated as a regression).
On Tue, Jun 7, 2022 at 7:52 AM Gil Ganz wrote:
> Yes, I know the issue
Yes, I know the issue with the peers table; we had it in different
clusters. In this case, it appears the cause of the problem was indeed a bad
IP in the seed list.
After removing it from all nodes and reloading seeds, running a rolling
restart does not cause any gossip issues, and in general the
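A bad entry like that can be caught before it ever reaches gossip. A small sketch (a hypothetical helper, not from the thread) that checks the comma-separated seeds string from cassandra.yaml with Python's ipaddress module, assuming seeds are listed as IP addresses (as is typical) rather than hostnames:

```python
import ipaddress

def invalid_seeds(seeds: str) -> list[str]:
    """Return entries in a comma-separated seed list that are not
    syntactically valid IP addresses (hostnames would be flagged too)."""
    bad = []
    for entry in seeds.split(","):
        entry = entry.strip()
        if not entry:
            continue
        try:
            ipaddress.ip_address(entry)
        except ValueError:
            bad.append(entry)
    return bad

# Example: the second entry has out-of-range octets.
print(invalid_seeds("10.0.0.1, 123.456.789.012"))  # ['123.456.789.012']
```

Running something like this against each node's configured seed list would flag a malformed address before a rolling restart, rather than after gossip starts misbehaving.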
Regarding the "ghost IP", you may want to check the system.peers_v2
table by doing:

  select * from system.peers_v2 where peer = '123.456.789.012';

I've seen this (non-)issue many times, and I had to do

  delete from system.peers_v2 where peer = ...;

to fix it, as on our client side, the Python