Sounds good. Thanks a lot for all your help!
Jaydeep
On Mon, Oct 23, 2023 at 3:30 PM Jeff Jirsa wrote:
> Not aware of any that survive node restart, though in the past, there were
> races around starting an expansion while one node was partitioned/down (and
> missing the initial gossip / UP). A heap dump could have told us a bit more
> conclusively, but it's hard to guess for now.
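To make that race concrete, here is a toy model (not Cassandra's gossip
protocol; all names and the update format are invented for illustration) of a
node that is partitioned while an expansion is announced and so ends up with a
stale view of the ring:

```python
# Toy sketch: an expansion announcement ("node D joins at token 150") is
# gossiped while node B is partitioned, so B's ring view never learns of D.

ring_updates = [("ADD", 150, "D")]  # hypothetical "new node joined" event

class Node:
    def __init__(self, name, ring, partitioned=False):
        self.name, self.ring, self.partitioned = name, dict(ring), partitioned

    def gossip(self, updates):
        if self.partitioned:  # a partitioned node misses the announcement
            return
        for op, token, node in updates:
            if op == "ADD":
                self.ring[token] = node

base = {100: "A", 200: "B", 300: "C"}
a = Node("A", base)
b = Node("B", base, partitioned=True)  # down/partitioned during expansion

for n in (a, b):
    n.gossip(ring_updates)

print(sorted(a.ring) != sorted(b.ring))  # prints True: views diverged
```

In this toy, A and B now hold different ring views, which is the kind of
divergence that can surface later as ownership disagreements between nodes.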
On Mon, Oct 23, 2023, Jaydeep wrote:
The issue persisted on a few nodes despite no changes to the topology. Even
restarting the nodes did not help; it was resolved only after we evacuated
those nodes.
Can you think of a possible situation under which this could happen?
Jaydeep
On Sat, Oct 21, 2023 at 10:25 AM, Jaydeep wrote:
Thanks, Jeff!
I will keep this thread updated on our findings.
Jaydeep
On Sat, Oct 21, 2023 at 9:37 AM Jeff Jirsa wrote:
> That code path was added to protect against invalid gossip states.
>
> For this logger to be issued, the coordinator receiving the query must
> identify a set of replicas holding the data to serve the read, and one of
> the selected replicas must disagree that it’s a replica based on its view
> of the ring.
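The check described above can be sketched as follows. This is a minimal
illustration, not Cassandra's actual replica-selection code; the ring layout,
token values, and node names are invented, and real Cassandra uses vnodes and
replication strategies rather than a single owner per token:

```python
# Toy sketch: the coordinator picks a replica from *its* ring view, and the
# receiving replica re-checks ownership against *its own* (possibly stale)
# ring view. A disagreement is what triggers the "not owned by the current
# replica" message.
from bisect import bisect_left

def owner(ring, token):
    """Owner of `token` on a sorted consistent-hash ring.

    `ring` is a sorted list of (token, node) pairs; a token belongs to the
    first node whose ring token is >= the lookup token (wrapping around)."""
    tokens = [t for t, _ in ring]
    i = bisect_left(tokens, token) % len(tokens)
    return ring[i][1]

coordinator_ring = [(100, "A"), (200, "B"), (300, "C")]
replica_b_ring   = [(100, "A"), (150, "D"), (300, "C")]  # divergent view

token = 120
chosen = owner(coordinator_ring, token)  # coordinator selects "B"
local = owner(replica_b_ring, token)     # B's own view says "D" owns it
if chosen != local:
    print("read request for a range not owned by the current replica")
```

In the sketch, the coordinator routes the read to B, but B's divergent ring
view assigns that token range to another node, so B rejects the request.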
Hi,
I am using Cassandra 4.0.6 in production and am receiving the following
error, which indicates that the Cassandra nodes have a mismatch in
token ownership.
Has anyone seen this issue before?
Received a read request from /XX.XX.XXX.XXX:Y for a range that is
not owned by the current replica