I just configured a 3-node cluster this way and was able to reproduce the
warning message:

cqlsh> select peer, rpc_address from system.peers;

 peer      | rpc_address
-----------+-------------
 127.0.0.3 |   127.0.0.1
 127.0.0.2 |   127.0.0.1

(2 rows)

cqlsh> select rpc_address from system.local;

 rpc_address
-------------
   127.0.0.1

10:22:40.399 [s0-admin-0] WARN  c.d.o.d.i.c.metadata.DefaultMetadata - [s0] Unexpected error while refreshing token map, keeping previous version
java.lang.IllegalArgumentException: Multiple entries with same key: Murmur3Token(-100881582699237014)=/127.0.0.1:9042 and Murmur3Token(-100881582699237014)=/127.0.0.1:9042
    at com.datastax.oss.driver.shaded.guava.common.collect.ImmutableMap.conflictException(ImmutableMap.java:215)
    at com.datastax.oss.driver.shaded.guava.common.collect.ImmutableMap.checkNoConflict(ImmutableMap.java:209)
    at com.datastax.oss.driver.shaded.guava.common.collect.RegularImmutableMap.checkNoConflictInKeyBucket(RegularImmutableMap.java:147)
    at com.datastax.oss.driver.shaded.guava.common.collect.RegularImmutableMap.fromEntryArray(RegularImmutableMap.java:110)
    at com.datastax.oss.driver.shaded.guava.common.collect.ImmutableMap$Builder.build(ImmutableMap.java:393)
    at com.datastax.oss.driver.internal.core.metadata.token.DefaultTokenMap.buildTokenToPrimaryAndRing(DefaultTokenMap.java:261)
    at com.datastax.oss.driver.internal.core.metadata.token.DefaultTokenMap.build(DefaultTokenMap.java:57)
    at com.datastax.oss.driver.internal.core.metadata.DefaultMetadata.rebuildTokenMap(DefaultMetadata.java:146)
    at com.datastax.oss.driver.internal.core.metadata.DefaultMetadata.withNodes(DefaultMetadata.java:104)
    at com.datastax.oss.driver.internal.core.metadata.InitialNodeListRefresh.compute(InitialNodeListRefresh.java:96)
    at com.datastax.oss.driver.internal.core.metadata.MetadataManager.apply(MetadataManager.java:475)
    at com.datastax.oss.driver.internal.core.metadata.MetadataManager$SingleThreaded.refreshNodes(MetadataManager.java:299)
    at com.datastax.oss.driver.internal.core.metadata.MetadataManager$SingleThreaded.access$1700(MetadataManager.java:265)
    at com.datastax.oss.driver.internal.core.metadata.MetadataManager.lambda$refreshNodes$0(MetadataManager.java:155)
    at java.util.concurrent.CompletableFuture.uniApply(CompletableFuture.java:602)
    at java.util.concurrent.CompletableFuture$UniApply.tryFire(CompletableFuture.java:577)
    at java.util.concurrent.CompletableFuture$Completion.run(CompletableFuture.java:442)
    at io.netty.channel.DefaultEventLoop.run(DefaultEventLoop.java:54)
    at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:905)
    at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
    at java.lang.Thread.run(Thread.java:748)

Interestingly enough, version 3 of the driver only recognizes one node,
whereas version 4 detects the three nodes separately.  This probably isn't a
scenario that was given a lot of thought, since it is a misconfiguration.
I'll think about how it should be handled and log tickets in any case; it
would be nice to surface more clearly to the user that something isn't
right.
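
In the meantime, here's a rough sketch of how to compare what the server
advertises against what the driver actually builds its node metadata from,
using the 4.x API (the contact point and datacenter name below are just
placeholders for your environment):

import com.datastax.oss.driver.api.core.CqlSession;
import com.datastax.oss.driver.api.core.cql.Row;
import java.net.InetSocketAddress;

public class PeerCheck {
  public static void main(String[] args) {
    // Placeholder contact point and datacenter; adjust for your cluster.
    try (CqlSession session = CqlSession.builder()
        .addContactPoint(new InetSocketAddress("127.0.0.1", 9042))
        .withLocalDatacenter("datacenter1")
        .build()) {

      // What the server advertises: each peer should report its own rpc_address.
      for (Row row : session.execute("SELECT peer, rpc_address FROM system.peers")) {
        System.out.println("peer=" + row.getInetAddress("peer")
            + " rpc_address=" + row.getInetAddress("rpc_address"));
      }

      // What the driver ended up with after node discovery.
      session.getMetadata().getNodes().values().forEach(System.out::println);
    }
  }
}

If every peer row reports the same rpc_address, the nodes all end up
advertised behind one endpoint, which appears to be exactly what trips up
the token map refresh above.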

Can you please confirm, when you have a chance, that this is indeed a
configuration issue with rpc_address?  Just to make sure I'm not ignoring a
possible bug ;)
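
For reference, what I'd expect is each node advertising its own address in
its cassandra.yaml, roughly along these lines (using the addresses from your
nodetool output purely as an illustration):

rpc_address: 10.73.66.36
# or, if binding to all interfaces:
# rpc_address: 0.0.0.0
# broadcast_rpc_address: 10.73.66.36

with the other two nodes using 10.73.66.100 and 10.73.67.196 respectively.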

Thanks,
Andy


On Thu, Jun 20, 2019 at 10:20 AM Andy Tolbert <andrew.tolb...@datastax.com>
wrote:

> One thing that strikes me is that the endpoint reported is '127.0.0.1'.
> Is it possible that you have rpc_address set to 127.0.0.1 on each of your
> three nodes in cassandra.yaml?  The driver uses the system.peers table to
> identify nodes in the cluster and associates them by rpc_address.  Can you
> verify this by executing 'select peer, rpc_address from system.peers' to
> see what is being reported as the rpc_address and let me know?
>
> In any case, the driver should probably handle this better; I'll create a
> driver ticket.
>
> Thanks,
> Andy
>
> On Thu, Jun 20, 2019 at 10:03 AM Jeff Jirsa <jji...@gmail.com> wrote:
>
>> There’s a reasonable chance this is a bug in the DataStax driver - may
>> want to start there when debugging.
>>
>> It’s also just a warning, and the two entries with the same token are the
>> same endpoint, which doesn’t seem concerning to me, but I don’t know the
>> DataStax driver that well.
>>
>> On Jun 20, 2019, at 7:40 AM, Котельников Александр <a.kotelni...@crpt.ru>
>> wrote:
>>
>> It appears that no such warning is issued if I connect to Cassandra
>> from a remote server rather than locally.
>>
>> From: Котельников Александр <a.kotelni...@crpt.ru>
>> Reply-To: "user@cassandra.apache.org" <user@cassandra.apache.org>
>> Date: Thursday, 20 June 2019 at 10:46
>> To: "user@cassandra.apache.org" <user@cassandra.apache.org>
>> Subject: Unexpected error while refreshing token map, keeping previous
>> version (IllegalArgumentException: Multiple entries with same key)?
>>
>> Hey!
>>
>> I’ve just configured a test 3-node Cassandra cluster and run a very
>> trivial Java test against it.
>>
>> I see the following warning from java-driver on each CqlSession
>> initialization:
>>
>> 13:54:13.913 [loader-admin-0] WARN  c.d.o.d.i.c.metadata.DefaultMetadata - [loader] Unexpected error while refreshing token map, keeping previous version (IllegalArgumentException: Multiple entries with same key: Murmur3Token(-1060405237057176857)=/127.0.0.1:9042 and Murmur3Token(-1060405237057176857)=/127.0.0.1:9042)
>>
>> What does it mean? Why?
>>
>> Cassandra 3.11.4, driver 4.0.1.
>>
>> nodetool status
>>
>> Datacenter: datacenter1
>> =======================
>> Status=Up/Down
>> |/ State=Normal/Leaving/Joining/Moving
>> --  Address       Load        Tokens  Owns (effective)  Host ID                               Rack
>> UN  10.73.66.36   419.36 MiB  256     100.0%            fafa2737-9024-437b-9a59-c1c037bce244  rack1
>> UN  10.73.66.100  336.47 MiB  256     100.0%            d5323ad0-f8cd-42d4-b34d-9afcd002ea47  rack1
>> UN  10.73.67.196  336.4 MiB   256     100.0%            74dffe0c-32a4-4071-8b36-5ada5afa4a7d  rack1
>>
>> The issue persists if I reset the cluster; only the token value changes.
>>
>> Alexander
>>
