Yes, as I mentioned in my other thread, LOCAL_ONE does not allow the retry 
policy to take action if all local nodes are down.

Yes, I am using withLocalDc(). Here's the code (Scala):

  def getClusterBuilder: Builder = {
    val pool = new PoolingOptions
    pool.setConnectionsPerHost(HostDistance.LOCAL,
        config.coreConnectionsPerHost, config.maxConnectionsPerHost)

    val codecRegistry: CodecRegistry = new CodecRegistry()
        .register(InstantCodec.instance)
        .register(SimpleTimestampCodec.instance)

    // By specifying MaxValue here, we allow any & all hosts in remote DCs to be
    // used by queries when necessary. That allows TokenAwarePolicy to choose the
    // appropriate nodes in remote DCs.
    val maxHostsToUsePerRemoteDc = Int.MaxValue

    // We have nodes from the remote DC in the initial list (so that we can
    // tolerate DC failover), so we have to specify the local DC explicitly.
    val dcAwarePolicy = DCAwareRoundRobinPolicy.builder()
        .withLocalDc(config.localDc)
        .withUsedHostsPerRemoteDc(maxHostsToUsePerRemoteDc)
        .build()

    val shuffleReplicas = true
    val builder = Cluster.builder()
        .withClusterName(config.clusterName)
        .withPoolingOptions(pool)
        .withLoadBalancingPolicy(new TokenAwarePolicy(dcAwarePolicy, shuffleReplicas))
        .withRetryPolicy(DowngradingConsistencyRetryPolicy.INSTANCE)
        .withCodecRegistry(codecRegistry)

    val contactPoints = config.contactPointsProvider.getContactPoints.asScala
    contactPoints.foreach(builder.addContactPoint)

    builder
  }
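To make the first point concrete: the fallback behavior I'd want from the retry policy is roughly the following. This is only a minimal self-contained sketch of the decision logic (in Java, the driver's native language), using hypothetical stand-in types rather than the driver's actual ConsistencyLevel/RetryDecision classes from com.datastax.driver.core:

```java
// Hypothetical stand-ins for the driver's types; this models only the
// downgrade decision, not the full RetryPolicy interface.
enum Cl { LOCAL_ONE, ONE }

final class FallbackSketch {
    // Returns the consistency level to retry at, or null to rethrow.
    // On the first UnavailableException at LOCAL_ONE (i.e. no live local
    // replicas), downgrade to ONE so a remote replica can serve the request.
    static Cl onUnavailable(Cl requested, int nbRetry) {
        if (nbRetry == 0 && requested == Cl.LOCAL_ONE) {
            return Cl.ONE;
        }
        return null; // already retried, or nothing lower to downgrade to
    }
}
```

The point being that the application asks for LOCAL_ONE in the common case and only pays cross-DC latency when the local DC is actually unavailable.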

I will have to turn up the logging in order to see the log message you refer to. But it seems to me that the DC config is the same regardless of whether I use ONE or LOCAL_ONE, so I don't think it would make a difference. From what I've seen, I'd expect all the non-local nodes to be listed in that message. But I'll see what I can find.

Thanks for your responses! I posted to the other list as you suggested: 
https://groups.google.com/a/lists.datastax.com/forum/#!topic/java-driver-user/o0GVBjFCHCA

From: Nate McCall <n...@thelastpickle.com>
Reply-To: "user@cassandra.apache.org" <user@cassandra.apache.org>
Date: Tuesday, March 21, 2017 at 7:16 PM
To: Cassandra Users <user@cassandra.apache.org>
Subject: Re: ONE has much higher latency than LOCAL_ONE



On Wed, Mar 22, 2017 at 1:11 PM, Nate McCall <n...@thelastpickle.com> wrote:


On Wed, Mar 22, 2017 at 12:48 PM, Shannon Carey <sca...@expedia.com> wrote:
>
> The cluster is in two DCs, and yes the client is deployed locally to each DC.

First off, what is the goal of using ONE instead of LOCAL_ONE? If it's failover, this could be addressed with a RetryPolicy starting with LOCAL_ONE and falling back to ONE.


Just read your previous thread about this. That's pretty unintuitive and counter to the way I remember that working (though admittedly, it's been a while).

Do please open a thread on the driver mailing list; I'm curious about the response.
