Even though the first DNS refresh happens too early to see the new address, RoundRobin will keep trying to connect to the old address for as long as it is still returned (subject to the exponential back-off of Subchannel reconnections). Those connection attempts will of course fail, but each failure triggers another DNS refresh, so you will eventually pick up the new address.
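For reference, this is roughly how such a channel is set up. A minimal sketch, assuming a recent grpc-java release: the target name is a made-up example of a headless Kubernetes service, and defaultLoadBalancingPolicy is the newer API (older releases configured the policy through the now-deprecated loadBalancerFactory):

import io.grpc.ManagedChannel;
import io.grpc.ManagedChannelBuilder;

public class RoundRobinChannel {
  public static void main(String[] args) {
    // "dns:///" explicitly selects the DnsNameResolver; a headless
    // Kubernetes service resolves to one A record per pod.
    ManagedChannel channel = ManagedChannelBuilder
        .forTarget("dns:///my-service.default.svc.cluster.local:50051")
        .defaultLoadBalancingPolicy("round_robin")
        .usePlaintext()
        .build();
    // Create stubs against `channel` as usual; RoundRobin spreads RPCs
    // across all resolved addresses and reconnects with exponential back-off.
  }
}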
If you have waited long enough and still have not seen the new address, it may be due to the TTL of the DNS record or, more likely, the JVM's DNS caching <https://docs.aws.amazon.com/sdk-for-java/v1/developer-guide/java-dg-jvm-ttl.html>; see the sketch after the quoted message below for capping that cache.

On Tuesday, January 15, 2019 at 2:36:39 PM UTC-8, Yee-Ning Cheng wrote:

> Hi,
>
> I have a gRPC client using the default DnsNameResolver and
> RoundRobinLoadBalancer that is connected to gRPC servers on Kubernetes
> using the Kube DNS endpoint. The servers are deployed as Kube pods and
> may fail. I see that when a pod fails, onStateChange gets called to
> refresh the DnsNameResolver. The problem is that the new Kube pod spun
> up in the old pod's place is not up yet when the resolver refreshes the
> subchannel state, so the resolver doesn't see it. Thus, the client never
> sees the new pod and does not connect to it.
>
> Is there a configuration I am missing, or is there a way to refresh the
> resolver on a scheduled timer?
>
> Thanks,
>
> Yee-Ning
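To rule out the JVM cache, you can cap the lookup TTL early in startup, before the first InetAddress lookup. A minimal sketch; the two property names are standard JDK security properties, and the TTL values here are arbitrary examples rather than anything recommended in this thread:

import java.security.Security;

public class JvmDnsTtl {
  public static void main(String[] args) {
    // Cap how long the JVM caches successful DNS lookups (seconds).
    // With a SecurityManager installed, some JVMs otherwise cache forever.
    Security.setProperty("networkaddress.cache.ttl", "30");
    // Failed lookups are cached under a separate property.
    Security.setProperty("networkaddress.cache.negative.ttl", "5");
    // ...build the gRPC channel after this point...
  }
}

The same properties can also be set once in the JRE's java.security file, which avoids touching application code.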
