Hi,

I have a gRPC client using the default DnsNameResolver and 
RoundRobinLoadBalancer, connected to gRPC servers on Kubernetes through 
the Kube DNS endpoint. The servers are deployed as Kubernetes pods and may 
fail. I can see that when a pod fails, onStateChange is called and the 
DnsNameResolver refreshes. The problem is that the replacement pod spun up 
in the failed pod's place is not yet running at the moment the resolver 
refreshes the subchannel state, so the resolver never sees the new pod's 
address, and the client never connects to it.

Is there a configuration option I am missing, or is there a way to refresh 
the resolver on a scheduled timer?
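For context, here is roughly what I have in mind. This is only a sketch: the periodic "refresh" action is stood in by a plain Runnable so the snippet is self-contained, and the idea would be to pass something like `() -> channel.enterIdle()` (ManagedChannel#enterIdle() is an experimental grpc-java API that pushes the channel to idle, so the next RPC triggers re-resolution). Whether that is the sanctioned way to force a re-resolve is exactly what I'm asking.

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

// Sketch: periodically run a "refresh" action so the next RPC re-resolves DNS.
// The action itself (e.g. channel.enterIdle()) is supplied by the caller, so
// this class has no gRPC dependency.
public class ResolverRefresher {
    private final ScheduledExecutorService scheduler =
            Executors.newSingleThreadScheduledExecutor();

    // Schedule the refresh action at a fixed period, e.g. every 30 seconds.
    public void start(Runnable refresh, long period, TimeUnit unit) {
        scheduler.scheduleAtFixedRate(refresh, period, period, unit);
    }

    public void stop() {
        scheduler.shutdownNow();
    }
}
```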

Thanks,

Yee-Ning

-- 
You received this message because you are subscribed to the Google Groups 
"grpc.io" group.