I’m observing the following behavior: Service S1 (a Java microservice) 
communicates with Service S2 (also a Java microservice) using gRPC unary 
calls, and both services run in k8s. The gRPC client in S1 uses keepalive 
and resolves a headless Service (which returns multiple IP addresses). 
After scaling S2 down and then back up, the gRPC client in S1 stops 
communicating and fails with UNAVAILABLE errors; the logs indicate it keeps 
using the stale IP addresses.
The problem does not resolve until S1 is restarted. The k8s headless 
Service has the correct IP addresses, and name resolution from the pod 
(nslookup/dig) returns the correct IPs as well, so this is not an 
infrastructure problem.
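
For context, the channel in S1 is built roughly like the sketch below (the 
target name, port, and option values are placeholders to illustrate the 
setup, not the exact production configuration):

    import java.util.concurrent.TimeUnit;
    import io.grpc.ManagedChannel;
    import io.grpc.ManagedChannelBuilder;

    // Placeholder target/port; the real Service name and settings differ.
    ManagedChannel channel = ManagedChannelBuilder
        .forTarget("dns:///s2-headless.default.svc.cluster.local:8443")
        .defaultLoadBalancingPolicy("round_robin") // spread calls across the headless-Service IPs
        .keepAliveTime(30, TimeUnit.SECONDS)
        .keepAliveTimeout(10, TimeUnit.SECONDS)
        .keepAliveWithoutCalls(true)
        .usePlaintext()
        .build();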

What could be causing this, and how can I force the gRPC client to refresh 
its DNS cache?
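
To make the ask concrete, these are the levers I have come across so far; 
both are assumptions on my part taken from the io.grpc.ManagedChannel 
javadoc, not confirmed fixes. Is one of these the intended mechanism, or is 
there a better way?

    // Candidate 1: force the channel idle so the next RPC should trigger
    // a fresh name resolution.
    channel.enterIdle();

    // Candidate 2: make subchannels in TRANSIENT_FAILURE reconnect
    // immediately (not sure this re-runs name resolution at all).
    channel.resetConnectBackoff();

    // Candidate 3: in case the stale IPs come from the JVM resolver cache
    // rather than gRPC itself; must be set before the first lookup.
    java.security.Security.setProperty("networkaddress.cache.ttl", "30");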
