> It happens that sometimes, the GOAWAY signal isn't received by the client.

Just curious, how was it determined that the GOAWAY frame wasn't 
received? Also, what are your values of MAX_CONNECTION_AGE and 
MAX_CONNECTION_AGE_GRACE?
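
For reference, these are configured through server-side channel args. A minimal 
Ruby sketch of how they could be set, assuming the grpc gem's GRPC::RpcServer 
(the values below are purely illustrative assumptions, not recommendations):

require 'grpc'

# Hypothetical values -- tune to your own deployment.
server = GRPC::RpcServer.new(
  server_args: {
    'grpc.max_connection_age_ms'       => 5 * 60_000,  # MAX_CONNECTION_AGE: 5 minutes
    'grpc.max_connection_age_grace_ms' => 30_000       # MAX_CONNECTION_AGE_GRACE: 30 seconds
  }
)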

A guess: one possible thing to look for is whether IP packets to/from the pod's 
address stopped being forwarded, rendering the TCP connection to it a "black 
hole". In that case, a gRPC client will, by default, realize that a 
connection is bad only after the TCP connection times out (typically ~15 
minutes). You may set keepalive parameters to detect such broken connections 
faster -- see the references to keepalive 
in https://github.com/grpc/proposal/blob/master/A9-server-side-conn-mgt.md 
for more details.
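
As an example, here is a minimal sketch of what keepalive could look like on 
the Ruby stub from this thread. The specific values are only illustrative 
assumptions; tune them to your environment and check them against the server's 
keepalive enforcement policy (see the A9 proposal above):

stub = ExampleService::Stub.new(
  "headless-test-grpc-master.test-grpc.svc.cluster.local:50051",
  :this_channel_is_insecure,
  timeout: 5,
  channel_args: {
    'grpc.lb_policy_name'                 => 'round_robin',
    'grpc.keepalive_time_ms'              => 30_000,  # ping an idle connection every 30s
    'grpc.keepalive_timeout_ms'           => 10_000,  # consider it dead if no ack within 10s
    'grpc.keepalive_permit_without_calls' => 1        # also ping when no RPC is in flight
  }
)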



On Tuesday, December 22, 2020 at 11:30:44 AM UTC-8 Emmanuel Delmas wrote:

> Thank you. I've set up MAX_CONNECTION_AGE and it seems to work well.
>
> I was looking for a way to refresh the name resolution because I'm facing 
> another issue.
> It sometimes happens that the GOAWAY signal isn't received by the client.
> In this case, I receive a bunch of DeadlineExceeded errors, with the client 
> still sending messages to a deleted Kubernetes pod.
> I wanted to trigger a refresh at that point, but I understand it is not 
> possible.
>
> Have you already encountered this kind of issue?
> Do you have any advice on handling a GOAWAY signal that is never received?
>
> On Monday, December 21, 2020 at 7:42:17 PM UTC+1, [email protected] wrote:
>
>> > "But when I create new pods after the connection or a reconnection, 
>> calls are not load balanced on these new servers."
>>
>> Can you elaborate a bit on what exactly is done here and the expected 
>> behavior?
>>
>> One thing to note about gRPC's client channel/stub is that, in general, a 
>> client will not re-run name resolution unless it encounters a problem with 
>> the current connection(s) that it has. So, for example, if the following 
>> events happen:
>> 1) client stub resolves 
>> headless-test-grpc-master.test-grpc.svc.cluster.local in DNS, to addresses 
>> 1.1.1.1, 2.2.2.2, and 3.3.3.3
>> 2) client stub establishes connections to 1.1.1.1, 2.2.2.2, and 3.3.3.3, 
>> and begins round robining RPCs across them
>> 3) a new host, 4.4.4.4, starts up, and is added behind the 
>> headless-test-grpc-master.test-grpc.svc.cluster.local DNS name
>>
>> Then the client will continue to round-robin its RPCs across 
>> 1.1.1.1, 2.2.2.2, and 3.3.3.3 indefinitely -- as long as it doesn't 
>> encounter a problem with those connections. It will only re-query DNS, 
>> and so learn about 4.4.4.4, if it encounters a problem.
>>
>> There's some possibly interesting discussion about this behavior in 
>> https://github.com/grpc/grpc/issues/12295 and in 
>> https://github.com/grpc/proposal/blob/master/A9-server-side-conn-mgt.md.
>>
>> On Thursday, December 3, 2020 at 8:57:03 AM UTC-8 Emmanuel Delmas wrote:
>>
>>> Hi
>>>
>>> *Question*
>>> I'm wondering how to refresh the IP list in order to update the 
>>> subchannel list after creating a gRPC channel in Ruby using DNS 
>>> resolution (which created several subchannels).
>>>
>>> *Context*
>>> I set up gRPC communication between our services in a Kubernetes 
>>> environment two years ago, but we are facing issues after pods restart.
>>>
>>> I've set up a Kubernetes headless service (in order to get all pod IPs 
>>> from DNS).
>>> I've managed to use load balancing with the following piece of code:
>>> stub = ExampleService::Stub.new(
>>>   "headless-test-grpc-master.test-grpc.svc.cluster.local:50051",
>>>   :this_channel_is_insecure,
>>>   timeout: 5,
>>>   channel_args: { 'grpc.lb_policy_name' => 'round_robin' }
>>> )
>>>
>>> But when I create new pods after the connection or a reconnection, calls 
>>> are not load balanced to these new servers.
>>> That's why I'm wondering what I should do to make the gRPC resolver 
>>> refresh the list of IPs and create the expected new subchannels.
>>>
>>> Is this achievable? Which configuration should I use?
>>>
>>> Thanks for your help
>>>
>>> *Emmanuel Delmas* 
>>> Backend Developer
>>> CSE Member
>>> https://github.com/papa-cool
>>>
>>>
