Hi - This issue is recurring even when there are no deployments on 
the upstream servers.

My gRPC service is a unary service; a batch application calls my gRPC 
service using a blocking stub.

My gRPC service in turn calls two different gRPC services: one to retrieve 
the data (upstream gRPC service 1) and one to do some calculations using 
the data retrieved (upstream gRPC service 2).

Since it runs in batch mode, my gRPC service was earlier unable to limit 
the requests it received and went into an unhealthy status. As per the 
suggestion in #10164 <https://github.com/grpc/grpc-java/issues/10164>, 
I have implemented the Netflix concurrency limiter in my gRPC service, so 
it admits requests based on system capacity and rejects the rest with a 
"Server limit reached" error.
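For illustration, the admit-or-reject behavior described above can be sketched with a plain fixed-capacity `Semaphore`. This is a hypothetical stand-in, not the Netflix concurrency-limits library itself (which adapts the limit dynamically based on observed latency); the class and method names below are made up for the example.

```java
import java.util.concurrent.Semaphore;

// Illustrative sketch only: a fixed-capacity admission gate.
// The real Netflix concurrency-limits library adjusts the limit
// dynamically; this fixed Semaphore just shows admit-or-reject.
public class AdmissionGate {
    private final Semaphore permits;

    public AdmissionGate(int capacity) {
        this.permits = new Semaphore(capacity);
    }

    /** Runs the handler if capacity is available, otherwise rejects. */
    public String handle(Runnable handler) {
        if (!permits.tryAcquire()) {
            // In gRPC terms this would map to a RESOURCE_EXHAUSTED status.
            return "RESOURCE_EXHAUSTED: server limit reached";
        }
        try {
            handler.run();
            return "OK";
        } finally {
            permits.release();
        }
    }

    public static void main(String[] args) {
        AdmissionGate gate = new AdmissionGate(1);
        System.out.println(gate.handle(() -> {})); // OK
        gate.permits.tryAcquire(); // simulate an in-flight request holding the only permit
        System.out.println(gate.handle(() -> {})); // RESOURCE_EXHAUSTED: server limit reached
    }
}
```

The key point the sketch captures is that excess requests fail fast with an error rather than queue up and drive the server unhealthy.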

The upstream services that my gRPC service calls also have limitations, so 
they can handle only a subset of the requests that my gRPC service can 
handle. To apply backoff on the client calls from my application, I have 
used the same Netflix client concurrency limiter. Even with proper 
concurrency limiting in place, I'm still getting the NoRouteToHostException. 
I understand this is something caused by overloading the service - is that 
understanding right, or what else could lead to this exception?

Caused by: io.grpc.netty.shaded.io.netty.channel.AbstractChannel$AnnotatedNoRouteToHostException: No route to host: applicationALB/*.*.*.*:443
Caused by: java.net.NoRouteToHostException: No route to host
    at java.base/sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
    at java.base/sun.nio.ch.SocketChannelImpl.finishConnect(Unknown Source)
    at io.grpc.netty.shaded.io.netty.channel.socket.nio.NioSocketChannel.doFinishConnect(NioSocketChannel.java:330)
    at io.grpc.netty.shaded.io.netty.channel.nio.AbstractNioChannel$AbstractNioUnsafe.finishConnect(AbstractNioChannel.java:334)
    at io.grpc.netty.shaded.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:707)
    at io.grpc.netty.shaded.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:655)
    at io.grpc.netty.shaded.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:581)
    at io.grpc.netty.shaded.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:493)
    at io.grpc.netty.shaded.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
    at io.grpc.netty.shaded.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
    at io.grpc.netty.shaded.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
    at java.base/java.lang.Thread.run(Unknown Source)
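For context, the client-side backoff mentioned above can be sketched as a generic retry-with-doubling-delay wrapper around the blocking call. All names here are hypothetical; this is neither grpc-java's built-in retry mechanism nor the Netflix limiter, just an illustration of the backoff idea.

```java
import java.util.function.Supplier;

// Illustrative sketch: retry a call with exponential backoff when it
// fails (e.g. a connect failure surfacing as a RuntimeException).
// Hypothetical names; not the grpc-java retry API.
public class BackoffRetry {
    public static <T> T callWithBackoff(Supplier<T> call, int maxAttempts,
                                        long initialBackoffMillis) throws InterruptedException {
        long backoff = initialBackoffMillis;
        RuntimeException last = null;
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                return call.get();
            } catch (RuntimeException e) {
                last = e;
                if (attempt < maxAttempts) {
                    Thread.sleep(backoff); // wait before retrying
                    backoff *= 2;          // double the delay each attempt
                }
            }
        }
        throw last; // all attempts exhausted
    }

    public static void main(String[] args) throws InterruptedException {
        int[] failures = {2}; // simulate: fail twice, then succeed
        String result = callWithBackoff(() -> {
            if (failures[0]-- > 0) throw new RuntimeException("UNAVAILABLE");
            return "response";
        }, 5, 10);
        System.out.println(result); // prints "response"
    }
}
```

A wrapper like this only masks transient connect failures; if the host is genuinely unreachable, every attempt will still fail.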

On Thursday, 17 August 2023 at 16:09:50 UTC+5:30 Bhuvi Viji wrote:

> Thanks Eric. Will check this option
>
> On Monday, 14 August 2023 at 21:57:14 UTC+5:30 Eric Anderson wrote:
>
>> You get that exception because all backends failed to be connected to. I 
>> expect this is a server deployment-approach problem. Does the service do a 
>> rolling restart?
>>
>> The only thing the client can do to avoid the error is use 
>> stub.withWaitForReady(). But that implies you care more about reliability 
>> than latency. I'd consider using it for this purpose to be a hack.
>>
>> On Fri, Aug 11, 2023 at 7:48 AM Bhuvi Viji <viji....@gmail.com> wrote:
>>
>>> My application calls a gRPC service (application B) using a blocking 
>>> stub to fulfill the request. The end-to-end flow works fine without any 
>>> issues.
>>>
>>> If there are any deployments in the gRPC service (application B) that 
>>> my application consumes, we get a NoRouteToHostException. We try the 
>>> request only after the successful deployment of service B. I understand 
>>> the channel is interrupted and needs time to recreate and re-establish 
>>> the connections. Is there any recommendation for getting rid of this 
>>> kind of exception during upstream server deployments/new pod creation?
>>>
>>> Caused by: io.grpc.netty.shaded.io.netty.channel.AbstractChannel$AnnotatedNoRouteToHostException: No route to host: *******.com/Ipaddr:443
>>>
>>>
>>

-- 
You received this message because you are subscribed to the Google Groups 
"grpc.io" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to grpc-io+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/grpc-io/ac3b290d-3745-41af-909b-b6494ceda94cn%40googlegroups.com.
