On Thu, Feb 18, 2021 at 7:06 PM Vitaly <[email protected]> wrote:

> 1. Connection management on the client side - do something to reset the
> channel (like [enterIdle](
> https://grpc.github.io/grpc-java/javadoc/io/grpc/ManagedChannel.html#enterIdle)
> in grpc-java). Downside - it seems that this feature has been developed for
> android and I can't find similar functionality in grpc-go.
>

Go doesn't go into IDLE at all today. But even so, this isn't an approach
we'd encourage: enterIdle() is really for re-choosing which network to use,
and using it for this case would be a hack.

> 2. Connection management on the server side - drop connections periodically
> on the server. Downside - this approach looks less graceful than the client
> side one and may impact request latency and result in request failures on
> the client side.
>

L4 proxy is *exactly* the use-case for server-side connection age (as you
should have seen in gRFC A9
<https://github.com/grpc/proposal/blob/master/A9-server-side-conn-mgt.md>).
The impact on request latency is the connection handshake, which is no
worse than if you were using HTTP/1. The shutdown is graceful and avoids
on-the-wire races, which should prevent most errors and some added latency.
There are some client-side races that could cause *very rare* failures;
those should be well below your normal background failure rate.

We have seen issues with GOAWAYs introducing disappointing latency, but in
large part because of cold caches in the *backend*.

> 3. Use request based grpc-aware L7 LB, this way client would connect to the
> LB, which would fan out requests to the servers. Downside - I've been told
> by our infra guys that it is hard to implement in our setup due to the way
> we use TLS and manage certificates.
> 4. Expose our servers outside and use grpc-lb or client side load
> balancing. Downside - it seems less secure and would make it harder to
> protect against DDoS attacks if we go this route. I think this downside
> makes this approach unviable.
>

Option 3 is the most common solution for serious load balancing across
trust domains (like public internet vs data center). Option 4 depends on
how much you trust your clients.

> 1. Which approach is generally preferable?
>

For a "public" service, the normal order of preference would be: (3) L7
proxy (highest), then (2) L4 proxy + MAX_CONNECTION_AGE, then (1) manual
client-side code hard-coded with special magic numbers. (4) gRPC-LB/xDS
could land anywhere in that ordering, depending on how you feel about your
clients and your LB needs. It is the highest-performance, lowest-latency
solution, but it is rarely used for public services that receive traffic
from the Internet.

> 2. Are there other options to consider?
>

You could go with option (2), but expose two L4 proxy IP addresses to your
clients and have the clients use round-robin. Since MAX_CONNECTION_AGE uses
jitter, the connections are unlikely to both go down at the same time and
so it'd hide the connection establishment latency.

> 3. Is it possible to influence grpc channel state in grpc-go, which would
> trigger resolver and balancer to establish a new connection similar to what
> enterIdle does in java?
>

You'd have to shut down the ClientConn and replace it.
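A sketch of what that replacement could look like; the ResettableConn wrapper is my own illustration, not a grpc-go API:

```go
package main

import (
	"sync"

	"google.golang.org/grpc"
)

// ResettableConn holds the current ClientConn and lets you swap in a
// fresh one, which re-runs name resolution and establishes a new
// connection (the closest grpc-go gets to Java's enterIdle()).
type ResettableConn struct {
	mu     sync.Mutex
	target string
	opts   []grpc.DialOption
	cc     *grpc.ClientConn
}

// Conn returns the current channel for issuing RPCs.
func (r *ResettableConn) Conn() *grpc.ClientConn {
	r.mu.Lock()
	defer r.mu.Unlock()
	return r.cc
}

// Reset dials a replacement channel, swaps it in, and closes the old
// one. RPCs still in flight on the old channel will fail once it is
// closed, so callers should be prepared to retry.
func (r *ResettableConn) Reset() error {
	newCC, err := grpc.Dial(r.target, r.opts...)
	if err != nil {
		return err
	}
	r.mu.Lock()
	old := r.cc
	r.cc = newCC
	r.mu.Unlock()
	if old != nil {
		return old.Close()
	}
	return nil
}
```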

> 4. Is there a way to implement server side connection management cleanly
> without impacting client-side severely?
>

I'd suggest giving option (2) a try and informing us if you have poor
experiences. Option (2) is actually pretty common, even when using L7
proxies, as you may need to *load balance the proxies*.

To view this discussion on the web visit 
https://groups.google.com/d/msgid/grpc-io/CA%2B4M1oPDjdaNWSaq-dPAfjq0AVkLDFYXtP2H4%3Di6d0hRQjDNrg%40mail.gmail.com.
