Hi, my understanding, and it seems to hold up in testing, is that gRPC always 
makes a new TCP connection due to load balancing. The only way I've been able 
to get gRPC to stick to a single connection is by using streaming RPCs.

What am I doing wrong then?

> On Oct 17, 2018, at 8:24 AM, Josh Humphries <[email protected]> wrote:
> 
> [email protected]
> moving [email protected] to BCC
> 
> In general, connections are not cheap, but stubs are. Actual implementations 
> differ from one language to another, but this holds for Go.
> 
> What that means is that, generally speaking, you should not create the 
> *grpc.ClientConn for each request. Instead, create it once and cache it. You 
> can also create the stub just once and cache it (stubs are safe to use 
> concurrently from multiple goroutines), but that is not necessary; you could 
> also create the stub for each request, using the cached connection.
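> 
> A rough sketch of that (essentially option (c) from the quoted question 
> below), assuming the stock helloworld stubs stand in for your real service, 
> a plaintext connection, and a placeholder address:
> 
>     package main
> 
>     import (
>         "fmt"
>         "log"
>         "net/http"
> 
>         "google.golang.org/grpc"
>         pb "google.golang.org/grpc/examples/helloworld/helloworld"
>     )
> 
>     var (
>         conn   *grpc.ClientConn
>         client pb.GreeterClient
>     )
> 
>     func main() {
>         var err error
>         // Dial once and cache the *grpc.ClientConn for the life of the process.
>         conn, err = grpc.Dial("server.example.com:50051", grpc.WithInsecure())
>         if err != nil {
>             log.Fatalf("dial: %v", err)
>         }
>         defer conn.Close()
> 
>         // The stub is cheap and safe for concurrent use, so cache it as well.
>         client = pb.NewGreeterClient(conn)
> 
>         http.HandleFunc("/hello", handler)
>         log.Fatal(http.ListenAndServe(":8080", nil))
>     }
> 
>     func handler(w http.ResponseWriter, r *http.Request) {
>         // Every incoming request reuses the cached connection and stub.
>         resp, err := client.SayHello(r.Context(), &pb.HelloRequest{Name: "nakul"})
>         if err != nil {
>             http.Error(w, err.Error(), http.StatusBadGateway)
>             return
>         }
>         fmt.Fprintln(w, resp.GetMessage())
>     }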
> 
> In practice, creating a new connection for each request adds overhead in 
> terms of allocations and of creating and tearing down goroutines, as well as 
> the latency of establishing a new network connection every time. So it is
> advisable to cache and re-use them. However, if you are not using TLS, it may 
> be acceptable to create a new connection per request (since the network 
> connection latency is often low, at least if the client and server are in the 
> same region/cloud provider). If you are using TLS, however, creating a 
> connection per request is a bit of an atrocity: you are not only adding the 
> extra latency of a TLS handshake to every request (typically 10s of 
> milliseconds IIRC), but you are also inducing a potentially huge amount of 
> load on the server, by making it perform many more digital signatures (one of 
> the handshake steps) than if the clients cached and re-used connections.
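> 
> With TLS, a sketch of paying that handshake cost per connection rather than 
> per request might look like the following (the CA file path and address are 
> just placeholders):
> 
>     import (
>         "log"
> 
>         "google.golang.org/grpc"
>         "google.golang.org/grpc/credentials"
>     )
> 
>     // dialTLS builds the credentials and connection once; the TLS handshake 
>     // then happens once per connection instead of once per request.
>     func dialTLS() *grpc.ClientConn {
>         creds, err := credentials.NewClientTLSFromFile("ca.pem", "")
>         if err != nil {
>             log.Fatalf("load TLS credentials: %v", err)
>         }
>         conn, err := grpc.Dial("server.example.com:443",
>             grpc.WithTransportCredentials(creds))
>         if err != nil {
>             log.Fatalf("dial: %v", err)
>         }
>         return conn
>     }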
> 
> Historically, the only reason it might have been useful to create a new 
> connection per request in Go was if you were using a layer-4 (TCP) load 
> balancer. In that case, the standard DNS resolver would resolve to a single 
> IP address (that of the load balancer) and the client would then maintain 
> only a single connection. This would result in very poor load balancing, 
> since 100% of that client's requests would all route to the same backend. 
> This would also happen when using standard Kubernetes services (when using 
> gRPC for server-to-server communication), as kube-dns resolves a service name 
> to a single virtual IP. I'm not sure of the current state of the world 
> regarding TCP load balancers and the grpc-go project, but if it's still an 
> issue and you run services in Kubernetes, you can use a third-party resolver: 
> https://github.com/sercand/kuberesolver.
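> 
> If I remember the kuberesolver README correctly, wiring it up together with 
> round-robin balancing looks roughly like this (the service name, namespace, 
> and port are placeholders, and the balancer option may differ in newer 
> grpc-go versions):
> 
>     import (
>         "log"
> 
>         "github.com/sercand/kuberesolver"
>         "google.golang.org/grpc"
>         "google.golang.org/grpc/balancer/roundrobin"
>     )
> 
>     // dialKube resolves endpoints through the Kubernetes API instead of the 
>     // virtual service IP, so round_robin can spread requests across pods.
>     func dialKube() *grpc.ClientConn {
>         // Register the "kubernetes" resolver scheme before dialing.
>         kuberesolver.RegisterInCluster()
>         conn, err := grpc.Dial("kubernetes:///my-service.my-namespace:50051",
>             grpc.WithInsecure(),
>             grpc.WithBalancerName(roundrobin.Name))
>         if err != nil {
>             log.Fatalf("dial: %v", err)
>         }
>         return conn
>     }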
> 
> ----
> Josh Humphries
> [email protected]
> 
> 
>> On Wed, Oct 17, 2018 at 2:13 AM <[email protected]> wrote:
>> Hello,
>> 
>> I intend to use grpc between two fixed endpoints (client and server), where 
>> the client receives multiple requests (the client serves as a proxy) and, 
>> for each one, in turn sends a grpc request to the server. I wanted to know 
>> which of the following would be considered good practice:
>> 
>> a) For every request that comes in at the client, do the following in the 
>> http handler:
>>        a) conn := grpc.Dial(...)       // establish a grpc connection
>>        b) client := NewClient(conn)    // instantiate a new client
>>        c) client.Something(...)        // invoke the grpc method on the client
>> 
>> i.e. establish a new connection and client when handling every request
>> 
>> b) Establish a single grpc connection between client and server at init() 
>> time and then inside the handler, instantiate a new client and invoke the 
>> grpc method
>>        a) client := NewClient(conn)    // instantiate a new client
>>        b) client.Something(...)        // invoke the grpc method on the client
>> 
>> c) Establish a connection and instantiate a client at init() and then in 
>> every handler, just invoke the grpc method.
>>        a) client.Something(...)
>> 
>> The emphasis here is on performance, as I expect the client to process a 
>> large volume of incoming requests. I do know that grpc creates streams 
>> underneath, but at the end of the day a single logical grpc connection runs 
>> over a single TCP connection (multiplexing the streams on it), and having 
>> just one connection for all clients might not cut it. Thoughts and ideas 
>> appreciated!
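>> 
>> For illustration, if a single connection does not cut it, the fallback I 
>> have in mind is a small pool of cached connections picked round-robin, 
>> roughly like this (same placeholder NewClient/Something names as above):
>> 
>>        // (imports: google.golang.org/grpc, log, sync/atomic)
>>        // Dial a small, fixed pool of connections once, at init() time.
>>        var (
>>            conns []*grpc.ClientConn
>>            next  uint64
>>        )
>> 
>>        func initPool(addr string, n int) {
>>            for i := 0; i < n; i++ {
>>                conn, err := grpc.Dial(addr, grpc.WithInsecure())
>>                if err != nil {
>>                    log.Fatalf("dial: %v", err)
>>                }
>>                conns = append(conns, conn)
>>            }
>>        }
>> 
>>        // pick spreads handlers across the pool; the atomic counter keeps 
>>        // it safe to call from many goroutines at once.
>>        func pick() *grpc.ClientConn {
>>            return conns[atomic.AddUint64(&next, 1)%uint64(len(conns))]
>>        }
>> 
>>        // in each handler:
>>        //        client := NewClient(pick())
>>        //        client.Something(...)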
>> 
>> Thanks,
>> Nakul
