On Mon, Dec 5, 2016 at 5:35 PM, Arya Asemanfar wrote:
Since a TCP load balancer is only aware of TCP packets and not HTTP/2
frames, it cannot multiplex requests from multiple clients onto one
connection. A TCP load balancer makes its load balancing decision at
connection establishment, not per stream, request, or packet.
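The connection-time decision described above can be sketched in a few lines of Python. This is a model, not gRPC or any real load balancer: the backend names and the round-robin policy are illustrative assumptions. The point it shows is that once a client connection is established, every request (HTTP/2 stream) on that connection lands on the same backend.

```python
import itertools

# Hypothetical backend pool; names are illustrative only.
BACKENDS = ["server-1", "server-2", "server-3"]
_round_robin = itertools.cycle(BACKENDS)

def pick_backend_for_connection():
    """A TCP load balancer picks a backend exactly once, at
    connection establishment (round-robin here for illustration)."""
    return next(_round_robin)

class ProxiedConnection:
    """Models one client connection through a TCP load balancer:
    every request sent on this connection is forwarded to the
    backend chosen when the connection was opened."""
    def __init__(self):
        self.backend = pick_backend_for_connection()

    def send_request(self, request):
        # No per-request decision: the LB only forwards bytes.
        return (request, self.backend)

conn = ProxiedConnection()
routes = [conn.send_request(f"req-{i}")[1] for i in range(5)]
assert len(set(routes)) == 1  # all streams pin to one backend
```

A second `ProxiedConnection()` would get the next backend in the cycle, which is why 100 client connections spread across backends while the streams within any one connection do not.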
Re: calling Close after a timer
I am not clear on how the TCP load balancer works. Is it a TCP proxy that
forwards the traffic from the client? If yes, your description is still
confusing to me, because there should be 100 connections from the clients to
the TCP proxy and 10 connections from the TCP proxy to the servers. Then I
am
Thanks for the response.
>> On the server side it should be exposed as a client cancellation.
Is that available in the C++ implementation? I looked around and did not find
it exposed. My further experiments show that when a bidirectional
stream is broken abruptly, the 'ok' bool on the AsyncNext call
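The thread discusses how gRPC C++ surfaces a broken stream through the `ok` flag on `AsyncNext`; underneath, what a server can observe at the TCP level is narrower. A minimal plain-socket sketch (not gRPC, and using a local socket pair as a stand-in for a real TCP connection) of the clean-close case: the peer's close is delivered as a zero-length read. An abrupt reset typically raises `ConnectionResetError` instead, and a silently dead peer (power loss, unplugged cable) produces no signal at all until a keepalive or application-level timer fires, which is why the thread also discusses calling Close after a timer.

```python
import socket

# server_side plays the server's end of a connection; client_side
# plays a client that sends one message and then goes away cleanly.
server_side, client_side = socket.socketpair()

client_side.sendall(b"hello")
client_side.close()  # clean shutdown: FIN, not RST

data = server_side.recv(1024)  # buffered bytes are still delivered
eof = server_side.recv(1024)   # then a zero-length read: peer is gone
assert data == b"hello"
assert eof == b""              # b"" is the clean-close signal

server_side.close()
```

An abrupt kill (process crash, RST) would instead surface as a `ConnectionResetError` on the read or write; neither signal arrives for a host that simply vanishes, hence the need for timers or keepalives.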
On Thu, Dec 1, 2016 at 3:40 PM, Arpit Baldeva wrote:
> 1. *Detecting a dead client on server:* Is there a way or recommended
> mechanism to detect a client who is no longer connected to the server after
> an abrupt client shutdown? In our current framework, we get a TCP level
Scenario A, and when I said "restarting server" I mean grpc server.
On Mon, Dec 5, 2016 at 2:59 PM Qi Zhao wrote:
I am still confused. The scenario you want to serve as an example is
a) grpc clients --> TCP LB --> grpc servers;
or
b) grpc clients --> grpc servers?
On Mon, Dec 5, 2016 at 2:49 PM, Arya Asemanfar wrote:
Sorry, I meant grpc server. Yes you are right if the TCP load balancer
restarts there is no problem, so my scenario only applies if the grpc
server restarts.
On Mon, Dec 5, 2016 at 2:17 PM Qi Zhao wrote:
On Mon, Dec 5, 2016 at 12:13 PM, Arya Asemanfar wrote:
Thanks for the feedback. Good idea re metadata for getting the Balancer to
treat the connections as different. Will take a look at that.
Some clarifications/questions inline:
On Mon, Dec 5, 2016 at 11:11 AM, 'Qi Zhao' via grpc.io <grpc-io@googlegroups.com> wrote:
> Thanks for the info. My
It seems that Lyft's Envoy does the same thing Josh described when proxying HTTP/2.
On Saturday, December 3, 2016 at 3:35:10 AM UTC+8, Louis Ryan wrote:
>
> I would take a look at Lyft's Envoy to perform the same role here too if
> you're not too married to Java for the proxy
>
> On Thu, Dec 1, 2016 at 6:18 PM, killjason