You are correct that load balancing happens at the RPC scope. Option A is probably the best choice if there is a long delay between messages. I am not sure how Option B would work, since typically the reason for having a long-lived streaming RPC is to maintain state between messages. Are you just using long-lived streams to minimize the number of reconnects? A rough sketch of Option A is below.
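If you do go with Option A, the idea is just to cap how much data goes into any one call, then close it and open a fresh call so the channel's load-balancing policy can pick a (possibly different) backend. A sketch in Go could look something like the following; the Ingest service, its client-streaming Stream method, the Record/Summary messages, and the recordsPerCall threshold are all made up for illustration, not part of any real API.

package client

import (
	"context"

	"google.golang.org/grpc"

	pb "example.com/ingest" // hypothetical generated stubs
)

// recordsPerCall is an assumed rotation threshold; tune it to your traffic.
const recordsPerCall = 10000

// streamAll drains the records channel, rotating to a new RPC every
// recordsPerCall messages so the balancer can redistribute load.
func streamAll(addr string, records <-chan *pb.Record) error {
	conn, err := grpc.Dial(addr, grpc.WithInsecure())
	if err != nil {
		return err
	}
	defer conn.Close()
	client := pb.NewIngestClient(conn)

	for {
		// Open a fresh call; the balancer may route it to a different server.
		stream, err := client.Stream(context.Background())
		if err != nil {
			return err
		}
		sent := 0
		for rec := range records {
			if err := stream.Send(rec); err != nil {
				return err
			}
			sent++
			if sent >= recordsPerCall {
				break
			}
		}
		// Finish this call; the next loop iteration reopens a new one.
		if _, err := stream.CloseAndRecv(); err != nil {
			return err
		}
		if sent < recordsPerCall {
			return nil // source channel closed and drained
		}
	}
}

On the overhead question: reopening a call on an existing channel is just a new HTTP/2 stream on the already-open connection, not a new TCP/TLS handshake, so rotating every few thousand messages is typically cheap.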
On Thursday, January 19, 2017 at 12:59:38 AM UTC-8, Jozef Vilcek wrote:
>
> Hello,
>
> I have a long-running grpc call, where the client constantly streams data to an ingestion server. There are multiple instances of ingestion servers and the client can choose any of them. I would like each client to load balance data between the available servers. I see that grpc has a load balancer, but my understanding is that it works over grpc calls, not data within a long-running call. What is the best way to approach this with grpc?
>
> a) Do not have long-running calls. Make sure to close and reopen the call after some time or amount of data. Should I expect any noticeable overhead, or is grpc tuned for this to be negligible?
> b) Open multiple calls and do my own load balancing from the top level when sending the data.
> c) Something else ... ?
>
> Many thanks for suggestions.
>
> Best,
> Jozef
