I believe the connection is maintained even when no RPC is active on it, so you can have short lived RPCs over a long lived connection. Cost-wise the two approaches are about the same.
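As a rough illustration of option (a), here is a minimal sketch of that pattern in grpc-java. It assumes a hypothetical client-streaming service `Ingest` with `rpc Push(stream DataPoint) returns (Ack)` and its generated `IngestGrpc`, `DataPoint`, and `Ack` classes (names invented for the example): one channel stays open, but the streaming call is completed and reopened every N messages so each new call can be placed by the load balancer.

```java
import io.grpc.ManagedChannel;
import io.grpc.ManagedChannelBuilder;
import io.grpc.stub.StreamObserver;

public class RecyclingIngestClient {
  private static final int MESSAGES_PER_CALL = 10_000;  // recycle point, tune for your traffic

  public static void main(String[] args) {
    // One long lived channel; the individual RPCs on it are short lived.
    ManagedChannel channel =
        ManagedChannelBuilder.forTarget("ingest.example.com:8080").usePlaintext().build();
    IngestGrpc.IngestStub stub = IngestGrpc.newStub(channel);

    StreamObserver<DataPoint> request = openCall(stub);
    int sentOnThisCall = 0;

    for (DataPoint point : pointSource()) {   // pointSource() is a stand-in for your data feed
      request.onNext(point);
      if (++sentOnThisCall >= MESSAGES_PER_CALL) {
        request.onCompleted();                // finish this RPC...
        request = openCall(stub);             // ...and start a fresh one that can be re-balanced
        sentOnThisCall = 0;
      }
    }
    request.onCompleted();
    channel.shutdown();
  }

  private static StreamObserver<DataPoint> openCall(IngestGrpc.IngestStub stub) {
    return stub.push(new StreamObserver<Ack>() {
      @Override public void onNext(Ack ack) {}
      @Override public void onError(Throwable t) { t.printStackTrace(); }
      @Override public void onCompleted() {}
    });
  }

  private static Iterable<DataPoint> pointSource() {
    return java.util.Collections.emptyList(); // placeholder; replace with the real data source
  }
}
```

A real client would also respect flow control (e.g. check `isReady()` on the request observer) instead of calling `onNext` unconditionally, but the recycle-every-N-messages shape is the relevant part here.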
Long lived connections have some downsides. Load balancing doesn't work as well while servers are entering and exiting the backend pool, since an established stream stays pinned to the backend it started on. If there isn't any traffic for a long time, the connection still cannot be freed up. Also, long lived connections tend to break if there are NATs on the path (often for cell or home connections, but elsewhere too).

On Tuesday, January 24, 2017 at 10:59:31 AM UTC-8, Jozef Vilcek wrote:
>
> I am shooting for option a). Should be fine for my use case in terms of
> performance. I should recreate the rpc call depending on load strategy.
> For your question, I am doing long lived streams because there is
> typically always something to send. Feels natural because it is streaming
> live data, diagnostics and statistics bound to the lifecycle of a server
> process and its activity.
>
> On Monday, January 23, 2017 at 11:36:08 PM UTC+1, Carl Mastrangelo wrote:
>>
>> You are correct that load balancing is at the RPC scope. Option A is
>> probably the best if there is a long delay between messages. I am not sure
>> how Option B would work, since typically the reason for having a long lived
>> streaming RPC is to maintain state between messages. Are you just doing
>> long lived streams to minimize the number of reconnects?
>>
>> On Thursday, January 19, 2017 at 12:59:38 AM UTC-8, Jozef Vilcek wrote:
>>>
>>> Hello,
>>>
>>> I have a long running grpc call, where the client constantly streams data
>>> to an ingestion server. There are multiple instances of ingestion servers
>>> and the client can choose any of them. I would like each client to load
>>> balance data between available servers. I see that grpc has a load
>>> balancer, but my understanding is that it works over grpc calls, not data
>>> within a long running call. What is the best way to approach this with grpc?
>>>
>>> a) Do not have long running calls. Make sure to close and reopen the call
>>> after some time or amount of data. Should I expect any noticeable overhead,
>>> or is grpc tuned for this to be negligible?
>>> b) Open multiple calls and do my own load balancing from the top level
>>> when sending the data.
>>> c) Something else ... ?
>>>
>>> Many thanks for suggestions.
>>>
>>> Best,
>>> Jozef
>>>
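For comparison, option (b) from the quoted question would look roughly like the sketch below, using the same hypothetical `Ingest` service as above: the client holds several concurrent streams and rotates writes across them itself.

```java
import io.grpc.stub.StreamObserver;
import java.util.ArrayList;
import java.util.List;

/** Round-robins outgoing messages across several already-open client streams. Not thread-safe. */
public class ManualStreamBalancer {
  private final List<StreamObserver<DataPoint>> streams;
  private int next = 0;

  public ManualStreamBalancer(IngestGrpc.IngestStub stub, int streamCount) {
    streams = new ArrayList<>(streamCount);
    for (int i = 0; i < streamCount; i++) {
      streams.add(stub.push(new StreamObserver<Ack>() {
        @Override public void onNext(Ack ack) {}
        @Override public void onError(Throwable t) { t.printStackTrace(); }
        @Override public void onCompleted() {}
      }));
    }
  }

  public void send(DataPoint point) {
    streams.get(next).onNext(point);
    next = (next + 1) % streams.size();   // simple rotation; weight per backend if needed
  }

  public void close() {
    for (StreamObserver<DataPoint> s : streams) {
      s.onCompleted();
    }
  }
}
```

Note the caveat: if all of these streams go through one channel with the default pick-first policy, they may all land on the same backend, so this approach usually needs either a round-robin policy on the channel or one channel per backend to actually spread load.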
