If I understand the scenario here correctly, each client has many different server names, and those server names change over time. What determines the list of server names a given client should use, and how does the client get that list?

Can you just publish all of the addresses under a single server name via your name service (whether it be DNS or ZooKeeper or whatever)? That would probably be the easiest way to deal with this: you'd have a single channel with a single name, you'd automatically get a separate subchannel for every address, and you could use those subchannels with the round_robin LB policy.
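To make that concrete, here's a minimal sketch using Grpc.Core (the C-core based C# library). The name "my-service.example.com" is hypothetical and stands in for whatever single name your name service publishes all the backend addresses under:

    // A minimal sketch, assuming Grpc.Core (the C-core based C# library).
    // "my-service.example.com" is a hypothetical name that your name
    // service resolves to the full set of backend addresses.
    using Grpc.Core;

    var channel = new Channel(
        "dns:///my-service.example.com:50051",  // one name, many addresses
        ChannelCredentials.Insecure,
        new[]
        {
            // Select the round_robin LB policy via the service config, so
            // each resolved address gets its own subchannel and RPCs
            // rotate across them.
            new ChannelOption("grpc.service_config",
                "{ \"loadBalancingConfig\": [ { \"round_robin\": {} } ] }")
        });

    // Use the channel with any generated client stub as usual.

With this setup, adding or removing a backend is purely a name-service change; the channel picks up the new address list the next time it re-resolves, with no application code involved.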
If for some reason you can't do that, then architecturally, you would need to create a custom resolver that gets the list of names, resolves each one, and returns the combined list of addresses to the channel. But I'd recommend against this approach, for the following reasons:

- It's a much hackier solution than simply publishing the addresses under one name in your name service. This is exactly the problem that a name service is designed to solve, so trying to solve it a different way seems very odd to me.
- C-core does not currently provide a public API for implementing resolvers in wrapped languages like C#. This is something we want to do, but it hasn't bubbled up to the top of our priority list. You could probably do it by writing code against our internal C++ API, but that API is not supported for external use and may change from version to version, so it's not a great option right now.

I don't think there's any reason to need a custom LB policy for this scenario. But even if there were, C-core does not provide a public API for implementing LB policies in wrapped languages. And unlike resolvers, this is probably not an API we will ever provide, because the LB policy is in the performance-critical path, and hopping out of core and into the wrapped language from the LB policy would kill performance.

I hope this information is helpful.
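As an aside: if the client already knows concrete IP addresses rather than names, one stopgap that avoids a custom resolver entirely is C-core's sockaddr resolver, which accepts a comma-separated address list in the target string. This is a sketch of a workaround, not a recommendation, and the addresses below are made up; note that the list is static, so the application would have to create a new channel whenever the list changes:

    // A hedged workaround sketch: point the channel at a fixed set of
    // addresses using the "ipv4:" scheme, which accepts a comma-separated
    // list. The addresses are hypothetical.
    using Grpc.Core;

    var addresses = new[] { "10.0.0.1:50051", "10.0.0.2:50051" };
    var target = "ipv4:" + string.Join(",", addresses);

    var channel = new Channel(target, ChannelCredentials.Insecure, new[]
    {
        // Still round_robin, so RPCs rotate across the listed addresses.
        new ChannelOption("grpc.service_config",
            "{ \"loadBalancingConfig\": [ { \"round_robin\": {} } ] }")
    });

    // When the address list changes, shut this channel down and build a
    // new one; there is no public API to update the list in place.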
On Fri, Oct 23, 2020 at 10:34 AM Dapeng (Penn) Zhang <[email protected]> wrote:

> I believe that to customize LB logic for C# (for instance, with server names grabbed from ZooKeeper), you will need to use the C-core load balancing APIs.
>
> Adding [email protected], who can provide more guidance.
>
> On Wednesday, October 21, 2020 at 11:03:37 PM UTC-7, [email protected] wrote:
>
>> Thanks for the replies! The language I'm looking at right now is C#. I understand retries might not be fully supported yet, but I still hope to understand the design correctly, since the implementation will arrive sooner or later and will follow the design.
>>
>> Thanks for the explanation of how subchannels work with the LB and the name resolver. However, since the docs mention that "thick client" load balancing <https://grpc.io/blog/grpc-load-balancing/#thick-client> is supported with gRPC, do you have any recommendation for how to make client-customized LB logic (for instance, server names grabbed from ZooKeeper) work with gRPC client-side LB? I guess there is some way application code can notify the LB about backend changes, like the name resolver does, if I plan to use something other than a name resolver to manage the backend server list?
>>
>> Thanks a lot!
>>
>> On Thursday, October 22, 2020 at 1:44:24 AM UTC+8, [email protected] <https://groups.google.com/> wrote:
>>
>>> On Wednesday, October 21, 2020 at 1:27:27 AM UTC-7, [email protected] wrote:
>>>
>>>> To ask my question another way: to build "thick client" load balancing <https://grpc.io/blog/grpc-load-balancing/#thick-client> with gRPC, the client is responsible for keeping track of the available servers. When the client detects that the set of available servers has changed, how does it manage the corresponding subchannels under existing channels? I searched the docs again today and didn't find APIs for that. I'm looking at C#. Sorry if I overlooked something.
>>>>
>>> The client's load balancer manages the subchannels: it shuts down old subchannels and creates new ones when the backend servers change (the name resolver notifies the load balancer about backend changes). Each client RPC is sent on one of the current subchannels, chosen according to the load balancing policy. You can use the gRPC library's built-in round-robin load balancer if you don't have special requirements.
>>>
>>>> Thanks a lot!
>>>>
>>>> On Monday, October 19, 2020 at 8:07:33 PM UTC+8, li yabo wrote:
>>>>
>>>>> While considering moving an HTTP client/service call to gRPC, I'm looking at load balancing solutions. We currently have each client manage a list of server names (for one service VIP) and connection pools. The server names change from time to time, so each client has its own logic to maintain the server-name list and the HTTP connection pools.
>>>>>
>>>>> If we move to gRPC, I think the easiest change regarding LB might be to have the client feed the server-name list into gRPC as subchannels of a single channel to the service VIP, so that the client sends requests to one channel and gets them well load balanced. When the client's server-name list changes, the application-layer code would update the subchannels in gRPC again. I'm not sure whether that's a doable approach that doesn't violate the gRPC design?
>>>>>
>>>>> Managing one channel per server name might be a solution, but that probably won't work well with gRPC's retry policy, because we hope a retry issued by the retry policy hits a different server name in the list.
>>>>>
>>>>> Since the current client-managed server-name approach works in our existing environment, we hope we don't have to set up a new role like a lookaside load balancer in the cluster just to use gRPC. Does this idea make sense?
>>>>>
>>>>> Thanks a lot!

--
Mark D. Roth <[email protected]>
Software Engineer
Google, Inc.
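On the retry concern raised earlier in the thread: each retry attempt goes through the LB policy's pick again, so with round_robin a retry will typically land on a different subchannel (and thus a different server) than the failed attempt. Here is a hedged sketch of enabling retries via the service config; the service name "helloworld.Greeter", the hostname, and the backoff numbers are illustrative assumptions, not values from this thread:

    // A sketch of gRPC's retry policy, configured through the service
    // config JSON. maxAttempts counts the original attempt, so 3 means
    // up to 2 retries.
    using Grpc.Core;

    const string serviceConfig = @"{
      ""loadBalancingConfig"": [ { ""round_robin"": {} } ],
      ""methodConfig"": [ {
        ""name"": [ { ""service"": ""helloworld.Greeter"" } ],
        ""retryPolicy"": {
          ""maxAttempts"": 3,
          ""initialBackoff"": ""0.1s"",
          ""maxBackoff"": ""1s"",
          ""backoffMultiplier"": 2,
          ""retryableStatusCodes"": [ ""UNAVAILABLE"" ]
        }
      } ]
    }";

    var channel = new Channel(
        "dns:///my-service.example.com:50051",  // hypothetical name
        ChannelCredentials.Insecure,
        new[]
        {
            new ChannelOption("grpc.service_config", serviceConfig),
            // Depending on the gRPC version, retries may need to be
            // enabled explicitly.
            new ChannelOption("grpc.enable_retries", 1)
        });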
