I believe that to customize the LB logic (for instance, server names fetched
from Zookeeper) in C#, you will need to use the c-core load balancing APIs.

Adding [email protected], who can provide more guidance.
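
On the retry question earlier in the thread: retry support in the C#/c-core
stack is still experimental, but for reference here is a rough sketch of
enabling it through a service config on the same kind of channel. The channel
argument names ("grpc.enable_retries", "grpc.service_config") come from
c-core, and "helloworld.Greeter" is a placeholder service name; please treat
this as an illustration rather than a supported configuration:

    using Grpc.Core;

    // Same application-supplied address list as in the sketch above.
    var target = "ipv4:10.0.0.1:50051,10.0.0.2:50051";

    // Service config with a retry policy for the placeholder service.
    var serviceConfig = @"{
      ""methodConfig"": [{
        ""name"": [{ ""service"": ""helloworld.Greeter"" }],
        ""retryPolicy"": {
          ""maxAttempts"": 3,
          ""initialBackoff"": ""0.1s"",
          ""maxBackoff"": ""1s"",
          ""backoffMultiplier"": 2,
          ""retryableStatusCodes"": [""UNAVAILABLE""]
        }
      }]
    }";

    var channel = new Channel(target, ChannelCredentials.Insecure, new[]
    {
        new ChannelOption("grpc.lb_policy_name", "round_robin"),
        new ChannelOption("grpc.enable_retries", 1),              // experimental
        new ChannelOption("grpc.service_config", serviceConfig),
    });

The retry design routes each attempt through the load-balancer pick again, so
with round_robin the attempts can land on different backends from the list,
which is the behavior the original question was after.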

On Wednesday, October 21, 2020 at 11:03:37 PM UTC-7 [email protected] wrote:

> Thanks for the replies! The language I'm looking at right now is C#. I
> understand that retry might not be fully supported yet, but I still hope to
> understand the design correctly, since the implementation, which follows the
> design, will be there sooner or later.
>
> Thanks for the explanation of how subchannels work with the LB and the name
> resolver. However, since the doc mentions that "Thick client" load balancing
> <https://grpc.io/blog/grpc-load-balancing/#thick-client> is supported with
> gRPC, do you have any recommendation for how a client's customized LB logic
> (for instance, server names fetched from Zookeeper) can work with the gRPC
> client LB? I guess there is also some way application code can notify the LB
> about backend changes, like the name resolver does, if I plan to use
> something other than the name resolver to manage the backend server list?
>
> thanks a lot!
> On Thursday, October 22, 2020 at 1:44:24 AM UTC+8, [email protected]
> <https://groups.google.com/> wrote:
>
>> On Wednesday, October 21, 2020 at 1:27:27 AM UTC-7, [email protected]
>> wrote:
>>
>>> To ask my question in another way: if building "Thick client" load
>>> balancing <https://grpc.io/blog/grpc-load-balancing/#thick-client> with
>>> gRPC, the client is responsible for keeping track of the available
>>> servers. When the client detects that the available servers have changed,
>>> how does it manage the corresponding SubChannels under the existing
>>> Channels? I searched the docs again today and didn't find APIs for that.
>>> I'm looking into C#. Sorry if I overlooked something.
>>>
>> The client's load balancer manages the subchannels: it shuts down old ones
>> and creates new ones when the backend servers change (the name resolver
>> notifies the load balancer about the backend changes). The client RPC will
>> choose one of the current list of subchannels based on the load-balancing
>> policy. You could use the gRPC library's built-in round-robin load balancer
>> if you don't have special requirements.
>>
>>
>>> thanks a lot
>>>
>>> On Monday, October 19, 2020 at 8:07:33 PM UTC+8, li yabo wrote:
>>>
>>>> While considering moving an HTTP client/service call to gRPC, I'm
>>>> looking at load-balancing solutions. We currently have each client manage
>>>> a list of server names (for one service VIP) and connection pools. The
>>>> server names change from time to time, so each client has its own logic
>>>> to maintain the server-name list and the HTTP connection pools.
>>>>
>>>> If we move to gRPC, I think the easiest change regarding LB might be to
>>>> let the client feed its server-name list into gRPC as Subchannels of a
>>>> Channel to the service VIP, so that the client sends requests to one
>>>> Channel and the requests are well load-balanced. Whenever the client's
>>>> server-name list changes, the client's application-layer code would
>>>> update the Subchannels in gRPC again.
>>>> However, I'm not sure whether that's a doable or acceptable approach that
>>>> doesn't violate the gRPC design.
>>>>
>>>> Managing one Channel for each server name might be a solution, but that
>>>> probably won't work well with the gRPC retry policy, because we want a
>>>> retry request issued by the retry policy to hit another server name from
>>>> the server-name list.
>>>>
>>>> Since the current client-managed server-name approach works in the
>>>> existing environment, we hope we don't have to set up a new role such as
>>>> a lookaside load balancer in the cluster just for the purpose of using
>>>> gRPC. Does this idea make sense?
>>>>
>>>> Thanks a lot!
>>>>
>>>

-- 
You received this message because you are subscribed to the Google Groups 
"grpc.io" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to [email protected].
To view this discussion on the web visit 
https://groups.google.com/d/msgid/grpc-io/CAEUibbts1RpGmskVw5DyyYD-8jTYfLD-PwSJS28gTp2EXqNEjw%40mail.gmail.com.
