I would not emulate the benchmark code, except for the fact that it is 
async. The benchmark code is heavily tuned to the usage pattern of 
benchmark work, which may not be ideal for your use case.

From your work description, it sounds like you want to make the workers 
be servers and the master be a client. The client contacts a worker with 
a work unit, and the worker responds to it; that is the unary usage. If 
you want each worker to be working on at most n things at a time, you can 
keep a map of gRPC stubs to work items. When a stub returns false for 
CallStreamObserver.isReady(), you can register a callback 
(setOnReadyHandler()) on the observer to re-add itself to the map of 
available workers. That way you limit each worker to at most n work 
items, and you only hand new work to workers that are not applying 
flow-control push-back. A rough sketch of that wiring is below.
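
Here is a rough sketch of that idea. To keep it short I use a queue of 
available stubs rather than a map (so each worker takes one item at a 
time; to allow up to n per worker you could enqueue each stub n times). 
WorkServiceGrpc, WorkRequest, and WorkReply are made-up generated classes 
for a hypothetical unary rpc Process(WorkRequest) returns (WorkReply); 
substitute your own:

import io.grpc.ManagedChannel;
import io.grpc.ManagedChannelBuilder;
import io.grpc.stub.ClientCallStreamObserver;
import io.grpc.stub.ClientResponseObserver;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.atomic.AtomicBoolean;

public final class WorkerPool {
  // Workers that currently have capacity.
  private final BlockingQueue<WorkServiceGrpc.WorkServiceStub> available =
      new LinkedBlockingQueue<>();

  public WorkerPool(Iterable<String> workerTargets) {
    for (String target : workerTargets) {
      ManagedChannel channel =
          ManagedChannelBuilder.forTarget(target).usePlaintext().build();
      available.add(WorkServiceGrpc.newStub(channel));
    }
  }

  // Blocks until some worker has capacity, then sends it one work item.
  public void submit(WorkRequest request) throws InterruptedException {
    final WorkServiceGrpc.WorkServiceStub stub = available.take();
    // onReady can fire more than once per call, so guard the re-add so
    // the stub goes back into the pool exactly once.
    final AtomicBoolean returned = new AtomicBoolean(false);
    stub.process(request, new ClientResponseObserver<WorkRequest, WorkReply>() {
      @Override
      public void beforeStart(ClientCallStreamObserver<WorkRequest> call) {
        // Fires when the transport stops applying flow-control push-back;
        // at that point this worker can accept more work.
        call.setOnReadyHandler(() -> {
          if (call.isReady() && returned.compareAndSet(false, true)) {
            available.offer(stub);
          }
        });
      }

      @Override public void onNext(WorkReply reply) { /* collect the result */ }

      @Override public void onError(Throwable t) {
        if (returned.compareAndSet(false, true)) { available.offer(stub); }
      }

      @Override public void onCompleted() {
        if (returned.compareAndSet(false, true)) { available.offer(stub); }
      }
    });
  }
}

The AtomicBoolean guard matters because a stub that got offered back 
twice for one call would quietly raise that worker's effective limit.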

I am somewhat guessing at what you want from your description, and I 
don't know the specifics of your problem, but async stub usage sounds 
right for you.
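
For the round-robin part of your question: if the channel's name resolver 
returns all the worker addresses (DNS, or the Kubernetes-style resolver 
from the example you linked), a single channel with the round_robin 
policy spreads unary calls across backends, and the future stub gives you 
the "some form of future" you asked about. Another rough sketch with the 
same made-up WorkService types (defaultLoadBalancingPolicy is the 
string-based way to pick round robin in recent grpc-java versions):

import com.google.common.util.concurrent.FutureCallback;
import com.google.common.util.concurrent.Futures;
import com.google.common.util.concurrent.ListenableFuture;
import com.google.common.util.concurrent.MoreExecutors;
import io.grpc.ManagedChannel;
import io.grpc.ManagedChannelBuilder;

public final class RoundRobinClient {
  public static void main(String[] args) {
    // The target must resolve to all worker addresses for round_robin
    // to have anything to balance over.
    ManagedChannel channel = ManagedChannelBuilder
        .forTarget("dns:///workers.example.com:50051")
        .defaultLoadBalancingPolicy("round_robin")
        .usePlaintext()
        .build();

    WorkServiceGrpc.WorkServiceFutureStub stub =
        WorkServiceGrpc.newFutureStub(channel);

    // Each unary call is a separate RPC, so each one may land on a
    // different backend.
    ListenableFuture<WorkReply> future =
        stub.process(WorkRequest.getDefaultInstance());
    Futures.addCallback(future, new FutureCallback<WorkReply>() {
      @Override public void onSuccess(WorkReply reply) { /* use the result */ }
      @Override public void onFailure(Throwable t) { /* handle failure */ }
    }, MoreExecutors.directExecutor());
  }
}

Note this is per-RPC balancing only; nothing here limits in-flight work 
per backend, so combine it with the flow-control trick above if you need 
throttling.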

On Friday, March 9, 2018 at 12:21:46 PM UTC-8, [email protected] wrote:
>
> Hey Carl, thank you for responding! I'm really liking gRPC!
>
> I want to distribute work from one service to multiple backend services. 
> It's computational geometry queries, so the work can take a little time and 
> the client would benefit from passing the work out to worker services. I 
> can see now that streaming is not the way to do that, as a stream is 
> pinned to one backend (it has to be, to guarantee ordering). 
>
> So to distribute work to multiple backends, can I use an async stub and 
> some form of future? That way I send work out and it's round-robin 
> load balanced. Is this benchmark code the one I want to emulate? Are 
> there other places with documentation for this kind of work?
>
>
> https://github.com/grpc/grpc-java/blob/master/benchmarks/src/main/java/io/grpc/benchmarks/qps/AsyncClient.java
>
> Thanks for the help!
> D
>
> On Thursday, January 4, 2018 at 4:42:05 PM UTC-8, Carl Mastrangelo wrote:
>>
>> To clarify: are you asking about doing streaming RPCs to multiple 
>> backends? If so, different RPCs (each of which consists of multiple 
>> messages) may be sent to different backends. Once a streaming RPC is 
>> started, it is pinned to a particular backend and will not change. 
>>
>>
>>
>> On Friday, December 22, 2017 at 6:13:02 AM UTC-8, [email protected] 
>> wrote:
>>>
>>>
>>> I've successfully used the Manual Flow Control example 
>>> <https://github.com/grpc/grpc-java/tree/master/examples/src/main/java/io/grpc/examples/manualflowcontrol> 
>>> to make streaming async requests from one client to one server. I have 
>>> also been successful using the Kubernetes load balancer example 
>>> <https://github.com/saturnism/grpc-java-by-example/tree/master/kubernetes-lb-example/echo-client-lb-api/src/main/java/com/example/grpc/client> 
>>> to allow one client to make round-robin blocking stub requests to 
>>> multiple servers. But I'm not able to combine the two examples to allow 
>>> one client to make async requests to multiple servers. Is the manual 
>>> flow control example by its nature a single-server technique, since it 
>>> uses a ClientResponseObserver relationship?
>>>
>>> What would be the proper way for me to throttle my streaming async 
>>> requests to multiple servers?
>>>
>>>
