Inline responses

On Thursday, February 14, 2019 at 3:10:45 PM UTC-8, Geoff Groos wrote:
>
> Thanks Carl,
>
> I think the client-server naming is only causing me problems, so instead 
> I'll use the real names, which are optimizer (server above) and simulator 
> (client above). They are more like peers than client/server, because each 
> offers functionality.
>

In gRPC, clients always initiate RPCs, so the two processes are never true 
peers. You can work around this with streaming: the client "primes" the RPC 
by sending a dummy request, and the server then sends responses. The client 
handles each response and replies with further "requests", inverting the 
relationship.  
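
A rough proto sketch of that inversion (the names Attach, SimulatorMessage, 
and OptimizerMessage are illustrative, not part of your API):

```proto
// The simulator opens the stream and sends a priming message; from then
// on, the optimizer's stream messages act as "requests" and the
// simulator's subsequent stream messages act as "responses".
service Optimizer {
  rpc Attach (stream SimulatorMessage) returns (stream OptimizerMessage) {}
}

message SimulatorMessage {
  oneof payload {
    Registration registration = 1;  // the priming/dummy message
    FeatureCeeResponse ceeResponse = 2;
    FeatureDeeResponse deeResponse = 3;
  }
}

message OptimizerMessage {
  oneof payload {
    FeatureCeeRequest ceeRequest = 1;
    FeatureDeeRequest deeRequest = 2;
  }
}
```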

Alternatively, you could combine your client with a local server, advertise 
its ip:port to the actual server, and have that server use its own embedded 
client to call back.  

Not clean, I know, but bidirectional streaming RPCs are the closest thing to 
peers that gRPC can offer.  

 

>
> If it's the case that one process can start a service, have clients connect 
> to it, and then register services that they offer with that server, then 
> you're correct, and I do only need one server. The key is that the client 
> needs to be able to state "I offer this service, and here's how you can send 
> me messages". I'm just not sure how to implement that in gRPC.
>
> I think I did indeed have the terminology correct: I do want multiple 
> servers, each offering one service. The idea is that a simulator would 
> connect to an already-running optimizer, start its own server running a 
> single instance of the service 'EndpointClientMustImplement', bind it 
> with protobuf, then call 'register' on our optimizer with a token that 
> contains details on "here's how you can connect to the service I just 
> started".
>
> The only downside to your suggestion is that it would require 
> multi-threading, because user code would have to call `register`, and then 
> produce two threads (or coroutines) to consume all the messages in both the 
> `featureC` stream and the `featureD` stream. But it does address some of my 
> concerns. 
>

I would not consider threading a serious concern here (and I say that as 
someone who has spent significant time optimizing gRPC).  You will likely 
need to give up the blocking API anyway, which means requests and responses 
can happen on the same thread, keeping a clear synchronization order between 
the two.  
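
For example, with grpc-java's async stub the whole loop can live in one 
callback. This is a hedged sketch, not runnable as-is: computeCee, 
computeDee, and noopObserver are hypothetical helpers, and the stub and 
message names follow the proto sketch later in this thread.

```kotlin
// gRPC serializes the callbacks of a single call, so reacting to a
// streamed request and sending the matching offerFeatureC/offerFeatureD
// call can happen inside the callback itself -- no extra threads needed.
val stub = OurEndpointGrpc.newStub(channel)

stub.register(registration, object : StreamObserver<FeatureCeeOrDeeRequest> {
    override fun onNext(req: FeatureCeeOrDeeRequest) {
        when (req.requestCase) {
            FeatureCeeOrDeeRequest.RequestCase.CEEREQUEST ->
                stub.offerFeatureC(computeCee(req.ceeRequest), noopObserver)
            FeatureCeeOrDeeRequest.RequestCase.DEEREQUEST ->
                stub.offerFeatureD(computeDee(req.deeRequest), noopObserver)
            else -> {} // REQUEST_NOT_SET: ignore
        }
    }
    override fun onError(t: Throwable) { /* surface the failure */ }
    override fun onCompleted() { /* optimizer closed the stream */ }
})
```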
 

>
> Still, I like the elegance of the solution I was asking for: when a 
> client-simulator connects to a server-optimizer, it starts its own service 
> and tells the optimizer to call back to that service using some token.
>
> Can it be done?
>
> On Thursday, 14 February 2019 08:59:35 UTC-8, Carl Mastrangelo wrote:
>>
>> Some comments / questions:
>>
>> 1.  Why doesn't "rpc register" get split into two methods, one per type?  
>> Like "rpc registerCee (CeeRegRequest) returns (CeeRegResponse);"
>>
>> 2.  Being careful with terminology, you have multiple "services" on a 
>> single "server", and the "server" is at one address.   
>>
>> 3.  You can find all services, methods, and types using the reflection 
>> API, typically by adding ProtoReflectionService to your Server.  
>>
>> 4.  BindableService and ServerServiceDefinition are standard and stable 
>> API, you can make them if you want.  The Protobuf generated code makes its 
>> own (and is complicated for other reasons) but you can safely and easily 
>> construct one that you prefer.
>>
>> 5.  Multiple ports is usually for something special, like different 
>> socket options per port, or different security levels.  That is a more 
>> advanced feature less related to API. 
>>
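
On point 4, here is a hedged Kotlin sketch of building a service whose name 
is chosen at runtime, which is the kind of thing BindableService and 
ServerServiceDefinition enable. handleFeatureC is a hypothetical handler; 
the MethodDescriptor, ServerServiceDefinition, and ServerCalls calls are 
grpc-java's public API.

```kotlin
import io.grpc.MethodDescriptor
import io.grpc.ServerServiceDefinition
import io.grpc.protobuf.ProtoUtils
import io.grpc.stub.ServerCalls

// Build a service definition whose fully-qualified name is a runtime
// value, rather than the name baked into generated code.
fun serviceNamed(serviceName: String): ServerServiceDefinition {
    val featureC = MethodDescriptor.newBuilder<FeatureCeeRequest, FeatureCeeResponse>()
        .setType(MethodDescriptor.MethodType.UNARY)
        .setFullMethodName(MethodDescriptor.generateFullMethodName(serviceName, "featureC"))
        .setRequestMarshaller(ProtoUtils.marshaller(FeatureCeeRequest.getDefaultInstance()))
        .setResponseMarshaller(ProtoUtils.marshaller(FeatureCeeResponse.getDefaultInstance()))
        .build()

    return ServerServiceDefinition.builder(serviceName)
        .addMethod(featureC, ServerCalls.asyncUnaryCall { request, responseObserver ->
            responseObserver.onNext(handleFeatureC(request))  // hypothetical handler
            responseObserver.onCompleted()
        })
        .build()
}
```

The resulting definition can be passed to ServerBuilder.addService, so each 
registered simulator could get its own token-derived service name on one 
shared port.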
>> On Wednesday, February 13, 2019 at 10:58:51 AM UTC-8, Geoff Groos wrote:
>>>
>>> Hey everyone
>>>
>>> I'm building an API with gRPC which currently looks like this:
>>>
>>> service OurEndpoint {
>>>    rpc register (RegistrationForFeatureCeeAndDee) returns (stream 
>>> FeatureCeeOrDeeRequest) {}
>>>      
>>>    rpc featureA (FeatureAyeRequest) returns (FeatureAyeResponse) {}
>>>    rpc featureB (FeatureBeeRequest) returns (FeatureBeeResponse) {}
>>>    
>>>    rpc offerFeatureC(FeatureCeeResponse) returns (Confirmation) {}
>>>    rpc offerFeatureD(FeatureDeeResponse) returns (Confirmation) {}
>>>    rpc offerCeeOrDeeFailed(FailureResponse) returns (Confirmation) {}
>>> }
>>>
>>>
>>> message FeatureCeeOrDeeRequest {
>>>     oneof request {
>>>         FeatureDeeRequest deeRequest = 1;
>>>         FeatureCeeRequest ceeRequest = 2;      
>>>     }
>>> }
>>>
>>>
>>> message Confirmation {}
>>>
>>> Note that features A and B are fairly traditional client-driven 
>>> request-response pairs.
>>>
>>> Features C and D are callbacks; the client registers with "I can provide 
>>> answers to C and D; send me a message and I'll call offerFeatureResponse 
>>> as appropriate."
>>>
>>> I don't like this. It makes our application code complex. We effectively 
>>> have to build our own multiplexer for things like offerCeeOrDeeFailed.
>>>
>>> What I'd really rather do is this:
>>>
>>> service OurEndpoint {
>>>    rpc register (RegistrationForFeatureCeeAndDee) returns (Confirmation) 
>>> {}
>>>      
>>>    rpc featureA (FeatureAyeRequest) returns (FeatureAyeResponse) {}
>>>    rpc featureB (FeatureBeeRequest) returns (FeatureBeeResponse) {}  
>>> }
>>> service EndpointClientMustImplement {
>>>    rpc featureC(FeatureCeeRequest) returns (FeatureCeeResponse) {}
>>>    rpc featureD(FeatureDeeRequest) returns (FeatureDeeResponse) {}
>>> }
>>>
>>>
>>> message RegistrationForFeatureCeeAndDee {
>>>    ConnectionToken name = 1;
>>> }
>>>
>>>
>>> message Confirmation {}
>>>
>>>
>>> The problem here is how to go about implementing ConnectionToken and 
>>> its handler. Ideally I'd like some code like this:
>>>
>>> //kotlin, which is on the jvm.
>>> override fun register(request: RegistrationForFeatureCeeAndDee, response
>>> : StreamObserver<Confirmation>) {
>>>    
>>>     //...
>>>    
>>>     val channel: Channel = ManagedChannelBuilder
>>>             .forAddress("localhost", 5551) // a port shared by the service 
>>> handling this very response
>>>             .build()
>>>            
>>>     val stub: EndpointClientMustImplement = EndpointClientMustImplement.
>>> newBuilder()
>>>             .withServiceNameOrSimilar(request.name)
>>>             .build()
>>>            
>>>     //....
>>> }
>>>
>>> What is the best way to go about this?
>>> 1. Can I have multiple servers at a single address?
>>> 2. What's the best way to find a service instance by name at runtime 
>>> rather than by a type-derived (and thus statically bound) name? I 
>>> suspect BindableService and ServerServiceDefinition will help me 
>>> here, but I really don't want to mess with the method-table building, 
>>> and the code-generation system seems opaque. 
>>>
>>> I guess my ideal solution would be to ask the code generator to generate 
>>> code that is open on its service name, ideally via a constructor 
>>> parameter, such that there is no way to instantiate the service without 
>>> specifying its service name.
>>>
>>> Or perhaps there's some other strategy I should be using? I could of 
>>> course specify port numbers and then instantiate gRPC services once per 
>>> port, but that means the number of ports I use is bounded by the number 
>>> of active API users I have, which is very strange.
>>>
>>> Many thanks!
>>>
>>> -Geoff
>>>
>>

-- 
You received this message because you are subscribed to the Google Groups 
"grpc.io" group.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/grpc-io/562cd2b7-fb1e-4ca3-b1a1-a616b58dcc62%40googlegroups.com.