Hi Josh,

I cc'd a few people who might be able to help.

On Friday, December 16, 2016 at 9:09:59 AM UTC-8, Josh Humphries wrote:
>
> I've seen the idea proposed more than once that the generated stubs be
> backed by an interface -- something along the lines of a channel
> <https://github.com/grpc/grpc-java/blob/master/core/src/main/java/io/grpc/Channel.java>.
> Most recently, it was during discussion of client interceptors
> <https://github.com/grpc/grpc-go/issues/240>. It's also come up as a way
> of doing in-process dispatch <https://github.com/grpc/grpc-go/issues/247>
> without having to go through the network stack and (much more importantly)
> serialization and deserialization.
>
> There have been objections to the idea, and I just wanted to understand 
> the rationale behind them. I have a few ideas as to what the arguments for 
> the current approach might be. Is it one or more of these? Or are there 
> other arguments that I am overlooking, or nuance/detail I missed in the
> bullets below?
>
>    1. *Surface Area*. The main argument I can think of is that the API
>    isn't yet sufficiently mature to lock down the interface now. So exporting
>    only a single concrete type to be used by stubs makes the API surface area
>    smaller, allowing more flexibility in changes later. To me, this implies
>    that introduction of such an interface is an option in the future. (I
>    don't particularly agree with this argument, since the interface surface
>    area could have been exposed *instead of* the existing grpc.Invoke and
>    grpc.NewClientStream methods -- see the sketch after this list.)
>    2. *Overhead*. It could be argued that the level of indirection
>    introduced by the use of an interface could be too much overhead. If this
>    is the case, I'd really like to see a benchmark that demonstrates it. It
>    seems hard to imagine that a single interface vtable-dispatch would be
>    measurable overhead considering what else happens in the course of a call.
>    (Perhaps my imagination is broken...)
>    3. *Complexity*. I suppose it might be argued that introducing another
>    type, such as a channel interface, complicates the library and the
>    existing flow. I happen to *strongly* disagree with such an argument. I
>    think the interface could be added in a fairly painless way that would
>    still support older generated code. This was described in this document
>    <https://docs.google.com/document/d/1weUMpVfXO2isThsbHU8_AWTjUetHdoFe6ziW0n5ukVg/edit#>.
>    But were this part of the objection, I'd like to hear more.
>
>
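> To make the shape concrete, here is a minimal sketch (in Go) of what such
> an interface might look like. The name Channel, the package name, and the
> exact method set are my own illustrative assumptions -- they simply mirror
> the existing grpc.Invoke and grpc.NewClientStream signatures, minus the
> *ClientConn parameter, and are not an actual grpc-go API:
>
>     package rpcchan // hypothetical package, for illustration only
>
>     import (
>         "context"
>
>         "google.golang.org/grpc"
>     )
>
>     // Channel is a hypothetical interface that generated stubs could
>     // depend on instead of the concrete *grpc.ClientConn. Because its
>     // methods mirror grpc.Invoke and grpc.NewClientStream, a thin
>     // adapter around *grpc.ClientConn could satisfy it directly.
>     type Channel interface {
>         // Invoke performs a unary RPC, blocking until the response is
>         // written into reply or an error occurs.
>         Invoke(ctx context.Context, method string, args, reply interface{},
>             opts ...grpc.CallOption) error
>
>         // NewStream begins a streaming RPC described by desc.
>         NewStream(ctx context.Context, desc *grpc.StreamDesc, method string,
>             opts ...grpc.CallOption) (grpc.ClientStream, error)
>     }
>
> Generated stubs would then accept a Channel rather than a *grpc.ClientConn,
> so an interceptor chain, an in-process dispatcher, or a test fake could all
> back the same stub.
>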
> For context: I have some ideas I want to build for other kinds of stubs --
> like providing special stubs that make batch streaming calls look like just
> issuing a bunch of unary RPCs, or making a single bidi-stream
> conversation resemble a sequence of normal service calls (for some other
> service) that happen to be pinned to the same stream.
>
> All of these currently require *non-trivial code generation* -- either
> specialized to the use, or I just provide my own interface-based dispatch 
> and build all of these things on top of that. But it feels like a 
> fundamental hole in the existing APIs that I cannot do this already.
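>
> As one illustration of what the interface would unlock: a cross-cutting
> decorator could be written once against the hypothetical Channel sketched
> above and reused with every service's stub, with no per-service code
> generation. Continuing the same sketch (add "log" to the imports; none of
> these names are real grpc-go API):
>
>     // LoggingChannel is an illustrative decorator. It wraps any Channel
>     // and logs each method name before delegating, so it composes with
>     // any stub generated against the interface.
>     type LoggingChannel struct {
>         Next Channel // the underlying channel being decorated
>     }
>
>     func (l *LoggingChannel) Invoke(ctx context.Context, method string,
>         args, reply interface{}, opts ...grpc.CallOption) error {
>         log.Printf("unary call: %s", method)
>         return l.Next.Invoke(ctx, method, args, reply, opts...)
>     }
>
>     func (l *LoggingChannel) NewStream(ctx context.Context,
>         desc *grpc.StreamDesc, method string,
>         opts ...grpc.CallOption) (grpc.ClientStream, error) {
>         log.Printf("open stream: %s", method)
>         return l.Next.NewStream(ctx, desc, method, opts...)
>     }
>
> If generated constructors accepted the interface -- e.g.
> pb.NewFooClient(&LoggingChannel{Next: conn}), where NewFooClient and conn
> are stand-ins -- the decorator would apply to unary and streaming calls
> alike.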
>
> The Java implementation has a layered architecture with Stubs on top,
> Transports on the bottom, and Channel in between. The Go implementation
> exposes nothing along the lines of a channel, instead combining it with the
> transport into a single clientConn. This is incredibly limiting.
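>
> To show why that middle layer matters, here is a hedged sketch of an
> in-process implementation of the hypothetical Channel above: unary calls
> dispatch straight to locally registered handlers, never touching the
> transport or serialization. It also imports "google.golang.org/grpc/codes";
> again, every name is illustrative:
>
>     // InProcChannel is an illustrative in-process Channel. It dispatches
>     // unary calls directly to registered handler functions, skipping the
>     // network stack and serialization entirely. Streaming is omitted for
>     // brevity.
>     type InProcChannel struct {
>         // handlers maps full method names (e.g. "/pkg.Service/Method")
>         // to functions that fill in reply from args.
>         handlers map[string]func(ctx context.Context, args, reply interface{}) error
>     }
>
>     func (c *InProcChannel) Invoke(ctx context.Context, method string,
>         args, reply interface{}, opts ...grpc.CallOption) error {
>         h, ok := c.handlers[method]
>         if !ok {
>             return grpc.Errorf(codes.Unimplemented, "no handler for %s", method)
>         }
>         return h(ctx, args, reply)
>     }
>
>     func (c *InProcChannel) NewStream(ctx context.Context,
>         desc *grpc.StreamDesc, method string,
>         opts ...grpc.CallOption) (grpc.ClientStream, error) {
>         return nil, grpc.Errorf(codes.Unimplemented, "streaming not sketched")
>     }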
>
> *----*
> *Josh Humphries*
> Software Engineer
> *[email protected] <javascript:>*
>
