Hi there,

This is a follow-up to our discussion on the PR; I think it would be better 
to move the discussion here.

The intention with the recent and forthcoming PRs 
(https://github.com/grpc/grpc/pull/16302, 
https://github.com/grpc/grpc/pull/16414, 
https://github.com/grpc/grpc/pull/16492) is 
to support a callback-based C++ async API, which has been a user request 
since almost day 1 of the gRPC alpha release. Making this fully feasible, 
however, is an ongoing effort that will substantially change iomgr, 
codegen, and the C++ language binding, and will ultimately change how 
coders use the API. The core surface for this has the interface of a 
completion queue (but one that doesn't actually have any queue) and does 
not use next. Instead, callbacks get invoked when operation batches 
complete, which is typically realized with the aid of the iomgr (except 
for non-polling transports, like inproc). We are moving toward providing 
an iomgr in OSS that will allow callbacks to be triggered in a separate 
threadpool, independently of application control.
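To make the model concrete, here is a deliberately simplified Python sketch 
of the difference (every name in it is made up for illustration and is not 
the real core API): a worker thread plays the role of the iomgr threadpool, 
draining completed operation batches and firing the registered callback for 
each, so the application never calls next itself.

```python
import queue
import threading

class ToyCompletionQueue:
    """Illustrative stand-in for a core completion queue (NOT the real API)."""
    def __init__(self):
        self._q = queue.Queue()

    def next(self):
        # Blocks until a finished operation batch is available,
        # analogous to grpc_completion_queue_next.
        return self._q.get()

    def batch_complete(self, tag, ok):
        # Called by the "transport" when a batch finishes.
        self._q.put((tag, ok))

class CallbackCompletionQueue:
    """Has the interface of a completion queue, but there is no user-visible
    queue and no next(): registered callbacks fire when batches complete."""
    def __init__(self, cq):
        self._cq = cq
        self._callbacks = {}
        self._done = threading.Event()
        # Plays the role of the iomgr threadpool that triggers callbacks
        # independently of application control.
        threading.Thread(target=self._drain, daemon=True).start()

    def register(self, tag, callback):
        self._callbacks[tag] = callback

    def shutdown(self):
        self._cq.batch_complete(None, False)  # sentinel stops the drain loop
        self._done.wait()

    def _drain(self):
        while True:
            tag, ok = self._cq.next()
            if tag is None:
                self._done.set()
                return
            self._callbacks.pop(tag)(ok)

if __name__ == "__main__":
    cq = ToyCompletionQueue()
    cbq = CallbackCompletionQueue(cq)
    results = []
    cbq.register("call-1", lambda ok: results.append(("call-1", ok)))
    cq.batch_complete("call-1", True)  # pretend the transport finished a batch
    cbq.shutdown()
    print(results)  # [('call-1', True)]
```

The point of the sketch is only the inversion of control: with next, the 
application thread pulls tags; with callbacks, a library-owned thread pushes 
completions to user code.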

It may become possible to use this in other language bindings as well, and 
perhaps it could be suitable for interfacing with another async library; 
that is an interest of ours but not a priority, so we'd certainly welcome 
any input or contributions. When this is a little less experimental, I can 
discuss it in one of our biweekly video meetups and collect feedback there. 
I will also propose it officially through our gRFC process when we are 
ready to consider it for stabilization. In the meanwhile, feel free to kick 
the tires, and we'll keep watching this thread and our issues to gather any 
early input.
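To illustrate what "interfacing with another async library" could look 
like, here is a hedged Python/asyncio sketch (all names are invented for 
illustration; none of this is the real core or any proposed API): the 
blocking next-style drain runs on a helper thread, and results are handed 
back to the event loop via call_soon_threadsafe, so the loop itself never 
blocks.

```python
import asyncio
import queue
import threading

class ToyCQ:
    """Illustrative stand-in for a core completion queue (NOT the real API)."""
    def __init__(self):
        self._q = queue.Queue()

    def next(self):  # blocking, like grpc_completion_queue_next
        return self._q.get()

    def complete(self, tag, ok):
        self._q.put((tag, ok))

class AsyncioBridge:
    """Drains the blocking queue on a helper thread and resolves asyncio
    futures on the loop thread, so the event loop never blocks."""
    def __init__(self, cq, loop):
        self._cq = cq
        self._loop = loop
        self._futures = {}
        threading.Thread(target=self._drain, daemon=True).start()

    def start_op(self, tag):
        fut = self._loop.create_future()
        self._futures[tag] = fut
        return fut

    def _drain(self):
        while True:
            tag, ok = self._cq.next()
            if tag is None:
                return
            fut = self._futures.pop(tag)
            # Futures must be completed from the loop thread.
            self._loop.call_soon_threadsafe(fut.set_result, ok)

async def main():
    cq = ToyCQ()
    bridge = AsyncioBridge(cq, asyncio.get_running_loop())
    fut = bridge.start_op("rpc-1")
    cq.complete("rpc-1", True)  # pretend the transport finished the RPC
    ok = await fut              # the loop stays free while waiting
    cq.complete(None, False)    # sentinel stops the drain thread
    return ok

if __name__ == "__main__":
    print(asyncio.run(main()))  # True
```

A callback-based core surface would let a wrapper skip the helper thread 
entirely and post straight to the loop from the callback.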

Regards,
vjpai

On Tuesday, August 21, 2018 at 4:20:48 PM UTC-7, Pau Freixes wrote:
>
> Thanks for the summary, 
>
> I'm starting an initiative to analyze how gRPC can be implemented on 
> top of asyncio. One of the starting points is the current 
> implementation of grpc-node, which relies on grpc_completion_queue_next 
> [1] to achieve the needed cooperation without blocking the loop. 
>
> Using queue_next and replicating the pattern implemented by grpc-node 
> raises several concerns; what worries me most is the chance of blocking 
> the loop in calls to the grpc_completion_queue_next function [2]. 
> So I was wondering if I might implement the asynchrony using a simple 
> callback pattern instead, thereby avoiding implicitly blocking calls. 
>
> I will contact the author of the PR to get more info about the goal of 
> the callback variant; at least the name sounds appealing to me :). 
> Also, I could consider implementing the asynchronous layer on top of 
> the grpc_completion_queue_pluck interface, so any advice will be 
> welcome. 
>
> PS: unfortunately, replicating the Node approach to implement asyncio 
> support might run into red flags that cannot be circumvented. For 
> example, gRPC implements its I/O manager on top of libuv, automatically 
> achieving cooperation between the gRPC code and the Node code; this is 
> not portable to the asyncio use case. 
>
>
> [1] 
> https://github.com/grpc/grpc-node/blob/master/packages/grpc-native-core/ext/completion_queue.cc#L43
>  
> [2] 
> https://github.com/grpc/grpc/blob/master/src/core/lib/surface/completion_queue.cc#L1030
>  
> On Tue, Aug 21, 2018 at 11:39 PM 'Christopher Warrington - MSFT' via 
> grpc.io <[email protected]> wrote: 
> > 
> > On Tuesday, August 21, 2018 at 3:28:24 AM UTC-7, Pau Freixes wrote: 
> > 
> > > I've realized reading the current Grpc code that exists [1] other 
> > > alternatives to the completion queue next, the pluck and the callback 
> > > one. 
> > > 
> > > I've been trying to seek some information, or usage, of these 
> > > alternatives and I found nothing. 
> > 
> > There's some documentation in grpc.h [1] for 
> > grpc_completion_queue_pluck. Most of the documentation for the core C 
> > library is in grpc.h. The difference is that grpc_completion_queue_pluck 
> > takes a tag to wait for, while grpc_completion_queue_next does not: it 
> > returns some ready tag, whatever it may be. 
> > 
> > grpc_completion_queue_pluck is often used to implement synchronous 
> > processing, while grpc_completion_queue_next is used to implement 
> > asynchronous processing. 
> > 
> > The callback variant is very new and is still experimental. That's 
> > likely why it is lacking documentation. From vjpai's pull request that 
> > added the initial implementation [2]: 
> > 
> > vjpai > This is not ready for public use at the current time. There are no 
> > vjpai > end2end tests possible until after #16298 lands and is implemented 
> > vjpai > with a real backing poller, but there is now a unit test added. 
> > 
> > [1]: 
> https://github.com/grpc/grpc/blob/82bc60c0e13bfb00213b3a94ba72893d044e4c9a/include/grpc/grpc.h#L115-L140
>  
> > [2]: https://github.com/grpc/grpc/pull/16302 
> > 
> > -- 
> > Christopher Warrington 
> > Microsoft Corp. 
> > 
> > -- 
> > You received this message because you are subscribed to the Google 
> > Groups "grpc.io" group. 
> > To unsubscribe from this group and stop receiving emails from it, send 
> > an email to [email protected]. 
> > To post to this group, send email to [email protected]. 
> > Visit this group at https://groups.google.com/group/grpc-io. 
> > To view this discussion on the web visit 
> > https://groups.google.com/d/msgid/grpc-io/020b4deb-177f-40f2-9943-c36c4c090670%40googlegroups.com. 
> > For more options, visit https://groups.google.com/d/optout. 
>
>
>
> -- 
> --pau 
>

