By executing, do you mean you are calling methods on the futures stub?  If so, 
the completion of the future, and other callbacks, are executed on the 
executor provided to the channel when it was created (you can also fork the 
stub with your own executor).  gRPC will always complete the future on the 
executor passed in, since there may be requirements (like the presence of 
thread locals, etc.) on that executor.  If you are worried about the app 
taking too long on one of the threads *you* provided to gRPC, you can 
always ask the application to provide you with an executor.  If that is not 
possible, and you don't particularly care about threading overhead, you can 
use a cached thread pool, which will bring new threads into existence if the 
app blocks for too long.  (Cached is also the default for gRPC itself, 
for the same reason.) 
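To sketch the handoff pattern described above: this is plain-JDK code, not gRPC; `CompletableFuture` stands in for the `ListenableFuture` a futures stub returns, and `appExecutor` is a hypothetical application-provided executor. The point is that the callback runs on the executor you supply, so the completing thread is handed back right away, and a cached pool grows if a callback blocks.

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class CallbackExecutorDemo {

    // Completes the "RPC" future, then handles the result on a separate,
    // application-provided executor rather than on the completing thread.
    static String handle() throws Exception {
        // A cached pool brings new threads into existence on demand, so one
        // slow callback does not starve the others (mirrors gRPC's default).
        ExecutorService appExecutor = Executors.newCachedThreadPool();
        try {
            // Stand-in for the future a gRPC futures stub would return.
            CompletableFuture<String> rpcFuture = new CompletableFuture<>();

            // The callback is scheduled on appExecutor, not on whatever
            // thread happens to complete the future.
            CompletableFuture<String> handled = rpcFuture.thenApplyAsync(
                resp -> Thread.currentThread().getName() + " handled: " + resp,
                appExecutor);

            rpcFuture.complete("response");
            return handled.get();
        } finally {
            appExecutor.shutdown();
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println(handle());
    }
}
```

With an actual gRPC futures stub, the equivalent move is to pass your executor as the third argument of Guava's `Futures.addCallback(future, callback, executor)`.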

On Thursday, February 21, 2019 at 1:08:08 PM UTC-8, [email protected] wrote:
>
> If, as a library, you're executing gRPC future calls on behalf of an 
> application, is there an issue with just using the gRPC callback threads? 
> Would there be any need to *transfer* work back to an application-provided 
> thread to return the gRPC thread to its pool?  As a library there's really 
> no control over what the application may do or how long it will use that 
> thread. 
>
> Thanks.
>

-- 
You received this message because you are subscribed to the Google Groups 
"grpc.io" group.
Visit this group at https://groups.google.com/group/grpc-io.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/grpc-io/e79e6b9e-6127-46ab-9905-af130d39abcc%40googlegroups.com.
