lidavidm commented on PR #36517:
URL: https://github.com/apache/arrow/pull/36517#issuecomment-1640845181

   The overview of the callback-based async API is here, but it does not 
explain the finer usage details: 
https://github.com/grpc/proposal/blob/master/L67-cpp-callback-api.md
   
   Their own `std::function`-based implementation takes a completely different 
code path, too, so we can use that to determine 'proper' behavior. Looking at 
the unit tests, they just stack-allocate the callbacks as well.
   
   > the problem is that dropping resources when the async system tells us the 
IO is finished is the logical thing to do
   
   I think this still works; you just can't have gRPC/Flight own the 
listener. That lines up with gRPC opting to take `T*` rather than 
`shared_ptr<T>`. And the apparent `join()` call in gRPC makes sense there: as 
long as the destruction happens somewhere else, the `join()` call will be fine.
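   A minimal, gRPC-free sketch of that ownership split (the `Listener`/`StartAsyncCall` names here are hypothetical stand-ins, not the real gRPC API): the async layer only borrows a raw pointer, and the caller, who owns the listener, waits for completion before letting it be destroyed, which is exactly what makes a stack-allocated listener safe.

   ```cpp
   #include <condition_variable>
   #include <iostream>
   #include <mutex>
   #include <thread>

   // Hypothetical listener interface; the transport only ever borrows a raw pointer.
   class Listener {
    public:
     virtual ~Listener() = default;
     virtual void OnDone() = 0;
   };

   // Stand-in for the transport: takes Listener*, never takes ownership.
   void StartAsyncCall(Listener* listener) {
     std::thread([listener] {
       // ... async I/O would happen here ...
       listener->OnDone();  // last touch of the borrowed pointer
     }).detach();
   }

   class MyListener : public Listener {
    public:
     void OnDone() override {
       std::lock_guard<std::mutex> lock(mutex_);
       done_ = true;
       cv_.notify_one();
     }
     // The "join()": the owner blocks until the transport is finished with us.
     void Wait() {
       std::unique_lock<std::mutex> lock(mutex_);
       cv_.wait(lock, [this] { return done_; });
     }

    private:
     std::mutex mutex_;
     std::condition_variable cv_;
     bool done_ = false;
   };

   int main() {
     MyListener listener;        // stack-allocated, as in gRPC's own unit tests
     StartAsyncCall(&listener);  // transport borrows; caller retains ownership
     listener.Wait();            // destruction only happens after completion
     std::cout << "done\n";
   }
   ```

   The key point is that destruction is sequenced after `OnDone()` by the owner, not by the transport.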
   
   That does make it hard to tie to Future, though: I guess you'd build up a 
chain of callbacks, and the only extant Future instance would be in Flight's 
hands, which is not what we want.
   
   It's worth noting that gRPC doesn't actually care about the callbacks here 
specifically. Instead it cares about that `ClientRpc`/`grpc::ClientContext` 
object, which in normal usage is separate from the callbacks, i.e. the usage 
is meant to be 
   
   ```cpp
   grpc::ClientContext ctx;
   stub->experimental_async()->GetFlightInfo(&ctx, ...);
   // you are responsible for keeping ctx alive until the callback is finished
   ```
   
   This is true of the synchronous API, too. We've just hidden it from the user 
by allocating it as part of the callback we hand to gRPC. We could split this 
out somehow and let the user manage it explicitly (inconvenient, though...).
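   A hedged sketch of what "allocating it as part of the callback" could look like, again with no real gRPC types (`FakeClientContext`, `CallState`, and `StartCall` are illustrative stand-ins): the context rides along in a heap-allocated bundle with the user's callback, and the completion handler deletes the bundle as its last step, so the user never manages the context's lifetime.

   ```cpp
   #include <functional>
   #include <future>
   #include <iostream>
   #include <string>
   #include <thread>

   // Stand-in for grpc::ClientContext: must outlive the call.
   struct FakeClientContext {
     std::string peer = "localhost:1234";
   };

   // Illustrative bundle: the context is allocated alongside the user callback.
   struct CallState {
     FakeClientContext ctx;
     std::function<void(const std::string&)> on_done;
   };

   // Stand-in transport: invokes the completion exactly once, off-thread.
   void StartCall(FakeClientContext* ctx, std::function<void(std::string)> done) {
     std::thread([ctx, done = std::move(done)] {
       done("reply from " + ctx->peer);  // ctx is still alive here
     }).detach();
   }

   void GetFlightInfoAsync(std::function<void(const std::string&)> cb) {
     // Heap-allocate so the context survives until the completion fires...
     auto* state = new CallState{{}, std::move(cb)};
     StartCall(&state->ctx, [state](std::string reply) {
       state->on_done(reply);
       delete state;  // ...then free it as the very last step.
     });
   }

   int main() {
     std::promise<std::string> result;
     GetFlightInfoAsync([&](const std::string& r) { result.set_value(r); });
     std::cout << result.get_future().get() << "\n";
   }
   ```

   Splitting this out would mean handing `CallState` (or just the context) back to the user and making its lifetime their problem, which is the inconvenience noted above.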


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]

Reply via email to