westonpace commented on issue #15054:
URL: https://github.com/apache/arrow/issues/15054#issuecomment-1402270725

   > How would that work in this case? Something which keeps the AWS SDK alive 
until after all S3 clients have been destroyed?
   
   * S3's finalize would be guarded by a `shared_ptr` (or some other kind of reference count). One copy would live with each thread pool (as a resource), and one copy would live with `S3Finalize`.
   * Python's atexit would call `S3Finalize`, which would drop its copy and so decrement the reference count by one.
   * As the process closes, the thread pool is shut down as part of the normal destruction of static state. Once it finishes, it clears its "resources", which include the S3 finalizer.
   * The reference count would now drop to 0, and it should be safe to finalize S3 (presumably the user is done because we have hit Python atexit, and presumably Arrow is done because the thread pools are shut down).
   * We would then call `Aws::ShutdownAPI` in the destructor of the `shared_ptr`.
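   The scheme above can be sketched with a `std::shared_ptr` carrying a custom deleter. This is only an illustration, not Arrow's actual code: `StubShutdownAPI`, `MakeFinalizeToken`, and `FinalizeToken` are hypothetical names, and a stub stands in for the real `Aws::ShutdownAPI` so the sketch is self-contained.

   ```cpp
   #include <cstdio>
   #include <memory>
   #include <vector>

   // Stub standing in for Aws::ShutdownAPI (the real call lives in the AWS SDK).
   static bool sdk_shut_down = false;
   void StubShutdownAPI() { sdk_shut_down = true; }

   // Token whose deleter finalizes the SDK once the last holder releases it.
   // The stored pointer is irrelevant; only the reference count matters.
   using FinalizeToken = std::shared_ptr<void>;

   FinalizeToken MakeFinalizeToken() {
     return FinalizeToken(nullptr, [](void*) { StubShutdownAPI(); });
   }

   int main() {
     std::vector<FinalizeToken> thread_pool_resources;  // one copy per thread pool
     FinalizeToken s3_finalize_copy;                    // copy held by S3Finalize

     {
       FinalizeToken token = MakeFinalizeToken();
       thread_pool_resources.push_back(token);  // thread pool keeps the SDK alive
       s3_finalize_copy = token;                // S3Finalize keeps the SDK alive
     }

     // Python atexit calls S3Finalize, dropping one reference; the thread
     // pool still holds a copy, so the SDK is not shut down yet.
     s3_finalize_copy.reset();
     std::printf("after S3Finalize: shut down = %d\n", sdk_shut_down);

     // Thread pool teardown clears its resources; the count hits zero and
     // the deleter (i.e. Aws::ShutdownAPI in the real design) runs.
     thread_pool_resources.clear();
     std::printf("after pool teardown: shut down = %d\n", sdk_shut_down);
   }
   ```

   The key design point is that whichever side finishes last, Python's atexit or the static thread-pool teardown, triggers the shutdown, so neither has to know about the other's lifetime.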


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]

Reply via email to