li-leyang commented on PR #5160:
URL: https://github.com/apache/hadoop/pull/5160#issuecomment-1353840572

   > > For cases where `KeyProviderCache` is not used, I agree that we can
   > > swallow the exception and not add the shutdown hook.
   >
   > Right now we skip adding the shutdown hook regardless of whether or not
   > `KeyProviderCache` is actually used. Will this be an issue? There is a chance
   > that, during the execution of shutdown hooks, we add something to the cache
   > which will never be closed. Is this an acceptable scenario?
   >
   > If it's not, we can make `KeyProviderCache` non-functional when the shutdown
   > hook couldn't be added (set a flag inside the cache that rejects future calls
   > to `get()`, or something along those lines). But obviously this adds
   > complexity. Do you think it is necessary @li-leyang?
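   To make the proposed guard concrete, a minimal sketch (the flag, the hook
priority, and the `get()` shape are illustrative only, not existing
`KeyProviderCache` members):

   ```java
   import org.apache.hadoop.crypto.key.KeyProvider;
   import org.apache.hadoop.util.ShutdownHookManager;

   // Illustrative only: a "reject get() if the shutdown hook could not be
   // registered" guard. Field names, priority, and get() signature are
   // assumptions, not the real KeyProviderCache API.
   class GuardedKeyProviderCacheSketch {
     private volatile boolean shutdownHookRegistered;

     GuardedKeyProviderCacheSketch() {
       try {
         // ShutdownHookManager throws IllegalStateException if the JVM is
         // already shutting down, which is the case being discussed here.
         ShutdownHookManager.get().addShutdownHook(this::closeAll, 1);
         shutdownHookRegistered = true;
       } catch (IllegalStateException e) {
         shutdownHookRegistered = false;
       }
     }

     KeyProvider get(String providerUri) {
       if (!shutdownHookRegistered) {
         // Refuse to cache anything we would never be able to close.
         throw new IllegalStateException(
             "KeyProviderCache disabled: shutdown hook was not registered");
       }
       // ... normal lookup / load of the KeyProvider would go here ...
       return null;
     }

     private void closeAll() {
       // close and evict all cached KeyProvider instances
     }
   }
   ```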
   
   I think this should be fine. The `KeyProviderCache` also has an expiration
mechanism, so stale entries will be cleared eventually. This change was
introduced to fix a bug where each `DFSClient` instance closes the global
`KeyProviderCache`.
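   For reference, the expiration mechanism mentioned above is essentially the
Guava expire-after-access pattern; a simplified sketch, assuming a Guava
`CacheBuilder` with a removal listener that closes evicted providers (not the
actual class; the timeout and listener wiring are illustrative):

   ```java
   import java.io.IOException;
   import java.util.concurrent.TimeUnit;

   import com.google.common.cache.Cache;
   import com.google.common.cache.CacheBuilder;
   import com.google.common.cache.RemovalListener;

   import org.apache.hadoop.crypto.key.KeyProvider;

   // Simplified sketch: entries not touched within the expiry window are
   // evicted, and the removal listener closes them, so providers do not
   // linger forever even without a shutdown hook. (Guava performs this
   // cleanup lazily, during other cache operations.)
   class ExpiringProviderCacheSketch {
     private final Cache<String, KeyProvider> cache;

     ExpiringProviderCacheSketch(long expiryMs) {
       this.cache = CacheBuilder.newBuilder()
           .expireAfterAccess(expiryMs, TimeUnit.MILLISECONDS)
           .removalListener((RemovalListener<String, KeyProvider>) notification -> {
             try {
               notification.getValue().close();
             } catch (IOException e) {
               // best effort: nothing useful to do if close() fails here
             }
           })
           .build();
     }

     Cache<String, KeyProvider> cache() {
       return cache;
     }
   }
   ```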


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org

