[
https://issues.apache.org/jira/browse/HADOOP-4829?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12705166#action_12705166
]
Todd Lipcon commented on HADOOP-4829:
-------------------------------------
Would a new configuration boolean 'fs.manual.shutdown' be adequate for your
needs? You could programmatically set this before getting your filesystems, and
when set to true, the shutdown hook would not be added.
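A minimal sketch of the idea, using a toy stand-in class rather than Hadoop's actual FileSystem (the 'fs.manual.shutdown' key is only a proposed name, here modeled as a constructor boolean):

```java
import java.util.HashMap;
import java.util.Map;

// Toy stand-in for FileSystem's cache: the shutdown hook is only
// registered when manual shutdown has NOT been requested.
class ToyFileSystemCache {
    private final Map<String, String> cache = new HashMap<>();
    private Thread shutdownHook; // stays null when manual shutdown is requested

    ToyFileSystemCache(boolean manualShutdown) {
        if (!manualShutdown) {
            // Mirrors the automatic cleanup: close everything at JVM exit.
            shutdownHook = new Thread(this::closeAll);
            Runtime.getRuntime().addShutdownHook(shutdownHook);
        }
    }

    boolean hookRegistered() { return shutdownHook != null; }

    void closeAll() { cache.clear(); }
}

public class ManualShutdownSketch {
    public static void main(String[] args) {
        ToyFileSystemCache auto = new ToyFileSystemCache(false);
        ToyFileSystemCache manual = new ToyFileSystemCache(true);
        System.out.println("auto hook: " + auto.hookRegistered());
        System.out.println("manual hook: " + manual.hookRegistered());
        // A server application would call closeAll() itself when ready.
        manual.closeAll();
    }
}
```

A server would set the flag before the first FileSystem.get() call, since the hook is registered when the cache is first populated.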
> Allow FileSystem shutdown hook to be disabled
> ---------------------------------------------
>
> Key: HADOOP-4829
> URL: https://issues.apache.org/jira/browse/HADOOP-4829
> Project: Hadoop Core
> Issue Type: New Feature
> Components: fs
> Affects Versions: 0.18.1
> Reporter: Bryan Duxbury
> Priority: Minor
>
> FileSystem sets a JVM shutdown hook so that it can clean up the FileSystem
> cache. This is great behavior when you are writing a client application, but
> when you're writing a server application, like the Collector or an HBase
> RegionServer, you need to control the shutdown of the application and HDFS
> much more closely. If you set your own shutdown hook, there's no guarantee
> that your hook will run before the HDFS one, preventing you from taking some
> shutdown actions.
> The current workaround I've used is to snag the FileSystem shutdown hook via
> Java reflection, disable it, and then run it on my own schedule. I'd really
> appreciate not having to take this hacky approach. It seems like the right
> way to go about this is just to add a method to disable the hook directly
> on FileSystem. That way, server applications can elect to disable the
> automatic cleanup and just call FileSystem.closeAll themselves when the time
> is right.
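The reflection workaround described above might look like the following. This uses a fake stand-in class instead of the real FileSystem, and assumes the hook is held in a private static field (named "clientFinalizer" in FileSystem at the time, though that is an internal detail and subject to change):

```java
import java.lang.reflect.Field;

// Stand-in for org.apache.hadoop.fs.FileSystem: the shutdown hook is
// assumed to live in a private static field named "clientFinalizer".
class FakeFileSystem {
    private static final Thread clientFinalizer = new Thread(FakeFileSystem::closeAll);
    static boolean closed = false;
    static { Runtime.getRuntime().addShutdownHook(clientFinalizer); }
    static void closeAll() { closed = true; }
}

public class HookStealer {
    public static void main(String[] args) throws Exception {
        // Snag the private hook via reflection...
        Field f = FakeFileSystem.class.getDeclaredField("clientFinalizer");
        f.setAccessible(true);
        Thread hook = (Thread) f.get(null);
        // ...unregister it so the JVM will no longer run it at exit...
        boolean removed = Runtime.getRuntime().removeShutdownHook(hook);
        System.out.println("removed: " + removed);
        // ...then run it on our own schedule, after our shutdown work is done.
        hook.run();
        System.out.println("closed: " + FakeFileSystem.closed);
    }
}
```

The fragility is clear: the field name and type are private implementation details, which is why a supported disable method (or configuration flag) on FileSystem would be preferable.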