[ https://issues.apache.org/jira/browse/HADOOP-4829?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Todd Lipcon updated HADOOP-4829:
--------------------------------

    Attachment: HADOOP-4829-0.18.3.patch

Here's a patch against branch 18 if anyone wants this feature backported. However, it probably does not need to be committed to the Apache 18 branch, since it's a new feature rather than a fix.

> Allow FileSystem shutdown hook to be disabled
> ---------------------------------------------
>
>                 Key: HADOOP-4829
>                 URL: https://issues.apache.org/jira/browse/HADOOP-4829
>             Project: Hadoop Core
>          Issue Type: New Feature
>          Components: fs
>    Affects Versions: 0.18.1
>            Reporter: Bryan Duxbury
>            Priority: Minor
>         Attachments: HADOOP-4829-0.18.3.patch, hadoop-4829-v2.txt, hadoop-4829.txt
>
>
> FileSystem sets a JVM shutdown hook so that it can clean up the FileSystem cache. This is great behavior when you are writing a client application, but when you're writing a server application, like the Collector or an HBase RegionServer, you need to control the shutdown of the application and HDFS much more closely. If you set your own shutdown hook, there's no guarantee that your hook will run before the HDFS one, preventing you from taking some shutdown actions.
> The current workaround I've used is to snag the FileSystem shutdown hook via Java reflection, disable it, and then run it on my own schedule (sketched after this message). I'd really appreciate not having to take this hacky approach. It seems like the right way to go about this is just to add a method to disable the hook directly on FileSystem. That way, server applications can elect to disable the automatic cleanup and just call FileSystem.closeAll themselves when the time is right.

--
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.
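
[Editor's note] For readers who want to see what the reflection-based workaround in the issue description looks like in practice, here is a minimal sketch. It assumes the shutdown hook is held in a private static FileSystem field named "clientFinalizer" (an internal detail of the 0.18-era code base, not confirmed in this message) and that the server calls the existing FileSystem.closeAll() itself once its own shutdown work is done.

    import java.lang.reflect.Field;

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;

    public class ManualFsShutdown {

      // Unregister FileSystem's JVM shutdown hook so the application controls
      // when the cache is cleaned up. "clientFinalizer" is an assumed internal
      // field name and may differ between Hadoop releases.
      public static void disableFsShutdownHook() throws Exception {
        Field f = FileSystem.class.getDeclaredField("clientFinalizer");
        f.setAccessible(true);
        Thread hook = (Thread) f.get(null);
        if (hook != null) {
          Runtime.getRuntime().removeShutdownHook(hook);
        }
      }

      public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(conf);   // cached instance used by the server

        disableFsShutdownHook();

        // ... serve requests, flush application state ...

        // Clean up on the application's own schedule, as the issue proposes.
        FileSystem.closeAll();
      }
    }

The fragility of this sketch is the point of the issue: it depends on a private field name that can change between releases, which is why a supported way to disable the hook on FileSystem itself is being requested.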