We saw a similar issue because we have our own shutdown hook.
Here is the stack trace (Hadoop 0.20.1):
TERM trapped. Shutting down.
Exception in thread "Thread-6" java.lang.NullPointerException
    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.closeThreads(DFSClient.java:3164)
    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.closeInternal(DFSClient.java:3207)
    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.close(DFSClient.java:3152)
    at org.apache.hadoop.hdfs.DFSClient$LeaseChecker.close(DFSClient.java:1032)
    at org.apache.hadoop.hdfs.DFSClient.close(DFSClient.java:233)
    at org.apache.hadoop.hdfs.DistributedFileSystem.close(DistributedFileSystem.java:269)
    at org.apache.hadoop.fs.FileSystem$Cache.closeAll(FileSystem.java:1419)
    at org.apache.hadoop.fs.FileSystem.closeAll(FileSystem.java:212)
    at org.apache.hadoop.fs.FileSystem$ClientFinalizer.run(FileSystem.java:197)
<-- Wrapper Stopped
Hopefully in 0.21.0 there will be no NPE.
On Wed, Mar 10, 2010 at 10:00 AM, Todd Lipcon <[email protected]> wrote:
> Hi,
>
> The issue here is that Hadoop itself uses a shutdown hook to close all open
> filesystems when the JVM shuts down. Since JVM shutdown hooks don't have a
> specified order, you shouldn't access Hadoop filesystem objects from a
> shutdown hook.
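
To illustrate the ordering point above: hooks registered via
Runtime.addShutdownHook are all started concurrently when the JVM begins
shutting down, and the JVM guarantees no order between them. A minimal,
Hadoop-free demo:

public class HookOrder {
    public static void main(String[] args) {
        // Both hooks are started concurrently at JVM shutdown; the JVM
        // makes no guarantee about which runs (or finishes) first.
        Runtime.getRuntime().addShutdownHook(new Thread() {
            @Override
            public void run() { System.out.println("our hook"); }
        });
        Runtime.getRuntime().addShutdownHook(new Thread() {
            @Override
            public void run() { System.out.println("Hadoop-style hook"); }
        });
        // The two messages may print in either order from run to run.
    }
}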
>
> To get around this you can use the fs.automatic.close configuration
> variable (provided by this patch:
> https://issues.apache.org/jira/browse/HADOOP-4829) to disable the Hadoop
> shutdown hook. The patch is already applied in CDH2; otherwise you'll
> have to apply it manually.
>
> Note that if you disable the shutdown hook, you'll need to manually close
> the filesystems using FileSystem.closeAll.
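
Here is roughly what that looks like end to end. This is a sketch only: the
class name and hook body are illustrative, and fs.automatic.close only has
an effect once the HADOOP-4829 patch is applied:

import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;

public class ManualClose {
    public static void main(String[] args) throws IOException {
        Configuration conf = new Configuration();
        // With HADOOP-4829 applied, this keeps Hadoop from closing the
        // filesystem behind our back in its own shutdown hook.
        conf.setBoolean("fs.automatic.close", false);

        FileSystem fs = FileSystem.get(conf);
        // ... do work against fs here ...

        Runtime.getRuntime().addShutdownHook(new Thread() {
            @Override
            public void run() {
                // ... our own cleanup, which may still touch HDFS ...
                try {
                    // Automatic close is disabled, so close every cached
                    // filesystem ourselves, as the last step of our hook.
                    FileSystem.closeAll();
                } catch (IOException e) {
                    e.printStackTrace();
                }
            }
        });
    }
}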
>
> Thanks
> -Todd
>
> On Tue, Mar 9, 2010 at 9:39 PM, Silllllence <[email protected]> wrote:
>
> >
> > Hi fellows,
> > The code segment below adds a shutdown hook to the JVM, but I got a
> > strange exception:
> > java.lang.IllegalStateException: Shutdown in progress
> >     at java.lang.ApplicationShutdownHooks.add(ApplicationShutdownHooks.java:39)
> >     at java.lang.Runtime.addShutdownHook(Runtime.java:192)
> >     at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:1387)
> >     at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:191)
> >     at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:95)
> >     at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:180)
> >     at org.apache.hadoop.fs.Path.getFileSystem(Path.java:175)
> >     at young.Main$1.run(Main.java:21)
> > The Javadoc says this exception is thrown when the virtual machine is
> > already in the process of shutting down
> > (http://java.sun.com/j2se/1.5.0/docs/api/). What does this mean? Why does
> > it happen? How can I fix it?
> > I'd really appreciate it if you could try this code and help me figure
> > out what's going on here. Thank you!
> >
> >
> > ---------------------------------------------------------------------------------------
> > import org.apache.hadoop.conf.Configuration;
> > import org.apache.hadoop.fs.FileSystem;
> > import org.apache.hadoop.fs.Path;
> > import org.apache.hadoop.mapred.JobConf;
> >
> > @SuppressWarnings("deprecation")
> > public class Main {
> >
> >     public static void main(String[] args) {
> >         Runtime.getRuntime().addShutdownHook(new Thread() {
> >             @Override
> >             public void run() {
> >                 Path path = new Path("/temp/hadoop-young");
> >                 System.out.println("Thread run : " + path);
> >                 Configuration conf = new JobConf();
> >                 FileSystem fs;
> >                 try {
> >                     // Path.getFileSystem() goes through FileSystem.Cache.get(),
> >                     // which tries to register Hadoop's own shutdown hook;
> >                     // that is illegal once JVM shutdown is in progress.
> >                     fs = path.getFileSystem(conf);
> >                     if (fs.exists(path)) {
> >                         fs.delete(path);
> >                     }
> >                 } catch (Exception e) {
> >                     System.err.println(e.getMessage());
> >                     e.printStackTrace();
> >                 }
> >             }
> >         });
> >     }
> > }
>
>
> --
> Todd Lipcon
> Software Engineer, Cloudera
>