Github user sarutak commented on the pull request:
https://github.com/apache/spark/pull/1326#issuecomment-48611951
Because of this issue, an IOException appears in the Driver's (ApplicationMaster's) log, as follows.
14/07/09 18:20:15 INFO spark.MapOutputTrackerMasterActor: MapOutputTrackerActor stopped!
14/07/09 18:20:15 INFO network.ConnectionManager: Selector thread was interrupted!
14/07/09 18:20:15 INFO network.ConnectionManager: ConnectionManager stopped
14/07/09 18:20:15 INFO storage.MemoryStore: MemoryStore cleared
14/07/09 18:20:15 INFO storage.BlockManager: BlockManager stopped
14/07/09 18:20:15 INFO storage.BlockManagerMasterActor: Stopping BlockManagerMaster
14/07/09 18:20:15 INFO storage.BlockManagerMaster: BlockManagerMaster stopped
14/07/09 18:20:15 INFO remote.RemoteActorRefProvider$RemotingTerminator: Shutting down remote daemon.
14/07/09 18:20:15 INFO remote.RemoteActorRefProvider$RemotingTerminator: Remote daemon shut down; proceeding with flushing remote transports.
14/07/09 18:20:15 INFO spark.SparkContext: Successfully stopped SparkContext
14/07/09 18:20:15 INFO yarn.ApplicationMaster: Unregistering ApplicationMaster with SUCCEEDED
14/07/09 18:20:15 INFO impl.AMRMClientImpl: Waiting for application to be successfully unregistered.
14/07/09 18:20:15 INFO Remoting: Remoting shut down
14/07/09 18:20:15 INFO remote.RemoteActorRefProvider$RemotingTerminator: Remoting shut down.
14/07/09 18:20:15 INFO yarn.ApplicationMaster$$anon$1: Invoking sc stop from shutdown hook
14/07/09 18:20:15 INFO ui.SparkUI: Stopped Spark web UI at http://spark-slave01:37382
14/07/09 18:20:15 INFO yarn.ApplicationMaster: AppMaster received a signal.
14/07/09 18:20:15 INFO yarn.ApplicationMaster: Deleting staging directory .sparkStaging/application_1404875428360_0024
14/07/09 18:20:15 ERROR yarn.ApplicationMaster: Failed to cleanup staging dir .sparkStaging/application_1404875428360_0024
java.io.IOException: Filesystem closed
    at org.apache.hadoop.hdfs.DFSClient.checkOpen(DFSClient.java:629)
    at org.apache.hadoop.hdfs.DFSClient.delete(DFSClient.java:1618)
    at org.apache.hadoop.hdfs.DistributedFileSystem$11.doCall(DistributedFileSystem.java:585)
    at org.apache.hadoop.hdfs.DistributedFileSystem$11.doCall(DistributedFileSystem.java:581)
    at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
    at org.apache.hadoop.hdfs.DistributedFileSystem.delete(DistributedFileSystem.java:581)
    at org.apache.spark.deploy.yarn.ApplicationMaster.org$apache$spark$deploy$yarn$ApplicationMaster$$cleanupStagingDir(ApplicationMaster.scala:345)
    at org.apache.spark.deploy.yarn.ApplicationMaster$AppMasterShutdownHook.run(ApplicationMaster.scala:360)
    at org.apache.hadoop.util.ShutdownHookManager$1.run(ShutdownHookManager.java:54)
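The failure mode in the trace — a shutdown hook calling `FileSystem.delete` after the shared HDFS client has already been closed by another hook — can be illustrated with a minimal, Hadoop-free sketch. `FakeFileSystem` is a hypothetical stand-in I made up for this example; only the "Filesystem closed" message mirrors the real `DFSClient.checkOpen` behavior.

```java
import java.io.IOException;

// Hypothetical stand-in for a FileSystem client: once closed, every
// operation fails, mirroring the check in DFSClient.checkOpen.
class FakeFileSystem {
    private boolean open = true;

    void close() {
        open = false;
    }

    void delete(String path) throws IOException {
        if (!open) {
            throw new IOException("Filesystem closed");
        }
        System.out.println("deleted " + path);
    }
}

public class StagingDirCleanup {
    public static void main(String[] args) {
        FakeFileSystem fs = new FakeFileSystem();

        // If another shutdown hook closes the shared client first...
        fs.close();

        // ...the cleanup hook's delete then fails, as in the log above.
        try {
            fs.delete(".sparkStaging/application_1404875428360_0024");
        } catch (IOException e) {
            System.out.println("Failed to cleanup staging dir: " + e.getMessage());
        }
    }
}
```

Since JVM shutdown hooks run concurrently with no ordering guarantee, the cleanup must either run before the client's own hook or tolerate an already-closed filesystem.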