Marcelo Vanzin created SPARK-27094:
--------------------------------------

             Summary: Thread interrupt being swallowed while launching executors in YarnAllocator
                 Key: SPARK-27094
                 URL: https://issues.apache.org/jira/browse/SPARK-27094
             Project: Spark
          Issue Type: Bug
          Components: YARN
    Affects Versions: 2.4.0
            Reporter: Marcelo Vanzin


When shutting down a SparkContext, the YarnAllocator thread is interrupted. If 
the interrupt happens just at the wrong time, you'll see something like this:

{noformat}
19/03/05 07:04:20 WARN ScriptBasedMapping: Exception running blah
java.io.IOException: java.lang.InterruptedException
        at org.apache.hadoop.util.Shell.runCommand(Shell.java:578)
        at org.apache.hadoop.util.Shell.run(Shell.java:478)
        at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:766)
        at org.apache.hadoop.net.ScriptBasedMapping$RawScriptBasedMapping.runResolveCommand(ScriptBasedMapping.java:251)
        at org.apache.hadoop.net.ScriptBasedMapping$RawScriptBasedMapping.resolve(ScriptBasedMapping.java:188)
        at org.apache.hadoop.net.CachedDNSToSwitchMapping.resolve(CachedDNSToSwitchMapping.java:119)
        at org.apache.hadoop.yarn.util.RackResolver.coreResolve(RackResolver.java:101)
        at org.apache.hadoop.yarn.util.RackResolver.resolve(RackResolver.java:81)
        at org.apache.spark.deploy.yarn.SparkRackResolver.resolve(SparkRackResolver.scala:37)
        at org.apache.spark.deploy.yarn.YarnAllocator$$anonfun$handleAllocatedContainers$2.apply(YarnAllocator.scala:431)
        at org.apache.spark.deploy.yarn.YarnAllocator$$anonfun$handleAllocatedContainers$2.apply(YarnAllocator.scala:430)
        at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
        at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
        at org.apache.spark.deploy.yarn.YarnAllocator.handleAllocatedContainers(YarnAllocator.scala:430)
        at org.apache.spark.deploy.yarn.YarnAllocator.allocateResources(YarnAllocator.scala:281)
        at org.apache.spark.deploy.yarn.ApplicationMaster$$anon$3.run(ApplicationMaster.scala:556)
{noformat}

That means the YARN code being called ({{RackResolver}}) is swallowing the interrupt, so the Spark allocator thread never exits. In this particular app, the allocator was in the middle of allocating a very large number of executors, so the application appeared to be hung, and a lot of executors kept coming up even though the context was being shut down.
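To illustrate the failure mode (this is a minimal sketch, not the actual Hadoop or Spark code): a library call that catches {{InterruptedException}} and drops it clears the thread's interrupt flag, so a caller that relies on that flag to notice shutdown never sees it. The conventional fix on the library side is to restore the flag with {{Thread.currentThread().interrupt()}} before returning; the names below are hypothetical stand-ins.

```java
// Sketch of an interrupt being swallowed vs. preserved. The two "call"
// methods are hypothetical stand-ins for the RackResolver/Shell code path.
public class InterruptDemo {

    // Stand-in for the problematic call: catches InterruptedException and
    // drops it, which clears the thread's interrupt flag.
    static void swallowingCall() {
        try {
            Thread.sleep(10_000);
        } catch (InterruptedException e) {
            // interrupt silently dropped here
        }
    }

    // Well-behaved variant: restores the flag so a caller's loop (like the
    // allocator thread) can still observe the shutdown request.
    static void restoringCall() {
        try {
            Thread.sleep(10_000);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }

    // Interrupts a thread running `body` and reports whether the interrupt
    // flag is still set after the call returns.
    static boolean interruptSurvives(Runnable body) throws InterruptedException {
        final boolean[] survived = {false};
        Thread t = new Thread(() -> {
            body.run();
            survived[0] = Thread.currentThread().isInterrupted();
        });
        t.start();
        t.interrupt();
        t.join();
        return survived[0];
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println("swallowing call keeps flag: "
                + interruptSurvives(InterruptDemo::swallowingCall));  // false
        System.out.println("restoring call keeps flag: "
                + interruptSurvives(InterruptDemo::restoringCall));   // true
    }
}
```

In the swallowing case the flag is gone after the call, which is why the allocator loop keeps running as if no shutdown was requested.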


