Dongjoon Hyun created SPARK-33104:
-------------------------------------

             Summary: Fix YarnClusterSuite.yarn-cluster should respect conf overrides in SparkHadoopUtil
                 Key: SPARK-33104
                 URL: https://issues.apache.org/jira/browse/SPARK-33104
             Project: Spark
          Issue Type: Bug
          Components: Tests
    Affects Versions: 3.1.0
            Reporter: Dongjoon Hyun


The test YarnClusterSuite "yarn-cluster should respect conf overrides in SparkHadoopUtil (SPARK-16414, SPARK-23630)" failed in the spark-master-test-sbt-hadoop-2.7-hive-2.3 Jenkins build:

- https://amplab.cs.berkeley.edu/jenkins/view/Spark%20QA%20Test%20(Dashboard)/job/spark-master-test-sbt-hadoop-2.7-hive-2.3/1377/testReport/org.apache.spark.deploy.yarn/YarnClusterSuite/yarn_cluster_should_respect_conf_overrides_in_SparkHadoopUtil__SPARK_16414__SPARK_23630_/

{code}
20/10/09 05:18:13.211 ContainersLauncher #0 WARN DefaultContainerExecutor: Exit code from container container_1602245728426_0006_02_000001 is : 15
20/10/09 05:18:13.211 ContainersLauncher #0 WARN DefaultContainerExecutor: Exception from container-launch with container ID: container_1602245728426_0006_02_000001 and exit code: 15
ExitCodeException exitCode=15:
        at org.apache.hadoop.util.Shell.runCommand(Shell.java:585)
        at org.apache.hadoop.util.Shell.run(Shell.java:482)
        at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:776)
        at org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:212)
        at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:302)
        at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:82)
        at java.util.concurrent.FutureTask.run(FutureTask.java:266)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
        at java.lang.Thread.run(Thread.java:748)
20/10/09 05:18:13.211 ContainersLauncher #0 WARN ContainerLaunch: Container exited with a non-zero exit code 15
20/10/09 05:18:13.237 AsyncDispatcher event handler WARN NMAuditLogger: USER=jenkins    OPERATION=Container Finished - Failed   TARGET=ContainerImpl    RESULT=FAILURE  DESCRIPTION=Container failed with state: EXITED_WITH_FAILURE    APPID=application_1602245728426_0006    CONTAINERID=container_1602245728426_0006_02_000001
20/10/09 05:18:13.244 Socket Reader #1 for port 37112 INFO Server: Auth successful for appattempt_1602245728426_0006_000002 (auth:SIMPLE)
20/10/09 05:18:13.326 IPC Parameter Sending Thread #0 DEBUG Client: IPC Client (1123559518) connection to amp-jenkins-worker-04.amp/192.168.10.24:43090 from jenkins sending #37
20/10/09 05:18:13.327 IPC Client (1123559518) connection to amp-jenkins-worker-04.amp/192.168.10.24:43090 from jenkins DEBUG Client: IPC Client (1123559518) connection to amp-jenkins-worker-04.amp/192.168.10.24:43090 from jenkins got value #37
20/10/09 05:18:13.328 main DEBUG ProtobufRpcEngine: Call: getApplicationReport took 2ms
20/10/09 05:18:13.328 main INFO Client: Application report for application_1602245728426_0006 (state: FINISHED)
20/10/09 05:18:13.328 main DEBUG Client:
         client token: N/A
         diagnostics: User class threw exception: org.scalatest.exceptions.TestFailedException: null was not equal to "testvalue"
        at org.scalatest.matchers.MatchersHelper$.indicateFailure(MatchersHelper.scala:344)
        at org.scalatest.matchers.should.Matchers$ShouldMethodHelperClass.shouldMatcher(Matchers.scala:6778)
        at org.scalatest.matchers.should.Matchers$AnyShouldWrapper.should(Matchers.scala:6822)
        at org.apache.spark.deploy.yarn.YarnClusterDriverUseSparkHadoopUtilConf$.$anonfun$main$2(YarnClusterSuite.scala:383)
        at scala.collection.IndexedSeqOptimized.foreach(IndexedSeqOptimized.scala:36)
        at scala.collection.IndexedSeqOptimized.foreach$(IndexedSeqOptimized.scala:33)
        at scala.collection.mutable.ArrayOps$ofRef.foreach(ArrayOps.scala:198)
        at org.apache.spark.deploy.yarn.YarnClusterDriverUseSparkHadoopUtilConf$.main(YarnClusterSuite.scala:382)
        at org.apache.spark.deploy.yarn.YarnClusterDriverUseSparkHadoopUtilConf.main(YarnClusterSuite.scala)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:498)
        at org.apache.spark.deploy.yarn.ApplicationMaster$$anon$2.run(ApplicationMaster.scala:732)

         ApplicationMaster host: amp-jenkins-worker-04.amp
         ApplicationMaster RPC port: 36200
         queue: default
         start time: 1602245859148
         final status: FAILED
         tracking URL: http://amp-jenkins-worker-04.amp:39546/proxy/application_1602245728426_0006/
         user: jenkins
20/10/09 05:18:13.331 main ERROR Client: Application diagnostics message: User class threw exception: org.scalatest.exceptions.TestFailedException: null was not equal to "testvalue"
        at org.scalatest.matchers.MatchersHelper$.indicateFailure(MatchersHelper.scala:344)
        at org.scalatest.matchers.should.Matchers$ShouldMethodHelperClass.shouldMatcher(Matchers.scala:6778)
        at org.scalatest.matchers.should.Matchers$AnyShouldWrapper.should(Matchers.scala:6822)
        at org.apache.spark.deploy.yarn.YarnClusterDriverUseSparkHadoopUtilConf$.$anonfun$main$2(YarnClusterSuite.scala:383)
        at scala.collection.IndexedSeqOptimized.foreach(IndexedSeqOptimized.scala:36)
        at scala.collection.IndexedSeqOptimized.foreach$(IndexedSeqOptimized.scala:33)
        at scala.collection.mutable.ArrayOps$ofRef.foreach(ArrayOps.scala:198)
        at org.apache.spark.deploy.yarn.YarnClusterDriverUseSparkHadoopUtilConf$.main(YarnClusterSuite.scala:382)
        at org.apache.spark.deploy.yarn.YarnClusterDriverUseSparkHadoopUtilConf.main(YarnClusterSuite.scala)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:498)
        at org.apache.spark.deploy.yarn.ApplicationMaster$$anon$2.run(ApplicationMaster.scala:732)

20/10/09 05:18:13.332 launcher-proc-6 INFO YarnClusterDriverUseSparkHadoopUtilConf: Exception in thread "main" org.apache.spark.SparkException: Application application_1602245728426_0006 finished with failed status
20/10/09 05:18:13.332 launcher-proc-6 INFO YarnClusterDriverUseSparkHadoopUtilConf:     at org.apache.spark.deploy.yarn.Client.run(Client.scala:1199)
20/10/09 05:18:13.332 launcher-proc-6 INFO YarnClusterDriverUseSparkHadoopUtilConf:     at org.apache.spark.deploy.yarn.YarnClusterApplication.start(Client.scala:1590)
20/10/09 05:18:13.332 launcher-proc-6 INFO YarnClusterDriverUseSparkHadoopUtilConf:     at org.apache.spark.deploy.SparkSubmit.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:934)
20/10/09 05:18:13.332 launcher-proc-6 INFO YarnClusterDriverUseSparkHadoopUtilConf:     at org.apache.spark.deploy.SparkSubmit.doRunMain$1(SparkSubmit.scala:180)
20/10/09 05:18:13.332 launcher-proc-6 INFO YarnClusterDriverUseSparkHadoopUtilConf:     at org.apache.spark.deploy.SparkSubmit.submit(SparkSubmit.scala:203)
20/10/09 05:18:13.332 launcher-proc-6 INFO YarnClusterDriverUseSparkHadoopUtilConf:     at org.apache.spark.deploy.SparkSubmit.doSubmit(SparkSubmit.scala:90)
20/10/09 05:18:13.332 launcher-proc-6 INFO YarnClusterDriverUseSparkHadoopUtilConf:     at org.apache.spark.deploy.SparkSubmit$$anon$2.doSubmit(SparkSubmit.scala:1013)
20/10/09 05:18:13.332 launcher-proc-6 INFO YarnClusterDriverUseSparkHadoopUtilConf:     at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:1022)
20/10/09 05:18:13.332 launcher-proc-6 INFO YarnClusterDriverUseSparkHadoopUtilConf:     at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
20/10/09 05:18:13.335 Thread-1 INFO ShutdownHookManager: Shutdown hook called
20/10/09 05:18:13.337 Thread-1 INFO ShutdownHookManager: Deleting directory /tmp/spark-74d7ab7c-4fd7-4980-ac22-9b73e3f8955c
20/10/09 05:18:13.343 Thread-1 INFO ShutdownHookManager: Deleting directory /tmp/spark-27d8a061-ba44-4a0e-a7d4-985443f7b4ca
20/10/09 05:18:14.176 pool-1-thread-1-ScalaTest-running-YarnClusterSuite INFO YarnClusterSuite:

===== FINISHED o.a.s.deploy.yarn.YarnClusterSuite: 'yarn-cluster should respect conf overrides in SparkHadoopUtil (SPARK-16414, SPARK-23630)' =====
{code}
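
The assertion failure in the diagnostics ({{null was not equal to "testvalue"}}) indicates that a Hadoop configuration value the driver expects to be overridden via SparkConf was missing on the cluster side. Below is a minimal sketch of that kind of check, not the suite's actual driver code: the object name, the key, and the package placement are assumptions for illustration (SparkHadoopUtil is private[spark], so such a check has to live under an org.apache.spark package).

{code}
// Hypothetical sketch only: object name and key are illustrative.
package org.apache.spark.deploy.yarn

import org.apache.spark.SparkConf
import org.apache.spark.deploy.SparkHadoopUtil

object ConfOverrideSketch {
  def main(args: Array[String]): Unit = {
    // spark.hadoop.* entries in SparkConf are expected to be copied (minus the
    // prefix) into the Hadoop Configuration derived from it.
    val sparkConf = new SparkConf()
      .set("spark.hadoop.some.custom.key", "testvalue") // hypothetical key

    val hadoopConf = SparkHadoopUtil.get.newConfiguration(sparkConf)

    // If the override is dropped anywhere along the yarn-cluster path, this
    // lookup returns null, which surfaces as the reported
    // 'null was not equal to "testvalue"' TestFailedException.
    assert(hadoopConf.get("some.custom.key") == "testvalue")
  }
}
{code}

In the run above, the equivalent lookup inside YarnClusterDriverUseSparkHadoopUtilConf returned null, which is what shows up in the application diagnostics and makes the container exit with code 15.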



