
Sean Owen updated SPARK-6689:
-----------------------------
    Priority: Minor  (was: Major)

I imagine this is a problem because you are building with SBT, which can't fully 
parse the Maven build. The fix depends on some Maven profiles that may not carry 
over to the SBT build in the same way. I'm not 100% sure, but I know there is 
some difference.
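
To illustrate the kind of thing I mean (this is only a sketch, not the actual 
Spark pom), a dependency that lives inside a hadoop-2.2 profile is resolved by 
Maven whenever that profile is activated, but SBT's pom reader may not activate 
it in the same way:

{code:xml}
<profile>
  <id>hadoop-2.2</id>
  <dependencies>
    <!-- Old Jetty 6 classes (org.mortbay.jetty) used by the Hadoop 2.2 webapps -->
    <dependency>
      <groupId>org.mortbay.jetty</groupId>
      <artifactId>jetty</artifactId>
      <version>6.1.26</version>
      <scope>test</scope>
    </dependency>
    <!-- JAX-RS 1.1 API, also missing from the test classpath in this setup -->
    <dependency>
      <groupId>javax.ws.rs</groupId>
      <artifactId>jsr311-api</artifactId>
      <version>1.1.1</version>
      <scope>test</scope>
    </dependency>
  </dependencies>
</profile>
{code}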

The build also fails for me with your build command, but succeeds with Maven. 
Since Maven is the build of reference, I am not sure this is such a big deal, 
except to developers who have to work specifically with Hadoop 2.2 and want to 
use SBT.
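
For comparison, an equivalent Maven invocation would be something along the 
lines of *build/mvn -Pyarn -Phadoop-2.2 -Dtest=none 
-DwildcardSuites=org.apache.spark.deploy.yarn.YarnClusterSuite test*; the exact 
suite-selection flags depend on how the scalatest-maven-plugin is configured, so 
treat this as a sketch rather than the exact command.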

It'd be great if you can figure out a fix, but it's not affecting the main build.

> MiniYARNCluster test still fails with hadoop-2.2
> ------------------------------------------------
>
>                 Key: SPARK-6689
>                 URL: https://issues.apache.org/jira/browse/SPARK-6689
>             Project: Spark
>          Issue Type: Test
>          Components: Tests, YARN
>    Affects Versions: 1.3.0
>            Reporter: Zhang, Liye
>            Priority: Minor
>
> When running the unit test *YarnClusterSuite* with *hadoop-2.2*, an exception 
> is thrown because of *Timed out waiting for RM to come up*. Some earlier 
> related discussion can be found in 
> [spark-3710|https://issues.apache.org/jira/browse/SPARK-3710] 
> ([PR2682|https://github.com/apache/spark/pull/2682]) and 
> [spark-2778|https://issues.apache.org/jira/browse/SPARK-2778] 
> ([PR2605|https://github.com/apache/spark/pull/2605]). 
> With the command *build/sbt -Pyarn -Phadoop-2.2 "test-only 
> org.apache.spark.deploy.yarn.YarnClusterSuite"*, the following exception is 
> thrown: 
> {noformat}
> [info] Exception encountered when attempting to run a suite with class name: 
> org.apache.spark.deploy.yarn.YarnClusterSuite *** ABORTED *** (15 seconds, 
> 799 milliseconds)
> [info]   java.lang.IllegalStateException: Timed out waiting for RM to come up.
> [info]   at 
> org.apache.spark.deploy.yarn.YarnClusterSuite.beforeAll(YarnClusterSuite.scala:114)
> [info]   at 
> org.scalatest.BeforeAndAfterAll$class.beforeAll(BeforeAndAfterAll.scala:187)
> [info]   at 
> org.apache.spark.deploy.yarn.YarnClusterSuite.beforeAll(YarnClusterSuite.scala:44)
> [info]   at 
> org.scalatest.BeforeAndAfterAll$class.run(BeforeAndAfterAll.scala:253)
> [info]   at 
> org.apache.spark.deploy.yarn.YarnClusterSuite.run(YarnClusterSuite.scala:44)
> [info]   at 
> org.scalatest.tools.Framework.org$scalatest$tools$Framework$$runSuite(Framework.scala:462)
> [info]   at 
> org.scalatest.tools.Framework$ScalaTestTask.execute(Framework.scala:671)
> [info]   at sbt.ForkMain$Run$2.call(ForkMain.java:294)
> [info]   at sbt.ForkMain$Run$2.call(ForkMain.java:284)
> [info]   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> [info]   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> [info]   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> [info]   at java.lang.Thread.run(Thread.java:745)
> {noformat}
> Without *-Phadoop-2.2*, or with it replaced by *-Dhadoop.version* (e.g. 
> *build/sbt -Pyarn "test-only org.apache.spark.deploy.yarn.YarnClusterSuite"*), 
> more information comes out:
> {noformat}
> Exception in thread "Thread-7" java.lang.NoClassDefFoundError: 
> org/mortbay/jetty/servlet/Context
>       at org.apache.hadoop.yarn.webapp.WebApps.$for(WebApps.java:309)
>       at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.startWepApp(ResourceManager.java:602)
>       at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.serviceStart(ResourceManager.java:655)
>       at 
> org.apache.hadoop.service.AbstractService.start(AbstractService.java:193)
>       at 
> org.apache.hadoop.yarn.server.MiniYARNCluster$ResourceManagerWrapper$2.run(MiniYARNCluster.java:219)
> Caused by: java.lang.ClassNotFoundException: org.mortbay.jetty.servlet.Context
>       at java.net.URLClassLoader$1.run(URLClassLoader.java:372)
>       at java.net.URLClassLoader$1.run(URLClassLoader.java:361)
>       at java.security.AccessController.doPrivileged(Native Method)
>       at java.net.URLClassLoader.findClass(URLClassLoader.java:360)
>       at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
>       at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308)
>       at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
> [info] Resolving org.apache.hadoop#hadoop-yarn-server-common;2.2.0 ...
> Exception in thread "Thread-18" java.lang.NoClassDefFoundError: 
> org/mortbay/jetty/servlet/Context
>       at org.apache.hadoop.yarn.webapp.WebApps.$for(WebApps.java:309)
>       at 
> org.apache.hadoop.yarn.server.nodemanager.webapp.WebServer.serviceStart(WebServer.java:62)
>       at 
> org.apache.hadoop.service.AbstractService.start(AbstractService.java:193)
>       at 
> org.apache.hadoop.service.CompositeService.serviceStart(CompositeService.java:121)
>       at 
> org.apache.hadoop.yarn.server.nodemanager.NodeManager.serviceStart(NodeManager.java:199)
>       at 
> org.apache.hadoop.service.AbstractService.start(AbstractService.java:193)
>       at 
> org.apache.hadoop.yarn.server.MiniYARNCluster$NodeManagerWrapper$1.run(MiniYARNCluster.java:337)
> Caused by: java.lang.ClassNotFoundException: org.mortbay.jetty.servlet.Context
>       at java.net.URLClassLoader$1.run(URLClassLoader.java:372)
>       at java.net.URLClassLoader$1.run(URLClassLoader.java:361)
>       at java.security.AccessController.doPrivileged(Native Method)
>       at java.net.URLClassLoader.findClass(URLClassLoader.java:360)
>       at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
>       at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308)
>       at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
> [info] Resolving org.apache.hadoop#hadoop-yarn;2.2.0 ...
> [info] Exception encountered when attempting to run a suite with class name: 
> org.apache.spark.deploy.yarn.YarnClusterSuite *** ABORTED *** (15 seconds, 
> 799 milliseconds)
> [info]   java.lang.IllegalStateException: Timed out waiting for RM to come up.
> [info]   at 
> org.apache.spark.deploy.yarn.YarnClusterSuite.beforeAll(YarnClusterSuite.scala:114)
> [info]   at 
> org.scalatest.BeforeAndAfterAll$class.beforeAll(BeforeAndAfterAll.scala:187)
> [info]   at 
> org.apache.spark.deploy.yarn.YarnClusterSuite.beforeAll(YarnClusterSuite.scala:44)
> [info]   at 
> org.scalatest.BeforeAndAfterAll$class.run(BeforeAndAfterAll.scala:253)
> [info]   at 
> org.apache.spark.deploy.yarn.YarnClusterSuite.run(YarnClusterSuite.scala:44)
> [info]   at 
> org.scalatest.tools.Framework.org$scalatest$tools$Framework$$runSuite(Framework.scala:462)
> [info]   at 
> org.scalatest.tools.Framework$ScalaTestTask.execute(Framework.scala:671)
> [info]   at sbt.ForkMain$Run$2.call(ForkMain.java:294)
> [info]   at sbt.ForkMain$Run$2.call(ForkMain.java:284)
> [info]   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> [info]   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> [info]   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> [info]   at java.lang.Thread.run(Thread.java:745)
> {noformat}
> If we add the following dependencies to *yarn/pom.xml*,
> {code:xml}
>     <dependency>
>       <groupId>javax.ws.rs</groupId>
>       <artifactId>jsr311-api</artifactId>
>       <version>1.1.1</version>
>       <scope>test</scope>
>     </dependency>
>     <dependency>
>       <groupId>org.mortbay.jetty</groupId>
>       <artifactId>jetty</artifactId>
>       <version>6.1.26</version>
>       <scope>test</scope>
>     </dependency>
> {code}
> the error message becomes:
> {noformat}
> Exception in thread "Thread-188" 
> org.apache.hadoop.yarn.exceptions.YarnRuntimeException: NMWebapps failed to 
> start.
>       at 
> org.apache.hadoop.yarn.server.nodemanager.webapp.WebServer.serviceStart(WebServer.java:76)
>       at 
> org.apache.hadoop.service.AbstractService.start(AbstractService.java:193)
>       at 
> org.apache.hadoop.service.CompositeService.serviceStart(CompositeService.java:121)
>       at 
> org.apache.hadoop.yarn.server.nodemanager.NodeManager.serviceStart(NodeManager.java:199)
>       at 
> org.apache.hadoop.service.AbstractService.start(AbstractService.java:193)
>       at 
> org.apache.hadoop.yarn.server.MiniYARNCluster$NodeManagerWrapper$1.run(MiniYARNCluster.java:337)
> Caused by: org.apache.hadoop.yarn.webapp.WebAppException: Error starting http 
> server
>       at org.apache.hadoop.yarn.webapp.WebApps$Builder.start(WebApps.java:262)
>       at 
> org.apache.hadoop.yarn.server.nodemanager.webapp.WebServer.serviceStart(WebServer.java:62)
>       ... 5 more
> Caused by: java.io.IOException: Problem in starting http server. Server 
> handlers failed
>       at org.apache.hadoop.http.HttpServer.start(HttpServer.java:700)
>       at org.apache.hadoop.yarn.webapp.WebApps$Builder.start(WebApps.java:257)
> [info] Resolving org.apache.hadoop#hadoop-yarn-common;2.2.0 ...
> [info] Exception encountered when attempting to run a suite with class name: 
> org.apache.spark.deploy.yarn.YarnClusterSuite *** ABORTED *** (3 seconds, 455 
> milliseconds)
> [info]   org.apache.hadoop.yarn.exceptions.YarnRuntimeException: 
> java.io.IOException: NodeManager 0 failed to start
> [info]   at 
> org.apache.hadoop.yarn.server.MiniYARNCluster$NodeManagerWrapper.serviceStart(MiniYARNCluster.java:352)
> [info]   at 
> org.apache.hadoop.service.AbstractService.start(AbstractService.java:193)
> [info]   at 
> org.apache.hadoop.service.CompositeService.serviceStart(CompositeService.java:121)
> [info]   at 
> org.apache.hadoop.service.AbstractService.start(AbstractService.java:193)
> [info]   at 
> org.apache.spark.deploy.yarn.YarnClusterSuite.beforeAll(YarnClusterSuite.scala:94)
> [info]   at 
> org.scalatest.BeforeAndAfterAll$class.beforeAll(BeforeAndAfterAll.scala:187)
> [info]   at 
> org.apache.spark.deploy.yarn.YarnClusterSuite.beforeAll(YarnClusterSuite.scala:44)
> [info]   at 
> org.scalatest.BeforeAndAfterAll$class.run(BeforeAndAfterAll.scala:253)
> [info]   at 
> org.apache.spark.deploy.yarn.YarnClusterSuite.run(YarnClusterSuite.scala:44)
> [info]   at 
> org.scalatest.tools.Framework.org$scalatest$tools$Framework$$runSuite(Framework.scala:462)
> [info]   at 
> org.scalatest.tools.Framework$ScalaTestTask.execute(Framework.scala:671)
> [info]   at sbt.ForkMain$Run$2.call(ForkMain.java:294)
> [info]   at sbt.ForkMain$Run$2.call(ForkMain.java:284)
> [info]   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> [info]   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> [info]   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> [info]   at java.lang.Thread.run(Thread.java:745)
> [info]   Cause: java.io.IOException: NodeManager 0 failed to start
> [info]   at 
> org.apache.hadoop.yarn.server.MiniYARNCluster$NodeManagerWrapper.serviceStart(MiniYARNCluster.java:348)
> [info]   at 
> org.apache.hadoop.service.AbstractService.start(AbstractService.java:193)
> [info]   at 
> org.apache.hadoop.service.CompositeService.serviceStart(CompositeService.java:121)
> [info]   at 
> org.apache.hadoop.service.AbstractService.start(AbstractService.java:193)
> [info]   at 
> org.apache.spark.deploy.yarn.YarnClusterSuite.beforeAll(YarnClusterSuite.scala:94)
> [info]   at 
> org.scalatest.BeforeAndAfterAll$class.beforeAll(BeforeAndAfterAll.scala:187)
> [info]   at 
> org.apache.spark.deploy.yarn.YarnClusterSuite.beforeAll(YarnClusterSuite.scala:44)
> [info]   at 
> org.scalatest.BeforeAndAfterAll$class.run(BeforeAndAfterAll.scala:253)
> [info]   at 
> org.apache.spark.deploy.yarn.YarnClusterSuite.run(YarnClusterSuite.scala:44)
> [info]   at 
> org.scalatest.tools.Framework.org$scalatest$tools$Framework$$runSuite(Framework.scala:462)
> [info]   at 
> org.scalatest.tools.Framework$ScalaTestTask.execute(Framework.scala:671)
> [info]   at sbt.ForkMain$Run$2.call(ForkMain.java:294)
> [info]   at sbt.ForkMain$Run$2.call(ForkMain.java:284)
> [info]   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> [info]   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> [info]   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> [info]   at java.lang.Thread.run(Thread.java:745)
> {noformat}
> *NOTE: this issue does not exist for Hadoop versions newer than 2.2.*


