Github user shaneknapp commented on the pull request:

    https://github.com/apache/spark/pull/5432#issuecomment-91358326
  
    so i just grepped through the code and found stuff like this:
    
    yarn/src/main/scala/org/apache/spark/deploy/yarn/Client.scala:
        YarnSparkHadoopUtil.expandEnvironment(Environment.JAVA_HOME) + "/bin/java", "-server"
    yarn/src/main/scala/org/apache/spark/deploy/yarn/ExecutorRunnable.scala:
        YarnSparkHadoopUtil.expandEnvironment(Environment.JAVA_HOME) + "/bin/java",
    
    i've never explicitly set JAVA_HOME in jenkins' slave user space before, and that's obviously why it's failing.  relying on JAVA_HOME being set is pretty bad code, imo.
    
    solutions:
    * explicitly set JAVA_HOME in each slave's config (bad, as it ties that slave to whatever the system java happens to be)
    * if JAVA_HOME isn't set, fall back to whatever java is on the PATH (good; see the sketch below)
    * explicitly define which java version to test against in the jenkins build's config
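
    just to illustrate the second option, here's a minimal sketch in scala (not spark's actual yarn code; the object and helper names are made up): prefer $JAVA_HOME/bin/java when the variable is set, otherwise fall back to whatever `java` resolves to on the PATH.

        // minimal sketch of the "fall back to java on the PATH" option.
        // `JavaHomeFallback` and `javaExecutable` are hypothetical names,
        // not part of the spark codebase.
        object JavaHomeFallback {
          // use $JAVA_HOME/bin/java if JAVA_HOME is set, else rely on PATH lookup
          def javaExecutable: String =
            sys.env.get("JAVA_HOME")
              .map(home => s"$home/bin/java")
              .getOrElse("java")

          def main(args: Array[String]): Unit = {
            // e.g. the first tokens of a launch command built with this choice
            val command = Seq(javaExecutable, "-server")
            println(command.mkString(" "))
          }
        }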

