[ https://issues.apache.org/jira/browse/SPARK-16781?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15420547#comment-15420547 ]
Jeff Zhang commented on SPARK-16781:
------------------------------------

JAVA_HOME will be set by YARN; I'm not sure about other cluster managers.

> java launched by PySpark as gateway may not be the same java used in the
> spark environment
> ------------------------------------------------------------------------------------------
>
>                 Key: SPARK-16781
>                 URL: https://issues.apache.org/jira/browse/SPARK-16781
>             Project: Spark
>          Issue Type: Bug
>          Components: PySpark
>    Affects Versions: 1.6.2
>            Reporter: Michael Berman
>
> When launching Spark on a system with multiple Javas installed, there are a
> few options for choosing which JRE to use, with setting `JAVA_HOME` being the
> most straightforward.
> However, when PySpark's internal py4j launches its JavaGateway, it always
> invokes `java` directly, without qualification. This means you get whichever
> java is first on your path, which is not necessarily the same one as Spark's
> JAVA_HOME.
> This could be seen as a py4j issue, but from their point of view, the fix is
> easy: make sure the java you want is first on your path. I can't figure out a
> way to make that reliably happen through the pyspark executor launch path,
> and it seems like something that would ideally happen automatically. If I set
> JAVA_HOME when launching Spark, I would expect that to be the only java used
> throughout the stack.
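A minimal sketch of the JAVA_HOME-aware lookup the reporter is asking for. The helper name `_find_java_executable` is hypothetical and this is not the actual pyspark/py4j launch code, just an illustration of resolving the executable from JAVA_HOME before falling back to the PATH:

{code:python}
import os

def _find_java_executable():
    # Hypothetical helper: prefer the JRE that JAVA_HOME points at, mirroring
    # what spark-submit does, and only fall back to whatever `java` is first
    # on the PATH when JAVA_HOME is unset.
    java_home = os.environ.get("JAVA_HOME")
    if java_home:
        return os.path.join(java_home, "bin", "java")
    return "java"

# The gateway launch command would then start with this executable instead of
# a bare "java", e.g. (illustrative only):
# command = [_find_java_executable(), "-cp", classpath] + other_args
{code}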