[ https://issues.apache.org/jira/browse/OOZIE-3404?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16731971#comment-16731971 ]

Andras Piros commented on OOZIE-3404:
-------------------------------------

Thanks for the contribution so far, [~zuston].

A couple of questions and remarks:
 * new unit tests are missing
 * are you sure you need exactly these MapReduce settings for all Hadoop 2.0+ 
versions? That covers every Hadoop version Oozie currently supports, i.e. the 
range [2.6.0, 3.1.1]
 * please provide the Oozie and Hadoop versions you tested with, as well as the 
relevant pieces of the PySpark workflow definition / job properties file / 
Spark version / PySpark sources. As [~asalamon74] already pointed out, PySpark 
works under a wide variety of circumstances
 * I don't understand why setting exactly these MapReduce properties would help 
PySpark jobs (and only those) succeed. Can you please explain?

> The env variable of SPARK_HOME needs to be set when running pySpark
> -------------------------------------------------------------------
>
>                 Key: OOZIE-3404
>                 URL: https://issues.apache.org/jira/browse/OOZIE-3404
>             Project: Oozie
>          Issue Type: Bug
>    Affects Versions: 5.1.0
>            Reporter: Junfan Zhang
>            Assignee: Junfan Zhang
>            Priority: Major
>         Attachments: oozie-3404-1.patch
>
>
> When we run Spark on a cluster, we rely on the Spark jars stored on HDFS; we 
> don't deploy Spark on the cluster nodes themselves. As a result, running 
> PySpark as described in the Oozie documentation fails.
>  
> I found that on Hadoop 2.0+, although Oozie sets the {{SPARK_HOME}} variable 
> in {{mapred.child.env}}, Hadoop reads {{mapreduce.map.env}} first ([source 
> code|https://github.com/apache/hadoop/blob/f95b390df2ca7d599f0ad82cf6e8d980469e7abb/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapred/MapReduceChildJVM.java#L45]).
> So unless {{SPARK_HOME}} is also set in {{mapreduce.map.env}}, PySpark does 
> not work.
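
If that precedence is indeed the root cause, a user-side workaround would be to 
set the variable in {{mapreduce.map.env}} as well. A rough sketch of what that 
could look like in the Spark action is below (illustration only: the action 
layout follows the PySpark example in the Oozie documentation, the 
{{SPARK_HOME=.}} value mirrors what Oozie already puts into 
{{mapred.child.env}}, and placing the property in the action's 
{{<configuration>}} block is an assumption, not the attached patch):

{code:xml}
<action name="pyspark-example">
    <spark xmlns="uri:oozie:spark-action:1.0">
        <resource-manager>${resourceManager}</resource-manager>
        <name-node>${nameNode}</name-node>
        <configuration>
            <!-- Hadoop 2.x reads mapreduce.map.env before the older
                 mapred.child.env, so SPARK_HOME has to be present here too.
                 "." points at the container's working directory, where the
                 pyspark and py4j zips are localized. -->
            <property>
                <name>mapreduce.map.env</name>
                <value>SPARK_HOME=.</value>
            </property>
        </configuration>
        <master>yarn</master>
        <mode>cluster</mode>
        <name>PySpark example</name>
        <jar>pi.py</jar>
    </spark>
    <ok to="end"/>
    <error to="fail"/>
</action>
{code}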



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
