[ https://issues.apache.org/jira/browse/PIG-5246?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16036460#comment-16036460 ]

Jeff Zhang commented on PIG-5246:
---------------------------------

bq. If SPARK_ASSEMBLY_JAR is not a must-have thing for spark1, how do we judge 
spark1 or spark2?
There are several ways to tell spark1 from spark2. For example, we can run 
'spark-submit --version' under SPARK_HOME/bin and parse the version number 
from its output.
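A minimal sketch of that check (the exact banner format printed by 
'spark-submit --version' varies across releases, so the parsing below is an 
assumption, not a fixed contract):

{code}
# Sketch: derive the Spark major version from the spark-submit banner.
# spark-submit prints its version info (e.g. "version 2.1.0") on stderr.
SPARK_VERSION=$("$SPARK_HOME"/bin/spark-submit --version 2>&1 \
    | grep -o 'version [0-9][0-9.]*' | head -1 | awk '{print $2}')
SPARK_MAJOR_VERSION=${SPARK_VERSION%%.*}
if [ "$SPARK_MAJOR_VERSION" -ge 2 ]; then
    echo "spark2 detected ($SPARK_VERSION): no assembly jar to look for"
else
    echo "spark1 detected ($SPARK_VERSION): expect \$SPARK_HOME/lib/spark-assembly*.jar"
fi
{code}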

bq. Pig on Spark uses the spark installation and will copy 
$SPARK_HOME/lib/spark-assembly*jar (spark1) or $SPARK_HOME/jars/*jar (spark2) 
to the classpath of pig. But we don't read spark-defaults.conf. We parse 
pig.properties and save the spark configuration into SparkContext.

Why copy the assembly jar instead of simply putting it on Pig's classpath? 
It also seems odd to me not to load spark-defaults.conf, as this causes extra 
administration overhead. As a cluster administrator, I would only want to 
maintain one copy of the spark configuration in spark-defaults.conf, rather 
than duplicate the same settings from spark-defaults.conf into pig.properties.
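For illustration, a minimal sketch of what loading spark-defaults.conf from 
bin/pig could look like (the PIG_OPTS plumbing and the SPARK_CONF_DIR fallback 
are assumptions about how one might wire it, not current Pig behavior):

{code}
# Sketch: fold spark-defaults.conf into the system properties Pig's JVM sees.
# spark-defaults.conf holds one "key value" pair per line; '#' starts a comment.
SPARK_CONF_DIR=${SPARK_CONF_DIR:-$SPARK_HOME/conf}
if [ -f "$SPARK_CONF_DIR/spark-defaults.conf" ]; then
    while read -r key value; do
        case "$key" in
            ''|\#*) continue ;;   # skip blank lines and comments
        esac
        # NOTE: values containing spaces would need quoting; omitted for brevity.
        PIG_OPTS="$PIG_OPTS -D$key=$value"
    done < "$SPARK_CONF_DIR/spark-defaults.conf"
fi
export PIG_OPTS
{code}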



> Modify bin/pig about SPARK_HOME, SPARK_ASSEMBLY_JAR after upgrading spark to 2
> ------------------------------------------------------------------------------
>
>                 Key: PIG-5246
>                 URL: https://issues.apache.org/jira/browse/PIG-5246
>             Project: Pig
>          Issue Type: Bug
>            Reporter: liyunzhang_intel
>            Assignee: liyunzhang_intel
>         Attachments: HBase9498.patch, PIG-5246.1.patch, PIG-5246.patch
>
>
> In bin/pig, we copy the assembly jar to Pig's classpath for spark1.6:
> {code}
> # For spark mode:
> # Please specify SPARK_HOME first so that we can locate $SPARK_HOME/lib/spark-assembly*.jar,
> # we will add spark-assembly*.jar to the classpath.
> if [ "$isSparkMode"  == "true" ]; then
>     if [ -z "$SPARK_HOME" ]; then
>        echo "Error: SPARK_HOME is not set!"
>        exit 1
>     fi
>     # Please specify SPARK_JAR which is the hdfs path of spark-assembly*.jar to allow YARN to cache spark-assembly*.jar on nodes so that it doesn't need to be distributed each time an application runs.
>     if [ -z "$SPARK_JAR" ]; then
>        echo "Error: SPARK_JAR is not set, SPARK_JAR stands for the hdfs location of spark-assembly*.jar. This allows YARN to cache spark-assembly*.jar on nodes so that it doesn't need to be distributed each time an application runs."
>        exit 1
>     fi
>        exit 1
>     fi
>     if [ -n "$SPARK_HOME" ]; then
>         echo "Using Spark Home: " ${SPARK_HOME}
>         SPARK_ASSEMBLY_JAR=`ls ${SPARK_HOME}/lib/spark-assembly*`
>         CLASSPATH=${CLASSPATH}:$SPARK_ASSEMBLY_JAR
>     fi
> fi
> {code}
> after upgrading to spark2.0, we may need to modify it.
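For reference, one way the block above might change after the upgrade. This is 
a sketch only: it assumes spark2 can be identified by the presence of 
$SPARK_HOME/jars, and it leaves the YARN caching question (SPARK_JAR vs. 
spark.yarn.jars) aside:

{code}
# For spark mode:
if [ "$isSparkMode" == "true" ]; then
    if [ -z "$SPARK_HOME" ]; then
       echo "Error: SPARK_HOME is not set!"
       exit 1
    fi
    echo "Using Spark Home: ${SPARK_HOME}"
    if [ -d "${SPARK_HOME}/jars" ]; then
        # spark2: the runtime is split across $SPARK_HOME/jars/*.jar
        for f in "${SPARK_HOME}"/jars/*.jar; do
            CLASSPATH=${CLASSPATH}:$f
        done
    else
        # spark1: a single assembly jar under $SPARK_HOME/lib
        SPARK_ASSEMBLY_JAR=`ls ${SPARK_HOME}/lib/spark-assembly*`
        CLASSPATH=${CLASSPATH}:$SPARK_ASSEMBLY_JAR
    fi
fi
{code}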



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)
