[ https://issues.apache.org/jira/browse/SPARK-5005?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14324672#comment-14324672 ]

anuj commented on SPARK-5005:
-----------------------------

Would you mind sharing your spark-env.sh?
Mine is this:
{code}
#!/usr/bin/env bash
export SPARK_HOME="/appl/idc/spark"
export PATH=$SPARK_HOME/bin:$PATH
export YARN_CONF_DIR="/etc/gphd/hadoop/conf"
export HADOOP_CONF_DIR="/etc/gphd/hadoop/conf"
export HADOOP_HOME="/usr/lib/gphd/hadoop"
export SPARK_EXECUTOR_INSTANCES=3
export SPARK_EXECUTOR_CORES=2
export SPARK_EXECUTOR_MEMORY=3G
export SPARK_DRIVER_MEMORY=3G
export SPARK_LIBRARY_PATH=$SPARK_LIBRARY_PATH:/usr/lib/gphd/hadoop/lib:$SPARK_HOME/lib:/usr/lib/gphd/hadoop/lib/native/Linux-amd64-64:/usr/lib/gphd/hadoop/lib/native
CLASSPATH=/appl/idc/guava-14.0.1.jar:$SPARK_HOME/*:$SPARK_HOME/lib/*:$CLASSPATH
CLASSPATH=$CLASSPATH:$HADOOP_HOME/*:$HADOOP_HOME/lib/*
CLASSPATH=$CLASSPATH:/usr/lib/gphd/hadoop-mapreduce/*:/usr/lib/gphd/hadoop-mapreduce/lib/*
CLASSPATH=$CLASSPATH:/usr/lib/gphd/hadoop-yarn/*:/usr/lib/gphd/hadoop-yarn/lib/*
CLASSPATH=$CLASSPATH:/usr/lib/gphd/hadoop-hdfs/*:/usr/lib/gphd/hadoop-hdfs/lib/*
#export CLASSPATH=/appl/idc/spark/lib/guava-14.0.1.jar:$CLASSPATH
export CLASSPATH=$CLASSPATH
export JAVA_LIBRARY_PATH=$JAVA_LIBRARY_PATH:/usr/lib/gphd/hadoop/lib/native/
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:$HADOOP_HOME/lib/native/
export SPARK_CLASSPATH=$CLASSPATH:$SPARK_CLASSPATH
#export PYSPARK_PYTHON=$PYSPARK_PYTHON:$SPARK_HOME/python
export PYTHONPATH=$PYSPARK_PYTHON:$PYTHONPATH
export SPARK_YARN_USER_ENV=PYSPARK_PYTHON=/appl/idc/spark/bin/pyspark
{code}
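
For comparison, here is a sketch of passing the same executor settings as explicit spark-shell flags instead of through spark-env.sh. The values below simply mirror my SPARK_EXECUTOR_* and SPARK_DRIVER_MEMORY entries above; I have not verified that this changes the outcome.
{code}
# Sketch only: the same settings as the spark-env.sh above, passed as flags.
# These options exist on spark-shell/spark-submit in 1.2.0; the values are
# taken from my env file, not from anything known to fix the error.
$SPARK_HOME/bin/spark-shell --master yarn-client \
  --num-executors 3 \
  --executor-cores 2 \
  --executor-memory 3g \
  --driver-memory 3g
{code}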

As I mentioned, this command worked in version 1.1.1 but not in 1.2.0.
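
One thing I still plan to check, and this is only a rough sketch rather than a confirmed diagnosis: whether the 1.2.0 launch scripts are picking up jars from an older Spark install through SPARK_CLASSPATH or the extra classpath directories above, since a client and ApplicationMaster from different versions could plausibly produce an "Unknown/unsupported param" rejection. The paths below come from my spark-env.sh; bin/compute-classpath.sh ships with Spark 1.x builds, so adjust if your layout differs.
{code}
# Rough diagnostic sketch (paths assume the spark-env.sh above).
# 1. Is there more than one assembly jar in the Spark lib directory?
ls -l /appl/idc/spark/lib/spark-assembly-*.jar

# 2. Which Spark jars do the launch scripts actually resolve?
/appl/idc/spark/bin/compute-classpath.sh | tr ':' '\n' | grep -i spark

# 3. What does SPARK_CLASSPATH contribute after sourcing the env file?
source /appl/idc/spark/conf/spark-env.sh
echo "$SPARK_CLASSPATH" | tr ':' '\n' | grep -i spark
{code}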


> Failed to start spark-shell when using  yarn-client mode with the Spark1.2.0
> ----------------------------------------------------------------------------
>
>                 Key: SPARK-5005
>                 URL: https://issues.apache.org/jira/browse/SPARK-5005
>             Project: Spark
>          Issue Type: Bug
>          Components: Spark Core, Spark Shell, YARN
>    Affects Versions: 1.2.0
>         Environment: Spark 1.2.0
> Hadoop 2.2.0
>            Reporter: yangping wu
>            Priority: Minor
>   Original Estimate: 8h
>  Remaining Estimate: 8h
>
> I am using Spark 1.2.0, but when I start spark-shell in yarn-client mode
> ({code}MASTER=yarn-client bin/spark-shell{code}), it fails with the following error message:
> {code}
> Unknown/unsupported param List(--executor-memory, 1024m, --executor-cores, 8, --num-executors, 2)
> Usage: org.apache.spark.deploy.yarn.ApplicationMaster [options] 
> Options:
>   --jar JAR_PATH       Path to your application's JAR file (required)
>   --class CLASS_NAME   Name of your application's main class (required)
>   --args ARGS          Arguments to be passed to your application's main class.
>                        Mutliple invocations are possible, each will be passed in order.
>   --num-executors NUM    Number of executors to start (Default: 2)
>   --executor-cores NUM   Number of cores for the executors (Default: 1)
>   --executor-memory MEM  Memory per executor (e.g. 1000M, 2G) (Default: 1G)
> {code}
> But when I use Spark 1.1.0 and start spark-shell the same way
> ({code}MASTER=yarn-client bin/spark-shell{code}), it works.


