I think you have to use the *set* keyword to set an environment variable on
Windows. See the "Setting environment variables" section at
http://www.microsoft.com/resources/documentation/windows/xp/all/proddocs/en-us/ntcmds_shelloverview.mspx?mfr=true
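For reference, the exception in your report can be reproduced outside Spark: on Linux, the JVM rejects environment variable names that contain '=', and cmd.exe keeps hidden drive-tracking variables with names like "=::" that can leak into the environment a Windows machine forwards. A minimal sketch (the class name and the removal workaround are mine, not something Spark does for you):

```java
import java.util.Map;

public class EnvNameRepro {
    public static void main(String[] args) {
        // Environment map of a child process about to be launched.
        Map<String, String> env = new ProcessBuilder("echo", "hi").environment();
        try {
            // "=::" is one of cmd.exe's hidden drive-tracking variables on
            // Windows; if it is carried over into a JVM child-process
            // environment on Linux, the name is rejected because it
            // contains '='.
            env.put("=::", "::\\");
        } catch (IllegalArgumentException e) {
            // Same exception class and message as in the quoted report.
            System.out.println(e.getMessage());
        }
        // Workaround sketch: drop any variable whose name starts with '='
        // before the environment reaches the child process.
        env.keySet().removeIf(k -> k.startsWith("="));
    }
}
```

If that is what is happening, making sure none of the "=..." variables survive in the environment that spark-submit forwards from the Windows box might be enough.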

Thanks,
Best regards

On Tue, Aug 25, 2015 at 1:25 PM, Yann ROBIN <me.s...@gmail.com> wrote:

> Hi,
>
> We have a Spark standalone cluster running on Linux.
> We have a job that we submit to the cluster from Windows. When
> submitting this job from Windows, the execution fails with this error
> in the Notes: "java.lang.IllegalArgumentException: Invalid environment
> variable name: "=::"". When submitting from Linux it works fine.
>
> I thought this might be caused by one of the environment variables on
> my system, so I modified the submit cmd to remove all environment
> variables except the ones needed by Java. This is the environment
> before executing the java command:
> ASSEMBLY_DIR=c:\spark\spark-1.4.0-bin-hadoop2.6\bin\..\lib
> ASSEMBLY_DIR1=c:\spark\spark-1.4.0-bin-hadoop2.6\bin\../assembly/target/scala-2.10
> ASSEMBLY_DIR2=c:\spark\spark-1.4.0-bin-hadoop2.6\bin\../assembly/target/scala-2.11
> CLASS=org.apache.spark.deploy.SparkSubmit
> CLASSPATH=.;
> JAVA_HOME=C:\Program Files\Java\jre1.8.0_51
> LAUNCHER_OUTPUT=\spark-class-launcher-output-23386.txt
> LAUNCH_CLASSPATH=c:\spark\spark-1.4.0-bin-hadoop2.6\bin\..\lib\spark-assembly-1.4.0-hadoop2.6.0.jar
> PYTHONHASHSEED=0
> RUNNER=C:\Program Files\Java\jre1.8.0_51\bin\java
> SPARK_ASSEMBLY_JAR=c:\spark\spark-1.4.0-bin-hadoop2.6\bin\..\lib\spark-assembly-1.4.0-hadoop2.6.0.jar
> SPARK_CMD="C:\Program Files\Java\jre1.8.0_51\bin\java" -cp "c:\spark\spark-1.4.0-bin-hadoop2.6\bin\..\conf\;c:\spark\spark-1.4.0-bin-hadoop2.6\bin\..\lib\spark-assembly-1.4.0-hadoop2.6.0.jar;c:\spark\spark-1.4.0-bin-hadoop2.6\bin\..\lib\datanucleus-api-jdo-3.2.6.jar;c:\spark\spark-1.4.0-bin-hadoop2.6\bin\..\lib\datanucleus-core-3.2.10.jar;c:\spark\spark-1.4.0-bin-hadoop2.6\bin\..\lib\datanucleus-rdbms-3.2.9.jar" org.apache.spark.deploy.SparkSubmit --master spark://172.16.8.21:7077 --deploy-mode cluster --conf "spark.driver.memory=4G" --conf "spark.driver.extraClassPath=/opt/local/spark/lib/spark-assembly-1.4.0-hadoop2.6.0.jar" --class com.publica.Accounts --verbose http://server/data-analytics/data-analytics.jar spark://172.16.8.21:7077 data-analysis http://server/data-analytics/data-analytics.jar 23 8 2015
> SPARK_ENV_LOADED=1
> SPARK_HOME=c:\spark\spark-1.4.0-bin-hadoop2.6\bin\..
> SPARK_SCALA_VERSION=2.10
> SystemRoot=C:\Windows
> user_conf_dir=c:\spark\spark-1.4.0-bin-hadoop2.6\bin\..\..\conf
> _SPARK_ASSEMBLY=c:\spark\spark-1.4.0-bin-hadoop2.6\bin\..\lib\spark-assembly-1.4.0-hadoop2.6.0.jar
>
> Is there a way to make this work?
>
> --
> Yann
>
> ---------------------------------------------------------------------
> To unsubscribe, e-mail: user-unsubscr...@spark.apache.org
> For additional commands, e-mail: user-h...@spark.apache.org
>
>
