[ https://issues.apache.org/jira/browse/SPARK-22028?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16169663#comment-16169663 ]

Franz Wimmer commented on SPARK-22028:
--------------------------------------

Even if I could unset this variable - which I cannot at first sight - it's a recurring 
"feature" of Windows. Please have a look at 
[java.lang.ProcessEnvironment.validateVariable()|http://grepcode.com/file/repository.grepcode.com/java/root/jdk/openjdk/6-b14/java/lang/ProcessEnvironment.java#ProcessEnvironment.validateVariable%28java.lang.String%29].
 That method checks whether the variable name contains an {{=}} and throws if it 
does. So I think it would be easier for Spark to simply not pass this variable on 
than for Oracle to change their Java source code.
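To illustrate the suggestion: a minimal sketch of filtering out such variables before they reach {{ProcessBuilder}}. This is a hypothetical helper ({{copySaneEnv}} is not Spark code), just showing that skipping names containing {{=}} avoids the {{IllegalArgumentException}} that {{validateVariable()}} would otherwise raise:

```java
import java.util.HashMap;
import java.util.Map;

public class EnvFilter {

    // Hypothetical helper: copy environment variables into a target map,
    // skipping any whose *name* contains '=' - on Windows, variables like
    // "=::" (an old MS-DOS relic) exist, and ProcessBuilder's environment
    // map rejects such names via ProcessEnvironment.validateVariable().
    static void copySaneEnv(Map<String, String> source, Map<String, String> target) {
        for (Map.Entry<String, String> e : source.entrySet()) {
            if (e.getKey().contains("=")) {
                continue; // would trigger IllegalArgumentException in ProcessBuilder
            }
            target.put(e.getKey(), e.getValue());
        }
    }

    public static void main(String[] args) {
        Map<String, String> source = new HashMap<>();
        source.put("PATH", "C:\\Windows");
        source.put("=::", "::\\"); // the offending variable from the report

        // In real use, the target would be processBuilder.environment();
        // a plain map stands in for it here.
        Map<String, String> target = new HashMap<>();
        copySaneEnv(source, target);

        System.out.println(target.containsKey("=::")); // false - filtered out
        System.out.println(target.containsKey("PATH")); // true - passed through
    }
}
```

The same check could live wherever Spark copies the parent environment into the child process builder, e.g. around the {{CommandUtils.buildProcessBuilder}} loop seen in the stack trace below.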

> spark-submit trips over environment variables
> ---------------------------------------------
>
>                 Key: SPARK-22028
>                 URL: https://issues.apache.org/jira/browse/SPARK-22028
>             Project: Spark
>          Issue Type: Bug
>          Components: Deploy
>    Affects Versions: 2.1.1
>         Environment: Operating System: Windows 10
> Shell: CMD or bash.exe, both with the same result
>            Reporter: Franz Wimmer
>              Labels: windows
>
> I have a strange environment variable in my Windows operating system:
> {code:none}
> C:\Path>set ""
> =::=::\
> {code}
> According to [this answer at 
> stackexchange|https://unix.stackexchange.com/a/251215/251326], this is some 
> sort of old MS-DOS relic that interacts with Cygwin shells.
> Leaving that aside for a moment, Spark tries to read environment variables on 
> submit and trips over it: 
> {code:none}
> ./spark-submit.cmd
> Running Spark using the REST application submission protocol.
> Using Spark's default log4j profile: 
> org/apache/spark/log4j-defaults.properties
> 17/09/15 15:57:51 INFO RestSubmissionClient: Submitting a request to launch 
> an application in spark://********:31824.
> 17/09/15 15:58:01 WARN RestSubmissionClient: Unable to connect to server 
> spark://*******:31824.
> Warning: Master endpoint spark://********:31824 was not a REST server. 
> Falling back to legacy submission gateway instead.
> 17/09/15 15:58:02 ERROR Shell: Failed to locate the winutils binary in the 
> hadoop binary path
> [ ... ]
> 17/09/15 15:58:02 WARN NativeCodeLoader: Unable to load native-hadoop library 
> for your platform... using builtin-java classes where applicable
> 17/09/15 15:58:08 ERROR ClientEndpoint: Exception from cluster was: 
> java.lang.IllegalArgumentException: Invalid environment variable name: "=::"
> java.lang.IllegalArgumentException: Invalid environment variable name: "=::"
>         at 
> java.lang.ProcessEnvironment.validateVariable(ProcessEnvironment.java:114)
>         at java.lang.ProcessEnvironment.access$200(ProcessEnvironment.java:61)
>         at 
> java.lang.ProcessEnvironment$Variable.valueOf(ProcessEnvironment.java:170)
>         at 
> java.lang.ProcessEnvironment$StringEnvironment.put(ProcessEnvironment.java:242)
>         at 
> java.lang.ProcessEnvironment$StringEnvironment.put(ProcessEnvironment.java:221)
>         at 
> org.apache.spark.deploy.worker.CommandUtils$$anonfun$buildProcessBuilder$2.apply(CommandUtils.scala:55)
>         at 
> org.apache.spark.deploy.worker.CommandUtils$$anonfun$buildProcessBuilder$2.apply(CommandUtils.scala:54)
>         at 
> scala.collection.TraversableLike$WithFilter$$anonfun$foreach$1.apply(TraversableLike.scala:733)
>         at 
> scala.collection.immutable.HashMap$HashMap1.foreach(HashMap.scala:221)
>         at 
> scala.collection.immutable.HashMap$HashTrieMap.foreach(HashMap.scala:428)
>         at 
> scala.collection.immutable.HashMap$HashTrieMap.foreach(HashMap.scala:428)
>         at 
> scala.collection.TraversableLike$WithFilter.foreach(TraversableLike.scala:732)
>         at 
> org.apache.spark.deploy.worker.CommandUtils$.buildProcessBuilder(CommandUtils.scala:54)
>         at 
> org.apache.spark.deploy.worker.DriverRunner.prepareAndRunDriver(DriverRunner.scala:181)
>         at 
> org.apache.spark.deploy.worker.DriverRunner$$anon$1.run(DriverRunner.scala:91)
> {code}
> Please note that _spark-submit.cmd_ is in this case my own script, which calls 
> the _spark-submit.cmd_ from the Spark distribution.
> I think that shouldn't happen. Spark should handle such a malformed 
> environment variable gracefully.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org
