[ 
https://issues.apache.org/jira/browse/SPARK-10436?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15923175#comment-15923175
 ] 

Naresh commented on SPARK-10436:
--------------------------------

This issue was fixed in Spark 2.0.0.

File: https://github.com/apache/spark/blob/branch-2.0/core/src/main/scala/org/apache/spark/deploy/SparkSubmitArguments.scala

Line 176: files = Option(files).orElse(sparkProperties.get("spark.files")).orNull
fixes this issue.
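A minimal sketch of the precedence pattern that line implements (names here are illustrative, not the actual SparkSubmitArguments fields): a value given on the command line wins, but when it is absent, the value from spark-defaults.conf is used instead of being dropped.

```scala
// Sketch of the Option(...).orElse(...).orNull precedence pattern from
// SparkSubmitArguments.scala. `resolveFiles` and `defaults` are
// hypothetical names for illustration only.
object FilesMerge {
  def resolveFiles(cliFiles: String, sparkProperties: Map[String, String]): String =
    // Wrap the possibly-null CLI value; fall back to the spark-defaults
    // entry; unwrap back to a nullable String as the old code expected.
    Option(cliFiles).orElse(sparkProperties.get("spark.files")).orNull

  def main(args: Array[String]): Unit = {
    val defaults = Map("spark.files" -> "library.zip,file1.py,file2.py")
    // No --files on the command line: the spark-defaults entry survives.
    assert(resolveFiles(null, defaults) == "library.zip,file1.py,file2.py")
    // An explicit --files value still takes precedence over the defaults.
    assert(resolveFiles("test.py", defaults) == "test.py")
    println("ok")
  }
}
```

Before the fix, the CLI-derived value was assigned unconditionally, which is why the job script name clobbered the spark.files default.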


> spark-submit overwrites spark.files defaults with the job script filename
> -------------------------------------------------------------------------
>
>                 Key: SPARK-10436
>                 URL: https://issues.apache.org/jira/browse/SPARK-10436
>             Project: Spark
>          Issue Type: Bug
>          Components: Spark Submit
>    Affects Versions: 1.4.0
>         Environment: Ubuntu, Spark 1.4.0 Standalone
>            Reporter: axel dahl
>            Priority: Minor
>              Labels: easyfix, feature
>
> In my spark-defaults.conf I have configured a set of libraries to be 
> uploaded to my Spark 1.4.0 Standalone cluster.  The entry appears as:
> spark.files              library.zip,file1.py,file2.py
> When I execute spark-submit -v test.py, I see that spark-submit reads the 
> defaults correctly, but that it overwrites the "spark.files" default entry 
> and replaces it with the name of the job script, i.e. "test.py".
> This behavior doesn't seem intuitive.  test.py should be added to the Spark 
> working folder, but it should not overwrite the "spark.files" defaults.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)
