Github user vanzin commented on a diff in the pull request:

    https://github.com/apache/spark/pull/2516#discussion_r17998489
  
    --- Diff: core/src/main/resources/org/apache/spark/deploy/spark-submit-defaults.prop ---
    @@ -0,0 +1,90 @@
    +# The master URL for the cluster, e.g. spark://host:port, mesos://host:port, yarn, or local.
    +# legacy env variable MASTER
    +spark.master = local[*]
    +
    +# Whether spark-submit should run in verbose mode: default is false
    +spark.verbose = false
    +
    +# Comma-separated list of files to be placed in the working directory of each executor
    +# spark.files =
    +
    +# Comma-separated list of .zip, .egg, or .py files to place on the PYTHONPATH for Python apps.
    +# spark.submit.pyfiles =
    +
    +# Comma-separated list of local jars to include on the driver and executor classpaths.
    +# spark.jars =
    +
    +
    +# Path to a bundled jar including your application and all dependencies.
    +# The URL must be globally visible inside of your cluster, for instance,
    +# an hdfs:// path or a file:// path that is present on all nodes.
    +# spark.app.primaryResource =
    +
    +# The name of your application.
    +spark.app.name = Unknown Application
    --- End diff ---
    
    SparkSubmit will come up with a proper default name for the application if
one isn't provided - a name based on other command-line parameters (such as
the class to run). So I don't see much sense in having a default here.
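    
    Just to sketch the kind of fallback I mean (a hypothetical helper, not the
actual SparkSubmit code - the method name and exact fallback order are only
illustrative):
    
        object AppNameFallback {
          // Hypothetical helper (not Spark's actual SparkSubmit code): use the
          // explicitly supplied name if there is one, otherwise fall back to
          // the main class, then to the primary resource.
          def deriveAppName(
              explicitName: Option[String],
              mainClass: Option[String],
              primaryResource: Option[String]): String = {
            explicitName
              .orElse(mainClass)
              .orElse(primaryResource)
              .getOrElse("(unnamed application)")
          }
    
          def main(args: Array[String]): Unit = {
            // No explicit spark.app.name: the class being run ends up as the name.
            println(deriveAppName(None, Some("org.example.MyJob"), Some("my-job.jar")))
          }
        }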

