Github user vanzin commented on a diff in the pull request:

    https://github.com/apache/spark/pull/16975#discussion_r101851064
  
    --- Diff: core/src/main/scala/org/apache/spark/deploy/SparkSubmit.scala ---
    @@ -466,7 +466,7 @@ object SparkSubmit extends CommandLineUtils {
           // Other options
           OptionAssigner(args.executorCores, STANDALONE | YARN, ALL_DEPLOY_MODES,
             sysProp = "spark.executor.cores"),
    -      OptionAssigner(args.executorMemory, STANDALONE | MESOS | YARN, ALL_DEPLOY_MODES,
    +      OptionAssigner(args.executorMemory, ALL_CLUSTER_MGRS, ALL_DEPLOY_MODES,
    --- End diff ---
    
    Is the change in `SparkContext` needed? This `SparkSubmit` change alone seems like it should be sufficient.
    
    As far as I understand, the last value in the `local-cluster` master string is the amount of memory each worker has available; you may, for whatever reason, want to run executors with less memory than that, which your change doesn't seem to allow.

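    For illustration only (my sketch, not from the PR): assuming the
    `local-cluster[numWorkers, coresPerWorker, memoryPerWorkerMB]` master
    format, a setup like the following should remain possible, with executors
    requesting less memory than each worker offers:

    ```scala
    import org.apache.spark.{SparkConf, SparkContext}

    // Hypothetical example, not part of this PR: each worker offers 2048 MB,
    // but executors are configured to use only 1 GB of it.
    val conf = new SparkConf()
      .setMaster("local-cluster[2, 1, 2048]")
      .setAppName("executor-memory-below-worker-memory")
      .set("spark.executor.memory", "1g")

    val sc = new SparkContext(conf)
    try {
      // Run a trivial job to confirm executors come up with the smaller heap.
      println(sc.parallelize(1 to 100).sum())
    } finally {
      sc.stop()
    }
    ```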
