Github user andrewor14 commented on a diff in the pull request:

    https://github.com/apache/spark/pull/16975#discussion_r101857385
  
    --- Diff: core/src/main/scala/org/apache/spark/deploy/SparkSubmit.scala ---
    @@ -466,7 +466,7 @@ object SparkSubmit extends CommandLineUtils {
           // Other options
           OptionAssigner(args.executorCores, STANDALONE | YARN, ALL_DEPLOY_MODES,
             sysProp = "spark.executor.cores"),
    -      OptionAssigner(args.executorMemory, STANDALONE | MESOS | YARN, ALL_DEPLOY_MODES,
    +      OptionAssigner(args.executorMemory, ALL_CLUSTER_MGRS, ALL_DEPLOY_MODES,
    --- End diff ---
    
    ```
    You may, for whatever reason, want to run executors with less than that, 
which your change doesn't seem to allow.
    ```
    Yeah, I thought about this long and hard, but I just couldn't come up with a case where you would actually want the worker memory to be different from the executor memory in local-cluster mode. If you want to launch 5 workers (2GB each), each with 2 executors (1GB each), then you might as well just launch 10 executors (1GB each) or run real standalone mode locally. I think it's better to fix the out-of-the-box case than to try to cover potentially non-existent corner cases.
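
    For illustration only, here is a minimal sketch of the sizing scenario described above, assuming the standard `local-cluster[numWorkers,coresPerWorker,memoryPerWorkerMB]` master URL; the app name and sizes are hypothetical:

    ```scala
    import org.apache.spark.sql.SparkSession

    // Sketch: size workers and executors the same way in local-cluster mode,
    // which is the out-of-the-box case this change targets.
    val spark = SparkSession.builder()
      // local-cluster[numWorkers,coresPerWorker,memoryPerWorkerMB]:
      // 10 single-core workers with 1024 MB each...
      .master("local-cluster[10,1,1024]")
      // ...each hosting one 1 GB executor, rather than 5 workers at 2 GB
      // running two 1 GB executors apiece.
      .config("spark.executor.memory", "1g")
      .appName("local-cluster-sizing-example")
      .getOrCreate()

    spark.stop()
    ```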

