Github user jerryshao commented on the issue:

    https://github.com/apache/spark/pull/18711
  
    @srowen due to the current design of the standalone cluster manager, if we 
don't set `--total-executor-cores`, the Spark application will try to acquire 
all the free cores on the cluster, and it will keep acquiring cores (and 
launching executors) as other applications free them later on.
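    
    For context, a minimal sketch of how an application can cap itself (the 
master URL and app name below are just placeholders), using `spark.cores.max`, 
which is the configuration equivalent of `--total-executor-cores`:
    
    ```scala
    import org.apache.spark.sql.SparkSession
    
    // Minimal sketch: cap the total cores a standalone-mode application may take.
    // `spark.cores.max` is the configuration equivalent of `--total-executor-cores`;
    // without it, the application keeps grabbing every free core on the cluster.
    val spark = SparkSession.builder()
      .master("spark://master-host:7077")   // hypothetical standalone master URL
      .appName("capped-cores-example")
      .config("spark.cores.max", "8")       // never use more than 8 cores in total
      .getOrCreate()
    ```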
    
    I think maybe we could put the explanation in 
https://spark.apache.org/docs/latest/spark-standalone.html and add a link in 
`configuration.md` pointing to it, so that we have a new paragraph that fully 
explains this behavior.
    
    What do you think @jiangxb1987 ?
    



