Github user nishkamravi2 commented on the pull request:

    https://github.com/apache/spark/pull/731#issuecomment-49956235
  
    I think it would be better to reuse the same parameters, to minimize
    discrepancy across the different scheduling modes at the interface level.
    Also, once this PR is merged, do we have a compelling use case for starting
    multiple workers per node, or can we retire params like
    SPARK_WORKER_INSTANCES?
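
    For concreteness, here is a minimal spark-env.sh sketch contrasting the two
    approaches (values are illustrative, and using spark.executor.cores as the
    per-executor cap is my assumption about how the new behavior would be
    configured):

        # conf/spark-env.sh

        # Legacy approach: run two separate worker daemons (JVMs) per node
        # SPARK_WORKER_INSTANCES=2
        # SPARK_WORKER_CORES=8

        # Approach in this PR: one worker daemon per node that launches
        # multiple executor processes. With 16 cores and an application
        # that sets spark.executor.cores=4 (e.g. in spark-defaults.conf),
        # the worker could run up to 4 executors for that application.
        SPARK_WORKER_INSTANCES=1
        SPARK_WORKER_CORES=16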
    
    Minor comment:

    // allow user to run multiple executors in the same worker
    // (within the same worker JVM process)

    could be modified to:

    // allow user to run multiple executor processes on a worker node
    // (managed by a single worker daemon/JVM)
    
    Has this PR been tested beyond automated unit tests?

