GitHub user mgummelt commented on the pull request:

    https://github.com/apache/spark/pull/10993#issuecomment-180023827
  
    @Astralidea You can't guarantee that receivers run on different nodes, even 
with Coarse-Grained Spark as it exists today. Running one executor on a slave 
does not guarantee that only one Spark task runs on that slave: an executor 
with, say, 4 cores and spark.task.cpus=1 can run four concurrent tasks, so 
multiple receivers can still land on the same node.
    
    I do have some new config vars in mind, though, that would solve this 
problem as well as other scheduling problems:
    spark.mesos.executor.max_memory
    spark.mesos.memory.min_per_core
    spark.mesos.memory.max_per_core
    spark.mesos.cores.max_per_node
    
    I think these 4 new config vars will capture any constraint a user has. For 
example, you can guarantee one receiver per node by setting 
spark.mesos.cores.max_per_node == spark.task.cpus.
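    
    A rough sketch of what that might look like, assuming the proposed 
spark.mesos.cores.max_per_node var existed (it does not today; the master URL 
and app name below are placeholders, and spark.task.cpus is the existing 
setting for cores reserved per task):
    
        import org.apache.spark.SparkConf
        import org.apache.spark.streaming.{Seconds, StreamingContext}
    
        val conf = new SparkConf()
          .setMaster("mesos://host:5050")  // placeholder Mesos master URL
          .setAppName("one-receiver-per-node")
          .set("spark.task.cpus", "1")
          // Proposed (not yet implemented): cap per-node cores at the per-task
          // core count, so at most one task -- and hence at most one receiver --
          // can run on any given node.
          .set("spark.mesos.cores.max_per_node", "1")
    
        val ssc = new StreamingContext(conf, Seconds(1))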
    
    But this is a discussion that should be moved to JIRA.


