Github user li-zhihui commented on the pull request:
https://github.com/apache/spark/pull/1462#issuecomment-49572335
@tgravescs
I tested it on a cluster with mesos-0.18.1 (both fine-grained and
coarse-grained modes); it works well.
I think you are right. In fact, users have no notion of an expected number
of executors in Mesos mode (or standalone mode); they only specify CPU
cores (<code>spark.cores.max</code>). So we need to check the total number
of registered executors' cores against <code>spark.cores.max</code> to
judge whether the SchedulerBackend is ready, and rename
<code>spark.scheduler.minRegisteredExecutorsRatio</code> to
<code>spark.scheduler.minRegisteredResourcesRatio</code>.
What do you think about it?
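To illustrate the idea, here is a minimal sketch of such a readiness check. All names here (`sufficientResourcesRegistered`, `coresRegistered`, `minRegisteredRatio`) are illustrative assumptions, not the actual Spark implementation in this PR:

```scala
// Sketch: decide whether the backend is "ready" by comparing the cores
// registered so far against spark.cores.max scaled by a minimum ratio
// (the proposed spark.scheduler.minRegisteredResourcesRatio).
object ReadinessCheckSketch {
  def sufficientResourcesRegistered(
      coresRegistered: Int,        // cores of executors registered so far
      maxCores: Int,               // value of spark.cores.max
      minRegisteredRatio: Double   // fraction of maxCores required
  ): Boolean =
    coresRegistered >= maxCores * minRegisteredRatio

  def main(args: Array[String]): Unit = {
    // e.g. spark.cores.max = 16, ratio = 0.8: ready once >= 12.8 cores register
    println(ReadinessCheckSketch.sufficientResourcesRegistered(13, 16, 0.8)) // true
    println(ReadinessCheckSketch.sufficientResourcesRegistered(12, 16, 0.8)) // false
  }
}
```

With a ratio-based check on cores rather than executor counts, the same configuration key works across Mesos, standalone, and YARN modes.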