[
https://issues.apache.org/jira/browse/SPARK-4940?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14264579#comment-14264579
]
Gerard Maas commented on SPARK-4940:
------------------------------------
> From the perspective of evenly allocating Spark Streaming consumers
> (network-bound), the ideal solution would be to explicitly set the number of
> hosts.
With the current resource allocation policy, we can end up with e.g. (4),(1),(1)
consumers over 3 hosts, instead of the ideal (2),(2),(2). Since the resource
allocation is decided dynamically at job startup time, this results in variable
performance characteristics for the job being submitted.
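To make the skew concrete, here is a toy sketch (not Spark's actual scheduler code; host names and core counts are made up) contrasting a greedy take-all-you-can allocator, which behaves roughly like coarse-grained mode, with a round-robin spread of the same number of cores:

```python
# Toy illustration only -- not Spark's scheduler. Offers are (host, free_cores).

def greedy_allocate(offers, cores_needed):
    """Take as many cores as possible from each offer in order,
    roughly what coarse-grained mode does today."""
    alloc = {host: 0 for host, _ in offers}
    remaining = cores_needed
    for host, free in offers:
        take = min(free, remaining)
        alloc[host] += take
        remaining -= take
    return alloc

def spread_allocate(offers, cores_needed):
    """Hand out one core at a time across hosts for an even spread."""
    alloc = {host: 0 for host, _ in offers}
    remaining = cores_needed
    while remaining:
        progressed = False
        for host, cap in offers:
            if remaining and alloc[host] < cap:
                alloc[host] += 1
                remaining -= 1
                progressed = True
        if not progressed:  # no host has free cores left
            break
    return alloc

offers = [("host1", 4), ("host2", 4), ("host3", 4)]
print(greedy_allocate(offers, 6))  # {'host1': 4, 'host2': 2, 'host3': 0}
print(spread_allocate(offers, 6))  # {'host1': 2, 'host2': 2, 'host3': 2}
```

For a network-bound streaming consumer, the greedy result concentrates receivers on one host, which is exactly the variability described above.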
In practice, we have been restarting the job (using Marathon) until we get a
favorable resource allocation.
Not sure how well the requirement of a fixed number of executors would fit with
the node transparency offered by Mesos. I'm just trying to elaborate on the
requirements from the Spark Streaming job perspective.
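For reference, a sketch of the knob that exists today (master URL, jar name, and core count are placeholders): spark.cores.max caps the total cores the job grabs, but in coarse-grained mode it does not control how those cores spread across slaves, which is why the (4),(1),(1) pattern can occur.

```shell
# Placeholder spark-submit invocation. spark.cores.max limits the job's
# total cores; it does NOT enforce an even per-slave distribution in
# coarse-grained Mesos mode.
spark-submit \
  --master mesos://zk://zk-host:2181/mesos \
  --conf spark.mesos.coarse=true \
  --conf spark.cores.max=6 \
  streaming-job.jar
```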
> Document or Support more evenly distributing cores for Mesos mode
> -----------------------------------------------------------------
>
> Key: SPARK-4940
> URL: https://issues.apache.org/jira/browse/SPARK-4940
> Project: Spark
> Issue Type: Improvement
> Components: Mesos
> Reporter: Timothy Chen
>
> Currently in coarse-grained mode the Spark scheduler simply takes all the
> resources it can on each node, which can cause an uneven distribution
> depending on the resources available on each slave.
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)