Gerard,

Are you familiar with spark.deploy.spreadOut
<http://spark.apache.org/docs/latest/spark-standalone.html> in Standalone
mode?  It sounds like you want the same thing in Mesos mode.
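
For reference, spreadOut is on by default. A minimal sketch of how it can be
toggled on the standalone master, per the standalone docs (the property name
is the real one; spark-env.sh is the documented place for master-side
settings):

  # conf/spark-env.sh on the standalone master
  # true (the default) spreads executors across as many workers as
  # possible; false consolidates them onto as few workers as possible
  SPARK_MASTER_OPTS="-Dspark.deploy.spreadOut=true"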

On Thu, Dec 11, 2014 at 6:48 AM, Tathagata Das <tathagata.das1...@gmail.com>
wrote:

> Not that I am aware of. Spark will try to spread the tasks evenly
> across executors; it's not aware of the workers at all. So if the
> executor-to-worker allocation is uneven, I am not sure what can be
> done. Maybe others can give some ideas.
>
> On Tue, Dec 9, 2014 at 6:20 AM, Gerard Maas <gerard.m...@gmail.com> wrote:
> > Hi,
> >
> > We have a number of Spark Streaming / Kafka jobs that would benefit
> > from an even spread of consumers over physical hosts in order to
> > maximize network usage. As far as I can see, the Spark Mesos
> > scheduler accepts resource offers until all of the required memory +
> > CPU allocation has been satisfied.
> >
> > This basic resource allocation policy produces large executors
> > spread over few nodes, and hence many Kafka consumers on a single
> > node (e.g. out of 12 consumers, I've seen allocations of 7/3/2).
> >
> > Is there a way to tune this behavior to achieve executor allocation on a
> > given number of hosts?
> >
> > -kr, Gerard.
>
> ---------------------------------------------------------------------
> To unsubscribe, e-mail: user-unsubscr...@spark.apache.org
> For additional commands, e-mail: user-h...@spark.apache.org
>
>
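
On the Mesos side, I'm not aware of a direct spreadOut equivalent. For what
it's worth, a sketch of the related knobs that do exist (the property names
are real Spark configs; the values are only illustrative, and capping total
cores shapes which offers get accepted but does not by itself guarantee an
even spread across hosts):

  # conf/spark-defaults.conf for the application submitted to Mesos
  spark.mesos.coarse     true  # one long-lived executor per node, not fine-grained tasks
  spark.cores.max        12    # cap on total cores the app accepts across the cluster
  spark.executor.memory  4g    # memory requested for each executor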
