We autoscale our Mesos cluster in EC2 from within our framework. Scaling up
is relatively easy: watch demand vs. supply. Scaling down, however, requires
bin-packing the tasks tightly onto as few servers as possible.
Do you have any specific ideas on how you would leverage Mantis/Mesos for
Spark-based jobs? Fenzo, the scheduler component of Mantis, could be another
point of leverage; it could give a framework the ability to autoscale the
cluster, among other benefits.
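To illustrate the scale-down idea above, here is a minimal sketch of first-fit-decreasing bin packing, the kind of consolidation a framework might run before decommissioning servers. The task sizes, single-dimension capacity, and function name are hypothetical; a real Mesos framework (or Fenzo) would reason over multi-dimensional CPU/memory offers rather than one scalar.

```python
# Sketch only: first-fit-decreasing bin packing to consolidate tasks
# onto as few servers as possible before scaling down. Sizes are in
# arbitrary units; real schedulers pack CPU, memory, etc. jointly.

def pack_tasks(task_sizes, server_capacity):
    """Assign task sizes to servers using first-fit decreasing."""
    servers = []  # each entry: [remaining_capacity, [assigned task sizes]]
    for size in sorted(task_sizes, reverse=True):
        for server in servers:
            if server[0] >= size:          # first server with room wins
                server[0] -= size
                server[1].append(size)
                break
        else:                              # no server fit; open a new one
            servers.append([server_capacity - size, [size]])
    return [tasks for _, tasks in servers]

packed = pack_tasks([4, 8, 1, 4, 2, 1], server_capacity=10)
# Six tasks totaling 20 units fit on two 10-unit servers; any idle
# servers beyond those two become candidates for decommissioning.
```

First-fit decreasing is a simple heuristic, not optimal, but it keeps the example short; the point is that scale-down needs an explicit packing pass, whereas scale-up only needs a demand/supply comparison.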
On Thu, Jun 4, 2015 at 1:06 PM, Dmitry Goldenberg <[email protected]>
wrote:

> Thanks, Vinod. I'm really interested in how we could leverage something
> like Mantis and Mesos to achieve autoscaling in a Spark-based data
> processing system...
>
> On Jun 4, 2015, at 3:54 PM, Vinod Kone <[email protected]> wrote:
>
> Hey Dmitry. At the current time there is no built-in support for Mesos to
> autoscale nodes in the cluster. I've heard people (Netflix?) do it out of
> band on EC2.
>
> On Thu, Jun 4, 2015 at 9:08 AM, Dmitry Goldenberg <
> [email protected]> wrote:
>
>> A Mesos noob here. Could someone point me at the doc or summary for the
>> cluster autoscaling capabilities in Mesos?
>>
>> Is there a way to feed it events and have it detect the need to bring in
>> more machines or decommission machines?  Is there a way to receive events
>> back that notify you that machines have been allocated or decommissioned?
>>
>> Would this work within a certain set of
>> "preallocated"/pre-provisioned/"stand-by" machines or will Mesos go and
>> grab machines from the cloud?
>>
>> What are the integration points of Apache Spark and Mesos?  What are the
>> true advantages of running Spark on Mesos?
>>
>> Can Mesos autoscale the cluster based on some signals/events coming out
>> of Spark runtime or Spark consumers, then cause the consumers to run on the
>> updated cluster, or signal to the consumers to restart themselves into an
>> updated cluster?
>>
>> Thanks.
>>
>
>
