Thanks, Andrew! I will search for that, and it's good to know the Jenkins Mesos framework also does that work.
Kenneth

On Fri, Feb 27, 2015 at 2:37 PM, Andrew Langhorn <[email protected]> wrote:

> Thanks for the slides, Sharma. I'll have a look this weekend!
>
> One thing you might find interesting, Kenneth, is the Jenkins Mesos
> framework, which does automatic slave provisioning and horizontal scaling.
>
> Andrew
>
> Sent from my iPhone
>
> On 27 Feb 2015, at 21:16, "Sharma Podila" <[email protected]> wrote:
>
> Hello Kenneth,
>
> There is a little bit of work needed in the framework to do autoscaling
> of the slave cluster. Theoretically, scaling up can be relatively easy:
> watch the utilization and add nodes. However, in order to scale down, the
> framework must support two things: some kind of bin packing so that it uses
> as few slaves as possible, and a call-out to determine which slaves can be
> shut down. I discussed how we achieve this at last year's MesosCon and also
> at AWS re:Invent; the slides are at
> http://www.slideshare.net/spodila/aws-reinvent-2014-talk-scheduling-using-apache-mesos-in-the-cloud
> in case that helps you with ideas.
>
>
> On Fri, Feb 27, 2015 at 12:52 PM, Kenneth Su <[email protected]> wrote:
>
>> Hi all,
>>
>> I am new to Mesos/Mesosphere. I tried a test from the tutorials and
>> successfully built a single master with two slaves, and also dispatched
>> tasks through Marathon to all slaves. It ran as expected, and it is great
>> to be able to scale an app to as many instances as it needs.
>>
>> However, a question came up, and I tried to find information on how
>> Mesos could automatically scale the slaves across as many machines as
>> needed, but there do not seem to be many details on how that works or
>> what the process is.
>>
>> Do we need another layer to watch and provision nodes on demand on a
>> PaaS so that new nodes can automatically join the Mesos cluster, or can
>> Mesos also handle that kind of task?
>>
>> Any related information or documents would be appreciated.
>>
>> Thanks!
>> Kenneth
>>
>
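For anyone else following the thread: a minimal sketch of the kind of watcher layer Kenneth asks about, and of the scale-down constraint Sharma describes, might look roughly like the loop below. It assumes the Mesos master's /master/state.json endpoint exposes resources and used_resources per slave (field names can differ across Mesos versions), and provision_node / terminate_node are hypothetical placeholders for whatever cloud-provider API you use; this is not Mesos or framework API itself.

    # autoscaler sketch (not a real Mesos component): poll master state,
    # add a node when the cluster is hot, and only remove slaves that are
    # completely idle -- the "call-out" Sharma mentions, which works best
    # when the framework also bin-packs tasks onto as few slaves as possible.
    import time
    import requests

    MASTER = "http://mesos-master:5050"     # assumed master address
    SCALE_UP_THRESHOLD = 0.8                # add a node above 80% CPU use
    SCALE_DOWN_THRESHOLD = 0.3              # look for idle nodes below 30%

    def cluster_state():
        # Assumes the master exposes per-slave resources/used_resources here.
        return requests.get(MASTER + "/master/state.json").json()

    def cpu_utilization(state):
        total = sum(s["resources"]["cpus"] for s in state["slaves"])
        used = sum(s["used_resources"]["cpus"] for s in state["slaves"])
        return used / total if total else 0.0

    def idle_slaves(state):
        # Only slaves running nothing are safe to shut down; bin packing in
        # the framework makes such slaves appear sooner during low load.
        return [s for s in state["slaves"] if s["used_resources"]["cpus"] == 0]

    def provision_node():
        pass  # placeholder: call your cloud / autoscaling-group API

    def terminate_node(slave):
        pass  # placeholder: drain and terminate the instance behind this slave

    while True:
        state = cluster_state()
        util = cpu_utilization(state)
        if util > SCALE_UP_THRESHOLD:
            provision_node()
        elif util < SCALE_DOWN_THRESHOLD:
            for slave in idle_slaves(state):
                terminate_node(slave)
        time.sleep(60)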

