I'd like to chime in and say that, from experience, Aurora scales with the
total number of instances across all jobs. From a scale perspective there
isn't much difference between a thousand 1-instance jobs and a single job
with 1,000 instances, since both cases take up roughly the same amount of
scheduler memory.
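
For concreteness, here is roughly what those two layouts look like in an
.aurora config. This is a minimal sketch along the lines of the pystachio
DSL from the Aurora tutorial; the cluster, role, command, and resource
values are placeholders rather than anything we actually run:

    # One job with 1,000 instances (placeholder values throughout):
    hello = Process(name = 'hello', cmdline = 'echo hello world && sleep 60')

    hello_task = Task(
      processes = [hello],
      resources = Resources(cpu = 0.1, ram = 16 * MB, disk = 8 * MB))

    jobs = [Job(
      cluster = 'devcluster',   # placeholder cluster name
      environment = 'prod',
      role = 'www-data',        # placeholder role
      name = 'hello',
      task = hello_task,
      instances = 1000)]

    # ...versus a thousand 1-instance jobs, which the scheduler stores as
    # roughly the same amount of state:
    # jobs = [Job(..., name = 'hello-%d' % i, instances = 1) for i in range(1000)]

Either way the scheduler ends up tracking about 1,000 instances, and that
total is what drives its memory footprint, not the number of job keys.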

As Josh mentioned, Twitter is running thousands of jobs, and those jobs span
hundreds of thousands of instances.

On Fri, Mar 18, 2016 at 9:27 AM, Joshua Cohen <jco...@apache.org> wrote:

> Hi Christopher,
>
> I think you already got an answer from Stephan in IRC, but just wanted to
> follow up for the sake of posterity (in case anyone in the future has a
> similar question and finds this thread). The only limit on the number of
> jobs that Aurora can run would currently be the amount of memory available
> to the Scheduler. Suffice it to say that at Twitter we're running thousands
> of jobs with no issues.
>
> Let us know if you have any follow up questions.
>
> Cheers,
>
> Joshua
>
> On Fri, Mar 18, 2016 at 9:12 AM, Christopher M Luciano <cmluci...@us.ibm.com>
> wrote:
>
> >  Hi all. It seems that we may be outgrowing Marathon. We have a problem
> > with the number of applications we are running, which puts us somewhat at
> > odds with Marathon's design goals. It seems that the unit of scheduling in
> > Aurora is a job plus instances of that job. Does a job map to a Marathon
> > application? If they are similar, is there a known limit on how many jobs
> > one can have?
> >
> >  What we discovered with Marathon is that more applications plus bigger
> > env_vars means a bigger znode, and we come dangerously close to hitting
> > ZooKeeper's default 1 MB znode size limit. I'm wondering whether this kind
> > of issue has already been addressed in Aurora.
> >
> > Christopher M Luciano
> >
> > Staff Software Engineer, Platform Services
> >
> > IBM Watson Core Technology
> >
>
--
Zameer Manji
