Ah, so it really only makes sense when the dedicated attribute is
*/something; a bare * would not make much sense. Seems reasonable to me.
On Wed, Mar 9, 2016 at 2:32 PM, Maxim Khutornenko wrote:
> It's an *easy* way to get a virtual cluster with specific
> requirements.
It's an *easy* way to get a virtual cluster with specific
requirements. One example: have a set of machines in a shared pool
with a different OS. This would let any existing or new customers try
their services for compliance. The alternative would be spinning off a
completely new physical cluster.
What does it mean to have a 'dedicated' host that's free-for-all like that?
On Wed, Mar 9, 2016 at 2:16 PM, Maxim Khutornenko wrote:
> Reactivating this thread. I like Bill's suggestion to have scheduler
> dedicated constraint management system. It will, however, require a
Reactivating this thread. I like Bill's suggestion to have scheduler
dedicated constraint management system. It will, however, require a
substantial effort to get done properly. Would anyone oppose adopting
Steve's patch in the meantime? The ROI is so high it would be a crime
NOT to take it :)
Thanks for the info, Steve! Yes, it would accomplish the same goal but
at the price of removing the exclusive dedicated constraint
enforcement. With this patch any job could target a fully dedicated
exclusive pool, which may be undesirable for dedicated pool owners.
On Wed, Jan 20, 2016 at 7:13
An arbitrary job can't target a fully dedicated role with this patch; it
will still get a "constraint not satisfied: dedicated" error. The code in
the scheduler that matches the constraints does a simple string match, so
"*/test" will not match "role1/test" when trying to place the task.
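The exact-string comparison Steve describes can be sketched as follows (a minimal illustration, not the actual SchedulingFilterImpl code, which is Java; the function name is invented):

```python
def dedicated_matches(job_constraint: str, slave_attribute: str) -> bool:
    # Plain string equality: no wildcard expansion happens, so "*/test"
    # can never equal "role1/test" and the task is simply not placed there.
    return job_constraint == slave_attribute


# A wildcard-constrained job only lands on hosts dedicated with the
# literal "*/test" attribute, not on exclusively dedicated pools:
assert not dedicated_matches("*/test", "role1/test")
assert dedicated_matches("*/test", "*/test")
assert dedicated_matches("role1/test", "role1/test")
```

This is what keeps a fully dedicated pool like `role1/test` closed to arbitrary jobs even after the patch.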
On Tue, Jan 19, 2016 at 7:22 PM, Maxim Khutornenko wrote:
> Has anyone explored an idea of having a non-exclusive (wrt job role)
> dedicated constraint in Aurora before?
> We do have a dedicated constraint now but it assumes a 1:1
> relationship between a job role and a slave
Has anyone explored an idea of having a non-exclusive (wrt job role)
dedicated constraint in Aurora before?
We do have a dedicated constraint now but it assumes a 1:1
relationship between a job role and a slave attribute [1]. For
example: a 'www-data/prod/hello' job with a dedicated constraint of
Also, regarding dedicated constraints necessitating a slave restart - I've
pondered moving dedicated machine management to the scheduler for similar
purposes. There's not really much forcing that behavior to be managed with
a slave attribute.
On Tue, Jan 19, 2016 at 7:05 PM, John Sirois
>
> Can't this just be any old Constraint (not named "dedicated")? In other
> words, doesn't this code already deal with non-dedicated constraints?:
>
> https://github.com/apache/aurora/blob/master/src/main/java/org/apache/aurora/scheduler/filter/SchedulingFilterImpl.java#L193-L197
Not really.
What do you mean by GC burden? What I'm proposing is effectively a
Map. Even with an extremely forgetful operator (even more forgetful
than Joe!), it would require a huge oversight to put a dent in heap usage.
I'm sure there are ways we could even expose a useful stat to flag such an
oversight.
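A hedged sketch of what such a scheduler-side map could look like (all names here are invented for illustration; this is not an existing Aurora API), including one way to surface a stat for reserved hosts that have left the cluster:

```python
class DedicatedReservations:
    """Hypothetical in-scheduler registry of dedicated machines, updated
    through an operator API instead of Mesos slave attributes, so changing
    a reservation would not force a slave restart."""

    def __init__(self):
        self._by_host = {}  # hostname -> dedicated value, e.g. "role1/test"

    def reserve(self, hostname, dedicated_value):
        self._by_host[hostname] = dedicated_value

    def release(self, hostname):
        self._by_host.pop(hostname, None)

    def matches(self, hostname, job_constraint):
        # Same exact-string semantics as the attribute-based constraint.
        return self._by_host.get(hostname) == job_constraint

    def orphaned(self, live_hostnames):
        # Reserved hosts no longer present in the cluster: the "logical
        # garbage" concern -- a stat on this set could flag the oversight.
        return set(self._by_host) - set(live_hostnames)
```

For example, `reservations.orphaned(current_cluster_hosts)` returning a non-empty set would be the signal an operator forgot to release a decommissioned machine.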
As an operator, that'd be a relatively simple change in tooling, and the
benefits of not forcing a slave restart would be _huge_.
Keeping the dedicated semantics (but adding non-exclusive) would be ideal if
possible.
> On Jan 19, 2016, at 19:09, Bill Farner wrote:
Right, that's what I thought. Yes, it sounds interesting. My only
concern is the GC burden of getting rid of hostnames that are obsolete
and no longer exist. Relying on offers to update hostname 'relevance'
may not work as dedicated hosts may be fully packed and not release
any resources for a
Oh, I didn't mean the memory GC pressure in the pure sense, rather a
logical garbage of orphaned hosts that never leave the scheduler. It's
not something to be concerned about from the performance standpoint.
It's, however, something operators need to be aware of when a host
from a dedicated pool
Not a host->attribute mapping (attribute in the mesos sense, anyway). Rather
an out-of-band API for marking machines as reserved. For task->offer
mapping it's just a matter of another data source. Does that make sense?
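What "just a matter of another data source" during task->offer matching might look like, as a hedged sketch (the function name and the shape of the reservations map are assumptions, not Aurora's actual code):

```python
def host_satisfies_dedicated(offer_hostname, job_dedicated_value, reservations):
    # reservations: hostname -> dedicated value, maintained by the
    # scheduler's out-of-band reservation API rather than slave attributes.
    reserved_for = reservations.get(offer_hostname)
    if job_dedicated_value is None:
        # Non-dedicated tasks may only land on unreserved hosts.
        return reserved_for is None
    return reserved_for == job_dedicated_value


reservations = {"h1": "role1/test"}
assert host_satisfies_dedicated("h1", "role1/test", reservations)
assert not host_satisfies_dedicated("h1", None, reservations)
assert host_satisfies_dedicated("h2", None, reservations)
```

The filter logic itself is unchanged; only where the hostname-to-reservation mapping comes from differs.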
On Tuesday, January 19, 2016, Maxim Khutornenko wrote: