> On Sept. 10, 2019, 8:16 a.m., Andrei Sekretenko wrote:
> > src/master/allocator/mesos/hierarchical.cpp
> > Lines 416-422 (patched)
> > <https://reviews.apache.org/r/71460/diff/1/?file=2164502#file2164502line416>
> >
> >     Hmm... when `rolesConsumedQuota` was calculated using 
> > `Sorter::allocation()`, quantities of shared resources belonging to the 
> > same slave were accounted for only once, but quantities of shared resources 
> > belonging to different slaves were simply added.
> >     
> >     Does this still hold?
> >     
> >     If yes, then how exactly is `RoleTree` now capable of this without 
> > knowing `SlaveID`s?
> >     
> >     If no - then does quota still work correctly for shared resources? (If 
> > it doesn't, this makes me wonder about our test coverage as well).

Shared resources are currently limited to persistent volumes, which are 
uniquely identifiable. Thus:

- Shared resources can always be added/subtracted without a duplication check. 
If they refer to the same resource, only the share count changes; if they are 
different resources, they are not combined;
- A duplication check is only needed when aggregating quantities: 
https://github.com/apache/mesos/blob/4f7ba4101375ac8c8b2cbd1e855ceea72f03d58b/src/master/allocator/mesos/sorter/drf/sorter.hpp#L320-L325

Thus `RoleTree` does not need to know about `SlaveID`s. We could turn 
`offeredOrAllocatedScalars_` into a map of slave -> resources, but so far that 
would only serve performance purposes.


> On Sept. 10, 2019, 8:16 a.m., Andrei Sekretenko wrote:
> > src/master/allocator/mesos/hierarchical.cpp
> > Lines 417-420 (patched)
> > <https://reviews.apache.org/r/71460/diff/1/?file=2164502#file2164502line417>
> >
> >     Shouldn't this be `CHECK_CONTAINS(roles_, role)` and   
> > `roles_.at(role)` instead?
> >     
> >     If, due to some bug, this method is called for a non-existing role, 
> > this code will add an empty role with some `offeredOrAllocated_`, which 
> > violates the contract you are introducing and might crash `tryRemove`.
> >      
> >     I'd rather prefer to fail here - things will be simpler to debug.

Good point!


- Meng


-----------------------------------------------------------
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/71460/#review217673
-----------------------------------------------------------


On Sept. 10, 2019, 11 a.m., Meng Zhu wrote:
> 
> -----------------------------------------------------------
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/71460/
> -----------------------------------------------------------
> 
> (Updated Sept. 10, 2019, 11 a.m.)
> 
> 
> Review request for mesos, Andrei Sekretenko and Benjamin Mahler.
> 
> 
> Repository: mesos
> 
> 
> Description
> -------
> 
> These helpers simplify the quota tracking logic and also pave
> the way to reducing duplicated state in the sorter.
> 
> There is a small performance degradation when making allocations
> due to duplicated map construction in `(un)trackAllocatedResources`.
> This will go away once the sorter is embedded in the role tree.
> 
> Benchmark `LargeAndSmallQuota/2`:
> 
> Master:
> 
> Added 3000 agents in 80.648188ms
> Added 3000 frameworks in 19.7006984secs
> Benchmark setup: 3000 agents, 3000 roles, 3000 frameworks,
> with drf sorter
> Made 3500 allocations in 16.044274434secs
> Made 0 allocation in 14.476429451secs
> 
> Master + this patch:
> Added 3000 agents in 80.110817ms
> Added 3000 frameworks in 17.25974094secs
> Benchmark setup: 3000 agents, 3000 roles, 3000 frameworks,
> with drf sorter
> Made 3500 allocations in 16.91971379secs
> Made 0 allocation in 13.784476154secs
> 
> 
> Diffs
> -----
> 
>   src/master/allocator/mesos/hierarchical.hpp 
> 48ba399e12129b3348b5c43b793f3428acf26d4e 
>   src/master/allocator/mesos/hierarchical.cpp 
> 187de173ee2f563e2ff0f7e5c96695dd94a73970 
> 
> 
> Diff: https://reviews.apache.org/r/71460/diff/2/
> 
> 
> Testing
> -------
> 
> make check
> Benchmark result in the description
> 
> 
> Thanks,
> 
> Meng Zhu
> 
>
