> On July 5, 2018, 2:22 p.m., Benjamin Mahler wrote:
> > Looks like a great cleanup!
> > 
> > Persist tends to carry the connotation of writing something to durable 
> > storage. How about:
> > 
> > ```
> >     Made quota consumption tracking event-driven in the allocator.
> > 
> >     The allocator needs to keep track of role consumed quota
> >     to maintain quota headroom. Currently this info is
> >     constructed on the fly prior to each allocation cycle.
> > 
> >     This patch lets the allocator track quota consumption
> >     across allocation cycles in an event-driven manner to improve
> >     performance and reduce code complexity.
> > ```

Sounds good. Thanks!


> On July 5, 2018, 2:22 p.m., Benjamin Mahler wrote:
> > src/master/allocator/mesos/hierarchical.cpp
> > Lines 1404 (patched)
> > <https://reviews.apache.org/r/67444/diff/1/?file=2046973#file2046973line1404>
> >
> >     It wasn't clear to me when reading this why it needs to be done on each 
> > agent, so we should probably clarify that?
> >     
> >     ```
> >     // Lastly, subtract allocated reservations. This needs to be done on a 
> > per-agent basis because ...
> >     ```

I do not think we have anything interesting to say here. We subtract on a 
per-agent basis because there is currently no aggregated bookkeeping of 
allocated reservations. In the future, we could track aggregated allocated 
reservations and subtract them all at once. Dropping.


> On July 5, 2018, 2:22 p.m., Benjamin Mahler wrote:
> > src/master/allocator/mesos/hierarchical.cpp
> > Lines 1405-1406 (patched)
> > <https://reviews.apache.org/r/67444/diff/1/?file=2046973#file2046973line1405>
> >
> >     We probably want a TODO to optimize this by avoiding the copy by 
> > returning a const reference to the map?

It looks like `quotaRoleSorter->allocation(role)` already returns a const 
reference; the copy must be a slip introduced earlier. Will update these in an 
immediate follow-up patch.


> On July 5, 2018, 2:22 p.m., Benjamin Mahler wrote:
> > src/master/allocator/mesos/hierarchical.cpp
> > Lines 1385-1390 (original), 1413-1418 (patched)
> > <https://reviews.apache.org/r/67444/diff/1/?file=2046973#file2046973line1413>
> >
> >     Just an unrelated observation, this note looks misleading, since the 
> > master does try to rescind offers to re-balance.

Does "re-balancing" here mean a change of allocation? Anyway, added a TODO here:

  // TODO(mzhu): Re-evaluate the above assumption and consider triggering an
  // allocation here.


> On July 5, 2018, 2:22 p.m., Benjamin Mahler wrote:
> > src/master/allocator/mesos/hierarchical.cpp
> > Lines 1439-1440 (patched)
> > <https://reviews.apache.org/r/67444/diff/1/?file=2046973#file2046973line1439>
> >
> >     It looks like it follows the same order as operations in setQuota, so I 
> > would expect this to be done after metrics.removeQuota. Any reason not to?
> >     
> >     Actually, it seems like the metric should get added after quota 
> > tracking and removed before quota tracking is removed, since I would 
> > imagine the metrics look at the tracking information. It looks like we 
> > currently only expose 'allocated' instead of 'consumed' for now, but we 
> > probably want to expose 'consumed' soon, is there a ticket?

Filed MESOS-9123 for exposing role consumed quota.

Updated `metrics.setQuota` to run after the tracking-info update. 
`metrics.removeQuota`, however, does not depend on any tracking info, since it 
only removes metric entries, so it could run either before or after. For 
consistency, let's update the metrics after the tracking-info update in both 
add and remove.


> On July 5, 2018, 2:22 p.m., Benjamin Mahler wrote:
> > src/master/allocator/mesos/hierarchical.cpp
> > Lines 2586-2588 (patched)
> > <https://reviews.apache.org/r/67444/diff/1/?file=2046973#file2046973line2654>
> >
> >     It's pretty hard to reason from here about whether this is correct. For 
> > example, how do we know that these quantities were not already tracked 
> > because they were allocated prior to becoming reserved? If that invariant 
> > doesn't hold we'll double count?
> 
> Meng Zhu wrote:
>     Thanks for catching this!
>     
>     The invariant should be:
>     (1) when tracking reservation, only track unallocated reservations
>     (2) when tracking allocation, only track unreserved allocations
>     
>     (2) is already being done. (1) is hard to do given the current 
> abstraction; we would need more than raw "reservations", plus more 
> refactoring of the Slave struct in the allocator to make this work and 
> keep it readable. Will come back to this later.
>     
>     In addition, this reminds me that we do not have tests for the above 
> scenario yet. Will add some.

Now fixed.


- Meng


-----------------------------------------------------------
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/67444/#review205762
-----------------------------------------------------------


On July 31, 2018, 4:56 p.m., Meng Zhu wrote:
> 
> -----------------------------------------------------------
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/67444/
> -----------------------------------------------------------
> 
> (Updated July 31, 2018, 4:56 p.m.)
> 
> 
> Review request for mesos and Benjamin Mahler.
> 
> 
> Bugs: MESOS-8802
>     https://issues.apache.org/jira/browse/MESOS-8802
> 
> 
> Repository: mesos
> 
> 
> Description
> -------
> 
> The allocator needs to keep track of role consumed quota
> to maintain quota headroom. Currently this info is
> constructed on the fly prior to each allocation cycle.
> 
> This patch lets the allocator track quota consumption
> across allocation cycles in an event-driven manner to improve
> performance and reduce code complexity.
> 
> 
> Diffs
> -----
> 
>   src/master/allocator/mesos/hierarchical.hpp 
> 0cd2fac17f09110c46b8540523a9c935f156f858 
>   src/master/allocator/mesos/hierarchical.cpp 
> 35992474eacb8b14ae57e1dc23307e1542f63cb5 
> 
> 
> Diff: https://reviews.apache.org/r/67444/diff/2/
> 
> 
> Testing
> -------
> 
> make check
> 
> 
> Thanks,
> 
> Meng Zhu
> 
>
