> On Jan. 16, 2016, 2:21 a.m., Joseph Wu wrote:
> > src/master/allocator/mesos/hierarchical.cpp, lines 529-535
> > <https://reviews.apache.org/r/41847/diff/4/?file=1195636#file1195636line529>
> >
> >     What if you did this?
> >     ```
> >     slaves[slaveId].total = slaves[slaveId].total.nonUsageSlack() +
> >       oversubscribed;
> >     ```
> >     Where `nonUsageSlack` is a helper that filters out usage slack. (You could just use a Resource filter directly here.)
> >     
> >     ---
> >     
> >     This would have fewer operations too (1 filter operation instead of 2).
I think the current logic is: when `enableReservationOversubscription` is true, we only need to add the allocation slack to the total resources; there is no other change beyond this. The `allocationSlack()` helper also uses only one filter, `isAllocationSlack`. Comments?


- Guangya


-----------------------------------------------------------
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/41847/#review114832
-----------------------------------------------------------


On Jan. 13, 2016, 12:55 p.m., Guangya Liu wrote:
> 
> -----------------------------------------------------------
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/41847/
> -----------------------------------------------------------
> 
> (Updated Jan. 13, 2016, 12:55 p.m.)
> 
> 
> Review request for mesos, Ben Mahler, Artem Harutyunyan, Joris Van Remoortere, Joseph Wu, Klaus Ma, and Jian Qiu.
> 
> 
> Bugs: MESOS-4145
>     https://issues.apache.org/jira/browse/MESOS-4145
> 
> 
> Repository: mesos
> 
> 
> Description
> -------
> 
> Updated allocation slack when slave was updated.
> 
> 
> Diffs
> -----
> 
>   src/master/allocator/mesos/hierarchical.cpp d541bfa3f4190865c65d35c9d1ffdb8a3f194056 
>   src/tests/hierarchical_allocator_tests.cpp e044f832c2c16e53e663c6ced5452649bb0dcb59 
> 
> Diff: https://reviews.apache.org/r/41847/diff/
> 
> 
> Testing
> -------
> 
> make
> make check
> GLOG_v=2 ./bin/mesos-tests.sh --gtest_filter="HierarchicalAllocatorTest.*" --verbose
> 
> 
> Thanks,
> 
> Guangya Liu
> 
>
