-----------------------------------------------------------
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/67826/#review205770
-----------------------------------------------------------
Can you add Yan Xu to this review? This looks interesting; can you clarify a few things in the description / code?

* Why is this being done? (E.g. does it more accurately represent what's available? Does it help remove the one-off logic for shared resources in the allocation loops?)

* Shouldn't the summary / code document that we only consider a single copy of shared resources to be available at a given time? Meaning it takes one round per additional reference of the shared resource that gets allocated?

* Does this patch preserve correctness in a standalone way? Or does it need the subsequent patch to preserve the shared-resources behavior?

* Are there any implications for metrics or endpoint information? I'm not sure how we currently represent shared resources in allocated/available metrics, or in the allocated/available state json/protobuf, but it would be interesting to know if this affects any of that.

- Benjamin Mahler


On July 4, 2018, 12:57 a.m., Meng Zhu wrote:
> 
> -----------------------------------------------------------
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/67826/
> -----------------------------------------------------------
> 
> (Updated July 4, 2018, 12:57 a.m.)
> 
> 
> Review request for mesos and Benjamin Mahler.
> 
> 
> Repository: mesos
> 
> 
> Description
> -------
> 
> Currently, depending on already allocated resources,
> `HierarchicalAllocatorProcess::Slave::getAvailable()`
> may not contain all the shared resources. Since shared
> resources are always allocatable, we should include all
> shared resources in the agent's available resources.
> 
> 
> Diffs
> -----
> 
>   src/master/allocator/mesos/hierarchical.hpp 0f6c0e96a105c64465d3f5db4ff663d8fdfe7e26 
> 
> 
> Diff: https://reviews.apache.org/r/67826/diff/1/
> 
> 
> Testing
> -------
> 
> make check
> 
> 
> Thanks,
> 
> Meng Zhu
> 
