> On Nov. 19, 2012, 5:05 p.m., Vinod Kone wrote:
> > src/slave/cgroups_isolation_module.hpp, line 57
> > <https://reviews.apache.org/r/8108/diff/1/?file=191732#file191732line57>
> >
> >     what does totalUsage mean? total usage across all cpusets or just this 
> > cpuset? if the latter, just call it usage.

The latter.


> On Nov. 19, 2012, 5:05 p.m., Vinod Kone wrote:
> > src/slave/cgroups_isolation_module.hpp, line 147
> > <https://reviews.apache.org/r/8108/diff/1/?file=191732#file191732line147>
> >
> >     not yours, but could you s/successes/succeeds/?


> On Nov. 19, 2012, 5:05 p.m., Vinod Kone wrote:
> > src/slave/cgroups_isolation_module.cpp, line 92
> > <https://reviews.apache.org/r/8108/diff/1/?file=191733#file191733line92>
> >
> >     I think the assumption here is that you can always find a feasible 
> > allocation? 
> >     If so, you should do a CHECK(needed <= 0.0).

Done, and added a dump of the state for easier debugging should this occur.
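
For reference, it ends up looking roughly like this (a sketch; 'needed' 
follows the review context, and the state dump via a hypothetical 
toString() helper is illustrative):

    #include <glog/logging.h>

    // The allocation loop should always find a feasible allocation; if
    // it somehow doesn't, dump the CPUSet state before aborting so the
    // failure is easier to debug.
    CHECK(needed <= 0.0)
      << "Failed to find a feasible cpu allocation, state: " << toString();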


> On Nov. 19, 2012, 5:05 p.m., Vinod Kone wrote:
> > src/slave/cgroups_isolation_module.cpp, line 140
> > <https://reviews.apache.org/r/8108/diff/1/?file=191733#file191733line140>
> >
> >     instead of doing this logic here, you should delete a cpu from cpus if 
> > usage falls to 0 during shrink phase.

Agreed. I like this because it better preserves the abstraction of a CPU "Set", 
where we only hold the CPUs we've actually allocated to.

I also introduced a CHECK here to enforce the new implicit invariant that all 
values in the map are > 0.0.
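
Roughly what the shrink phase looks like with this change (a sketch; the 
'cpus' member follows the review context, everything else is illustrative):

    #include <algorithm>
    #include <map>

    #include <glog/logging.h>

    // Release 'delta' worth of cpu allocation, erasing any cpu whose
    // usage drops to zero so that 'cpus' only ever holds cpus we've
    // allocated to.
    void CPUSet::shrink(double delta)
    {
      std::map<unsigned int, double>::iterator it = cpus.begin();
      while (it != cpus.end() && delta > 0.0) {
        double release = std::min(it->second, delta);
        it->second -= release;
        delta -= release;
        if (it->second <= 0.0) {
          cpus.erase(it++); // Drop cpus with no remaining usage.
        } else {
          ++it;
        }
      }

      // The new implicit invariant: all values in the map are > 0.0.
      for (it = cpus.begin(); it != cpus.end(); ++it) {
        CHECK_GT(it->second, 0.0);
      }
    }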


> On Nov. 19, 2012, 5:05 p.m., Vinod Kone wrote:
> > src/slave/cgroups_isolation_module.cpp, line 105
> > <https://reviews.apache.org/r/8108/diff/1/?file=191733#file191733line105>
> >
> >     how about calling it least instead?

I also simplified this logic by using an Option instead of the map iterator.
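
Something like this (a sketch; names are illustrative, Option comes from 
stout):

    #include <map>

    #include <stout/option.hpp>

    // Find the least loaded cpu without carrying a map iterator around.
    // 'cpus' maps cpu id -> allocated share.
    Option<unsigned int> least = Option<unsigned int>::none();
    double lowest = 0.0;
    std::map<unsigned int, double>::const_iterator it;
    for (it = cpus.begin(); it != cpus.end(); ++it) {
      if (least.isNone() || it->second < lowest) {
        least = Option<unsigned int>::some(it->first);
        lowest = it->second;
      }
    }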


> On Nov. 19, 2012, 5:05 p.m., Vinod Kone wrote:
> > src/slave/cgroups_isolation_module.cpp, line 586
> > <https://reviews.apache.org/r/8108/diff/1/?file=191733#file191733line586>
> >
> >     why fabs here?


> On Nov. 19, 2012, 5:05 p.m., Vinod Kone wrote:
> > src/slave/cgroups_isolation_module.hpp, line 62
> > <https://reviews.apache.org/r/8108/diff/1/?file=191732#file191732line62>
> >
> >     you call this 'usage' when used as a parameter to grow() , but call it 
> > 'cpus' here.
> >     
> >     how about calling them 'global' and 'local' respectively?

I guess this wasn't as obvious as I anticipated. I've added Jie-style docs to 
CPUSet to make this clearer.

I'm going to drop this with the following justification:

CPUSet holds cpus that are local to that CPUSet; there can be many CPUSets 
(each CgroupInfo would contain one).
CgroupsIsolationModule also holds cpus, and these are all the cpus on the 
machine.

In the context you mentioned, CgroupsIsolationModule.cpus is passed into 
grow() so that the CPUSet can know about the global cpu "usage".

So I think both make sense being named cpus, since the context 
differentiates them. I also liked 'usage' because the cgroup needs to know 
about the cpu "usage" when performing new allocations.

Please re-open if you disagree, and let me know if you have more naming 
suggestions ;)


> On Nov. 19, 2012, 5:05 p.m., Vinod Kone wrote:
> > src/slave/cgroups_isolation_module.hpp, line 248
> > <https://reviews.apache.org/r/8108/diff/1/?file=191732#file191732line248>
> >
> >     s/cpus/global?

See above, and let me know.


> On Nov. 19, 2012, 5:05 p.m., Vinod Kone wrote:
> > src/slave/cgroups_isolation_module.cpp, line 70
> > <https://reviews.apache.org/r/8108/diff/1/?file=191733#file191733line70>
> >
> >     how about creating its own source file for CPUSet. i think its 
> > complicated enough to not compound it with the isolation module.

Added a TODO for this. I'd like to move the cgroups stuff into its own 
folder in a separate CL; for this review, I'll leave it as a TODO to avoid 
polluting the diff.


- Ben


-----------------------------------------------------------
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/8108/#review13570
-----------------------------------------------------------


On Nov. 19, 2012, 11:03 p.m., Ben Mahler wrote:
> 
> -----------------------------------------------------------
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/8108/
> -----------------------------------------------------------
> 
> (Updated Nov. 19, 2012, 11:03 p.m.)
> 
> 
> Review request for mesos, Benjamin Hindman and Vinod Kone.
> 
> 
> Description
> -------
> 
> This is the first pass at adding cpuset isolation for pinning cgroups to cpus.
> 
> We decided to start with a simplistic grow/shrink allocation technique. As 
> such, this initial technique:
>   -Does not take cache locality into account.
>   -Does not actively fight fragmentation*, but does a good job of preventing 
> it in many cases, given its simplicity.
>   -Note that when cpus resource requests are integral (non-fractional), 
> fragmentation does not occur.
> 
> *By fragmentation, I'm referring to the case where we've spread a cgroup over 
> more cpus than necessary, due to other cgroups sharing the same cpus.
> High fragmentation would mean a lot of shared cpus across cgroups.
> No fragmentation would mean each cgroup has a unique set of cpus.
> 
> I've punted on documenting the pitfalls of this technique, wiring up the 
> handler, and adding tests for now.
> 
> Note that this is diffed off of benh's changes:
> https://reviews.apache.org/r/8058/
> https://reviews.apache.org/r/8059/
> 
> 
> Diffs
> -----
> 
>   src/linux/proc.hpp 27e15bf8695aa694b0d5bdb6881b9fa55a447528 
>   src/slave/cgroups_isolation_module.hpp 
> 9f80fc5a969b959b34eaea4cac40700662d7f8b2 
>   src/slave/cgroups_isolation_module.cpp 
> 8211618d7729350654e2d17946c5b912ed9dda6a 
>   third_party/libprocess/include/stout/stringify.hpp 
> dcc5b95a54e6f34f93867e015d8c855fd7d6f950 
>   third_party/libprocess/include/stout/strings.hpp 
> 914c280a994733764957d19f37b48d151bb93778 
> 
> Diff: https://reviews.apache.org/r/8108/diff/
> 
> 
> Testing
> -------
> 
> None as of yet.
> 
> 
> Thanks,
> 
> Ben Mahler
> 
>
