On Wed, May 21, 2014 at 10:42 AM, Lennart Poettering <lenn...@poettering.net> wrote:
> On Tue, 20.05.14 15:16, Umut Tezduyar Lindskog (u...@tezduyar.com) wrote:
>
>> > Wouldn't this be solved by telling the kernel to schedule the starting
>> > services with high latency (or whatever the terminology is), i.e.,
>> > give each of them a relatively large timeslice? That would decrease
>> > the flushing, but at the same time avoid any issues with deadlocks
>> > etc. It should also give us the flexibility to give some services low
>> > latency if that is required for them (think udev/systemd/dbus and
>> > other things which would otherwise block boot).
>>
>> This is exactly what the cpu.shares cgroup property does, and that is
>> what the patch posted on the ML is trying to utilize. In theory we
>> should be able to prioritize certain services with the posted patch.
>> But the frequent context switching problem still remains for
>> non-prioritized services.
>>
>> I am having another thought about this and I might have something else
>> here. Inspired by the posted patch, I am proposing something like:
>> "StartupCPUShares=*" (or any other symbol).
>>
>> StartupCPUShares=* will tell systemd that the service really doesn't
>> care about its cpu.shares value. If we have 100 services with
>> StartupCPUShares=*, then, in combination with some kind of
>> NumberOfActiveServices value, systemd will adjust the cpu.shares of
>> only NumberOfActiveServices of them at a time until they are
>> activated. Hope it makes sense.
>>
>> Thoughts?
>
> Not following here... Aren't you describing a best-effort system here?
> But the CPUShares= stuff is best-effort anyway, and just tells the
> kernel what is more important than other stuff. Or, to turn this
> around, where would the difference be between the system you describe
> and one where the unimportant services just set StartupCPUShares= to
> some very low value?
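[For readers following along: the low-value approach Lennart describes would look something like this in a unit file. This is an illustrative sketch, not from the thread; StartupCPUShares= is the option added by the patch under discussion, and the values are made up. The kernel default weight for cpu.shares is 1024.]

```ini
# Sketch: an "unimportant" service declares a very low CPU weight
# during startup, then reverts to the default once boot is complete.
# Values are illustrative.
[Service]
StartupCPUShares=2
CPUShares=1024
```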
Thanks for looking at this. The sole problem is that we have too many
services in running state in the scheduler's run queue. Changing the
manager's run queue from a list to a priority queue will not, as far as
I can see, change anything by itself, because programs are still going
to be in activating state. 100 "unimportant" jobs setting their
StartupCPUShares= to the same low value is no different either, because
all 100 of them are going to be in "running" state, in the scheduler's
run queue, causing frequent context switches.

What I am proposing goes side by side with the current StartupCPUShares
patch. It might be easier to visualize with a diagram. I have shared one at
https://docs.google.com/drawings/d/1224tBuCGgkDCT8Vc8x8-f2EUYP5sIgdROMXuEE7Snac/edit?usp=sharing
-- let me know if that makes it clearer.

Umut

> Lennart
>
> --
> Lennart Poettering, Red Hat
_______________________________________________
systemd-devel mailing list
systemd-devel@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/systemd-devel
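[Editorial note: Umut's context-switch argument can be sketched with a toy round-robin model. This is my own illustration, not systemd code: if at most `window` services are runnable at once, a finished service runs many consecutive timeslices without being preempted, whereas with all 100 runnable at once, every single timeslice lands on a different task.]

```python
from collections import deque

def context_switches(n_services, slices_per_service, window):
    """Count context switches in a toy round-robin scheduler where at
    most `window` services are admitted to the run queue at a time.
    Each service needs `slices_per_service` timeslices to activate."""
    waiting = deque(range(n_services))        # services not yet admitted
    runnable = deque()                        # (service id, slices left)
    while waiting and len(runnable) < window:
        runnable.append([waiting.popleft(), slices_per_service])

    current = None
    switches = 0
    while runnable:
        task = runnable.popleft()
        if task[0] != current:                # a different service got the CPU
            switches += 1
            current = task[0]
        task[1] -= 1                          # consume one timeslice
        if task[1] > 0:
            runnable.append(task)             # round-robin: back of the queue
        elif waiting:
            # service activated; admit the next one into the window
            runnable.append([waiting.popleft(), slices_per_service])
    return switches

# All 100 services runnable at once: every timeslice is a switch.
print(context_switches(100, 10, window=100))  # 1000 switches
# Only a few runnable at a time: an order of magnitude fewer switches.
print(context_switches(100, 10, window=1))    # 100 switches
```

Total CPU time spent is identical in both runs (1000 timeslices); only the number of preemptions changes, which is the cost the NumberOfActiveServices idea is trying to avoid.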