I'm going to set a high bar for you here:

On Thu, Jan 13, 2011 at 12:29 PM, Daniel Poelzleithner
<[email protected]> wrote:
>
> I'm currently implementing a dynamic Linux kernel optimizer called
> ulatencyd [1]. In my opinion the desktop experience (which applies to
> servers as well) can be much improved by dynamically adjusting the
> kernel. Having a very fair scheduler is a very good thing, but this is
> not the best experience for a user. The user, for example, expects the
> currently used program to be as fast as possible, not some random
> background task getting the same CPU usage.
So it's fine to experiment with a project like this, but I think
ultimately it's not a good idea unless you have a reliable benchmark
and numbers for it. Otherwise, how do you know you're actually making
things better in general?

This is a very complicated domain. Have you (or anyone), for example,
looked at what happens if we were to move tasks between cgroups
frequently? What kind of kernel locks does that take? Etc. If you're
not careful, I could easily imagine making things *worse*. (A sketch
of what each such move amounts to in userspace follows below.)

Basically, don't optimize without performance numbers to back it up.
Anecdotes about "updatedb" running in the background are OK as a basis
for experimentation, but maybe the right fix is just a strategic
"ionice/nice" in a few places in the OS for those things, rather than
a daemon. (A sketch of that alternative follows as well.)
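To make the cgroup question concrete: under the cgroup v1 interface
that kernels of this era expose, each migration is a write of the PID
into the target group's "tasks" file, i.e. a kernel round trip per
move. Here is a minimal sketch in C, assuming a hypothetical cpu
hierarchy mounted at /sys/fs/cgroup/cpu and a pre-created "background"
group (illustrative names, not ulatencyd's actual layout):

    /*
     * Minimal sketch of what "moving a task between cgroups" boils
     * down to in userspace: writing the PID into the target group's
     * tasks file.  The hierarchy path and group name below are
     * assumptions for illustration.  Every such write is a kernel
     * round trip that takes cgroup-internal locks, which is why doing
     * it frequently deserves measurement.
     */
    #include <stdio.h>

    static int move_task_to_cgroup(const char *cgroup_dir, int pid)
    {
        char path[256];
        FILE *f;

        snprintf(path, sizeof(path), "%s/tasks", cgroup_dir);
        f = fopen(path, "w");
        if (!f)
            return -1;
        /* Writing a PID to "tasks" migrates that thread into the group. */
        fprintf(f, "%d\n", pid);
        return fclose(f);
    }

    int main(void)
    {
        /* Hypothetical example: move PID 1234 into a "background" group. */
        if (move_task_to_cgroup("/sys/fs/cgroup/cpu/background", 1234) != 0)
            perror("move_task_to_cgroup");
        return 0;
    }

A daemon reclassifying tasks on every focus change would be issuing
writes like this constantly, and that's exactly the path you'd want
numbers for before shipping.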

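And for the "strategic ionice/nice" alternative: the background job
itself (or whatever launches it) can drop its own CPU and I/O priority
once at startup, instead of being managed from outside by a daemon. A
minimal sketch, assuming Linux; the IOPRIO_* constants mirror the
kernel ABI that ionice(1) uses, and ioprio_set has no glibc wrapper,
so it goes through syscall(2):

    /*
     * One-shot priority drop for a background job, the in-process
     * equivalent of "nice -n 19 ionice -c 3 <job>".
     */
    #include <stdio.h>
    #include <sys/resource.h>
    #include <sys/syscall.h>
    #include <unistd.h>

    #define IOPRIO_CLASS_IDLE   3   /* run only when the disk is otherwise idle */
    #define IOPRIO_CLASS_SHIFT  13
    #define IOPRIO_WHO_PROCESS  1

    int main(void)
    {
        /* Lowest CPU priority, like "nice -n 19". */
        if (setpriority(PRIO_PROCESS, 0, 19) != 0)
            perror("setpriority");

        /* Idle I/O scheduling class, like "ionice -c 3". */
        if (syscall(SYS_ioprio_set, IOPRIO_WHO_PROCESS, 0,
                    IOPRIO_CLASS_IDLE << IOPRIO_CLASS_SHIFT) != 0)
            perror("ioprio_set");

        /* ... then exec the actual background work, e.g. updatedb ... */
        return 0;
    }

Two syscalls, once, at startup: no daemon, no per-focus-change cgroup
churn, and trivially easy to reason about.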