On Tue, Nov 29, 2005 at 09:35:24PM +0100, Bas Wijnen wrote:
> If there is no memory pressure, the rest will stay right there. But if
> there is, it will surely have the pages it needs at the start of the list,
> so they're the last to be swapped out. Once the others have been swapped
> out (because there was memory pressure) they will not return to physical
> memory until the process asks for them again. Note that the process is not
> notified when they are swapped out either, but it could continuously
> request its physical memory quota and find out semi-instantly anyway.
>
> > My question, I guess, is how do you handle fairness among processes?
>
> That is not something that my idea solves. The idea just allows
> self-paging, and it tries to narrow the bandwidth of a covert channel.
> How exactly the quota degrades with time and who gets more when several
> processes are asking is a different question. This is the fairness
> question. It needs to be answered anyway, but I wasn't trying to do so. ;-)
>
> > If I've understood you correctly, I think this way degrades to a game
> > of King of the Castle.
>
> I don't think I understand what you mean with that, so I cannot confirm
> that you understood me correctly.
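Before I get to that, let me check that I'm reading the mechanism you
describe above the way you intend. Here is a toy model of it in C. The
names and structure are entirely mine, not your actual interface; the point
is only that a task keeps its own page list ordered by importance, that
reclamation under pressure silently takes frames from the tail, and that
the task learns about evictions only afterwards, by polling its resident
quota.

  #include <stdio.h>
  #include <stddef.h>

  #define NPAGES 8

  struct task {
      int page[NPAGES];   /* page ids, most important first */
      size_t resident;    /* how many of them currently have a frame */
  };

  /* System side: under pressure, take n frames back from the tail of the
     task's list.  The task is not told that this happened. */
  static void reclaim(struct task *t, size_t n)
  {
      while (n-- > 0 && t->resident > 0)
          t->resident--;
  }

  /* Task side: poll the quota and notice evictions after the fact. */
  static size_t poll_quota(const struct task *t)
  {
      return t->resident;
  }

  int main(void)
  {
      struct task t = { { 0, 1, 2, 3, 4, 5, 6, 7 }, NPAGES };
      size_t before = poll_quota(&t);

      reclaim(&t, 3);     /* memory pressure: three frames taken back */

      size_t after = poll_quota(&t);
      if (after < before)
          printf("lost %zu pages, all from the unimportant end of the list\n",
                 before - after);
      return 0;
  }

If that picture is roughly right, then on to the King of the Castle
business.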
Sorry, should have thought better than to use that reference. It's a game
played by children in Canada where everyone tries to race to the top of a
snowbank and the "King" is the one who manages to stay up there and push
the other kids back down. So I was thinking of it in terms of processes
racing to get the largest quota allocations possible up until the point
where memory pressure problems start. At that point the processes that
moved fastest earlier are in a much better position, i.e. they're
metaphorically higher up the snowbank.

I have trouble understanding how the fairness issue can be handled
separately from the self-paging issues you're solving (other than by not
solving it). Suppose a process is slow to the punch and hasn't grabbed
itself a very large quota before memory pressure sets in. Then as it
touches new parts of its virtual address space it must page out its own
pages rather than win pages from other processes, as might happen with
global paging (at least until the slow release part kicks in enough that
more physical pages are freed up), right? If this process didn't build up
a large enough allocation for its effective working set before memory
pressure hit, it could find itself thrashing while other processes are fat
and happy.

Looked at another way, it seems like the memory pressure situation can be
brought about by other processes being greedy or making bad estimates of
how large their ideal working sets should be, in a way that wouldn't happen
with a global pager.

To make things more fair, something external to what you've described
would either have to hand out quota increases so that processes approach
the memory pressure scenario more equitably, or it would have to intervene
after memory pressure had already set in. Could either of those approaches
be done in a way that wouldn't introduce the covert channel you wanted to
avoid?

I guess I'm dwelling on this fairness issue, but it seems as if dealing
with it separately might re-open whatever covert channels you'd closed.
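To make the race I'm imagining concrete, here is a toy sketch in C. The
first-come-first-served policy in it is deliberately naive and entirely my
own construction, not anything you proposed; it's only meant to show how
the task that asks last gets squeezed once the pool runs dry.

  #include <stdio.h>

  #define TOTAL_FRAMES 100

  static int pool = TOTAL_FRAMES;

  /* Deliberately naive policy: grant whatever is asked, first come first
     served, until the pool runs dry. */
  static int request_quota(int want)
  {
      int granted = want <= pool ? want : pool;
      pool -= granted;
      return granted;
  }

  int main(void)
  {
      /* Two fast tasks overestimate their working sets and grab early. */
      int fast1 = request_quota(45);
      int fast2 = request_quota(45);

      /* The slow task asks after memory pressure has already set in. */
      int slow = request_quota(40);

      printf("fast1=%d fast2=%d slow=%d (pool left: %d)\n",
             fast1, fast2, slow, pool);

      /* The slow task's real working set is 40 pages but it holds only 10
         frames, so every further page it touches has to evict one of its
         own pages: with self-paging it cannot win frames back from the
         tasks that are already fat and happy. */
      int working_set = 40;
      if (slow < working_set)
          printf("slow task is short %d of %d pages and starts thrashing\n",
                 working_set - slow, working_set);
      return 0;
  }

With a global pager the slow task could at least steal frames back from the
others; with per-task quotas it can only evict itself, which is the
thrashing scenario I was describing.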
--
Mike Small
[EMAIL PROTECTED]