Hi, "Jonathan S. Shapiro" <[EMAIL PROTECTED]> writes:
> Please provide some (any) evidence that self-paging leads to good
> resource management, and that other, more secure, more conventional
> mechanisms really are not just as good.

Intuitively (admittedly, this makes further statements look weak ;-)):
the closer resource management is to the resource user, the better.
Neal has been advocating this for some time and I find it quite
convincing.

The application is in the best position to decide which memory pages
should be evicted and whether they should be saved to backing store.
Neal's PDF renderer example illustrates that pretty well (the first
sketch after the references below tries to make it concrete).  It also
seems that a file system server would be glad to know when to evict
entries from its cache, and that a GC could likewise be told to run
only when it is really necessary.  But of course, these intuitions are
nowhere near evidence.

I agree that this creates a new covert channel.  However, I am unclear
about how easily it can be exploited: one has to know what other
applications are running, what physical resources are available, which
memory management policies the administrator has chosen (the level of
"swappiness", in Linux terms), and so on.  Also, I tend to think that
applications could quite easily close off this covert channel, for
example by allocating a memory pool prior to using it (second sketch
below).

As for CPU time, there seems to be evidence that at least some classes
of applications benefit from self-scheduling, as opposed to opaque
(or, well, "transparent") preemptive scheduling.  GNU Pth's manual
contains a discussion of that[0], and the third sketch below shows
what the programming model looks like; FairThreads are another,
similar approach[1].  Similarly, the motivation behind scheduler
activations is the idea that applications could greatly benefit from
having more control over their own scheduling[2].

Thanks,
Ludovic.

[0] http://www.gnu.org/software/pth/pth-manual.html#threading_background
[1] http://www-sop.inria.fr/mimosa/rp/FairThreads/FTC/ft_v1.1/documentation/ft-2.html#container1058
[2] http://dept-info.labri.fr/~danjean/linux-activations.html
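Here is the first sketch, for the self-paging case.  The interface
(`struct page', `evict_one') is entirely made up for illustration and
is not part of any actual Hurd/L4 design; the point is just that the
eviction policy lives in the application, which knows, for instance,
that a rendered PDF page is cheaper to re-render than to swap out:

/* Hypothetical self-paging hook: the system tells the task that it
   must give a frame back; the task decides *which* page goes, and
   whether its contents are worth writing to backing store.  */

#include <stddef.h>
#include <stdbool.h>

struct page
{
  void *addr;          /* page-aligned user address */
  unsigned last_used;  /* application-maintained access stamp */
  bool dirty;          /* would need flushing before reuse */
  bool recomputable;   /* e.g. a rendered PDF page: cheaper to
                          re-render than to fetch from backing store */
};

/* Pick a victim among N pages: prefer recomputable pages (they can
   simply be dropped), and among equals the least recently used one.  */
struct page *
evict_one (struct page *pages, size_t n)
{
  struct page *victim = NULL;
  size_t i;

  for (i = 0; i < n; i++)
    if (victim == NULL
        || (pages[i].recomputable && !victim->recomputable)
        || (pages[i].recomputable == victim->recomputable
            && pages[i].last_used < victim->last_used))
      victim = &pages[i];

  /* A dirty, non-recomputable victim would be flushed to backing
     store here before its frame is handed back.  */
  return victim;
}

A kernel-chosen global LRU cannot make the recomputable/dirty
distinction above; only the renderer knows it.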
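Second sketch, the memory pool idea, written with plain POSIX calls
since the eventual Hurd/L4 primitives are not settled: grab and pin
everything up front, so that the task's paging behavior afterwards
carries no signal.

#include <sys/mman.h>
#include <stdio.h>
#include <string.h>

#define POOL_SIZE (16 * 1024 * 1024)   /* 16 MiB, chosen arbitrarily */

int
main (void)
{
  /* Reserve the whole pool in one step, before any sensitive work.  */
  void *pool = mmap (NULL, POOL_SIZE, PROT_READ | PROT_WRITE,
                     MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
  if (pool == MAP_FAILED)
    {
      perror ("mmap");
      return 1;
    }

  /* Pin the pool so it is neither evicted nor demand-faulted later:
     from here on the task's physical memory use is constant, and an
     observer learns nothing from its (non-existent) paging traffic.  */
  if (mlock (pool, POOL_SIZE) != 0)
    {
      perror ("mlock");
      return 1;
    }

  /* ... do the real work, allocating only out of `pool' ... */
  memset (pool, 0, POOL_SIZE);

  munlock (pool, POOL_SIZE);
  munmap (pool, POOL_SIZE);
  return 0;
}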
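Third sketch, the cooperative model discussed in [0].  This is written
from memory of the Pth API, so take the details with a grain of salt;
what matters is that control is transferred only where the application
says so, never by preemption.

#include <pth.h>
#include <stdio.h>

/* Each worker decides itself when it may be suspended: the scheduler
   runs only at pth_yield()/pth_* calls.  */
static void *
worker (void *arg)
{
  const char *name = arg;
  int i;

  for (i = 0; i < 3; i++)
    {
      printf ("%s: step %d\n", name, i);
      pth_yield (NULL);   /* explicit scheduling point */
    }
  return NULL;
}

int
main (void)
{
  pth_init ();

  pth_t a = pth_spawn (PTH_ATTR_DEFAULT, worker, "A");
  pth_t b = pth_spawn (PTH_ATTR_DEFAULT, worker, "B");

  pth_join (a, NULL);
  pth_join (b, NULL);

  pth_kill ();
  return 0;
}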
