Latency- or throughput-wise, it really depends on what you want to achieve; it is a long argument. But IMO, if you are just running server applications that are by nature not time-critical, there is no point in making your kernel preemptible. It will just add unnecessary complication. But if you believe your application is time-critical (e.g. "I got this DNS query and I should respond within 50 msec, by preempting the kernel that is busy running this CPU hog, Netscape"), then go for it.
Now suppose you believe your kernel should be preemptible; there is another problem that you should consider. Most server applications are written with an ordinary kernel in mind. That means there is no significant difference in the performance of your server, throughput- or latency-wise, *if* your server application's programming constructs do not use a priority scheme. An ordinary fork() does not set the priority of the child process. BUT if your application uses POSIX threads or real-time programming constructs, then you *can* set the priority of a process. You can take advantage of your preemptible kernel, and that can improve both latency and throughput.

But then again, which is more important, latency or throughput, is highly debatable and it solely depends on your intended applications. Just consider this: chasing low latency means your CPU resources are wasted swapping stacks (context switching) and ultimately not accomplishing any job at all; chasing high throughput, on the other hand, means the CPU may ignore a life-critical process just to finish an unimportant task. However, there is an optimum point where you can achieve both low latency and high throughput by properly weighing the resources available against the tasks the server has to do. It has to be built into the way you design your server applications (i.e. you have to hack the code).

rowel

> processes handling the traffic management. You increase this with firewall
> rules. You increase this when you make a server work as a proxy at the
> same time, although one may debate that the proxy is primarily a userland
> concern. You increase this when you make the server work as a web server,
> too, but again, userland. And so on.
>
> This seems similar to the MTU/MRU issue. Do you want large blobs of data
> going on for longer amounts of time a piece? Or do you want to shuffle
> between smaller slices at smaller amounts of time per piece? At the end of
> the day, with or without kernel preemption, you will get all the jobs done,
> of course.
I think the question here is responsiveness.

> And so I'm back to my unresolved question ... :)
>
> --
> Jijo

_________________________________________________
Philippine Linux Users Group. Web site and archives at http://plug.linux.org.ph
To leave: send "unsubscribe" in the body to [EMAIL PROTECTED]
To subscribe to the Linux Newbies' List: send "subscribe" in the body to [EMAIL PROTECTED]
