On Mon, 21 Dec 1998, Linus Torvalds wrote:
>
>
> On Mon, 21 Dec 1998, Robert M. Hyatt wrote:
> >
> > I am testing this on my quad xeon, and it does look better. IE a compute
> > bound process seems to stick on one cpu for long periods of time. It will
> > occasionally move, when the process does an I/O, but it is far better than
> > it was, in that running xosview would show a single process bouncing
> > around quite frequently...
>
> Umm.. What about interactive feel?
>
> PLEASE PLEASE PLEASE don't think that "stick to one CPU" is automatically
> a good thing. It isn't. It has absolutely no meaning what-so-ever aside
> from cache issues, and can be an extremely _bad_ thing for other reasons.
> One of the other reasons is interactive performance and scheduling latency
> under load.
>
> Any patches that are developed using xosview and looking at the load meter
> are very very suspect. PLEASE don't do that, it is a completely bogus
> metric.
>
> The only thing that matters is:
> - absolute performance (ie NUMBERS, not "xosview says it sticks to a
> CPU")
> - latency and responsiveness.
Sorry for not responding to these earlier. A screen resize seemed to
totally blow pine's mind, and only the one-line "interactive feel"
question was visible.
I don't use xosview to really measure anything, because it is very
"intrusive", as everyone knows... in fact, it causes enough process
bouncing of its own to make it difficult to use at best and impossible at
worst. I simply use it as a first approximation when playing with this
stuff, as it does give me some insight. A better test, for which I don't
currently have results, was simply to go into the kernel, grab the CPU id
for each thread, and count the number of times it changed. In kernels
back in the early 2.1.10x days, this would show significant numbers of
"bounces" with only 4 compute-bound threads running, plus the usual
hardclock stuff and network things adding to the confusion.
>
> And note that the second one is MORE important - I'd much rather have a
> machine that feels good than one that benchmarks 5% better.
>
What I'd actually like is to be able to "choose". :) I.e., when I am
logged on, yes, gimme that interactive response time. But when my machine
is just crunching against a GM on one of the chess servers, gimme that
extra 5%. :)
> If the only criterion is how xosview looks, then I don't want to see the
> patches, quite frankly. Nice "sticks to one CPU" behaviour in xosview does
> NOT automatically mean that performance is actually better, and it can
> easily mean that interactive response is pure crap.
>
I wouldn't argue at all, which I sort of said in my last response. I.e.,
what happens with only one compute-bound process running may not be
important at all. Unless you just paid 60 million dollars for your
brand-new Cray T932. :)
> Note that if you have a quad PII, interactive response is usually fine -
> and the cross-CPU scheduling stuff doesn't matter unless you have a
> CPU-bound load noticeably over four. Be very very careful.
>
> Linus
>
On both of my quads, the ALR P6/200 X 4 and the Intel quad Xeon,
interactive response on the default kernel is not what I would call
"peppy". I.e., if I hit return, a new prompt is not instant. When I
move the mouse, the cursor is very jerky. When I resize windows... well,
I don't resize windows when the machine is loaded, because it is messy
to do. :)
My question is, if processor affinity is going to trash the interactive
feel, do we really want it? I'd guess "no". However, having run on a
Sequent Balance years ago, we had both... and interactive performance
didn't go into the tank. What I'd hope for is a normal interactive
scheduler, one that keeps up with priorities and such just as Unix has
for 25 years, but which will, when possible, keep a process on a specific
CPU. Once you get 4 processes on 4 CPUs, this becomes harder with the
usual Unix system activity going on via daemons and the like, but with
one or two compute-bound processes I'd like to see 'em stick to their
processors without going overboard and sucking the interactive response
time into hades or beyond...
Probably one of the biggest problems is "legacy" code that is sitting
around breaking this and that, as Rik discovered in a couple of places.
In any case, if anyone has something they'd like to try, I'm certainly
open to a quick kernel build/reboot to test it, and I can load the
machine as heavily as needed to test anything...
Bob
-
Linux SMP list: FIRST see FAQ at http://www.irisa.fr/prive/mentre/smp-faq/
To Unsubscribe: send "unsubscribe linux-smp" to [EMAIL PROTECTED]