Andrea,
        I benchmarked a patched version of 2.2.11 vs. the reference, and found 
some areas of concern: a slight performance degradation and a potential 
impact on real-time performance.  I'm hoping you can clarify these for 
me.

Using the SDM 1.1 057.SDET benchmark, I obtained the following.

Reference 2.2.11:
scripts     throughput (scripts/hour)
32            6133
64            5599
96            4978
128           4387
150           4044

Patched 2.2.11:
scripts     throughput (scripts/hour)
32            6106
64            5515
96            4928
128           4387
150           4022

* Results are averaged over 20 runs, all performed under identical 
system states.

Using make, I obtained the following (times in seconds):
Reference 2.2.11:
Real: 77.82
User: 198.26
Sys: 20.52

Patched 2.2.11:
Real: 77.94
User: 199.44
Sys: 18.70

As you can see, I found a (slight) performance decrease in both 
benchmarks.

I agree with your conceptual view of the situation, so this is a bit 
of a surprise.  Are these the same benchmark results you got, or did 
I mismeasure somehow?

Also, it looks to me like you removed some code in reschedule_idle 
that is necessary for good real-time performance; in particular, the 
evaluation of a waking task's preemption_goodness against the tasks 
running on all other CPUs.  From where I sit, it looks like your 
implementation evaluates preemption goodness on best_cpu only, and 
only after checking avg_timeslice vs. cache flush time.  The potential 
impact of this on real-time performance might be pretty drastic -- 
what do you think?
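
To pin down what I mean, here is a toy user-space sketch (the struct, 
the priority-delta preemption_goodness, and the numbers are all made 
up for illustration, not the actual 2.2 scheduler code).  The 
reference-style scan considers every CPU and finds the most 
preemptible one; the best_cpu-only check can miss a preemptible CPU 
entirely:

#include <stdio.h>

#define NR_CPUS 4

struct task {
        int priority;                   /* higher is better */
};

/* Toy stand-in for preemption goodness: how much better is the
   waking task p than the task currently running on a CPU? */
static int preemption_goodness(const struct task *curr,
                               const struct task *p)
{
        return p->priority - curr->priority;
}

/* Reference-style behavior: scan ALL CPUs and preempt the one whose
   current task loses to p by the widest margin. */
static int scan_all_cpus(struct task *running[NR_CPUS],
                         const struct task *p)
{
        int cpu, victim = -1, best_margin = 0;

        for (cpu = 0; cpu < NR_CPUS; cpu++) {
                int margin = preemption_goodness(running[cpu], p);
                if (margin > best_margin) {
                        best_margin = margin;
                        victim = cpu;
                }
        }
        return victim;                  /* -1: nobody worth preempting */
}

/* Patched-style behavior (as I read it): only p's best_cpu is
   examined, so a preemptible task on another CPU is missed. */
static int check_best_cpu_only(struct task *running[NR_CPUS],
                               const struct task *p, int best_cpu)
{
        return preemption_goodness(running[best_cpu], p) > 0 ? best_cpu : -1;
}

int main(void)
{
        struct task hi_rt = { 60 }, lowly = { 2 }, waker = { 50 };
        struct task *running[NR_CPUS] = { &hi_rt, &lowly, &hi_rt, &hi_rt };

        /* waker's best_cpu is 0, but CPU 1 runs the weakest task */
        printf("scan all CPUs:  preempt CPU %d\n",
               scan_all_cpus(running, &waker));
        printf("best_cpu only:  preempt CPU %d\n",
               check_best_cpu_only(running, &waker, 0));
        return 0;
}

Here the best_cpu-only check returns -1 and the waking task sits on 
the runqueue even though CPU 1 is trivially preemptible -- that is 
the latency hit I'm worried about.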

Maybe it would be worthwhile to evaluate avg_timeslice vs. cache flush 
time inside goodness() itself, as in the sketch below?
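
Roughly what I have in mind, again as a made-up user-space sketch 
(AFFINITY_BONUS, MIGRATION_PENALTY, and the struct fields are invented 
for illustration, not the real 2.2 code): instead of using the 
avg_timeslice vs. cache flush comparison as an up-front gate in 
reschedule_idle, fold it into the weight that goodness() returns, so 
a cache-hot task is merely harder to move, not invisible:

#define AFFINITY_BONUS    1     /* made-up weights, for illustration */
#define MIGRATION_PENALTY 2

struct task {
        int priority;
        int last_cpu;                   /* CPU the task last ran on */
        unsigned long avg_timeslice;    /* average timeslice actually
                                           consumed, in usecs */
};

static unsigned long cacheflush_time;   /* usecs to refill the cache */

/* Toy goodness(): base priority plus an affinity adjustment; the
   avg_timeslice vs. cacheflush_time check becomes part of the
   weight instead of a separate gate. */
static int goodness(const struct task *p, int this_cpu)
{
        int weight = p->priority;

        if (p->last_cpu == this_cpu)
                weight += AFFINITY_BONUS;       /* cache is warm here */
        else if (p->avg_timeslice < cacheflush_time)
                weight -= MIGRATION_PENALTY;    /* cache-hot task: moving
                                                   it likely costs more
                                                   than it gains */

        return weight;
}

That way a real-time wakeup would still see every CPU through the 
normal goodness comparison, with the cache consideration just biasing 
the choice.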

-JCN
