> On Wednesday 04 April 2001 06:07 pm, Brian Paul wrote:
> > This isn't really unexpected. Benchmarking any application on
> > a time-sharing operating system can reveal some variation in
> > performance. You never know when another process or the kernel
> > itself will need some CPU cycles.
> >
> >
> > -Brian
>
> Well, yes, I understand that there will always be variation
> when benchmarking. What I don't expect and don't understand is
> why there is apparently no variation within a single run: the
> only variance is between separate runs. I would expect a run
> to look more like:
>
> 3467 frames in 5 seconds = 693.4 FPS
> 3684 frames in 5.001 seconds = 736.653 FPS
> 3653 frames in 5 seconds = 730.6 FPS
> 3648 frames in 5 seconds = 729.6 FPS
> 3683 frames in 5 seconds = 736.6 FPS
> 3758 frames in 5 seconds = 751.6 FPS
> 3717 frames in 5 seconds = 743.4 FPS
>
> where it varies across the entire range of possibilities
> (which would account for various load factors). But instead
> what I observe is that it is extremely consistent within any
> given run, only varying between runs.
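
(For reference, the gears numbers come from a plain frames-over-wall-clock
counter, roughly like the sketch below. This is from memory, not the actual
gears source; draw_frame() here is a stand-in for the real rendering call.)

#include <sys/time.h>
#include <stdio.h>

extern void draw_frame(void);   /* stand-in for the real rendering call */

/* Current wall-clock time in seconds. */
static double
now_seconds(void)
{
    struct timeval tv;
    gettimeofday(&tv, NULL);
    return tv.tv_sec + tv.tv_usec / 1000000.0;
}

/* Count frames and report FPS over ~5 second windows, roughly the
 * way the benchmark output above is produced (sketch only). */
void
benchmark_loop(void)
{
    int frames = 0;
    double t0 = now_seconds();

    for (;;) {
        double t;

        draw_frame();
        frames++;

        t = now_seconds();
        if (t - t0 >= 5.0) {
            printf("%d frames in %g seconds = %g FPS\n",
                   frames, t - t0, frames / (t - t0));
            frames = 0;
            t0 = t;
        }
    }
}
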
Perhaps there's some sort of resource allocation locking? One run, the
gears instance gets 88% of the total CPU time; the next run it can only
get 86%, the next 92%. This, of course, goes completely against everything
I know about CPU loading, but I'm just theorizing to fit the facts.
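
If that were happening it should be directly measurable: make the benchmark
report what share of wall-clock time the process actually got on the CPU
each run, e.g. with getrusage(). A rough sketch (report_cpu_share() is my
own helper, not anything in gears):

#include <stdio.h>
#include <sys/time.h>
#include <sys/resource.h>

/* Print the fraction of wall-clock time this process spent on the CPU.
 * If the per-run allocation theory holds, the number should differ
 * between runs but stay flat within any one run. */
void
report_cpu_share(double wall_elapsed)
{
    struct rusage ru;
    double cpu;

    getrusage(RUSAGE_SELF, &ru);
    cpu = ru.ru_utime.tv_sec + ru.ru_utime.tv_usec / 1e6
        + ru.ru_stime.tv_sec + ru.ru_stime.tv_usec / 1e6;

    printf("CPU share: %.1f%% of wall time\n",
           100.0 * cpu / wall_elapsed);
}

Calling it right after each 5-second FPS report would show whether the
CPU share really clusters per run.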