On Thu, 2012-01-26 at 11:58 -0500, Adam Jackson wrote:
> I think a more complete solution would involve changing the main loop's
> logic slightly: if we get to the end of requests for a single client
> within a single timeslice, start pulling requests from any other clients
> that have pending requests (in priority order) until the timeslice
> expires (at which point we'd loop back around to select and start
> again). This would punish absolute throughput for any one client when
> contended, but I'm willing to take that if it means we stop dropping
> frames.
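To make the shape of that concrete, the toy loop below is roughly what I
mean.  It's only an illustration of the control flow, not the real
Dispatch() in dix/dispatch.c; the client_t struct and both helpers are
invented for the example.

/*
 * Toy model of the proposed main-loop change.  Not the real dispatch code;
 * client_t, execute_one_request() and next_ready_client() are stand-ins
 * invented so the control flow is visible on its own.
 */
#include <stdio.h>
#include <stdbool.h>

#define NUM_CLIENTS 2
#define SLICE_UNITS 4           /* pretend one timeslice fits 4 requests */

typedef struct {
    int id;
    int priority;               /* higher runs first when we go looking */
    int pending;                /* requests already read and waiting */
} client_t;

static bool execute_one_request(client_t *c)
{
    if (c->pending == 0)
        return false;           /* nothing left for this client */
    c->pending--;
    printf("  executed a request for client %d\n", c->id);
    return true;
}

/* Highest-priority client that still has work queued, or NULL. */
static client_t *next_ready_client(client_t *clients, int n)
{
    client_t *best = NULL;
    for (int i = 0; i < n; i++)
        if (clients[i].pending > 0 &&
            (best == NULL || clients[i].priority > best->priority))
            best = &clients[i];
    return best;
}

int main(void)
{
    client_t clients[NUM_CLIENTS] = {
        { .id = 0, .priority = 0, .pending = 1 },  /* latency-bound, like -pointer */
        { .id = 1, .priority = 0, .pending = 8 },  /* throughput-bound, like -noop */
    };
    client_t *current = &clients[0];    /* whoever select() woke us up for */

    for (int t = 0; t < SLICE_UNITS; t++) {
        if (!execute_one_request(current)) {
            /* Old behaviour: give up the rest of the slice and re-select().
             * Proposed behaviour: hand the remainder to other ready clients. */
            current = next_ready_client(clients, NUM_CLIENTS);
            if (current == NULL)
                break;                  /* nothing pending anywhere */
            execute_one_request(current);
        }
    }
    printf("timeslice over, back to select()\n");
    return 0;
}

The point is just that the unused remainder of the slice gets spent on
whichever other clients are ready, in priority order, instead of going
straight back to select().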
Just a quick followup since it was an easy experiment to run.

Test machine:

model name : Intel(R) Core(TM)2 Duo CPU P8600 @ 2.40GHz

The reproducer scenario is two apps, one that's throughput-bound and one
that's latency-bound.  Easiest way to get this is one x11perf -noop and
one x11perf -pointer.  With the stock tuning on an idle server, -noop
performance is about 21.6e6 per second, and -pointer is about 28.7e3 per
second.  With both running at once, -noop performance is essentially
unchanged but -pointer drops to a mere 1.02e3/s.

If I change the default timeslice to 2ms (and the max to 16ms, not that
this case would hit it), baseline performance is unchanged.  Running both
-noop and -pointer at once, -noop drops to about 20.6e6/s but -pointer
climbs to 3.8e3/s.

So, there's that.  You lose about 5% in the pathologically
throughput-bound case, but your peak latency goes way down.  Sounds like a
no-brainer to me, honestly.  And I'm pretty sure if you wanted that 5%
back that the other code change I proposed would recover it.

- ajax
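PS: The knobs I'm talking about are, as I remember it, the smart scheduler
defaults in os/utils.c.  The sketch below shows what the 2ms/16ms tuning
amounts to; the macro and variable names are from memory, so double-check
the identifiers and the stock values in your tree rather than treating
this as the actual patch.

/* Hand-waving sketch of the 2ms/16ms tuning, not a tested patch.  Names
 * follow the smart scheduler in os/utils.c as I remember them; verify
 * against your tree before relying on them. */
#define SMART_SCHEDULE_DEFAULT_INTERVAL   2   /* ms granted per timeslice */
#define SMART_SCHEDULE_MAX_SLICE         16   /* ms cap for a well-behaved client */

long SmartScheduleSlice    = SMART_SCHEDULE_DEFAULT_INTERVAL;
long SmartScheduleInterval = SMART_SCHEDULE_DEFAULT_INTERVAL;
long SmartScheduleMaxSlice = SMART_SCHEDULE_MAX_SLICE;

With that in place, the comparison above is just the two x11perf
invocations (-noop and -pointer) run separately and then together, as
described.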