Good news: I've done some visual testing, and there is definitely visibly
reduced lag with double-buffering.
My previous argument was both right and wrong:
"Even the slightest pause then, and you can be certain that the "ready"
sub-queue is emptied and thus it's only one frame lag from client to
server, no matter how long the overall ring of buffers is."
This statement remains true. However, what I forgot is that during
scrolling etc., when latency matters most, there is no "slightest pause"
for the compositor to catch up. Thus the client is allowed to pre-render
too many frames, and the age of a frame by the time it appears on screen
is nbuffers-1.
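To make that arithmetic concrete, here's a toy simulation (my own simplified model, not Mir's actual BufferQueue code) of a fast client that refills every free buffer before each vsync. The steady-state age of the frame on screen comes out to nbuffers-1:

```python
from collections import deque

def steady_state_age(nbuffers, ticks=100):
    """Toy model: a client fast enough to fill every free buffer each
    vsync, and a compositor that scans out the oldest ready buffer.
    Returns the age (in vsync intervals) of the displayed frame once
    the pipeline reaches steady state."""
    ready = deque()       # buffers queued for display, tagged with render tick
    free = nbuffers       # empty buffers the client may draw into
    age = 0
    for tick in range(ticks):
        # Client renders into every free buffer before the deadline.
        while free:
            ready.append(tick)
            free -= 1
        # Compositor scans out the oldest ready buffer at vsync.
        rendered_at = ready.popleft()
        age = tick - rendered_at
        free += 1         # the displaced buffer is released to the client
    return age
```

Running it, `steady_state_age(2)` settles at 1 frame of lag and `steady_state_age(3)` at 2, matching the nbuffers-1 claim above.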
So it looks like double-buffering is still a good idea. In fact, based
on those findings, it seems a more efficient protocol, while increasing
throughput, would only worsen the problem of perceived lag, as clients
could render even further in advance of vsync.
But it's a delicate balance. On desktop, for example, one can hack
around the parallel render/pageflipping logic and make it wait for vsync
sooner. This reduces lag even further; however, it unacceptably
increases the risk of the compositor missing the frame deadline (as seen
in clone mode).
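That risk trade-off can also be sketched with a toy model (again my own simplification with an assumed 16.7 ms vsync interval, not Mir code): the compositor consumes one frame per vsync, and a slow render only goes unnoticed if finished frames are banked ahead of scanout. Fewer buffers means less slack:

```python
def missed_vsyncs(render_times_ms, nbuffers, vsync_ms=16.7):
    """Toy model: count vsyncs where no new frame was ready.
    'ready' is how many finished frames are banked ahead of scanout;
    each extra buffer in the ring adds one interval of slack."""
    ready = nbuffers - 1      # assume the pipeline starts full
    missed = 0
    for t in render_times_ms:
        intervals = int(t // vsync_ms) + 1  # vsyncs spanned by this render
        ready -= intervals    # the compositor drained these meanwhile
        if ready < 0:
            missed += -ready  # vsyncs with nothing new to show
            ready = 0
        ready = min(ready + 1, nbuffers - 1)  # finished frame joins the queue
    return missed
```

With render times of [10, 20, 10] ms, double-buffering drops a frame on the 20 ms spike while triple-buffering absorbs it, which is exactly the throughput-versus-lag tension being discussed.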
On 10/07/14 18:01, Gerry Boland wrote:
On 09/07/14 16:39, Kevin Gunn wrote:
First
Not sure we're still on topic necessarily wrt changing from id's to fd's.
Do we need to conflate that with the double/triple buffering topic?
Let's answer this first...
Second
while we're at it :) triple buffering isn't always a win. In the case of
small, frequent renders (for example, an "8x8 pixel square follows my finger"),
you'll potentially have 2 extra buffers in the queue that each need their
16ms of fame on the screen: 1 at the session server, 1 at the system
server. Which can look a little laggy. I'm willing to say in the same
breath, though, that this may be lunatic fringe. The win for the triple
buffering case is likely more common, which is spiky render times (14+ms)
amongst more normal render times (9-12ms).
+1 on giving empty buffers back to the clients to allow them to have a
"queue" of empty buffers at their disposal. (I'm not sure whether RAOF
or duflu is correct in that it's "synchronously waiting for a round trip
every swap"... can we already have an empty buffer queue on the client side?)
I also want to remind everyone that our default shipping configuration
is a root mir server with a nested mir server as a client, and that
nested mir server manages most client apps the user will be interacting
with.
Nesting will increase input latency, as now there are not just 3 buffers
in play, but more (5, yeah?).
I had thought that the double-buffering idea was to try to reduce the
number of buffers being used in the nested case. Sounds like Daniel
isn't confident that'll work now, which is a pity.
Thanks
-G
--
Mir-devel mailing list
[email protected]
Modify settings or unsubscribe at:
https://lists.ubuntu.com/mailman/listinfo/mir-devel