On 03/07/2013 12:49 PM, Keith Packard wrote:

Aaron Plattner <aplatt...@nvidia.com> writes:

If I'm understanding this correctly, this requires the X server to
receive a notification from the GPU that the swap is complete so it can
send the SwapComplete event.  Is there any chance this could be done
with a Fence instead?  The application could specify the fence in the
Swap request, and then use that fence to block further rendering on the
GPU or wait on the fence from the CPU.
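To make the fence idea concrete, here is a minimal sketch of that throttling pattern. The fence objects and the queue are simulated for illustration; a real client would attach an X Sync fence (or a GL sync object) to each Swap request and wait on the oldest one, either on the GPU or from the CPU:

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical model of fence-based swap throttling: a fence is
 * attached to each swap, and before starting a new frame the client
 * waits on the fence of the oldest outstanding swap, so no more than
 * MAX_IN_FLIGHT frames are ever queued ahead of the screen. */

#define MAX_IN_FLIGHT 2

typedef struct {
    bool signaled;          /* set when the swap completes */
} fence_t;

typedef struct {
    fence_t fences[64];
    int head;               /* oldest outstanding swap */
    int tail;               /* next slot to use */
    int cpu_waits;          /* how often the client actually blocked */
} swap_queue_t;

static int in_flight(const swap_queue_t *q)
{
    return q->tail - q->head;
}

/* Block (conceptually) until the oldest fence signals. */
static void wait_oldest(swap_queue_t *q)
{
    q->cpu_waits++;
    q->fences[q->head].signaled = true;  /* simulate completion */
    q->head++;
}

/* Called once per frame the application wants to present. */
static void render_and_swap(swap_queue_t *q)
{
    if (in_flight(q) >= MAX_IN_FLIGHT)
        wait_oldest(q);                  /* throttle: don't race ahead */
    q->fences[q->tail].signaled = false; /* attach fence to this swap */
    q->tail++;
}
```

The first MAX_IN_FLIGHT frames are queued without blocking; after that, each new frame waits for the oldest one, which is exactly the "don't race ahead of the screen" behavior.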

From what I've heard from application developers, there are two
different operations here:

  1) Throttle application rendering to avoid racing ahead of the screen

  2) Keeping the screen up-to-date with simple application changes, but
     not any faster than frame rate.

The SwapComplete event is designed for this second operation. Imagine a
terminal emulator; it doesn't want to draw any faster than frame rate,
but any particular frame can be drawn in essentially zero time. This
application doesn't want to *block* at all, it wants to keep processing
external events, like getting terminal output and user input events. As
I understand it, a HW fence would cause the terminal emulator to stall
down in the driver, blocking processing of all of the events and
terminal output.
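The non-blocking pattern the terminal emulator wants can be sketched as a small state machine (the names here are illustrative, not a real X API): damage is coalesced while a swap is outstanding, and the SwapComplete event triggers the catch-up redraw, so the emulator never blocks and never draws faster than frame rate:

```c
#include <assert.h>
#include <stdbool.h>

typedef struct {
    bool swap_pending;  /* a swap was issued; no SwapComplete yet */
    bool dirty;         /* new output arrived while a swap was pending */
    int  swaps_issued;
} emulator_t;

static void issue_swap(emulator_t *e)
{
    e->swaps_issued++;
    e->swap_pending = true;
    e->dirty = false;
}

/* Called whenever terminal output (or user input) changes the screen. */
static void on_damage(emulator_t *e)
{
    if (e->swap_pending)
        e->dirty = true;       /* coalesce: at most one swap per frame */
    else
        issue_swap(e);
}

/* Called when the SwapComplete event arrives from the server. */
static void on_swap_complete(emulator_t *e)
{
    e->swap_pending = false;
    if (e->dirty)
        issue_swap(e);         /* catch up with coalesced damage */
}
```

Between the damage callback and the completion event the emulator's event loop keeps running, so terminal output and input processing are never stalled.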

If you associate an X Fence Sync with your swap operation, the driver
has the option to trigger the fence directly from the client command
stream and wake only the applications waiting on it. A compositor
using GL could receive the swap notification event and program its
compositing response before the swap even completes, then simply
insert a token that makes the GPU or kernel wait for the fence before
executing the compositing rendering commands.
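A toy model of that wait-token idea (the fence and command stream are simulated here; in GL the server-side wait would be something like glWaitSync(), as opposed to the CPU-blocking glClientWaitSync()):

```c
#include <assert.h>

/* The compositor records a WAIT_FENCE token followed by its
 * compositing commands; the (simulated) GPU only executes past the
 * token once the fence has signaled. The CPU never blocks. */

enum cmd { WAIT_FENCE, COMPOSITE };

typedef struct {
    enum cmd cmds[16];
    int n, pc;          /* recorded commands, program counter */
    int fence_signaled; /* set when the client's swap completes */
    int composited;
} gpu_t;

static void record(gpu_t *g, enum cmd c)
{
    g->cmds[g->n++] = c;
}

/* Run the command stream as far as the wait token allows. */
static void gpu_kick(gpu_t *g)
{
    while (g->pc < g->n) {
        if (g->cmds[g->pc] == WAIT_FENCE && !g->fence_signaled)
            return;                 /* GPU stalls; CPU stays free */
        if (g->cmds[g->pc] == COMPOSITE)
            g->composited++;
        g->pc++;
    }
}
```

The compositing work is recorded up front; it simply doesn't execute until the fence signals, which is the whole point of pushing the scheduling onto the GPU.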

Thanks,
-James

For simple application throttling, an application wouldn't use these
SwapComplete events; it would rely on the existing mechanisms for
blocking rendering to limit its frame rate.

We typically try to do the scheduling on the GPU when possible because
triggering an interrupt and waking up the X server burns power and
adds latency for no good reason.

Right, we definitely don't want a high-performance application to block
waiting for an X event to arrive before it starts preparing the next
frame.

_______________________________________________
xorg-devel@lists.x.org: X.Org development
Archives: http://lists.x.org/archives/xorg-devel
Info: http://lists.x.org/mailman/listinfo/xorg-devel
