Hi Keith, sorry for the slow response.

On Fri, 12 Mar 2010, Keith Packard wrote:
> On Fri, 12 Mar 2010 12:47:13 -0800 (PST), Andy Ritger <arit...@nvidia.com> wrote:
>
> Thanks for reading through my proposal.
> > However, you also mention overcoming rendering engine buffer size
> > constraints. This proposal seems unrelated to the size of the buffer
> > being rendered to (I imagine we need something like shatter to solve that
> > problem?)
> Right, it isn't directly related to the rendering buffer size except as
> it applies to the scanout buffer. Older Intel scanout engines handle a
> stride up to 8192 bytes while the rendering engine only goes to 2048
> pixels. At 16bpp, the scanout engine could handle a 4096 pixel buffer,
> but because you can't draw to that, we still need to limit the screen
> dimensions.
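The arithmetic here can be sketched in a few lines. The 8192-byte stride and 2048-pixel render limits below are the figures quoted for the older Intel parts, not general constants; other hardware will differ:

```c
/* Sketch of the limit described above: the usable screen width is
 * bounded both by the scanout engine (a maximum stride in bytes) and
 * by the rendering engine (a maximum width in pixels).  Whichever is
 * smaller caps the screen dimensions. */
int max_screen_width(int max_stride_bytes, int max_render_pixels,
                     int bytes_per_pixel)
{
    int scanout_pixels = max_stride_bytes / bytes_per_pixel;
    return scanout_pixels < max_render_pixels ? scanout_pixels
                                              : max_render_pixels;
}
```

At 16bpp (2 bytes per pixel) the scanout engine could scan out 8192 / 2 = 4096 pixels, but max_screen_width(8192, 2048, 2) still comes out at 2048 because the render limit binds.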
> > Also, if part of the goal is to overcome scanout buffer size constraints,
> > how would scanout pixmap creation and compositing work during X server
> > initialization? The X server has no knowledge during PreInit/ScreenInit
> > that a composite manager will be used, or whether the composite manager
> > will exit before the X server exits.
> I had not considered this issue at all. Obviously the server will not be
> able to show the desired configuration until the compositing manager has
> started. I don't have a good suggestion for the perfect solution, but
> off-hand, the configuration could either leave the monitor off, or have
> it mirror instead of extend the desktop.
>
> Other suggestions would be welcome.
> > If the initial X server configuration requested in the X configuration
> > file(s) specifies a configuration that is only possible when using
> > per-scanout pixmaps, would the X server implicitly create these pixmaps
> > and implicitly composite from the X screen into the per-scanout
> > pixmaps?
> No, that wouldn't be useful -- our drawing engine could not draw to the
> X screen pixmaps.
Don't you have the rendering engine size limits regardless of whether
a composite manager has created scanout pixmaps and is compositing into
them, or the X server does so implicitly?
Maybe I'm misunderstanding your proposal. I thought the suggested flow was:
a) create scanout pixmap for each head
b) windows get redirected, rendering is done to the windows' backing
pixmaps as today
c) portions of each window's pixmap from b) get composited into the
appropriate scanout pixmap(s) which were created in a)
Even if the scanout pixmaps in a) are less than the rendering and scanout
size limits, couldn't the windows in b) be as large as the X screen?
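As a sketch of step c) above (with made-up types; none of this is a real RandR or Composite interface): each head's scanout pixmap receives only the portion of a window pixmap that overlaps that head, so window pixmaps can be X-screen sized while each scanout pixmap stays within the per-head limits:

```c
/* Illustrative rectangle type for clipping window contents to a
 * head's scanout pixmap; names here are hypothetical. */
typedef struct { int x, y, w, h; } rect_t;

/* Intersect a window rectangle with a head's scanout rectangle in
 * screen coordinates; returns 0 if they do not overlap. */
int clip_to_head(rect_t win, rect_t head, rect_t *out)
{
    int x1 = win.x > head.x ? win.x : head.x;
    int y1 = win.y > head.y ? win.y : head.y;
    int x2 = (win.x + win.w < head.x + head.w) ? win.x + win.w
                                               : head.x + head.w;
    int y2 = (win.y + win.h < head.y + head.h) ? win.y + win.h
                                               : head.y + head.h;
    if (x2 <= x1 || y2 <= y1)
        return 0;
    out->x = x1; out->y = y1; out->w = x2 - x1; out->h = y2 - y1;
    return 1;
}
```

For example, on a 4096-wide screen split across two 2048-wide heads, a 3000-pixel-wide window straddling the seam would be clipped into two pieces, one per scanout pixmap.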
It seems to me that if you want to create an X screen larger than your
maximum render size, then you somehow need to solve how to generate
content that large. Maybe it is something like Adam's shatter ideas to
split up rendering into smaller buffers, or some other similar technique.
In any case, whatever solution is employed to render into X screen sized
pixmaps-backing-redirected-windows, it seems like the same solution
could be applied to the X screen pixmap.
Once you solve that, I wouldn't think it would matter whether it was
the composite manager or the X server that composited the content into
the scanout pixmaps. It would seem unfortunate for a composite manager
(with the necessary support) to be a requirement in order to use some
X screen sizes.
> > While we're doing that, we should probably also add a mechanism for
> > clients to query whether the 'multi mode' is valid. This would let savvy
> > user interfaces do a more intelligent presentation of which configuration
> > combinations are valid.
> Yes, that seems like a reasonable addition.
Great.
> > In the case that the pointer sprite is transformed, I assume this would
> > still utilize the hardware cursor when possible?
> Yes, the affine transform for the sprite would be applied before the
> hardware cursor was loaded, just as we do today with the rotations. This
> makes the desired transform controlled by the client so that they can
> construct a suitable compromise that works with the projective transform
> for each monitor.
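For illustration, applying such a transform to the sprite image's coordinates might look like the following (a plain affine 2x3 matrix, simplifying the projective 3x3 transforms RandR actually carries; all names are made up):

```c
/* Hypothetical client-supplied sprite transform: a 2x3 affine matrix. */
typedef struct { double m[2][3]; } sprite_transform_t;

/* Map a cursor-image coordinate through the transform before the image
 * is loaded into the hardware cursor, analogous to what the server
 * already does for 90/180/270 rotation. */
void transform_point(const sprite_transform_t *t, double x, double y,
                     double *ox, double *oy)
{
    *ox = t->m[0][0] * x + t->m[0][1] * y + t->m[0][2];
    *oy = t->m[1][0] * x + t->m[1][1] * y + t->m[1][2];
}
```

A 90-degree rotation of a 64x64 cursor image would use the matrix {{0, -1, 63}, {1, 0, 0}}, mapping (x, y) to (63 - y, x).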
OK, thanks.
> > Creating a window the size of the scanout pixmap and then plugging
> > the scanout pixmap in as the window's backing pixmap feels a little
> > backwards, sequentially. If the intent is just to give the scanout
> > pixmap double buffering, you should be able to create a GLXPixmap from
> > the scanout pixmap, and create that GLXPixmap with double buffering
> > through selection of an appropriate GLXFBConfig. However, the GLX spec
> > says that glXSwapBuffers is ignored for GLXPixmaps.
> Precisely. The window kludge is purely to make existing GL semantics
> work as expected.
> > However, a more flexible solution would be to provide a mechanism to
> > create a scanout window (or maybe a whole window tree), rather than a
> > scanout pixmap. The proposed scanout window would not be a child of
> > the root window.
> All windows are children of the root window, and we already have a
> mechanism for creating windows which are drawn to alternative
> pixmaps. Using this mechanism offers a simple way to express the
> semantics of the situation using existing operations.
>
> The goal is really to get applications focused on the notion that
> pixmaps contain pixels and windows are mapped to pixmaps.
> > A composite manager could then optionally redirect this scanout window
> > to a pixmap (or, even better, a queue of pixmaps). The pixmaps could
> > be wrapped by GLXPixmaps for OpenGL rendering. If we also provided
> > some mechanism to specify which of the backing pixmaps within the queue
> > should be "presented", then an implementation could even flip between
> > the pixmaps.
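A minimal sketch of that pixmap-queue idea (purely hypothetical; no such protocol exists): the window is backed by a small ring of pixmaps, the client renders into one slot, and "presenting" selects which slot scanout flips to:

```c
#define QUEUE_LEN 3

typedef struct {
    unsigned pixmap[QUEUE_LEN]; /* XIDs of the backing pixmaps */
    int render;                 /* slot currently being rendered to */
    int present;                /* slot currently scanned out */
} pixmap_queue_t;

/* "Present" the just-rendered pixmap: it becomes the scanout source,
 * and rendering advances to the next slot in the ring, so an
 * implementation could flip rather than copy. */
unsigned queue_present(pixmap_queue_t *q)
{
    q->present = q->render;
    q->render = (q->render + 1) % QUEUE_LEN;
    return q->pixmap[q->present];
}
```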
> The existing GL window semantics seem to provide what is needed for a
> compositing manager today; I don't see a huge benefit to switching to
> pixmaps.
> > I think this lines up with some of the multi-buffer presentation ideas
> > Aaron Plattner presented at XDC in Portland last fall.
> We can discuss multi-buffering of applications separately; let's focus
> on how to fix RandR in this context.
That's fair.
> Thanks very much for the comments; from what I read, I believe we need
> to add a request to test whether a specific configuration can work
> without also setting it at the same time. I would like to avoid needing
> to enumerate all possible configurations though; that gets huge. I
> welcome additional thoughts on how to make this kind of information
> available to applications.
Agreed we wouldn't want to enumerate all possible configurations.
I'd think that per-head information (list of modes, available rotations,
etc.) could be queried separately as today, and then the client would
construct a description of the desired multi-mode configuration.
The client could either just request that the multi-mode config be set
(and the implementation would fail if the config was invalid), or the
client could optionally ask if the multi-mode config is valid.
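In code, the client-side shape of that could look roughly like this (all names hypothetical; the width check stands in for whatever per-hardware validation the server would really do):

```c
#define MAX_HEADS 4

/* Hypothetical description of one head's requested mode. */
typedef struct { int width, height; } head_mode_t;

/* Hypothetical multi-mode configuration the client constructs from
 * per-head information queried as today. */
typedef struct {
    head_mode_t heads[MAX_HEADS];
    int nheads;
} multi_config_t;

/* Stand-in for the server-side "test this config without setting it"
 * request: here validity just means the combined desktop width fits
 * within a single hardware limit. */
int config_is_valid(const multi_config_t *c, int max_total_width)
{
    int total = 0;
    for (int i = 0; i < c->nheads; i++)
        total += c->heads[i].width;
    return total <= max_total_width;
}
```

The client would fill in such a description and either test it first, or just try to set it and handle failure.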
Thanks,
- Andy
_______________________________________________
xorg-devel@lists.x.org: X.Org development
Archives: http://lists.x.org/archives/xorg-devel
Info: http://lists.x.org/mailman/listinfo/xorg-devel