Re: Merging DRI interface changes

2007-10-15 Thread Kristian Høgsberg
On 10/13/07, Keith Packard [EMAIL PROTECTED] wrote:
   I do
  think it's worth moving forward with this though.  Personally, I get
  these patches off of my plate and can focus on the next steps.

 I'm all for making forward progress and abandoning broken interfaces as
 early as possible.

 The only people this will inconvenience should be developers and early
 adopters, and we can help them by pushing releases of the related bits
 sooner rather than later. I'm assuming that the X server patches you
 mention will still build and run against older versions of the other bits, right?

Yeah, only the X server and mesa need to be upgraded in lock step
here.  All of the git mesa drivers still compile and work with the git
X server and we can pick one pair of DDX and DRI drivers at a time to
port to TTM.

Kristian



Re: Merging DRI interface changes

2007-10-13 Thread Michel Dänzer

On Fri, 2007-10-12 at 10:36 +0100, Keith Whitwell wrote:
 Michel Dänzer wrote:
  On Fri, 2007-10-12 at 10:19 +0100, Keith Whitwell wrote:
  Michel Dänzer wrote:
  On Thu, 2007-10-11 at 18:44 -0400, Kristian Høgsberg wrote:
  On 10/11/07, Keith Whitwell [EMAIL PROTECTED] wrote:
 
  3) Share buffers with a reference counting scheme.  When a 
  client
  notices the buffer needs a resize, do the resize and adjust refcounts -
  other clients continue with the older buffer.  What happens when a
  client on the older buffer calls swapbuffers -- I'm sure we can figure
  out what the correct behaviour should be.
  3) Sounds like the best solution and it's basically what I'm
  proposing.
  I agree, it looks like this can provide the benefits of shared
  drawable-private renderbuffers (support for cooperative rendering
  schemes, no waste of renderbuffer resources) without compromising the
  general benefits of private renderbuffers.
  Yes, I'm just interested to understand what happens when one of the 
  clients on the old, orphaned buffer calls swapbuffers...  All the 
  buffers should be swapped, right?  Large and small? How does that work?
 
  If the answer is that we just do the swap on the largest buffer, then 
  you have to wonder what the point of keeping the smaller ones around
  is?
  
  To make 3D drivers nice and simple by not having to deal with fun stuff
  like cliprects? :)
 
 Understood.  I'm thinking about a further simplification - rather than 
 keep the old buffers around after the first client requests a resize, 
 just free them.

I see, but how would that work? If it hasn't happened yet, the plan
seems to be to make BOs strictly reference counted. No matter who
creates the new renderbuffers, some clients may keep the old ones
referenced until they catch up. If it's still somehow possible to avoid
wasting resources in this case, that would be nice, but otherwise it
seems like too much of a corner case to worry about.


-- 
Earthling Michel Dänzer   |  http://tungstengraphics.com
Libre software enthusiast |  Debian, X and DRI developer




Re: Merging DRI interface changes

2007-10-13 Thread Keith Packard

On Fri, 2007-10-12 at 11:53 -0400, Kristian Høgsberg wrote:

 They do drop support, yes, but of course, I'm committing a series of X
 server patches along with this to let AIGLX load the new driver API.
 This means that you can't load a git dri driver with any released X
 server, which is the inconvenience you're referring to, I guess.

We can get an X server release done reasonably quickly to avoid major
inconvenience for developers at least.

  I do
 think it's worth moving forward with this though.  Personally, I get
 these patches off of my plate and can focus on the next steps. 

I'm all for making forward progress and abandoning broken interfaces as
early as possible.

The only people this will inconvenience should be developers and early
adopters, and we can help them by pushing releases of the related bits
sooner rather than later. I'm assuming that the X server patches you
mention will still build and run against older versions of the other bits, right?

-- 
[EMAIL PROTECTED]




Re: Merging DRI interface changes

2007-10-12 Thread Keith Whitwell
Michel Dänzer wrote:
 On Fri, 2007-10-12 at 10:19 +0100, Keith Whitwell wrote:
 Michel Dänzer wrote:
 On Thu, 2007-10-11 at 18:44 -0400, Kristian Høgsberg wrote:
 On 10/11/07, Keith Whitwell [EMAIL PROTECTED] wrote:

 3) Share buffers with a reference counting scheme.  When a client
 notices the buffer needs a resize, do the resize and adjust refcounts -
 other clients continue with the older buffer.  What happens when a
 client on the older buffer calls swapbuffers -- I'm sure we can figure
 out what the correct behaviour should be.
 3) Sounds like the best solution and it's basically what I'm
 proposing.
 I agree, it looks like this can provide the benefits of shared
 drawable-private renderbuffers (support for cooperative rendering
 schemes, no waste of renderbuffer resources) without compromising the
 general benefits of private renderbuffers.
 Yes, I'm just interested to understand what happens when one of the 
 clients on the old, orphaned buffer calls swapbuffers...  All the 
 buffers should be swapped, right?  Large and small? How does that work?

 If the answer is that we just do the swap on the largest buffer, then 
 you have to wonder what the point of keeping the smaller ones around
 is?
 
 To make 3D drivers nice and simple by not having to deal with fun stuff
 like cliprects? :)

Understood.  I'm thinking about a further simplification - rather than 
keep the old buffers around after the first client requests a resize, 
just free them.  If/when other clients submit commands targeting the 
old-sized buffers, throw those commands away.

 Seriously though, as I understand Kristian's planned scheme, all buffer
 swaps will be done by the DRM, and I presume it'll only take the
 currently registered back renderbuffer into account, so the contents of
 any previous back renderbuffers will be lost. I think that's fine, and
 should address your concerns?

See above -- if the contents of the previous back renderbuffers are 
going to be lost, what is the point in keeping those buffers around?  Or 
doing any further rendering into them?

Keith







Re: Merging DRI interface changes

2007-10-12 Thread Michel Dänzer

On Fri, 2007-10-12 at 11:53 -0400, Kristian Høgsberg wrote:
 
 Finally, along with the X server patches, this does land new features.
  With these patches I can land the X server work to enable GLX 1.4
 support and the visual cleanup; we just won't be able to advertise any
 GLXPixmap or GLXPbuffer capable fbconfigs yet.

Okay, that makes sense then. Thanks for clarifying this.


-- 
Earthling Michel Dänzer   |  http://tungstengraphics.com
Libre software enthusiast |  Debian, X and DRI developer




Re: Merging DRI interface changes

2007-10-12 Thread Kristian Høgsberg
On 10/12/07, Michel Dänzer [EMAIL PROTECTED] wrote:
...
  The DRI driver interface changes I'm proposing here should not be
  affected by these issues though.  Detecting that the buffers changed
  and allocating and attaching new ones is entirely between the DRI
  driver and the DRM.  When we're ready to add the TTM functionality to
  a driver we add the new createNewScreen entry point I mentioned and
  that's all we need to change.  So, in other words, I believe we can
  move forward with this merge while we figure out the semantics of the
  resizing-while-rendering case.

 Meanwhile though, these changes already drop support for existing
 loaders, right? That's rather inconvenient for AIGLX, not so much for
 libGL. I don't suppose it would be reasonably possible to retain support
 for __driCreateNewScreen_20050727, at least until there's an xserver
 release that supports the new one? If not, I wonder if it might be worth
 holding off a bit longer until the changes will provide real benefits
 such as new GLX features, as otherwise they would seem to require
 inter-component lockstep for little gain.

They do drop support, yes, but of course, I'm committing a series of X
server patches along with this to let AIGLX load the new driver API.
This means that you can't load a git dri driver with any released X
server, which is the inconvenience you're referring to, I guess.  I do
think it's worth moving forward with this though.  Personally, I get
these patches off of my plate and can focus on the next steps.  We get
the patches upstream, which will get them tested, and I think this is
important, since there's a lot more work in the pipeline from
everybody, so any early testing we can do is very much worth it.
Finally, along with the X server patches, this does land new features.
With these patches I can land the X server work to enable GLX 1.4
support and the visual cleanup; we just won't be able to advertise any
GLXPixmap or GLXPbuffer capable fbconfigs yet.

 Apart from that, the changes look good to me, with one exception:
 b068af2f3b890bec26a186e9d0bdd3d44c17cd4d ('Key drm_i915_flip_t typedef
 off of the ioctl #define instead.'). DRM_IOCTL_I915_FLIP was already
 defined before drm_i915_flip_t and friends were introduced.

Yup, my bad, I didn't install the libdrm pkg-config file.

cheers,
Kristian


Merging DRI interface changes

2007-10-11 Thread Kristian Høgsberg
Hi,

I have this branch with DRI interface changes that I've been
threatening to merge on several occasions:

  http://cgit.freedesktop.org/~krh/mesa/log/?h=dri2

I've just rebased to today's mesa and it's ready to merge.  Ian
reviewed the changes a while back and gave his OK, and from what we
discussed at XDS2007, I believe the changes there are compatible with
the Gallium plans.

What's been keeping me from merging this is that it breaks the DRI
interface.  I wanted to make sure that the new interface will work for
redirected direct rendering and GLXPixmaps and GLXPbuffers, which I
now know that it does.  The branch above doesn't include these
changes yet, it still uses the sarea and the old shared, static back
buffer setup.  This is all isolated to the createNewScreen entry
point, though, and my plan is to introduce a new createNewScreen entry
point that enables all the TTM features.  This new entry point can
co-exist with the old entry point, and a driver should be able to
support one or the other and probably also both at the same time.

The AIGLX and libGL loaders will look for the new entry point when
initializing the driver, if they have a new enough DRI/DRM available.
If the loader has an old style DRI/DRM available, it will look for the
old entry point.
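
As a rough sketch of that probe (not the actual loader code: the
TTM-enabled symbol name below is a placeholder, while
__driCreateNewScreen_20050727 is the existing entry point mentioned
elsewhere in this thread, and the real functions take more arguments
than shown):

#include <dlfcn.h>
#include <stdio.h>

/* Simplified; the real createNewScreen entry points take more arguments. */
typedef void *(*CreateNewScreenFunc)();

static CreateNewScreenFunc
probe_create_new_screen(void *driver, int have_new_drm)
{
    CreateNewScreenFunc create = NULL;

    if (have_new_drm) {
        /* New enough DRI/DRM: prefer the TTM-enabled entry point.
         * This symbol name is a placeholder. */
        create = (CreateNewScreenFunc)
            dlsym(driver, "__driCreateNewScreen_TTM");
    }
    if (create == NULL) {
        /* Old style DRI/DRM, or the driver only supports the old
         * interface: fall back to the existing entry point. */
        create = (CreateNewScreenFunc)
            dlsym(driver, "__driCreateNewScreen_20050727");
    }
    if (create == NULL)
        fprintf(stderr, "driver has no known createNewScreen symbol\n");
    return create;
}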

I'll wait a day or so to let people chime in, but if I don't hear any
stop the press type of comments, I'll merge it tomorrow.

cheers,
Kristian



Re: Merging DRI interface changes

2007-10-11 Thread Keith Whitwell
Brian Paul wrote:
 Kristian Høgsberg wrote:
 Hi,

 I have this branch with DRI interface changes that I've been
 threatening to merge on several occasions:

   http://cgit.freedesktop.org/~krh/mesa/log/?h=dri2

 I've just rebased to today's mesa and it's ready to merge.  Ian
 reviewed the changes a while back and gave his OK, and from what we
 discussed at XDS2007, I believe the changes there are compatible with
 the Gallium plans.

 What's been keeping me from merging this is that it breaks the DRI
 interface.  I wanted to make sure that the new interface will work for
 redirected direct rendering and GLXPixmaps and GLXPbuffers, which I
 now know that it does.  The branch above doesn't include these
 changes yet, it still uses the sarea and the old shared, static back
 buffer setup.  This is all isolated to the createNewScreen entry
 point, though, and my plan is to introduce a new createNewScreen entry
 point that enables all the TTM features.  This new entry point can
 co-exist with the old entry point, and a driver should be able to
 support one or the other and probably also both at the same time.

 The AIGLX and libGL loaders will look for the new entry point when
 initializing the driver, if they have a new enough DRI/DRM available.
 If the loader has an old style DRI/DRM available, it will look for the
 old entry point.

 I'll wait a day or so to let people chime in, but if I don't hear any
 stop the press type of comments, I'll merge it tomorrow.
 
 This is basically what's described in the DRI2 wiki at 
 http://wiki.x.org/wiki/DRI2, right?
 
 The first thing that grabs my attention is the fact that front color 
 buffers are allocated by the X server but back/depth/stencil/etc buffers 
 are allocated by the app/DRI client.
 
 If two GLX clients render to the same double-buffered GLX window, each 
 is going to have a different/private back color buffer, right?  That 
 doesn't really obey the GLX spec.  The renderbuffers which compose a GLX 
 drawable should be accessible/shared by any number of separate GLX 
 clients (like an X window is shared by multiple X clients).

I guess I want to know what this really means in practice.

Suppose 2 clients render to the same backbuffer in a race starting at 
time=0, doing something straightforward like (clear, draw, swapbuffers). 
There's nothing in the spec that says to me that they actually have to 
have been rendering to the same surface in memory, because the 
serialization could just be (clear-a, draw-a, swap-a, clear-b, draw-b, 
swap-b), so that potentially only one client's rendering ends up visible.

So I would say that at least between a fullscreen clear and either 
swap-buffers or some appropriate flush (glXWaitGL ??), we can treat the 
rendering operations as atomic and have a lot of flexibility in terms of 
how we schedule actual rendering and whether we actually share a buffer 
or not.  Note that swapbuffers is as good as a clear from this 
perspective as it can leave the backbuffer in an undefined state.

I'm not just splitting hairs for no good reason - the ability for the 3d 
driver to know the size of the window it is rendering to while it is 
emitting commands, and to know that it won't change size until it is 
ready for it to, is really crucial to building a solid driver.

The trouble with sharing a backbuffer is what to do about the situation 
where two clients end up with different ideas about what size the buffer 
should be.

So, if it is necessary to share backbuffers, then what I'm saying is 
that it's also necessary to dig into the real details of the spec and 
figure out how to avoid having the drivers being forced to change the 
size of their backbuffer halfway through rendering a frame.

I see a few options:
0) The old DRI semantics - buffers change shape whenever they feel like 
it, drivers are buggy, window resizes cause mis-rendered frames.

1) The current truly private backbuffer semantics - clean drivers but 
some deviation from GLX specs - maybe less deviation than we actually think.

2) Alternate semantics where the X server allocates the buffers but 
drivers just throw away frames when they find the buffer has changed 
shape at the end of rendering.  I'm sure this would be nonconformant, at 
any rate it seems nasty.  (i915 swz driver is forced to do this).

3) Share buffers with a reference counting scheme.  When a client 
notices the buffer needs a resize, do the resize and adjust refcounts - 
other clients continue with the older buffer.  What happens when a 
client on the older buffer calls swapbuffers -- I'm sure we can figure 
out what the correct behaviour should be.

etc.

All of these are superficial approaches.  My belief is that if we really 
make an attempt to understand the sharing semantics encoded in the GLX 
spec, and interpret that in terms of the allowable ordering of rendering 
operations of separate clients, a favorable implementation is possible.
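
To make option 3 concrete, here is a minimal reference-counting sketch,
with illustrative names, no locking, and the understanding that in the
real design the count would live with the kernel memory manager's
buffer objects:

#include <stdlib.h>

struct back_buffer {
    int refcount;
    int width, height;
    void *storage;                    /* stands in for the real BO */
};

static struct back_buffer *current;   /* what new renderers pick up */

static void
release_back_buffer(struct back_buffer *buf)
{
    /* The last client done with an orphaned size frees it. */
    if (--buf->refcount == 0) {
        free(buf->storage);
        free(buf);
    }
}

static struct back_buffer *
acquire_back_buffer(int width, int height)
{
    if (current == NULL ||
        current->width != width || current->height != height) {
        /* The first client to notice the resize allocates the new
         * buffer; clients holding the old one keep using it. */
        struct back_buffer *nb = malloc(sizeof *nb);
        nb->refcount = 1;             /* the 'current' pointer's ref */
        nb->width = width;
        nb->height = height;
        nb->storage = malloc((size_t)width * height * 4);
        if (current != NULL)
            release_back_buffer(current);
        current = nb;
    }
    current->refcount++;
    return current;
}

The open question above -- what swapbuffers should mean for a client 
still holding an old-sized buffer -- is exactly the part this sketch 
leaves undefined.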

Kristian - I apologize that I 

Re: Merging DRI interface changes

2007-10-11 Thread Kristian Høgsberg
On 10/11/07, Brian Paul [EMAIL PROTECTED] wrote:
 Kristian Høgsberg wrote:
  Hi,
 
  I have this branch with DRI interface changes that I've been
  threatening to merge on several occasions:
 
http://cgit.freedesktop.org/~krh/mesa/log/?h=dri2
 
  I've just rebased to todays mesa and it's ready to merge.  Ian
  reviewed the changes a while back gave his ok, and from what we
  discussed at XDS2007, I believe the changes there are compatible with
  the Gallium plans.
 
  What's been keeping me from merging this is that it breaks the DRI
  interface.  I wanted to make sure that the new interface will work for
  redirected direct rendering and GLXPixmaps and GLXPbuffers, which I
  now know that it does.  The branch above doesn't included these
  changes yet, it still uses the sarea and the old shared, static back
  buffer setup.  This is all isolated to the createNewScreen entry
  point, though, and my plan is to introduce a new createNewScreen entry
  point that enables all the TTM features.  This new entry point can
  co-exist with the old entry point, and a driver should be able to
  support one or the other and probably also both at the same time.
 
  The AIGLX and libGL loaders will look for the new entry point when
  initializing the driver, if they have a new enough DRI/DRM available.
  If the loader has an old style DRI/DRM available, it will look for the
  old entry point.
 
  I'll wait a day or so to let people chime in, but if I don't hear any
  stop the press type of comments, I'll merge it tomorrow.

 This is basically what's described in the DRI2 wiki at
 http://wiki.x.org/wiki/DRI2, right?

It's a step towards it.  The changes I'd like to merge now don't
pull in any memory manager integration, but they do introduce the DRI
breakage required to move to GLX 1.4.  The reason that I'm proposing to
merge this now is that I'm fairly sure that we can get everything else
(DRM, X server, and DDX drivers) pulled together before the next Mesa
release is up.  In other words, we only break it this one time.

 The first thing that grabs my attention is the fact that front color
 buffers are allocated by the X server but back/depth/stencil/etc buffers
 are allocated by the app/DRI client.

 If two GLX clients render to the same double-buffered GLX window, each
 is going to have a different/private back color buffer, right?  That
 doesn't really obey the GLX spec.  The renderbuffers which compose a GLX
 drawable should be accessible/shared by any number of separate GLX
 clients (like an X window is shared by multiple X clients).

 [Actually, re-reading the wiki part about serial numbers, it sounds like
 a GLX drawable's renderbuffers will be shared.  Maybe that could be
 clarified?]

Yes, this use is considered in the design.  A GLX drawable (window,
pixmap or pbuffer) has an associated drm_drawable_t in the DRM.  When
the DRI driver wants to render to a drawable it asks the X server for
the drm_drawable_t for the X drawable and then asks the DRM (using an
ioctl I will add later) about the buffers currently associated with
the drm_drawable_t.  If the driver gets the buffers it needs, it can
just create the render buffers and then start rendering.  This is
typically the case when some other client is rendering to the drawable
and has already set up the buffers.  Of course, that other client may
not have set up all the buffers the client needs (maybe it doesn't use
a depth buffer) or maybe the client is the first to render to the
drawable, in which case the client must allocate and attach the
missing buffers.

The serial number mechanism is necessary to prevent two clients from
racing to attach buffers.  Suppose two clients start rendering at the
same time and both find that no buffers have yet been attached.  They
will both go and allocate the set they need and try to attach them.
The buffers that overlap (suppose they both allocate a back buffer)
will be set twice.  The serial number lets the kernel know that they
are both trying to set the back buffer for the same instance of the
attached front buffer.  Only one buffer can be attached for each
increment of the serial number and thus the kernel can let one of the
clients know that the buffer it proposed wasn't set.
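
A sketch of how that check might look on the kernel side; the
structures and names are illustrative, not the actual DRM code, and
serials start at 1 so a zeroed slot means nothing has been attached
yet:

enum { ATTACH_BACK, ATTACH_DEPTH, NUM_ATTACHMENTS };

struct drawable_state {
    unsigned int serial;              /* starts at 1, bumped on resize */
    unsigned int attached_at[NUM_ATTACHMENTS]; /* serial when slot was set */
    int handle[NUM_ATTACHMENTS];      /* attached buffer objects */
};

/* Returns 0 if the proposed buffer was attached, -1 if the client lost
 * the race (or holds a stale view) and must re-query the drawable. */
static int
attach_buffer(struct drawable_state *draw,
              unsigned int client_serial, int slot, int handle)
{
    if (client_serial != draw->serial)
        return -1;            /* drawable changed since the client looked */
    if (draw->attached_at[slot] == draw->serial)
        return -1;            /* another client attached this slot first */

    draw->handle[slot] = handle;
    draw->attached_at[slot] = draw->serial;
    return 0;
}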

 Prior to DRI private back buffers we pretty much got this behaviour
 automatically (though, software-allocated accum buffers, for example,
 were not properly sharable).

Yup, the shared back buffers design made this easy.  With private back
buffers it's a little more tricky since we need one place that tracks
the mapping between a drawable and the attached ancillary buffers.
The prototype I demoed at XDS used the DRI module in the server for
this, but we decided to move it to the DRM as described above.

 Suppose all the renderbuffers which compose a GLX drawable were
 allocated and resized by the X server.  The DRI clients would just have to
 poll or check the drawable size when appropriate, but they wouldn't have
 to allocate them.  I don't know 

Re: Merging DRI interface changes

2007-10-11 Thread Allen Akin
On Thu, Oct 11, 2007 at 10:35:28PM +0100, Keith Whitwell wrote:
| Suppose 2 clients render to the same backbuffer...

In the (rare) cases in which I've seen this used, the clients are aware of
one another, and restrict their rendering to non-overlapping portions of
the drawable.  A master client is responsible for swap and clear.

I believe the intent of the spec was to allow CPU-bound apps to make use
of multiple processors.  Rendering to a single drawable, rather than
multiple drawables, allowed swap to be synchronized.

I recall discussions about ways to coordinate multiple command streams
so that rendering to overlapping areas of the drawable could be handled
effectively, but I don't remember any apps that used such methods.

Allen



Re: Merging DRI interface changes

2007-10-11 Thread Brian Paul
Keith Whitwell wrote:
 Brian Paul wrote:
 Kristian Høgsberg wrote:
 Hi,

 I have this branch with DRI interface changes that I've been
 threatening to merge on several occasions:

   http://cgit.freedesktop.org/~krh/mesa/log/?h=dri2

 I've just rebased to today's mesa and it's ready to merge.  Ian
 reviewed the changes a while back and gave his OK, and from what we
 discussed at XDS2007, I believe the changes there are compatible with
 the Gallium plans.

 What's been keeping me from merging this is that it breaks the DRI
 interface.  I wanted to make sure that the new interface will work for
 redirected direct rendering and GLXPixmaps and GLXPbuffers, which I
 now know that it does.  The branch above doesn't include these
 changes yet, it still uses the sarea and the old shared, static back
 buffer setup.  This is all isolated to the createNewScreen entry
 point, though, and my plan is to introduce a new createNewScreen entry
 point that enables all the TTM features.  This new entry point can
 co-exist with the old entry point, and a driver should be able to
 support one or the other and probably also both at the same time.

 The AIGLX and libGL loaders will look for the new entry point when
 initializing the driver, if they have a new enough DRI/DRM available.
 If the loader has an old style DRI/DRM available, it will look for the
 old entry point.

 I'll wait a day or so to let people chime in, but if I don't hear any
 stop the press type of comments, I'll merge it tomorrow.

 This is basically what's described in the DRI2 wiki at 
 http://wiki.x.org/wiki/DRI2, right?

 The first thing that grabs my attention is the fact that front color 
 buffers are allocated by the X server but back/depth/stencil/etc 
 buffers are allocated by the app/DRI client.

 If two GLX clients render to the same double-buffered GLX window, each 
 is going to have a different/private back color buffer, right?  That 
 doesn't really obey the GLX spec.  The renderbuffers which compose a 
 GLX drawable should be accessible/shared by any number of separate GLX 
 clients (like an X window is shared by multiple X clients).
 
 I guess I want to know what this really means in practice.
 
 Suppose 2 clients render to the same backbuffer in a race starting at 
 time=0, doing something straightforward like (clear, draw, swapbuffers). 
 There's nothing in the spec that says to me that they actually have to 
 have been rendering to the same surface in memory, because the 
 serialization could just be (clear-a, draw-a, swap-a, clear-b, draw-b, 
 swap-b), so that potentially only one client's rendering ends up visible.
 
 So I would say that at least between a fullscreen clear and either 
 swap-buffers or some appropriate flush (glXWaitGL ??), we can treat the 
 rendering operations as atomic and have a lot of flexibility in terms of 
 how we schedule actual rendering and whether we actually share a buffer 
 or not.  Note that swapbuffers is as good as a clear from this 
 perspective as it can leave the backbuffer in an undefined state.

On the other hand, a pair of purposely-written programs could clear-a, 
draw-a, draw-b, swap-b, and the results should be coherent.  That's how I 
read the spec.


 I'm not just splitting hairs for no good reason - the ability for the 3d 
 driver to know the size of the window it is rendering to while it is 
 emitting commands, and to know that it won't change size until it is 
 ready for it to, is really crucial to building a solid driver.

Agreed.


 The trouble with sharing a backbuffer is what to do about the situation 
 where two clients end up with different ideas about what size the buffer 
 should be.
 
 So, if it is necessary to share backbuffers, then what I'm saying is 
 that it's also necessary to dig into the real details of the spec and 
 figure out how to avoid having the drivers being forced to change the 
 size of their backbuffer halfway through rendering a frame.
 
 I see a few options:
 0) The old DRI semantics - buffers change shape whenever they feel 
 like it, drivers are buggy, window resizes cause mis-rendered frames.
 
 1) The current truly private backbuffer semantics - clean drivers 
 but some deviation from GLX specs - maybe less deviation than we 
 actually think.
 
 2) Alternate semantics where the X server allocates the buffers but 
 drivers just throw away frames when they find the buffer has changed 
 shape at the end of rendering.  I'm sure this would be nonconformant, at 
 any rate it seems nasty.  (i915 swz driver is forced to do this).
 
 3) Share buffers with a reference counting scheme.  When a client 
 notices the buffer needs a resize, do the resize and adjust refcounts - 
 other clients continue with the older buffer.  What happens when a 
 client on the older buffer calls swapbuffers -- I'm sure we can figure 
 out what the correct behaviour should be.

I don't know the answers to this either.

There are probably very few, if any, GLX programs in existence 

Re: Merging DRI interface changes

2007-10-11 Thread Kristian Høgsberg
On 10/11/07, Keith Whitwell [EMAIL PROTECTED] wrote:
 Brian Paul wrote:
...
  If two GLX clients render to the same double-buffered GLX window, each
  is going to have a different/private back color buffer, right?  That
  doesn't really obey the GLX spec.  The renderbuffers which compose a GLX
  drawable should be accessible/shared by any number of separate GLX
  clients (like an X window is shared by multiple X clients).

 I guess I want to know what this really means in practice.

 Suppose 2 clients render to the same backbuffer in a race starting at
 time=0, doing something straightforward like (clear, draw, swapbuffers).
 There's nothing in the spec that says to me that they actually have to
 have been rendering to the same surface in memory, because the
 serialization could just be (clear-a, draw-a, swap-a, clear-b, draw-b,
 swap-b), so that potentially only one client's rendering ends up visible.

I've read the GLX specification a number of times to try to figure
this out.  It is very vague, but the only way I can make sense of
multiple clients rendering to the same drawable is if they coordinate
between them somehow.  Maybe the scenegraph is split between several
processes: one client draws the backdrop, then passes a token to
another process which then draws the player characters, and then a
third draws a heads-up display, calls glXSwapBuffers() and passes the
token back to the first process.  Or maybe they render in parallel,
but to different areas of the drawable, synchronize when they're all
done and then one does glXSwapBuffers() and they start over on the
next frame.

...
 So, if it is necessary to share backbuffers, then what I'm saying is
 that it's also necessary to dig into the real details of the spec and
 figure out how to avoid having the drivers being forced to change the
 size of their backbuffer halfway through rendering a frame.

This is a bigger issue to figure out than the shared buffer one.  I
know you're looking to reduce the number of changing factors during
rendering (clip rects, buffer sizes and locations), but the driver
needs to be able to pick up new buffers in a few more places than just
swap buffers.  But I think we agree that we can add that polling in a
couple of places in the driver (before starting a new batch buffer, on
flush, and maybe other places) and it should work.

 I see a few options:
 0) The old DRI semantics - buffers change shape whenever they feel 
 like
 it, drivers are buggy, window resizes cause mis-rendered frames.

 1) The current truly private backbuffer semantics - clean drivers but
 some deviation from GLX specs - maybe less deviation than we actually think.

 2) Alternate semantics where the X server allocates the buffers but
 drivers just throw away frames when they find the buffer has changed
 shape at the end of rendering.  I'm sure this would be nonconformant, at
 any rate it seems nasty.  (i915 swz driver is forced to do this).

 3) Share buffers with a reference counting scheme.  When a client
 notices the buffer needs a resize, do the resize and adjust refcounts -
 other clients continue with the older buffer.  What happens when a
 client on the older buffer calls swapbuffers -- I'm sure we can figure
 out what the correct behaviour should be.

3) Sounds like the best solution and it's basically what I'm
proposing.  For the first implementation (pre-gallium), I'm looking to
just reuse the existing getDrawableInfo polling for detecting whether
new buffers are available.  It won't be more or less broken than the
current SAREA scheme.  When gallium starts to land, we can fine-tune
the polling to a few select points in the driver.

The DRI driver interface changes I'm proposing here should not be
affected by these issues though.  Detecting that the buffers changed
and allocating and attaching new ones is entirely between the DRI
driver and the DRM.  When we're ready to add the TTM functionality to
a driver we add the new createNewScreen entry point I mentioned and
that's all we need to change.  So, in other words, I believe we can
move forward with this merge while we figure out the semantics of the
resizing-while-rendering case.

Kristian



Re: Merging DRI interface changes

2007-10-11 Thread Keith Whitwell
Allen Akin wrote:
 On Thu, Oct 11, 2007 at 10:35:28PM +0100, Keith Whitwell wrote:
 | Suppose 2 clients render to the same backbuffer...
 
 The (rare) cases in which I've seen this used, the clients are aware of
 one another, and restrict their rendering to non-overlapping portions of
 the drawable.  A master client is responsible for swap and clear.
 
 I believe the intent of the spec was to allow CPU-bound apps to make use
 of multiple processors.  Rendering to a single drawable, rather than
 multiple drawables, allowed swap to be synchronized.
 
 I recall discussions about ways to coordinate multiple command streams
 so that rendering to overlapping areas of the drawable could be handled
 effectively, but I don't remember any apps that used such methods.

Allen,

Just to clarify, would things look a bit like this:

Master:
clear,
glFlush,
signal slaves somehow

Slave0..n:
wait for signal,
don't clear, just draw triangles
glFlush
signal master

Master:
wait for all slaves
glXSwapBuffers

This is fairly sensible and clearly requires a shared buffer.  It's also 
quite a controlled situation that sidesteps some of the questions about 
what happens when two clients are issuing swapbuffers willy-nilly on the 
same drawable at the same time as the user is frantically resizing it...
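
For what it's worth, with POSIX semaphores as one plausible signalling
mechanism, the handshake could be written along these lines (the names
and the drawing itself are placeholders):

#include <semaphore.h>
#include <GL/glx.h>

#define N_SLAVES 4

/* Master: clear, let the slaves draw, then swap. */
void master_frame(Display *dpy, GLXDrawable draw, sem_t *go, sem_t *done)
{
    int i;

    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    glFlush();
    for (i = 0; i < N_SLAVES; i++)
        sem_post(go);                 /* signal slaves somehow */
    for (i = 0; i < N_SLAVES; i++)
        sem_wait(done);               /* wait for all slaves */
    glXSwapBuffers(dpy, draw);
}

/* Slave: no clear, just draw into its own region of the drawable. */
void slave_frame(sem_t *go, sem_t *done)
{
    sem_wait(go);
    /* ...draw triangles... */
    glFlush();
    sem_post(done);
}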

Keith



Re: Merging DRI interface changes

2007-10-11 Thread Keith Whitwell
Kristian Høgsberg wrote:
 On 10/11/07, Keith Whitwell [EMAIL PROTECTED] wrote:
 Brian Paul wrote:
 ...
 If two GLX clients render to the same double-buffered GLX window, each
 is going to have a different/private back color buffer, right?  That
 doesn't really obey the GLX spec.  The renderbuffers which compose a GLX
 drawable should be accessible/shared by any number of separate GLX
 clients (like an X window is shared by multiple X clients).
 I guess I want to know what this really means in practice.

 Suppose 2 clients render to the same backbuffer in a race starting at
 time=0, doing something straightforward like (clear, draw, swapbuffers).
 There's nothing in the spec that says to me that they actually have to
 have been rendering to the same surface in memory, because the
 serialization could just be (clear-a, draw-a, swap-a, clear-b, draw-b,
 swap-b), so that potentially only one client's rendering ends up visible.
 
 I've read the GLX specification a number of times to try to figure
 this out.  It is very vague, but the only way I can make sense of
 multiple clients rendering to the same drawable is if they coordinate
 between them somehow.  Maybe the scenegraph is split between several
 processes: one client draws the backdrop, then passes a token to
 another process which then draws the player characters, and then a
 third draws a heads-up display, calls glXSwapBuffers() and passes the
 token back to the first process.  Or maybe they render in parallel,
 but to different areas of the drawable, synchronize when they're all
 done and then one does glXSwapBuffers() and they start over on the
 next frame.
 
 ...
 So, if it is necessary to share backbuffers, then what I'm saying is
 that it's also necessary to dig into the real details of the spec and
 figure out how to avoid having the drivers being forced to change the
 size of their backbuffer halfway through rendering a frame.
 
 This is a bigger issue to figure out than the shared buffer one.  I
 know you're looking to reduce the number of changing factors during
 rendering (clip rects, buffer sizes and locations), but the driver
 needs to be able to pick up new buffers in a few more places than just
 swap buffers.  But I think we agree that we can add that polling in a
 couple of places in the driver (before starting a new batch buffer, on
 flush, and maybe other places) and it should work.

Yes, there are a few places, but they are very few.  Basically I think 
it is possible to cut a rendering stream up into chunks which are 
effectively atomic.  Drivers do this all the time anyway - just by 
building a dma buffer that is then submitted atomically to hardware for 
processing.

It isn't too hard to figure out where the boundaries of these regions 
are - if we think about a driver with effectively infinite dma space, 
then such a driver only flushes when required to satisfy requirements 
placed on it by the spec.

I also believe that the only sane time to check the size of the 
destination drawable is when the driver is *entering* such an atomic 
region (let's call it a scene).

Swapbuffers terminates a scene, it doesn't really start the next one - 
that doesn't happen until actual rendering starts.  I would even say 
that fullscreen clears don't start a scene, but that's another story...

The things that terminate a scene are:
- swapbuffers
- readpixels and similar
- maybe glFlush() - though I'm sometimes naughty and no-op it for 
backbuffer rendering.

Basically any API-generated event that implies a flush.  Internally 
generated events, like running out of some resource and having to fire 
buffers to recover generally don't count.
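
In driver terms the rule reduces to something like this sketch (all 
names illustrative):

struct scene_state {
    int in_scene;
    int width, height;                /* size locked for the current scene */
};

/* Called whenever the driver is about to emit rendering commands. */
static void
enter_scene_if_needed(struct scene_state *s, int drawable_w, int drawable_h)
{
    if (!s->in_scene) {
        /* Entering a scene is the one sane point to pick up a new
         * drawable size (and, in the new world, new buffers). */
        s->width = drawable_w;
        s->height = drawable_h;
        s->in_scene = 1;
    }
    /* ...emit commands against s->width x s->height... */
}

/* Called for swapbuffers, readpixels and similar API-level flushes.
 * Internal flushes -- running out of dma space and firing buffers to
 * recover -- must not call this; they don't terminate the scene. */
static void
terminate_scene(struct scene_state *s)
{
    s->in_scene = 0;
}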




 I see a few options:
 0) The old DRI semantics - buffers change shape whenever they feel 
 like
 it, drivers are buggy, window resizes cause mis-rendered frames.

 1) The current truly private backbuffer semantics - clean drivers but
 some deviation from GLX specs - maybe less deviation than we actually think.

 2) Alternate semantics where the X server allocates the buffers but
 drivers just throw away frames when they find the buffer has changed
 shape at the end of rendering.  I'm sure this would be nonconformant, at
 any rate it seems nasty.  (i915 swz driver is forced to do this).

 3) Share buffers with a reference counting scheme.  When a client
 notices the buffer needs a resize, do the resize and adjust refcounts -
 other clients continue with the older buffer.  What happens when a
 client on the older buffer calls swapbuffers -- I'm sure we can figure
 out what the correct behaviour should be.
 
 3) Sounds like the best solution and it's basically what I'm
 proposing.  For the first implementation (pre-gallium), I'm looking to
 just reuse the existing getDrawableInfo polling for detecting whether
 new buffers are available.  It won't be more or less broken than the
 current SAREA scheme.  When gallium starts to land, we can fine-tune
 the polling 

Re: Merging DRI interface changes

2007-10-11 Thread Allen Akin
On Fri, Oct 12, 2007 at 12:08:09AM +0100, Keith Whitwell wrote:
| Just to clarify, would things look a bit like this:
| 
| Master:
|   clear,
|   glFlush,
|   signal slaves somehow
| 
| Slave0..n:
|   wait for signal,
|   don't clear, just draw triangles
|   glFlush
|   signal master
| 
| Master:
|   wait for all slaves
|   glXSwapBuffers

Yes, more or less.  As I look at it now, I wonder if the master really
did a clear, or if the slaves simply drew background polygons over their
respective regions.  It's also possible that the swap guarantees a flush
for commands queued by all the slaves, but I'm unsure of that without
checking the spec.

| This is fairly sensible and clearly requires a shared buffer.  It's also 
| quite a controlled situation that sidesteps some of the questions about 
| what happens when two clients are issuing swapbuffers willy-nilly on the 
| same drawable at the same time as the user is frantically resizing it...

Right.

Allen



Re: Merging DRI interface changes

2007-10-11 Thread Keith Packard

On Thu, 2007-10-11 at 23:39 +0100, Keith Whitwell wrote:

 Maybe we're examining the wrong spec here.  My concerns are all about 
 what happens when the window changes size -- what does X tell us about 
 the contents of a window under those circumstances?  Does the GLX spec 
 actually specify *anything* about this situation???

As Brian said, X knows exactly when the window changes size, and the
contents of the window at resize are well specified by the protocol. As
X requests are always atomic, and executed as some shuffle of the
request streams from all of the clients, there are no partial resize
states to deal with. Clients can always know when drawing occurred
before or after a resize as the resize events include the serial number
of the most recently executed client request, indicating when in the
client's request stream the resize occurred.
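
In plain Xlib terms, a minimal sketch of using that serial: note
NextRequest() before drawing, then compare it with the serial carried
by the ConfigureNotify:

#include <X11/Xlib.h>

void draw_and_track_resize(Display *dpy, Window win)
{
    XEvent ev;
    unsigned long draw_serial;

    /* Note where we are in our own request stream, then draw. */
    draw_serial = NextRequest(dpy);
    XClearWindow(dpy, win);           /* stands in for real drawing */

    XNextEvent(dpy, &ev);
    if (ev.type == ConfigureNotify) {
        if (ev.xconfigure.serial >= draw_serial) {
            /* The resize executed after our drawing: what we drew
             * was laid out for the old size and needs repainting. */
        } else {
            /* The resize predates our drawing requests. */
        }
    }
}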

Making the resize asynchronous is a huge feature as it means
applications often end up repainting less than once per resize as you
reshape the window with the window manager. It sounds like the DRM needs
to have an event queue that the X server can deliver resize events into,
one that is outside the X protocol (and hence not subject to the whims of
the application). I suspect the DRI extension will need a new request
that causes the X server to post events to the DRM module.

Windows always contain their background in areas where expose events are
delivered (again, the request serialization means this is always well
defined in time). Backgrounds can consist of a single pixel value or an
image to be tiled into the window, or they can be left as garbage
(background None). This latter mode is often used to avoid flashing on
the screen, but the actual contents of the window are not defined by the
core protocol to be the parent contents in all cases. The Composite
extension stands on its head to make the parent contents visible though,
so I suppose we now have defined these contents as the parent contents
in all cases.

-- 
[EMAIL PROTECTED]




Re: Merging DRI interface changes

2007-10-11 Thread Keith Packard

On Fri, 2007-10-12 at 00:19 +0100, Keith Whitwell wrote:

 Basically any API-generated event that implies a flush.  Internally 
 generated events, like running out of some resource and having to fire 
 buffers to recover generally don't count.

If I understand this, then the only time you'll check for window resize
is just before the next drawing occurs after one of these events. That
makes a huge amount of sense to me, and limits polling to once per
scene, instead of once per batchbuffer.

And, we do all of this polling through the DRM, which would allow things
other than the X server to send resize events for non-X buffers.

-- 
[EMAIL PROTECTED]

