changes for GL_EXT_framebuffer_object

2005-03-04 Thread Brian Paul
This extension can't be easily/cleanly added to Mesa without rewriting 
and changing some existing code.  But ultimately, the changes will be 
for the better, much in the way that GL_NV/ARB_vertex_program improved 
the TNL code.

I'll assume people are familiar with GL_EXT_framebuffer_object.  If 
not, read the spec.

The new code will be object-oriented C, so I'll use OOP terms here.
The basic idea is to merge the GLframebuffer class (in mtypes.h) with 
the new gl_framebuffer class (in fbobject.h).

The stencil, depth, accum, aux, etc. buffer pointers in GLframebuffer 
will go away, replaced by gl_renderbuffer_attachment members.

Each of the logical buffers (such as color, depth, stencil, etc) which 
form the overall framebuffer will be represented by a gl_renderbuffer 
object (see fbobject.h).  These renderbuffers can either be 
allocated/managed by the device driver, or by core Mesa as a software 
fallback.
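
To make that a bit more concrete, here's roughly the kind of thing I 
have in mind (plain C with function pointers for the "methods"; any 
field names beyond the ones mentioned in this message are just 
illustrative, nothing's final):

#include <GL/gl.h>

/* One logical buffer (color, depth, stencil, accum, aux, ...). */
struct gl_renderbuffer
{
   GLenum BaseFormat;    /* GL_RGBA, GL_DEPTH_COMPONENT, GL_STENCIL_INDEX, ... */
   GLenum DataType;      /* GL_UNSIGNED_BYTE, GL_UNSIGNED_INT, ... */
   GLuint Width, Height;
   void *Data;           /* only used by the software implementation */

   /* storage management */
   GLboolean (*AllocateStorage)(struct gl_renderbuffer *rb,
                                GLenum internalFormat,
                                GLuint width, GLuint height);
   void (*Delete)(struct gl_renderbuffer *rb);

   /* pixel access methods (GetRow/PutRow/etc.) described below */
};

/* Replaces the old depth/stencil/accum/aux pointers in GLframebuffer. */
struct gl_renderbuffer_attachment
{
   GLenum Type;          /* renderbuffer, texture image, or none */
   struct gl_renderbuffer *Renderbuffer;
   /* plus texture object/level/face info for render-to-texture */
};

/* The merged GLframebuffer / gl_framebuffer. */
struct gl_framebuffer
{
   GLuint Name;          /* 0 for window-system framebuffers */
   struct gl_renderbuffer_attachment Attachment[10];  /* size is illustrative */
};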

Device drivers will have the opportunity to derive device-specific 
classes from the gl_renderbuffer and gl_framebuffer classes.  They're 
allocated with the ctx->Driver.NewRenderbuffer() and 
ctx->Driver.NewFramebuffer() functions.  Each of these classes has 
its own Delete member function.
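
A driver would then derive its own types along these lines (the "xxx" 
names are made up; the real NewRenderbuffer hook will probably take 
the GLcontext as well):

#include <stdlib.h>
#include <GL/gl.h>

/* Core Mesa base class (abridged from the earlier sketch). */
struct gl_renderbuffer
{
   GLenum BaseFormat;
   GLenum DataType;
   void (*Delete)(struct gl_renderbuffer *rb);
};

/* Driver-derived class: the base class comes first so a
 * gl_renderbuffer pointer can be cast to the derived type. */
struct xxx_renderbuffer
{
   struct gl_renderbuffer Base;
   GLuint vram_offset;        /* driver-private storage info */
};

static void
xxx_delete_renderbuffer(struct gl_renderbuffer *rb)
{
   struct xxx_renderbuffer *xrb = (struct xxx_renderbuffer *) rb;
   /* free the buffer's VRAM here ... */
   free(xrb);
}

/* This is what the driver would plug into ctx->Driver.NewRenderbuffer. */
static struct gl_renderbuffer *
xxx_new_renderbuffer(GLuint name)
{
   struct xxx_renderbuffer *xrb = calloc(1, sizeof(*xrb));
   if (!xrb)
      return NULL;
   (void) name;   /* the name would be stored in the base class */
   xrb->Base.Delete = xxx_delete_renderbuffer;
   return &xrb->Base;
}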

The gl_framebuffer class will be used both for user-allocated 
framebuffers (made via glBindFramebufferEXT) and for the window-system 
framebuffers that correspond to GLX windows.

gl_renderbuffer objects have an AllocateStorage() method that's used 
for allocating storage for the buffer's depth/color/stencil/etc.  In 
the case of DRI drivers, this would be a no-op for the statically 
allocated window buffers, but would allocate VRAM for user-allocated 
renderbuffers.  There's also a Delete() method.
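
For a DRI driver that might look roughly like this (driver_alloc_vram 
is a hypothetical helper, and I'm assuming 4 bytes per pixel just for 
the example):

#include <stddef.h>
#include <GL/gl.h>

extern void *driver_alloc_vram(size_t bytes);   /* hypothetical helper */

/* Abridged renderbuffer -- just the fields this example needs. */
struct gl_renderbuffer
{
   GLuint Width, Height;
   GLboolean WindowBuffer;   /* statically-allocated window-system buffer? */
   void *Data;
};

static GLboolean
xxx_alloc_storage(struct gl_renderbuffer *rb, GLenum internalFormat,
                  GLuint width, GLuint height)
{
   (void) internalFormat;
   if (rb->WindowBuffer) {
      /* Window-system buffer: the memory was set up when the window and
       * context were created, so just record the size. */
      rb->Width = width;
      rb->Height = height;
      return GL_TRUE;
   }
   /* User-created renderbuffer: grab some VRAM (assume 4 bytes/pixel). */
   rb->Data = driver_alloc_vram((size_t) width * height * 4);
   if (!rb->Data)
      return GL_FALSE;
   rb->Width = width;
   rb->Height = height;
   return GL_TRUE;
}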

The gl_renderbuffer's GetPointer(), GetRow(), GetValues(), PutRow(), 
and PutValues() methods will be used by the software fallback routines to 
access the pixels in a buffer.  This will replace the current swrast 
span read/write routines.  Note that there's no distinction between 
color, depth or stencil in these routines.  The gl_renderbuffer's 
BaseFormat and DataType will implicitly specify which kind of pixel 
data is being accessed.
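
So a read-modify-write software fallback on a span looks the same no 
matter what kind of buffer it's touching.  A trivial sketch (assuming 
a GLuint-valued buffer; real code would switch on DataType):

#include <GL/gl.h>

/* Abridged gl_renderbuffer: just the pixel access methods. */
struct gl_renderbuffer
{
   GLenum BaseFormat;   /* says whether the values are colors, depths, stencils... */
   GLenum DataType;     /* ...and what C type they are */
   void (*GetRow)(struct gl_renderbuffer *rb, GLuint count,
                  GLint x, GLint y, void *values);
   void (*PutRow)(struct gl_renderbuffer *rb, GLuint count,
                  GLint x, GLint y, const void *values);
};

/* Example fallback: add one to each value in a span.  The same code
 * works whether the buffer lives in VRAM (driver-supplied methods) or
 * in malloc'd memory (core Mesa's software renderbuffer). */
static void
increment_span(struct gl_renderbuffer *rb, GLuint count, GLint x, GLint y)
{
   GLuint values[4096];   /* assume count <= max renderable width */
   GLuint i;
   rb->GetRow(rb, count, x, y, values);
   for (i = 0; i < count; i++)
      values[i]++;
   rb->PutRow(rb, count, x, y, values);
}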

Also, a gl_renderbuffer object can be used as a wrapper for texture 
images.  This will allow render-to-texture functionality.  A 
yet-to-be-written routine will be called to set up the wrapper when 
the user wants render-to-texture.  If the device supports rendering 
into texture memory, the wrapper will describe how to do that. 
Otherwise, a software-fallback wrapper would be used.
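
In the software-fallback case the wrapper is basically a renderbuffer 
whose Get/Put methods poke straight into the texture image's memory. 
Very roughly (assuming a tightly-packed RGBA ubyte image; field names 
are illustrative):

#include <string.h>
#include <GL/gl.h>

struct gl_texture_image
{
   GLuint Width, Height;
   GLubyte *Data;          /* assume tightly-packed RGBA ubyte texels */
};

struct gl_renderbuffer
{
   GLenum BaseFormat, DataType;
   GLuint Width, Height;
   struct gl_texture_image *WrappedTexImage;   /* illustrative back-pointer */
   void (*PutRow)(struct gl_renderbuffer *rb, GLuint count,
                  GLint x, GLint y, const void *values);
};

/* Writing a span of colors into the wrapper really writes into the
 * texture image's memory. */
static void
texwrap_put_row(struct gl_renderbuffer *rb, GLuint count,
                GLint x, GLint y, const void *values)
{
   struct gl_texture_image *img = rb->WrappedTexImage;
   GLubyte *dest = img->Data + (y * img->Width + x) * 4;
   memcpy(dest, values, count * 4);
}

/* The yet-to-be-written setup routine would do roughly this for the
 * software case; a hardware driver would install its own methods. */
static void
wrap_texture_image(struct gl_renderbuffer *rb, struct gl_texture_image *img)
{
   rb->Width = img->Width;
   rb->Height = img->Height;
   rb->BaseFormat = GL_RGBA;
   rb->DataType = GL_UNSIGNED_BYTE;
   rb->WrappedTexImage = img;
   rb->PutRow = texwrap_put_row;
}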

All this means that core Mesa can treat windows, off-screen pbuffers 
and texture images in a unified manner.

I'll continue to plug away on this stuff in my spare time.
-Brian



Re: [Mesa3d-dev] Re: changes for GL_EXT_framebuffer_object

2005-03-04 Thread Brian Paul
Ian Romanick wrote:
> Brian Paul wrote:
>> I'll assume people are familiar with GL_EXT_framebuffer_object.  If
>> not, read the spec.

> If you still have questions after reading the spec, you can ask me on
> #dri-devel on freenode.  I try to be on there as often as I can.

>> The gl_renderbuffer's BaseFormat and DataType will implicitly specify
>> which kind of pixel data is being accessed.

> I assume gl_renderbuffer will also have a method like
> ChooseTextureFormat that the driver can override?

Well, I don't think most GPUs allow rendering to as many different 
image buffer formats as they support for texture image formats. 
They're a little different.  But I have to admit I haven't gotten down 
to that level of detail yet.


> It seems like there
> might be enough commonality that gl_renderbuffer and gl_texture should
> both derive from a common, virtual base class.

You mean gl_texture_image?  I hadn't considered that before.  I'll 
have to think about it.


>> Also, a gl_renderbuffer object can be used as a wrapper for texture
>> images.  This will allow render-to-texture functionality.  A
>> yet-to-be-written routine will be called to set up the wrapper when
>> the user wants render-to-texture.  If the device supports rendering
>> into texture memory, the wrapper will describe how to do that.
>> Otherwise, a software-fallback wrapper would be used.

> Do you have any ideas about how this would work?  One thing that I'm
> curious about, and has come up numerous times in the working-group
> discussions, is supporting render-to-texture when a blit is required.
> For example, on an architecture with separate texture and framebuffer
> memory, the driver would have to render to framebuffer memory, then copy
> that to the texture when the mipmap level was configured to be used as a
> texture source.  Certain restrictions in the spec were crafted
> specifically to handle this case.  It sounds like the wrapper idea
> should cover this, but I just want to be sure. :)

Again, I haven't gotten to that level of detail.  The business of 
wrapping a texture image with a gl_renderbuffer seems like a fairly 
reasonable/flexible approach at this time, but nothing's set in stone yet.


> Anyway, it sounds like you've really thought this through.  I'm glad you
> have at least some time to work on it. :)

-Brian


Re: changes for GL_EXT_framebuffer_object

2005-03-03 Thread Ian Romanick
Brian Paul wrote:
> I'll assume people are familiar with GL_EXT_framebuffer_object.  If not,
> read the spec.

If you still have questions after reading the spec, you can ask me on 
#dri-devel on freenode.  I try to be on there as often as I can.

> The gl_renderbuffer's
> BaseFormat and DataType will implicitly specify which kind of pixel data
> is being accessed.

I assume gl_renderbuffer will also have a method like 
ChooseTextureFormat that the driver can override?  It seems like there 
might be enough commonality that gl_renderbuffer and gl_texture should 
both derive from a common, virtual base class.

> Also, a gl_renderbuffer object can be used as a wrapper for texture
> images.  This will allow render-to-texture functionality.  A
> yet-to-be-written routine will be called to set up the wrapper when the
> user wants render-to-texture.  If the device supports rendering into
> texture memory, the wrapper will describe how to do that. Otherwise, a
> software-fallback wrapper would be used.

Do you have any ideas about how this would work?  One thing that I'm 
curious about, and has come up numerous times in the working-group 
discussions, is supporting render-to-texture when a blit is required. 
For example, on an architecture with separate texture and framebuffer 
memory, the driver would have to render to framebuffer memory, then copy 
that to the texture when the mipmap level was configured to be used as a 
texture source.  Certain restrictions in the spec were crafted 
specifically to handle this case.  It sounds like the wrapper idea 
should cover this, but I just want to be sure. :)

Anyway, it sounds like you've really thought this through.  I'm glad you 
have at least some time to work on it. :)



Re: [Mesa3d-dev] changes for GL_EXT_framebuffer_object

2005-03-03 Thread Adam Jackson
On Thursday 03 March 2005 10:42, Brian Paul wrote:
> The stencil, depth, accum, aux, etc. buffer pointers in GLframebuffer
> will go away, replaced by gl_renderbuffer_attachment members.
>
> Each of the logical buffers (such as color, depth, stencil, etc) which
> form the overall framebuffer will be represented by a gl_renderbuffer
> object (see fbobject.h).  These renderbuffers can either be
> allocated/managed by the device driver, or by core Mesa as a software
> fallback.

Would one side effect of this be that the driver could implement (say) accum 
buffers entirely in swrast but still have hardware acceleration for the 
normal set of buffers?  This might be an interim solution for pbuffers until 
the DRI drivers get decent memory management.

- ajax




Re: [Mesa3d-dev] changes for GL_EXT_framebuffer_object

2005-03-03 Thread Brian Paul
Adam Jackson wrote:
> On Thursday 03 March 2005 10:42, Brian Paul wrote:
>> The stencil, depth, accum, aux, etc. buffer pointers in GLframebuffer
>> will go away, replaced by gl_renderbuffer_attachment members.
>>
>> Each of the logical buffers (such as color, depth, stencil, etc) which
>> form the overall framebuffer will be represented by a gl_renderbuffer
>> object (see fbobject.h).  These renderbuffers can either be
>> allocated/managed by the device driver, or by core Mesa as a software
>> fallback.

> Would one side effect of this be that the driver could implement (say) accum
> buffers entirely in swrast but still have hardware acceleration for the
> normal set of buffers?

Yes.  It'll also allow hardware accumulation buffers in a 
straightforward manner.

The Mesa fallback routines for rasterization, accumulation, etc. will 
do all their pixel-oriented work through the gl_renderbuffer's methods, 
so it won't matter which buffers are hardware or software based.
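
For example, the GL_ACCUM fallback for one span would boil down to 
something like this (assuming GLfloat[4] accum values and GLubyte[4] 
colors; real code would check DataType):

#include <GL/gl.h>

/* Abridged renderbuffer -- only the methods this example needs. */
struct gl_renderbuffer
{
   void (*GetRow)(struct gl_renderbuffer *rb, GLuint count,
                  GLint x, GLint y, void *values);
   void (*PutRow)(struct gl_renderbuffer *rb, GLuint count,
                  GLint x, GLint y, const void *values);
};

/* glAccum(GL_ACCUM, value) applied to one span.  Neither renderbuffer
 * needs to be software-based; this code only sees the methods. */
static void
accum_span(struct gl_renderbuffer *accRb, struct gl_renderbuffer *colorRb,
           GLuint count, GLint x, GLint y, GLfloat value)
{
   GLfloat acc[4096][4];     /* assume count <= max span width */
   GLubyte rgba[4096][4];
   GLuint i;

   accRb->GetRow(accRb, count, x, y, acc);
   colorRb->GetRow(colorRb, count, x, y, rgba);
   for (i = 0; i < count; i++) {
      acc[i][0] += (rgba[i][0] / 255.0F) * value;
      acc[i][1] += (rgba[i][1] / 255.0F) * value;
      acc[i][2] += (rgba[i][2] / 255.0F) * value;
      acc[i][3] += (rgba[i][3] / 255.0F) * value;
   }
   accRb->PutRow(accRb, count, x, y, acc);
}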

Currently, the software fallbacks are in some places more closely tied 
to the buffer memory allocation than they need to be.

In the new scheme, all the code for allocating software buffers will 
be pulled out of the swrast routine and put into a utility routine, 
probably in drivers/common/.
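
Something like this (the BUFFER_* indices and new_soft_renderbuffer 
are hypothetical names for the malloc-based implementation):

#include <GL/gl.h>

struct gl_renderbuffer;    /* as sketched earlier */

struct gl_renderbuffer_attachment
{
   struct gl_renderbuffer *Renderbuffer;
};

/* Attachment indices -- purely illustrative. */
enum { BUFFER_FRONT, BUFFER_DEPTH, BUFFER_STENCIL, BUFFER_ACCUM, BUFFER_COUNT };

struct gl_framebuffer
{
   struct gl_renderbuffer_attachment Attachment[BUFFER_COUNT];
};

/* Hypothetical: create a malloc-based renderbuffer of the given format. */
extern struct gl_renderbuffer *new_soft_renderbuffer(GLenum internalFormat);

/* The utility routine: plug in software renderbuffers for whatever the
 * driver didn't (or couldn't) provide in hardware. */
void
add_soft_renderbuffers(struct gl_framebuffer *fb, GLboolean swDepth,
                       GLboolean swStencil, GLboolean swAccum)
{
   if (swDepth && !fb->Attachment[BUFFER_DEPTH].Renderbuffer)
      fb->Attachment[BUFFER_DEPTH].Renderbuffer =
         new_soft_renderbuffer(GL_DEPTH_COMPONENT);
   if (swStencil && !fb->Attachment[BUFFER_STENCIL].Renderbuffer)
      fb->Attachment[BUFFER_STENCIL].Renderbuffer =
         new_soft_renderbuffer(GL_STENCIL_INDEX);
   if (swAccum && !fb->Attachment[BUFFER_ACCUM].Renderbuffer)
      fb->Attachment[BUFFER_ACCUM].Renderbuffer =
         new_soft_renderbuffer(GL_RGBA16);
}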


> This might be an interim solution for pbuffers until
> the DRI drivers get decent memory management.

Well, GL_EXT_framebuffer_object should be able to replace pbuffers, 
but it'll still require a DRI mechanism for dynamically allocating 
rendering buffers (if you want hardware rendering).

-Brian