I've recently started fixing up GGI Mesa (which in the current 4.0 release
of Mesa doesn't even compile). I have it mostly working, but I am a bit
concerned about the interface. Normally, the usual sequence of calls is to
create a visual, then a context, and then any buffers. The current GGIMesa
creates the context first and then the visual, which leads to some
nastiness. I'm planning to change the interface so that it follows the
xmesa interface a bit more closely. I'm still rather new to GGI and I don't know
exactly what kind of behaviour wrt ggi visual and mode is expected of an
extension, so I hope somebody here will be able to point out any
inconsistencies.

The xmesa interface follows roughly the sequence (this is a crude
simplification): 

1. XMesaCreateVisual
        Create a new visual which is composed out of the standard X11
        visual and extended by the info that is required by OpenGL and
        cannot be described by the X11 visual (depth, stencil, accum
        sizes and so on)

2. XMesaCreateContext
        Using the previously created visual, create a new context

3. XMesaCreate{Window|Pixmap}Buffer
        Create a buffer for a window or a pixmap as described by the
        previously created visual

4. XMesaMakeCurrent
        Bind the context to the buffer (and make it current)
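
In code, the sequence above looks roughly like this (a simplified sketch;
the argument lists are abbreviated relative to the real xmesa prototypes):

```c
/* Rough sketch of the xmesa call sequence.  Argument lists are
 * abbreviated; see the actual xmesa headers for the full signatures. */
XMesaVisual vis = XMesaCreateVisual(dpy, xvisinfo,
                                    /* db_flag */ GL_TRUE,
                                    /* ... depth, stencil, accum sizes ... */);
XMesaContext ctx = XMesaCreateContext(vis, /* share_list */ NULL);
XMesaBuffer buf = XMesaCreateWindowBuffer(vis, window);
XMesaMakeCurrent(ctx, buf);
```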


I'm assuming that similarly sounding things in X mean similar things in
GGI; hopefully this is not totally incorrect. This led me to the following
interface:

1. int GGIMesaCreateVisual(ggi_visual_t vis, GLboolean alpha_flag,
                           GLboolean db_flag, GLboolean stereo_flag,
                           GLint depth_size, GLint stencil_size,
                           GLint accum_red_size, GLint accum_green_size,
                           GLint accum_blue_size, GLint accum_alpha_size,
                           GLint num_samples)
        This prototype corresponds exactly to XMesaCreateVisual, except
        that it applies to a ggi_visual_t. The big difference is that in
        GGI the visual also corresponds to all drawing buffers, so this
        function effectively replaces XMesaCreateWindowBuffer: it assumes
        that a valid mode is set and creates all the necessary buffers of
        the appropriate size and requested depth. This is also where any
        interaction with libGAlloc would occur to obtain any necessary
        hardware resources.

2. GGIMesaContext GGIMesaCreateContext(ggi_visual_t vis)
        Assuming that GGIMesaCreateVisual succeeded on the visual, this
        function creates the necessary GL context.

3. void GGIMesaMakeCurrent(GGIMesaContext ctx)
        I'm not exactly sure whether this function is strictly necessary.
        Is it even possible to have more than one ggi_visual_t open at a
        time?
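
Put together, the proposed interface would be used roughly like this (a
sketch under the assumption that a mode has already been set on the
visual; return-value checking is omitted, and only the first two calls
are existing libggi functions):

```c
/* Open a visual and set a mode first -- GGIMesaCreateVisual assumes
 * a valid mode is already set. */
ggi_visual_t vis = ggiOpen(NULL);
ggiSetSimpleMode(vis, 640, 480, 2 /* frames */, GT_AUTO);

/* Extend the visual with the GL state and allocate all buffers. */
GGIMesaCreateVisual(vis,
                    GL_FALSE,          /* alpha */
                    GL_TRUE,           /* double buffered */
                    GL_FALSE,          /* stereo */
                    16, 8,             /* depth, stencil */
                    0, 0, 0, 0,        /* accum r/g/b/a */
                    0);                /* samples */

GGIMesaContext ctx = GGIMesaCreateContext(vis);
GGIMesaMakeCurrent(ctx);
```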


I am uncertain about some parts of the above interface: Is extending a
visual through GGIMesaCreateVisual a good idea? Currently it assumes that
a valid mode is set and uses the set dimensions to allocate the necessary
buffers. Would it be a better idea to instead create a full-blown
GGIMesaSetMode? It would make integration with libGAlloc much easier, but
I'm not sure whether it would be flexible enough.
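
For comparison, such an entry point might look something like this (purely
hypothetical -- the name and signature are my own invention, not part of
any existing interface):

```c
/* Hypothetical alternative: set the GGI mode and the GL buffer
 * configuration in a single call, so that libGAlloc resources could
 * be negotiated together with the mode itself. */
int GGIMesaSetMode(ggi_visual_t vis, ggi_mode *mode,
                   GLboolean db_flag,
                   GLint depth_size, GLint stencil_size,
                   GLint accum_size);
```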


Now for an implementation issue: double buffering. Currently, I have
implemented double buffering using the ggiSet*Frame() family of calls. Is
this the only implementation I should provide? Should I provide an
alternative implementation using ggiSetOrigin in case the virtual
resolution is large enough? And what about the case where the target does
not support any sort of double buffering and the user requests
double-buffered OpenGL? If the target didn't bother emulating double
buffering, should Mesa do it?


-Filip

