Quoting Jon Smirl ([EMAIL PROTECTED]):
> Is there any kind of overview on how graphics contexts
> are implemented? Here are a few of the questions I'm
> having...
> 
> When multitasking, how does the context get changed?

I don't know how that's handled by others,
but DirectFBGL applications have to Lock() and Unlock() the context.
Only one context can be locked at a time (in each DirectFB session).
The Lock() method acquires DirectFB's graphics card lock and makes
the context current, like glXMakeCurrent() with a non-NULL context.
The Unlock() method clears the current context, like glXMakeCurrent()
with NULL, and releases the graphics card lock. When DRI has held the
graphics card lock for more than 100 ms, DirectFB falls back to software
rendering even if hardware rendering is available. Otherwise it waits for
the lock with a timeout of another 100 ms when doing hardware acceleration.

Applications usually lock/unlock the context once around initialization and
once per complete frame, typically calling IDirectFBSurface::Flip()
afterwards or doing DirectFB graphics operations on the buffer in between.

This mechanism provides
- an easy transition between the DirectFB and DRI hardware states
- synchronization and context changes for multiple threads or applications
  with smooth interaction

> When multitasking, how long does it take to change
> context?

Quick measurements showed that the overhead is negligible on a per-frame
basis.

> Does graphics hardware have a single set of context
> registers, or are there multiple sets for fast context
> switch?

Most cards have a single set, but a small DMA transfer should be enough
to change it completely.

> How is drmCreateContext related to the driver's
> context variables?

Each GL context is bound to a drmContext. When a drmContext is locked
and it's not the previously locked one, the hardware-related settings
of the context are transferred to the card.

Please correct me if I'm wrong.

> How many contexts can I have, per process, globally?

There's no limit in DirectFBGL, at least; the global limit is defined
by the DRM, I guess.

> How are the states diff'd to minimize state changes on
> process switch?

There's no diffing done at this level, but Chromium (sf.net) does it at the
application level, sharing a single hardware context.

Again, I may be wrong.

> Do state changes get queued like other 3D ops?

Do you mean changes to the current state or switching from one
state to another?

> Finally, the questions I am trying to answer for the
> R200...
> 1) When implementing per context drawables how do I
> make sure the hardware knows what buffer I am drawing
> to? Is the code already written for this since each
> context can be drawing into either the front or the
> back buffer?
> 2) How do I modify the clear functions not to use a
> global variable for the buffer pointer, instead they
> need to get it from the context?

You should have a look at how it's done for MGA in the embedded-2-branch.
The DRM module provides an "extended state" bound to each context
without breaking compatibility.

-- 
Best regards,
  Denis Oliver Kropp

.------------------------------------------.
| DirectFB - Hardware accelerated graphics |
| http://www.directfb.org/                 |
"------------------------------------------"

                            Convergence GmbH


_______________________________________________
Dri-devel mailing list
[EMAIL PROTECTED]
https://lists.sourceforge.net/lists/listinfo/dri-devel