On Fri, 7 Mar 2008 10:21:28 +0100
Tom Cooksey <[EMAIL PROTECTED]> wrote:

> Hi,
> 
> I'm a developer working on getting OpenGL ES working with QWS - the window
> system built into Qt/Embedded. That is, Trolltech's own windowing system,
> completely independent of X. The typical hardware we're working with is the
> PowerVR MBX, an OpenGL ES 1.1 compliant device. We have also played with ATI
> mobile chipsets. One thing all these devices have in common is rubbish (IMO),
> closed-source drivers. The only API we have for them is EGL, and the only
> on-screen surface is the entire display.
> 
> While we are continuing development with these devices, I'm very keen to
> develop a proof-of-concept driver using an open source desktop OpenGL
> implementation. I want to show people what can be done with decent (& open)
> drivers.
> 
> I'm pretty new to X, DRI & the associated code bases, but have spent the
> last few months reading documentation & code, trying to understand how
> everything works together. I think I've now got to a stage where I've read
> everything I could find and need some help.
> 
> The effect I'm looking for is iPhone/Compiz style window composition. We
> have this already, but the problem is that the drivers are designed for a
> single process accessing the hardware at a time. This is fine if there's
> only a single process (in QWS, the window system is a shared library which
> is loaded by the first application to be launched). All the windows can be
> drawn into off-screen pbuffers and then used as textures to be rendered
> onto the screen. The problem comes when there are multiple processes. Our
> current solution is to get the client processes to use our raster paint
> engine to render into shared memory, which the server then uploads as a
> texture. As you can imagine, this is *SLOW*. It also doesn't allow client
> processes to use OpenGL themselves - something we really want to have.
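
[For anyone following along, the shared-memory slow path described above can
be modelled in a few lines. This is only a hedged sketch in Python rather
than the real C/GL code: `SharedMemory` stands in for the shared segment, the
server-side copy stands in for the per-update texture upload (what
glTexImage2D would do), and the window size is invented.]

```python
from multiprocessing import shared_memory

WIDTH, HEIGHT, BPP = 64, 32, 4  # illustrative window size, RGBA

def client_render(name):
    # Client: raster-paint the window into the shared segment (the slow path).
    shm = shared_memory.SharedMemory(name=name)
    shm.buf[:WIDTH * HEIGHT * BPP] = bytes([0xFF, 0x00, 0x00, 0xFF]) * (WIDTH * HEIGHT)
    shm.close()

def server_upload(name):
    # Server: copy the pixels back out of shared memory. This copy is the
    # stand-in for the glTexImage2D upload, and it happens on every update.
    shm = shared_memory.SharedMemory(name=name)
    texture = bytes(shm.buf[:WIDTH * HEIGHT * BPP])
    shm.close()
    return texture

shm = shared_memory.SharedMemory(create=True, size=WIDTH * HEIGHT * BPP)
try:
    client_render(shm.name)
    texture = server_upload(shm.name)
finally:
    shm.close()
    shm.unlink()
```

The point of the sketch is the extra copy: every window update crosses
process boundaries through system memory before it can become a texture.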
> 
> What we want to do is use our OpenGL paint engine (or, even better, an
> OpenVG paint engine - which maps much better to Qt's painter API) in the
> client processes. The client processes render both 2D windows and OpenGL
> calls to off-screen buffers, which the server can use as textures. We'd
> also like video to be handled in a similar way (VAAPI?).
> 
> From what I've read, AIGLX allows compiz to composite OpenGL window
> surfaces because it's the X server which does the rendering. I.e. X clients
> serialize OpenGL commands and send them to the server via GLX. While we
> could do this too (and will probably have to for nasty closed-source OpenGL
> ES drivers), I stumbled upon this:
> 
> http://hoegsberg.blogspot.com/2007/08/redirected-direct-rendering.html
> 
> What I'm hoping to do is bring together all the very fine work done in the
> last few years. What I'm stuck on is how everything is going to hang
> together. This is what I have so far (most of which is probably wrong, so
> please correct):
> 
> Write a QWS driver where the server opens the framebuffer using DRM
> modesetting. The server also initializes the DRM. QWS clients render into
> off-screen buffers (pbuffers or framebuffer objects?) using OpenGL
> (Mesa/Gallium?). The QWS client then magically gets the DRM ID of the
> off-screen buffer (is there a 1:1 relationship between a DRM buffer and a
> framebuffer object's color buffer?). The clients then send that DRM ID to
> the server. The server then somehow magically tells Mesa/Gallium about the
> buffer, which is then (also magically) mapped to a texture name/ID and used
> as a texture to be drawn into the framebuffer.
> 
> Obviously, I still have a lot to learn. :-D
> 
> The first step I'd like to make is to just get something on the screen. I
> was wondering if it's possible to use DRM to just map the framebuffer into
> a user process's address space and use it like we would use the LinuxFB
> device? Or do modern framebuffer drivers use the DRM themselves to do this?
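
[The mapped-framebuffer approach asked about here is just mmap plus stride
arithmetic. Below is a hedged sketch in Python with a temporary file standing
in for /dev/fb0, so it runs anywhere without a real device or root; the
geometry constants are invented - on real hardware they come from the
FBIOGET_VSCREENINFO / FBIOGET_FSCREENINFO ioctls.]

```python
import mmap
import tempfile

# Illustrative framebuffer geometry (real values come from the fb ioctls).
WIDTH, HEIGHT, BPP = 320, 240, 4
STRIDE = WIDTH * BPP  # bytes per scanline

def put_pixel(fb, x, y, rgba):
    # Direct framebuffer write: offset = y * stride + x * bytes-per-pixel.
    off = y * STRIDE + x * BPP
    fb[off:off + BPP] = rgba

# A temporary file stands in for /dev/fb0 so the sketch runs anywhere;
# with the real device you would open("/dev/fb0", ...) and mmap the same way.
with tempfile.TemporaryFile() as f:
    f.truncate(STRIDE * HEIGHT)
    fb = mmap.mmap(f.fileno(), STRIDE * HEIGHT)
    put_pixel(fb, 10, 20, bytes([0x00, 0xFF, 0x00, 0xFF]))
    off = 20 * STRIDE + 10 * BPP
    pixel = bytes(fb[off:off + BPP])
    fb.close()
```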
> 
> 
> Any/all comments, suggestions & insults are welcome. :-)
> 
> 
> Cheers,
> 
> Tom

In the drm tree you can find an example of how to use DRM modesetting (in the
test directory). The DRM modesetting interface is undergoing heavy change -
Dave, Jesse and Jakob are the ones working on it - so it's likely to evolve a
bit; see http://dri.freedesktop.org/wiki/DrmModesetting for an overview of
the current aim.

Once you've got your app in charge of modesetting, you can work on a winsys
Gallium driver. The winsys driver is the part which interfaces with your
windowing system. As you are not using X you will need to write your own
winsys, but this will likely end up being a lot of cut & paste. You also need
something like DRI2, i.e. passing a DRM object ID alone is not enough for a
compositor: DRM buffer objects don't carry information about the size,
format, or data they contain. So you need to pass the BO ID between your
server and your client through something like DRI2, where along with the ID
you send the width, height, texture format, and any other relevant
information needed by the hardware. You can then reuse this ID, along with
that information, in the server with GL/Gallium to do compositing using 3D
(here again this will go through the winsys layer).
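
[To make the "BO ID plus metadata" point concrete: here is a hedged sketch of
such a DRI2-style message going from client to server over a socketpair. The
wire format, field names, and fourcc code are all invented for illustration -
the real DRI2 protocol differs - but the idea is the same: the handle only
means something once width, height, pitch, and pixel format travel with it.]

```python
import socket
import struct

# Hypothetical wire format for "here is my window's buffer":
# BO handle, width, height, pitch, and a fourcc-style pixel format tag.
BUFFER_INFO = struct.Struct("<IIII4s")

def send_buffer_info(sock, handle, w, h, pitch, fourcc):
    # Client side: ship the BO ID together with the metadata the
    # server needs to interpret it.
    sock.sendall(BUFFER_INFO.pack(handle, w, h, pitch, fourcc))

def recv_buffer_info(sock):
    # Server side: recover everything needed to bind the BO as a texture.
    data = sock.recv(BUFFER_INFO.size)
    return BUFFER_INFO.unpack(data)

client, server = socket.socketpair()
send_buffer_info(client, handle=42, w=800, h=480, pitch=800 * 4, fourcc=b"AR24")
info = recv_buffer_info(server)
client.close()
server.close()
```

On the receiving end the server would hand the handle plus this metadata to
GL/Gallium to turn the client's buffer into a texture for compositing.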

So I believe you can reuse most of this stuff - maybe even reuse the DRI2
bits and just change the X-specific parts into something suitable for your
server. One last piece of advice: use Intel hardware, as it has the most
up-to-date support for all of this. There is still no DRI2 winsys (not that
I'm aware of) - in fact the DRI bits in the gallium branch are a bit old -
but this can be worked around.

Cheers,
Jerome Glisse

_______________________________________________
Dri-devel mailing list
Dri-devel@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/dri-devel
