Re: DRM QWS

2008-03-27 Thread Tom Cooksey
On Wednesday 26 March 2008 19:32:22 Kristian Høgsberg wrote:
 On Wed, Mar 26, 2008 at 1:50 PM, Tom Cooksey
 [EMAIL PROTECTED] wrote:
 ...
   I guess what I was thinking about was a single API which can be used on 3D-less
   (or legacy, if you want) hardware and on modern hardware. If the graphics hardware
   is a simple pointer to a main-memory buffer which is scanned out to the display,
   then you're right, you might as well just use user-space shared memory, as we
   currently do. A new API would only be useful for devices with video memory and a
   hardware blitter. There are still new devices coming out with this kind of
   hardware; the Marvell PXA3x0 and Freescale i.MX27, for example, spring to mind.
 
 I agree with you that it probably doesn't make sense to use
 gallium/mesa on everything everywhere.  There are still small devices
 or early boot scenarios (you mention initramfs) where gallium isn't
 appropriate.  However, there is no need to put a 2D engine into
 the kernel.  What the drm ttm gives us is a nice abstraction for
 memory management and command buffer submission, and drm modesetting
 builds on this to let the kernel set a graphics mode.  And that's all
 that we need in the kernel.  Building a small userspace library on top
 of this to accelerate blits and fills should be pretty easy.

I had a think about this last night. I think Zack is probably right about future
graphics hardware. There are always going to be devices with simple graphics,
having a framebuffer in main memory and a few registers for configuration. I think
that in the future, if more advanced graphics are needed, it will take the form of
programmable 3D hardware. Take the set-top-box example I gave: while I stand by the
fact that a low-power 3D core can't render at 1920×1080, a software-only graphics
stack also can't render at this resolution. I'm just thinking about the problems
I've been trying to solve getting Qt to perform well on the Neo1973, with its
480x640 display and 266MHz CPU.

So for simple, linear framebuffer devices we have fbdev. For programmable 3D, we
have gallium/DRM. There's still the issue of early boot for 3D devices, but as
Jesse mentioned, the DRM drivers can include an fbdev interface, as the Intel
driver already does.
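
For reference, the fbdev path really is as simple as it sounds. Below is a minimal
sketch of a user-space process mapping a linear framebuffer through /dev/fb0 and
filling it with a solid colour; the 32 bpp pixel format is an assumption, and real
code should check the format the driver reports instead of hard-coding it.

/* Minimal fbdev sketch: map /dev/fb0 and fill it with a solid colour.
 * Assumes a 32 bpp linear framebuffer; check the reported format in
 * real code instead of hard-coding it. */
#include <fcntl.h>
#include <linux/fb.h>
#include <stdint.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
    int fd = open("/dev/fb0", O_RDWR);
    if (fd < 0) { perror("open /dev/fb0"); return 1; }

    struct fb_var_screeninfo var;
    struct fb_fix_screeninfo fix;
    if (ioctl(fd, FBIOGET_VSCREENINFO, &var) < 0 ||
        ioctl(fd, FBIOGET_FSCREENINFO, &fix) < 0) {
        perror("ioctl"); return 1;
    }

    uint8_t *fb = mmap(NULL, fix.smem_len, PROT_READ | PROT_WRITE,
                       MAP_SHARED, fd, 0);
    if (fb == MAP_FAILED) { perror("mmap"); return 1; }

    /* Fill every visible line with mid-grey (assuming 4 bytes per pixel). */
    for (uint32_t y = 0; y < var.yres; y++) {
        uint32_t *line = (uint32_t *)(fb + y * fix.line_length);
        for (uint32_t x = 0; x < var.xres; x++)
            line[x] = 0xFF808080;
    }

    munmap(fb, fix.smem_len);
    close(fd);
    return 0;
}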

Ok, I'm satisfied. Thanks to all. :-)


Cheers,

Tom



Re: DRM QWS

2008-03-26 Thread Tom Cooksey
On Wednesday 19 March 2008 16:26:37 Zack Rusin wrote:
 On Wednesday 19 March 2008 05:21:43 am Tom Cooksey wrote:
2) Sort of related to the above... it would be very cool to have a very
simple drawing API to use on top of the modesetting API. A simple blit
& solid fill would suffice. I've always found it odd that the internal
kernel API for framebuffer devices includes blit and solid fill, but
that API is never exposed to user-space applications - even though it
would be _very_ useful.
  
   I don't think this will be done. The main reason is that newer hw is hard to
   program: there is no 2d anymore, so you have to program the whole 3d pipeline,
   and we don't want such code in the kernel.

   So the idea here is to use one userspace driver like gallium3d. Basically
   you do a winsys for your use case and you can also do a new frontend for
   gallium other than a GL frontend, or wait for a new frontend to appear :)
 
  Hmm... If you have to use gallium to talk to the hardware, shouldn't fbcon
  be renamed to glcon? :-) Also, while 2D is disappearing on desktop, it's
  very much alive on embedded, for the moment at least. 
 
 That depends on your definition of embedded.
 I think what you're referring to are dummy framebuffers or GPUs that were
 made with some absolutely silly requirements, like a "no known bugs" policy,
 which implies that all they have is an underperforming 2D engine. In both of
 those cases you've already lost. So if you're trying to accelerate or design a
 framework based on those, then honestly you can just give up and go with an
 all-software framework. If you're referring to actual embedded GPUs, the
 current generation is actually already fully programmable, and if you're
 designing with those in mind then what Jerome said holds.

I was initially thinking about low-end graphics hardware, which is mainly just dummy
framebuffers, as you say. However, I've thought some more about this and there's
still set-top-box type hardware here, which needs to decode full-resolution HD video
(1920×1080 or even 3840×2160). Typically this is off-loaded onto a dedicated DSP.
E.g. TI's DaVinci platform manages to do full-HD H.264 decoding in a ~2W power
envelope. I believe the video is composited with a normal framebuffer (for the UI)
in hardware. I don't think there's any programmable 3D hardware available which can
do 1920×1080 resolutions in a 2W power envelope. So even if they replace the linear
framebuffer with a programmable 3D core, that core still needs to render at
[EMAIL PROTECTED] fps without impacting the 2W power draw too much. I guess it will
probably be possible in 5 years or so, but it's not possible now.
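
To put a rough number on it: a single 1920×1080 surface at 32 bpp is about 8 MB, so
just touching every pixel once per frame at 30 fps already costs on the order of
250 MB/s of memory bandwidth, before any blending or extra compositing passes. That
is the scale of the problem whichever core, 2D or 3D, ends up doing the work.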


  I can't see fbdev going anytime soon if the only replacement is a full-blown
  programmable 3D driver architecture. Perhaps a new, simple API could be created.
  On desktop it would be implemented as a new front-end API for gallium and on
  embedded it would be implemented using a thin user-space wrapper to the kernel
  module?

 I don't think that makes a lot of sense. Gallium3D is an interface to
 hardware - it models the way modern graphics hardware works. Front-ends in
 the Gallium3D sense are the state trackers that are used by the API that
 you're trying to accelerate. So if your hardware is an actual GPU that you
 can write a Gallium3D driver for, then the front-end for it would be just another
 API (you could be just using GL at this point).

I guess what I was thinking about was a single API which can be used on 3D-less
(or legacy, if you want) hardware and on modern hardware. If the graphics hardware
is a simple pointer to a main-memory buffer which is scanned out to the display,
then you're right, you might as well just use user-space shared memory, as we
currently do. A new API would only be useful for devices with video memory and a
hardware blitter. There are still new devices coming out with this kind of
hardware; the Marvell PXA3x0 and Freescale i.MX27, for example, spring to mind.

I'm still a bit confused about what's meant to be displayed during the boot process,
before the root fs is mounted. Will the gallium libraries & drivers need to be in
the initramfs? If not, what shows the splash screen & provides single-user access
if anything goes wrong in the boot process?


  A bit like what DirectFB started life as (before it started trying 
  to be X).
 
 Well, that's what you end up with when you start adding things that you need
 across devices. I know that in the beginning, when you look at the stack, you
 tend to think "this could be a lot smaller!", but then with time you realize
 that you actually need all of those things, but instead of optimizing the
 parts that were there you went with some custom solution and are now stuck with
 it.

I was referring here to DirectFB's window management, input device abstraction,
audio interface abstraction & video streaming APIs. Personally, I believe there is
a requirement for a simple,

Re: DRM QWS

2008-03-26 Thread Kristian Høgsberg
On Wed, Mar 26, 2008 at 1:50 PM, Tom Cooksey
[EMAIL PROTECTED] wrote:
...
  I guess what I was thinking about was a single API which can be used on 3D-less
  (or legacy, if you want) hardware and on modern hardware. If the graphics hardware
  is a simple pointer to a main-memory buffer which is scanned out to the display,
  then you're right, you might as well just use user-space shared memory, as we
  currently do. A new API would only be useful for devices with video memory and a
  hardware blitter. There are still new devices coming out with this kind of
  hardware; the Marvell PXA3x0 and Freescale i.MX27, for example, spring to mind.

I agree with you that it probably doesn't make sense to use
gallium/mesa on everything everywhere.  There are still small devices
or early boot scenarios (you mention initramfs) where gallium isn't
appropriate.  However, there is no need to put a 2D engine into
the kernel.  What the drm ttm gives us is a nice abstraction for
memory management and command buffer submission, and drm modesetting
builds on this to let the kernel set a graphics mode.  And that's all
that we need in the kernel.  Building a small userspace library on top
of this to accelerate blits and fills should be pretty easy.
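
To make the "small userspace library" idea a bit more concrete, here is a rough
sketch of what one entry point could look like. The struct and function names are
purely illustrative (they are not an existing API), and only the portable software
fallback is shown: it fills a rectangle in a CPU-mapped, linear 32 bpp buffer. On
hardware with a blitter the same entry point would instead build a device-specific
command buffer and submit it through the DRM.

/* Hypothetical sketch of one entry point of a small userspace 2D library
 * built on top of DRM-managed buffers.  Names are illustrative only.
 * This is the pure-software fallback; an accelerated backend would emit
 * hardware blit/fill commands instead. */
#include <stdint.h>
#include <stddef.h>

struct surf {
    void    *pixels;   /* CPU mapping of the buffer object           */
    uint32_t width;    /* in pixels                                   */
    uint32_t height;
    uint32_t pitch;    /* in bytes, as reported by the kernel/driver */
};

/* Fill a rectangle with a 32-bit XRGB colour, clipped to the surface. */
static void surf_fill_rect(struct surf *s, int x, int y,
                           int w, int h, uint32_t argb)
{
    if (x < 0) { w += x; x = 0; }
    if (y < 0) { h += y; y = 0; }
    if (x + w > (int)s->width)  w = s->width  - x;
    if (y + h > (int)s->height) h = s->height - y;

    for (int row = 0; row < h; row++) {
        uint32_t *dst = (uint32_t *)((uint8_t *)s->pixels +
                                     (size_t)(y + row) * s->pitch) + x;
        for (int col = 0; col < w; col++)
            dst[col] = argb;
    }
}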

cheers,
Kristian



Re: DRM QWS

2008-03-19 Thread Tom Cooksey

  I've had some time to play with the modesetting branch. I am using a laptop with
  an Intel 965GM, is this likely to work? At the moment, when I run tests/modedemo
  I get a hard lock. :-/
 
 Well, there are fixes pushed almost every day, so make sure to use the latest
 git :)

Yep, I'm pulling every day at the moment. If you think the 965GM is good to
develop on, I'll have a go at debugging what's going wrong.

  I have a few comments/questions from what I've looked at so far:
  
  1) The current libdrm looks to be a very thin wrapper around the ioctls. If this
  is the case and all the code is kernel-side, what are the thoughts on implementing
  a linuxfb driver on top of this? It would be pretty cool to get fbcon rendering
  using DRM?

 Implementing fbcon in userspace has been on the table as a fun thing that
 we might do.

  2) Sort of related to the above... it would be very cool to have a very simple
  drawing API to use on top of the modesetting API. A simple blit & solid fill would
  suffice. I've always found it odd that the internal kernel API for framebuffer
  devices includes blit and solid fill, but that API is never exposed to user-space
  applications - even though it would be _very_ useful.

 I don't think this will be done. The main reason is that newer hw is hard to
 program: there is no 2d anymore, so you have to program the whole 3d pipeline,
 and we don't want such code in the kernel.

 So the idea here is to use one userspace driver like gallium3d. Basically you do
 a winsys for your use case, and you can also do a new frontend for gallium other
 than a GL frontend, or wait for a new frontend to appear :)

Hmm... If you have to use gallium to talk to the hardware, shouldn't fbcon be
renamed to glcon? :-) Also, while 2D is disappearing on desktop, it's very much
alive on embedded, for the moment at least. I can't see fbdev going anytime soon
if the only replacement is a full-blown programmable 3D driver architecture.
Perhaps a new, simple API could be created. On desktop it would be implemented as
a new front-end API for gallium and on embedded it would be implemented using a
thin user-space wrapper to the kernel module? A bit like what DirectFB started
life as (before it started trying to be X).


  7) The modedemo/demo.c seems to be doing stuff with /dev/fb0. From what I can
  tell, this is just getting the current mode at startup & restoring it before exit.
  Can I assume this is to stop garbled output after the program exits and can be
  safely #defined out (as I'm using VGA console)?

 A lot of the interface is not mature yet, so there are a few hacks to work around
 things. But in your case you shouldn't define this out, as drm modesetting should
 have taken over your vga console (at least this is what I do on radeon), so you
 should be using fbcon.

Alas, I believe the intelfb driver doesn't support the 965GM, at least not in
2.6.24. I think I'll work by logging in over ssh; it doesn't matter then if the
screen gets garbled after exiting. Or do I need to run from a vt?


Cheers,

Tom




Re: DRM QWS

2008-03-19 Thread Zack Rusin
On Wednesday 19 March 2008 05:21:43 am Tom Cooksey wrote:
   2) Sort of related to the above... it would be very cool to have a very
   simple drawing API to use on top of the modesetting API. A simple blit &
   solid fill would suffice. I've always found it odd that the internal
   kernel API for framebuffer devices includes blit and solid fill, but
   that API is never exposed to user-space applications - even though it
   would be _very_ useful.

  I don't think this will be done. The main reason is that newer hw is hard to
  program: there is no 2d anymore, so you have to program the whole 3d pipeline,
  and we don't want such code in the kernel.

  So the idea here is to use one userspace driver like gallium3d. Basically
  you do a winsys for your use case and you can also do a new frontend for
  gallium other than a GL frontend, or wait for a new frontend to appear :)

 Hmm... If you have to use gallium to talk to the hardware, shouldn't fbcon
 be renamed to glcon? :-) Also, while 2D is disappearing on desktop, it's
 very much alive on embedded, for the moment at least. 

That depends on your definition of embedded.
I think what you're referring to are dummy framebuffers or GPUs that were
made with some absolutely silly requirements, like a "no known bugs" policy,
which implies that all they have is an underperforming 2D engine. In both of
those cases you've already lost. So if you're trying to accelerate or design a
framework based on those, then honestly you can just give up and go with an
all-software framework. If you're referring to actual embedded GPUs, the
current generation is actually already fully programmable, and if you're
designing with those in mind then what Jerome said holds.

 I can't see fbdev going anytime soon if the only replacement is a full-blown
 programmable 3D driver architecture. Perhaps a new, simple API could be created.
 On desktop it would be implemented as a new front-end API for gallium and on
 embedded it would be implemented using a thin user-space wrapper to the kernel
 module?

I don't think that makes a lot of sense. Gallium3D is an interface to
hardware - it models the way modern graphics hardware works. Front-ends in
the Gallium3D sense are the state trackers that are used by the API that
you're trying to accelerate. So if your hardware is an actual GPU that you
can write a Gallium3D driver for, then the front-end for it would be just another
API (you could be just using GL at this point).

 A bit like what DirectFB started life as (before it started trying 
 to be X).

Well, that's what you end up with when you start adding things that you need
across devices. I know that in the beginning, when you look at the stack, you
tend to think "this could be a lot smaller!", but then with time you realize
that you actually need all of those things, but instead of optimizing the
parts that were there you went with some custom solution and are now stuck with
it.

All in all, I don't think what you're thinking about doing is going to work.
You won't be able to accelerate Qt's vector graphics framework with the devices
that you're thinking about writing all this for, so I don't think it's time
well spent. Sure, drawLine or drawRectangle will be a lot faster, but the
number of UIs that can be written with a combination of those is so ugly and
unattractive that it's a waste of time. Not even mentioning that it's pointless
for companies that actually care about graphics, because they're going to have
recent embedded GPUs on their devices. So if you actually care about
accelerating graphics then you need to think about the current generation of
embedded chips, and those are programmable.

z



Re: DRM QWS

2008-03-18 Thread Tom Cooksey
On Friday 07 March 2008 18:35:10 Jesse Barnes wrote:
 On Friday, March 07, 2008 1:21 am Tom Cooksey wrote:
  I'm a developer working on getting OpenGL ES working with QWS - the window
  system built into Qt/Embedded. That is, Trolltech's own windowing system,
  completely independent of X. The typical hardware we're working with is
  PowerVR MBX, an OpenGL ES 1.1 compliant device. We have also played with
  ATI mobile chipsets. One thing all these devices have in common is rubbish
  (IMO), closed source drivers. The only API we have for them is EGL, the
  only on-screen surface is the entire display.
 
  While we are continuing development with these devices, I'm very keen to
  develop a proof-of-concept driver using an open source desktop OpenGL
  implementation. I want to show people what can be done with decent (& open)
  drivers.
 
 Great, that's one of the goals we had in mind when changing the DRM recently. 
  
 There's actually some standalone OpenGL code in the Mesa tree that can be 
 used as a starting point (EGL & miniglx, two separate ways of doing that).
 
  The first step I'd like to make is to just get something on the screen. I
  was wondering if it's possible to use DRM to just map the framebuffer into
  a user process's address space and use it like we would use the LinuxFB
  device? Or do modern frame buffer drivers use the DRM themselves to do
  this?
 
 Yeah, that should be doable with the current code on Intel & ATI devices.
 You'll have to allocate a new buffer object for your front buffer, then use 
 it to set a new mode.
 
 We'd really like to hear any feedback you have about the interfaces and 
 design; given that what you're doing is something we'd really like to 
 support, we want to make sure we get it right before it gets pushed upstream 
 into Linux and set in stone.


I've had some time to play with the modesetting branch. I am using a laptop with
an Intel 965GM, is this likely to work? At the moment, when I run tests/modedemo
I get a hard lock. :-/


I have a few comments/questions from what I've looked at so far:

1) The current libdrm looks to be a very thin wrapper around the ioctls. If this is
the case and all the code is kernel-side, what are the thoughts on implementing a
linuxfb driver on top of this? It would be pretty cool to get fbcon rendering using
DRM?

2) Sort of related to the above... it would be very cool to have a very simple
drawing API to use on top of the modesetting API. A simple blit & solid fill would
suffice. I've always found it odd that the internal kernel API for framebuffer
devices includes blit and solid fill, but that API is never exposed to user-space
applications - even though it would be _very_ useful.

3) The drmBOCreate() looks fun. Could we use this to store pixmaps? Again, having
an API to blit a pixmap created with drmBOCreate() to the framebuffer would be very
nice. Even nicer if Porter-Duff composition modes were supported, although a simple
blit would be a massive leap forward.

4) The API doesn't seem to provide any mechanism for syncing framebuffer updates
to VBLANK. Does this mean the sync is done automatically, i.e. after unmapping the
framebuffer, the contents on the screen aren't actually updated until the next
vblank?

5) Can we implement double-buffering by creating 2 BOs and switching between
them using drmModeAddFB()?

6) What is the plan for this modesetting work? Is it intended to replace fbdev
or supplement it? From what I've seen, there's nothing stopping you creating a DRM
driver for very basic framebuffer-only type hardware?

7) The modedemo/demo.c seems to be doing stuff with /dev/fb0. From what I can
tell, this is just getting the current mode at startup & restoring it before exit.
Can I assume this is to stop garbled output after the program exits and can be
safely #defined out (as I'm using VGA console)?




Cheers,

Tom




Re: DRM QWS

2008-03-18 Thread Jerome Glisse
On Tue, 18 Mar 2008 17:54:47 +0100
Tom Cooksey [EMAIL PROTECTED] wrote:
 
 
 I've had some time to play with the modesetting branch. I am using a laptop with
 an Intel 965GM, is this likely to work? At the moment, when I run tests/modedemo
 I get a hard lock. :-/

Well, there are fixes pushed almost every day, so make sure to use the latest git :)
 
 I have a few comments/questions from what I've looked at so far:
 
 1) The current libdrm looks to be a very thin wrapper around the ioctls. If this
 is the case and all the code is kernel-side, what are the thoughts on implementing
 a linuxfb driver on top of this? It would be pretty cool to get fbcon rendering
 using DRM?

Implementing fbcon in userspace has been on the table as a fun thing that we
might do.

 2) Sort of related to the above... it would be very cool to have a very simple
 drawing API to use on top of the modesetting API. A simple blit & solid fill would
 suffice. I've always found it odd that the internal kernel API for framebuffer
 devices includes blit and solid fill, but that API is never exposed to user-space
 applications - even though it would be _very_ useful.

I don't think this will be done. The main reason is that newer hw is hard to
program: there is no 2d anymore, so you have to program the whole 3d pipeline,
and we don't want such code in the kernel.

So the idea here is to use one userspace driver like gallium3d. Basically you do
a winsys for your use case, and you can also do a new frontend for gallium other
than a GL frontend, or wait for a new frontend to appear :)

 3) The drmBOCreate() looks fun. Could we use this to store pixmaps? Again, having
 an API to blit a pixmap created with drmBOCreate() to the framebuffer would be
 very nice. Even nicer if Porter-Duff composition modes were supported, although a
 simple blit would be a massive leap forward.

The idea is that every piece of information (pixmap, vertex buffer, frame buffer,
...) the GPU might deal with should be allocated in a BO.

 4) The API doesn't seem to provide any mechanism for syncing framebuffer updates
 to VBLANK. Does this mean the sync is done automatically, i.e. after unmapping
 the framebuffer, the contents on the screen aren't actually updated until the
 next vblank?

Kristian is still evaluating which way and how we're going to enable syncing with
vblank. This will likely happen through a tasklet in the kernel responsible for
firing the framebuffer update at vblank.
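
In the meantime, the DRM already exposes a vblank wait ioctl that an application can
use to throttle its own updates. A small sketch using the existing drmWaitVBlank()
wrapper from libdrm, assuming the DRM fd is already open and error handling is kept
minimal:

/* Sketch: block until the next vblank before updating the framebuffer. */
#include <xf86drm.h>
#include <stdio.h>

static int wait_for_vblank(int drm_fd)
{
    drmVBlank vbl;

    vbl.request.type = DRM_VBLANK_RELATIVE;  /* relative to the current count */
    vbl.request.sequence = 1;                /* wait for one vblank           */
    vbl.request.signal = 0;

    if (drmWaitVBlank(drm_fd, &vbl) != 0) {
        perror("drmWaitVBlank");
        return -1;
    }
    return 0;  /* safe to update/flip the framebuffer now */
}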

 5) Can we implement double-buffering by creating 2 BOs and switching between
 them using drmModeAddFB()?

I think the interface is there to do double-buffering, but maybe not; I'm too lazy
to check the code. Anyway, we will have something at some point in the future, if
not already.
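
For what it's worth, the shape of double-buffering on top of the modesetting API
would be roughly as follows. This is only a sketch using the drmModeAddFB() and
drmModeSetCrtc() calls as they later stabilized in libdrm (the signatures in the
modesetting branch at the time were still in flux), and it assumes the two buffer
objects, the pitch, and the CRTC/connector/mode have already been set up elsewhere.

/* Sketch: flip between two pre-allocated buffer objects by registering
 * both as framebuffers and pointing the CRTC at them in turn.
 * bo_handle[0..1], pitch, crtc_id, connector_id and mode come from setup
 * code not shown here. */
#include <stdint.h>
#include <xf86drm.h>
#include <xf86drmMode.h>

struct flip_state {
    uint32_t fb_id[2];   /* framebuffer IDs returned by drmModeAddFB()      */
    int      front;      /* index of the buffer currently being scanned out */
};

int setup_buffers(int fd, struct flip_state *fs, uint32_t width,
                  uint32_t height, uint32_t pitch, uint32_t bo_handle[2])
{
    for (int i = 0; i < 2; i++) {
        /* 24-bit depth in a 32 bpp buffer is assumed here. */
        if (drmModeAddFB(fd, width, height, 24, 32, pitch,
                         bo_handle[i], &fs->fb_id[i]) != 0)
            return -1;
    }
    fs->front = 0;
    return 0;
}

/* Make the back buffer visible by re-pointing the CRTC at it. */
int swap_buffers(int fd, struct flip_state *fs, uint32_t crtc_id,
                 uint32_t connector_id, drmModeModeInfo *mode)
{
    int back = 1 - fs->front;
    if (drmModeSetCrtc(fd, crtc_id, fs->fb_id[back], 0, 0,
                       &connector_id, 1, mode) != 0)
        return -1;
    fs->front = back;
    return 0;
}

A real implementation would also wait for vblank (e.g. with drmWaitVBlank() as in
the earlier sketch) before swapping, to avoid tearing.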

 6) What is the plan for this modesetting work? Is it intended to replace fbdev
 or supplement it? From what I've seen, there's nothing stopping you creating a
 DRM driver for very basic framebuffer-only type hardware?

In my opinion it should replace fbdev, and I think fbdev is deprecated; at least,
I know the people working on it now only do maintenance things.

 7) The modedemo/demo.c seems to be doing stuff with /dev/fb0. From what I can
 tell, this is just getting the current mode at startup & restoring it before exit.
 Can I assume this is to stop garbled output after the program exits and can be
 safely #defined out (as I'm using VGA console)?
 

A lot of the interface is not mature yet, so there are a few hacks to work around
things. But in your case you shouldn't define this out, as drm modesetting should
have taken over your vga console (at least this is what I do on radeon), so you
should be using fbcon.

Cheers,
Jerome Glisse [EMAIL PROTECTED]



DRM QWS

2008-03-07 Thread Tom Cooksey
Hi,

I'm a developer working on getting OpenGL ES working with QWS - the window system
built into Qt/Embedded. That is, Trolltech's own windowing system, completely
independent of X. The typical hardware we're working with is PowerVR MBX, an
OpenGL ES 1.1 compliant device. We have also played with ATI mobile chipsets. One
thing all these devices have in common is rubbish (IMO), closed source drivers. The
only API we have for them is EGL, and the only on-screen surface is the entire
display.

While we are continuing development with these devices, I'm very keen to develop a
proof-of-concept driver using an open source desktop OpenGL implementation. I want
to show people what can be done with decent (& open) drivers.

I'm pretty new to X, DRI & associated code bases, but have spent the last few
months reading documentation & code, trying to understand how everything works
together. I think I've now got to a stage where I've read everything I could find
and need some help.

The effect I'm looking for is iPhone/Compiz style window composition. We have 
this
already, but the problem is that the drivers are designed for a single process
accessing the hardware at a time. This is fine if there's only a single process 
(in QWS,
the window system is a shared library which is loaded by the first application 
to be
launched). All the windows can be drawn into off-screen pbuffers and then used 
as
textures to be rendered onto the screen. The problem comes when there are 
multiple
processes. Our current solution is to get the client processes to use our 
raster paint
engine to render into shared memory, which the server then uploads as a texture.
As you can imagine, this is *SLOW*. It also doesn't allow client processes to 
use
OpenGL themselves - something we really want to have.

What we want to do is use our OpenGL paint engine (or, even better, an OpenVG
paint engine - which maps much better to Qt's painter API) in the client 
processes.
The client processes render both 2D windows and OpenGL calls to off-screen
buffers, which the server can use as textures. We'd also like video to be 
handled in
a similar way (VAAPI?).

From what I've read, AIGLX allows compiz to composite OpenGL window surfaces
because it's the X server which does the rendering. I.e. X clients serialize 
OpenGL
commands and send them to the server via GLX. While we could do this too, (and
will probably have to do this for nasty closed source OpenGL ES drivers), I
stumbled upon this:

http://hoegsberg.blogspot.com/2007/08/redirected-direct-rendering.html

What I'm hoping to do is bring together all the very fine work done in the last 
few
years. What I'm stuck on is how everything is going to hang together. This is 
what
I have so far (most of which is probably wrong, so please correct):

Write a QWS driver where the server opens the framebuffer using DRM Modesetting.
The server also initializes the DRM. QWS clients render into off-screen buffers 
(pbuffers or Framebuffer objects?) using OpenGL (Mesa/Gallium?). The QWS client
then magically gets the DRM ID of the off-screen buffer (Is there a 1:1 
relationship
between a DRM buffer and a framebuffer object's color buffer?). The clients 
then 
send that DRM ID to the server. The server then somehow magically tells 
mesa/gallium about the buffer which is then (also magically) mapped to a texture
name/ID and used as a texture to be drawn into the framebuffer.

Obviously, I still have a lot to learn. :-D

The first step I'd like to make is to just get something on the screen. I was
wondering if it's possible to use DRM to just map the framebuffer into a user
process's address space and use it like we would use the LinuxFB device? Or
do modern frame buffer drivers use the DRM themselves to do this?


Any/all comments, suggestions & insults are welcome. :-)


Cheers,

Tom




Re: DRM QWS

2008-03-07 Thread Jerome Glisse
On Fri, 7 Mar 2008 10:21:28 +0100
Tom Cooksey [EMAIL PROTECTED] wrote:

 Hi,
 
 I'm a developer working on getting OpenGL ES working with QWS - the window 
 system
 built into Qt/Embedded. That is, Trolltech's own windowing system, completely
 independent of X. The typical hardware we're working with is PowerVR MBX, an
 OpenGL ES 1.1 compliant device. We have also played with ATI mobile chipsets. 
 One
 thing all these devices have in common is rubbish (IMO), closed source 
 drivers. The only
 API we have for them is EGL, the only on-screen surface is the entire display.
 
 While we are continuing development with these devices, I'm very keen to 
 develop a
 proof-of-concept driver using an open source desktop OpenGL implementation. I 
 want
 to show people what can be done with decent (& open) drivers.
 
 I'm pretty new to X, DRI & associated code bases but have spent the last few 
 months
 reading documentation & code, trying to understand how everything works 
 together.
 I think I've now got to a stage where I've read everything I could find and 
 need some
 help.
 
 The effect I'm looking for is iPhone/Compiz style window composition. We have 
 this
 already, but the problem is that the drivers are designed for a single process
 accessing the hardware at a time. This is fine if there's only a single 
 process (in QWS,
 the window system is a shared library which is loaded by the first 
 application to be
 launched). All the windows can be drawn into off-screen pbuffers and then 
 used as
 textures to be rendered onto the screen. The problem comes when there are 
 multiple
 processes. Our current solution is to get the client processes to use our 
 raster paint
 engine to render into shared memory, which the server then uploads as a 
 texture.
 As you can imagine, this is *SLOW*. It also doesn't allow client processes to 
 use
 OpenGL themselves - something we really want to have.
 
 What we want to do is use our OpenGL paint engine (or, even better, an OpenVG
 paint engine - which maps much better to Qt's painter API) in the client 
 processes.
 The client processes render both 2D windows and OpenGL calls to off-screen
 buffers, which the server can use as textures. We'd also like video to be 
 handled in
 a similar way (VAAPI?).
 
 From what I've read, AIGLX allows compiz to composite OpenGL window surfaces
 because it's the X server which does the rendering. I.e. X clients serialize 
 OpenGL
 commands and send them to the server via GLX. While we could do this too, (and
 will probably have to do this for nasty closed source OpenGL ES drivers), I
 stumbled upon this:
 
 http://hoegsberg.blogspot.com/2007/08/redirected-direct-rendering.html
 
 What I'm hoping to do is bring together all the very fine work done in the 
 last few
 years. What I'm stuck on is how everything is going to hang together. This is 
 what
 I have so far (most of which is probably wrong, so please correct):
 
 Write a QWS driver where the server opens the framebuffer using DRM 
 Modesetting.
 The server also initializes the DRM. QWS clients render into off-screen 
 buffers 
 (pbuffers or Framebuffer objects?) using OpenGL (Mesa/Gallium?). The QWS 
 client
 then magically gets the DRM ID of the off-screen buffer (Is there a 1:1 
 relationship
 between a DRM buffer and a framebuffer object's color buffer?). The clients 
 then 
 send that DRM ID to the server. The server then somehow magically tells 
 mesa/gallium about the buffer which is then (also magically) mapped to a 
 texture
 name/ID and used as a texture to be drawn into the framebuffer.
 
 Obviously, I still have a lot to learn. :-D
 
 The first step I'd like to make is to just get something on the screen. I was
 wondering if it's possible to use DRM to just map the framebuffer into a user
 process's address space and use it like we would use the LinuxFB device? Or
 do modern frame buffer drivers use the DRM themselves to do this?
 
 
 Any/all comments, suggestions & insults are welcome. :-)
 
 
 Cheers,
 
 Tom

In the drm tree you can find examples of how to use drm modesetting (test
directory). The drm modesetting interface is undergoing heavy change (Dave, Jesse
and Jakob are the ones working on that), so it's likely going to evolve a bit; see
http://dri.freedesktop.org/wiki/DrmModesetting for an overview of what the
current aim is.

Once you've got your app in charge of modesetting, you can work on a gallium winsys
driver. The winsys driver is the part which interfaces with your windowing system.
As you are not using X you need to do your own winsys, but this will likely end up
being a lot of cut & paste. What you also need is something like DRI2, i.e. passing
a drm object ID is not enough for a compositor. DRM buffer objects don't have
information on the size or format of the data they contain. So you need to pass BO
IDs between your server and your client through something like DRI2, where along
with the ID you send the width, height, texture format, or any other relevant
information needed by the hw. You
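
To illustrate the kind of per-buffer metadata being described here, a purely
hypothetical sketch of a message a QWS client could send alongside the BO ID (for
example over the existing QWS control socket); the field set, names and format enum
are illustrative only, not an existing protocol:

/* Hypothetical wire message a client could send to the window server to
 * share a rendered off-screen buffer.  The BO handle alone is not enough;
 * the server also needs layout information to bind it as a texture.
 * Field names and the enum are illustrative, not an existing protocol. */
#include <stdint.h>

enum shared_buffer_format {
    SHARED_FMT_XRGB8888,
    SHARED_FMT_ARGB8888,
    SHARED_FMT_RGB565,
};

struct shared_buffer_msg {
    uint32_t bo_handle;   /* DRM buffer object ID, valid for the DRM device */
    uint32_t width;       /* in pixels */
    uint32_t height;      /* in pixels */
    uint32_t pitch;       /* bytes per scanline */
    uint32_t format;      /* one of shared_buffer_format */
    uint32_t window_id;   /* which client window this buffer backs */
};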

Re: DRM QWS

2008-03-07 Thread Jesse Barnes
On Friday, March 07, 2008 1:21 am Tom Cooksey wrote:
 I'm a developer working on getting OpenGL ES working with QWS - the window
 system built into Qt/Embedded. That is, Trolltech's own windowing system,
 completely independent of X. The typical hardware we're working with is
  PowerVR MBX, an OpenGL ES 1.1 compliant device. We have also played with
 ATI mobile chipsets. One thing all these devices have in common is rubbish
 (IMO), closed source drivers. The only API we have for them is EGL, the
 only on-screen surface is the entire display.

 While we are continuing development with these devices, I'm very keen to
 develop a proof-of-concept driver using an open source desktop OpenGL
  implementation. I want to show people what can be done with decent (& open)
 drivers.

Great, that's one of the goals we had in mind when changing the DRM recently.  
There's actually some standalone OpenGL code in the Mesa tree that can be 
used as a starting point (EGL & miniglx, two separate ways of doing that).

 The first step I'd like to make is to just get something on the screen. I
 was wondering if it's possible to use DRM to just map the framebuffer into
 a user process's address space and use it like we would use the LinuxFB
 device? Or do modern frame buffer drivers use the DRM themselves to do
 this?

Yeah, that should be doable with the current code on Intel & ATI devices.
You'll have to allocate a new buffer object for your front buffer, then use 
it to set a new mode.
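
Roughly, that sequence looks like the sketch below. It uses the "dumb buffer"
creation ioctl and the drmMode*() calls as they later stabilized, standing in for
the TTM buffer-object allocation the modesetting branch used at the time, and it
assumes the CRTC, connector and mode have already been found via
drmModeGetResources()/drmModeGetConnector().

/* Sketch: allocate a scanout buffer and set a mode on it.
 * DRM_IOCTL_MODE_CREATE_DUMB and the drmMode*() calls are the later,
 * stabilized interfaces; crtc_id, connector_id and mode are assumed to
 * come from resource/connector enumeration elsewhere. */
#include <stdint.h>
#include <string.h>
#include <xf86drm.h>
#include <xf86drmMode.h>

int create_front_buffer_and_set_mode(int fd, uint32_t crtc_id,
                                     uint32_t connector_id,
                                     drmModeModeInfo *mode)
{
    struct drm_mode_create_dumb create;
    uint32_t fb_id;

    memset(&create, 0, sizeof(create));
    create.width  = mode->hdisplay;
    create.height = mode->vdisplay;
    create.bpp    = 32;
    if (drmIoctl(fd, DRM_IOCTL_MODE_CREATE_DUMB, &create) < 0)
        return -1;

    /* Wrap the buffer object in a framebuffer the CRTC can scan out. */
    if (drmModeAddFB(fd, create.width, create.height, 24, 32,
                     create.pitch, create.handle, &fb_id) != 0)
        return -1;

    /* Point the CRTC at the new framebuffer, lighting up the display. */
    return drmModeSetCrtc(fd, crtc_id, fb_id, 0, 0,
                          &connector_id, 1, mode);
}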

We'd really like to hear any feedback you have about the interfaces and 
design; given that what you're doing is something we'd really like to 
support, we want to make sure we get it right before it gets pushed upstream 
into Linux and set in stone.

Thanks,
Jesse
