Re: Partial updates with glX/DRI

2009-08-24 Thread Tom Cooksey
On Friday 21 August 2009 16:47:23 ext Michel Dänzer wrote:
 On Fri, 2009-08-21 at 11:45 +0200, Tom Cooksey wrote:
  When using glX, we have no guarantee over what state the back buffer will
  be in after swap buffers. So, whenever an application needs to update, it
must re-render the entire window. This makes things slow (we have to
  invoke every child widget's paint event). To overcome this, we try to use
  a 3rd buffer as a back buffer, usually a multi-sampled FBO or PBuffer. We
  direct rendering to the FBO/PBuffer, bind it as a texture (after blitting
  it to a non multi-sampled FBO if needed), draw the whole buffer to the
  window's back buffer then call swap buffers. Eughhh! But at least the
  PBuffer/FBO contents aren't destroyed. What would be really nice is to be
  able to have an EGL_SWAP_BEHAVIOR == EGL_BUFFER_PRESERVED equivalent on
  glX.

 There's the GLX_OML_swap_method extension, but I'm not sure how well
 it's supported by our drivers at this point. Any issues there might not
 be hard to fix up though.

I've taken a look at that extension, but all it tells you is whether the driver
will flip or blit, which is a different question from whether the back buffer
will be preserved after a swap. Some hardware can provide buffer-preserved
behavior for flips, while other hardware seems to nuke the back buffer during
the blit. :-(


  I think I can work around this by making a glx context current on a
GLXPixmap (from an XPixmap). Pixmaps are single buffered and so don't get
  destroyed on swap buffers (in fact I don't even call swap buffers). I can
  then post updates using XCopyArea which will also cause the Xserver to
  generate an appropriate XDamage region for the compositor. The only down
  side is that I have to glFinish before calling XCopyArea.

 Actually you should only need glXWaitGL(), assuming the implementation
 of that meets the requirements of the GLX spec.

I'll give that a go, thanks for the tip!


  While I have this working, it seems a little hacky and isn't widely
  supported (I have it working on 1 driver build for 1 bit of hardware).

 If you mean rendering to pixmaps isn't widely supported, that should be
 getting better with DRI2.

I hope so. Although even with DRI2, I think my Intel drivers have broken 
pixmap rendering at the moment (although it could be Qt which is doing 
something strange).


Cheers,

Tom


--
Let Crystal Reports handle the reporting - Free Crystal Reports 2008 30-Day 
trial. Simplify your report design, integration and deployment - and focus on 
what you do best, core application coding. Discover what's new with 
Crystal Reports now.  http://p.sf.net/sfu/bobj-july
--
___
Dri-devel mailing list
Dri-devel@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/dri-devel


Partial updates with glX/DRI

2009-08-21 Thread Tom Cooksey
Hello,

I'm a Qt developer.

We want all Qt rendering to be done using OpenGL 2. We have this working 
pretty well (a few artifacts still here and there). However, we've found some 
fundamental problems using GL for regular widget rendering. Normally I 
wouldn't bother this list, but I've recently seen that someone's writing a new 
GL back end to Cairo, so I guess those guys are going to hit the exact same 
issues.

The problem is partial updates. From what we've seen it's typical for 
applications to only update a small region of their top-level-widget at a time 
(E.g. a flashing cursor in a text edit). As Qt only creates a single X window 
per top-level widget, this means a sub-region of a single X window.

When using glX, we have no guarantee over what state the back buffer will be in 
after swap buffers. So, whenever an application needs to update, it must
re-render the entire window. This makes things slow (we have to invoke every
child widget's paint event). To overcome this, we try to use a 3rd buffer as a 
back buffer, usually a multi-sampled FBO or PBuffer. We direct rendering to the 
FBO/PBuffer, bind it as a texture (after blitting it to a non multi-sampled FBO 
if needed), draw the whole buffer to the window's back buffer then call swap 
buffers. Eughhh! But at least the PBuffer/FBO contents aren't destroyed. What 
would be really nice is to be able to have an EGL_SWAP_BEHAVIOR == 
EGL_BUFFER_PRESERVED equivalent on glX. I notice there's an EGL implementation 
being worked on in Mesa trunk so perhaps we should switch to EGL rather than 
glX?

Anyway, let's assume that swap buffers keeps the contents of the back buffer.
There's another issue, although not as detrimental as the first. When we issue
swap buffers, glX/EGL has to assume the entire window's contents have changed.
That has two effects: 1) XDamage regions are generated for the whole window:
when a compositing manager is running, it has to re-compose the entire window,
even though only a few pixels may have changed (which it might have to do
anyway, see above :-)). 2) I'm led to believe that DRI2 implements swap
buffers as a blit and so must blit the entire back buffer to the front.

I think I can work around this by making a glX context current on a GLXPixmap
(from an XPixmap). Pixmaps are single buffered and so don't get destroyed on
swap buffers (in fact I don't even call swap buffers). I can then post updates 
using XCopyArea which will also cause the Xserver to generate an appropriate 
XDamage region for the compositor. The only down side is that I have to 
glFinish before calling XCopyArea. While I have this working, it seems a 
little hacky and isn't widely supported (I have it working on 1 driver build 
for 1 bit of hardware).

It seems like a glXSwapPartialBuffers which takes an array of dirty rects would
be preferable. The implementation could ignore the rects completely if it so
chooses, or use them only to generate the damage region. Or, on DRI2, it could
choose to copy only the sub-region from the back buffer to the front.
Eventually it could also use the rects as a metric to decide whether to flip or
blit (flip if, say, 30% of the window has changed, blit otherwise).


Cheers,

Tom

PS: Scrolling is also an issue... :-/





Help with DRM+Modesetting+GEM

2008-10-15 Thread Tom Cooksey
Hi,

I would like to play with the GEM & DRM modesetting APIs. E.g. I'd like to
write a test application which enumerates the CRTCs, sets a mode on each and
fills a scanout buffer with a solid color. Pretty simple stuff.

I've tried to build the modesetting-gem branch and found that my kernel sources
were missing some symbols (pci_read_base). I've upgraded to 2.6.27 and also
tried various branches, but none have the symbols I'm missing. Well, they have
the symbol but don't seem to export it.

Is there a kernel git repo & branch hosted somewhere which I can use for
development? What kernel sources do the developers of modesetting-gem use for
development? Is this branch even under development? :-)



Cheers,

Tom

-
This SF.Net email is sponsored by the Moblin Your Move Developer's challenge
Build the coolest Linux based applications with Moblin SDK & win great prizes
Grand prize is a trip for two to an Open Source event anywhere in the world
http://moblin-contest.org/redirect.php?banner_id=100&url=/


Confused as to which branches of what I should play with.

2008-07-15 Thread Tom Cooksey
Hi all,

I've been trying to follow this mailing list as to the current state of DRM,
especially modesetting. My goal is to get Qt/Embedded to use modern graphics
drivers rather than the fbdev interface we have today. A few months ago I
played with the drm modesetting-101 branch with the intel driver to see if I
could at least map a framebuffer into the server process and use our software
renderer to render into it. I eventually managed to get the modesetting test
app to work, but other things pulled me away from the work. I now want to
resume what I started, but it seems everything has got a whole lot more
complicated.

So my question is this: Are things API-stable enough to start playing again, or 
should I
wait another few months until the GEM vs TTM stuff has been resolved?

If the API is stable, how can I start playing again? All I want to do at this
stage is map the framebuffer and render into it. I will then have the
equivalent of what we already have with fbdev. Later (probably much later) I
want to modify the Gallium winsys layer so I can use GL to render into the
framebuffer. Then comes multiple processes, then retirement.

For now I guess all I need is a vanilla kernel and a drm build, but which
branch of drm should I use? modesetting-101 or modesetting-gem? I notice the
gem branch has a test (gem_mmap.c) which seems to create & map a hunk of
memory, but I guess this is just an i915 test as all the ioctls are
i915-specific. Unless the idea of a generic interface is dead?



Cheers,

Tom

PS: I have a GM965 based laptop I want to use for experimentation - but I can
get hold of something else if it's going to help?



Re: [Bug 15582] Qt4 Demo missing animation

2008-06-25 Thread Tom Cooksey
On Tuesday 24 June 2008 03:44:03 [EMAIL PROTECTED] wrote:
 http://bugs.freedesktop.org/show_bug.cgi?id=15582
 
 
 Michael Fu [EMAIL PROTECTED] changed:
 
What|Removed |Added
 
  Status|NEW |RESOLVED
  Resolution||WONTFIX
 
 
 
 
 --- Comment #7 from Michael Fu [EMAIL PROTECTED]  2008-06-23 18:44:02 PST 
 ---
 doesn't sounds like driver bug.. I marks this as wontfix for now. please 
 reopen
 if you tried qt-4.4 and still see this...

qtdemo will disable animations if it deems they are running too slowly. I guess
this is what's happening. Why it's running too slowly is anyone's guess. It
could be a Qt bug or a driver bug.


Cheers,

Tom


-
Check out the new SourceForge.net Marketplace.
It's the best place to buy or sell services for
just about anything Open Source.
http://sourceforge.net/services/buy/index.php


Re: DRM Modesetting fbset tool?

2008-04-30 Thread Tom Cooksey
On Tuesday 29 April 2008 17:29:24 Jerome Glisse wrote:
 For console we had the idea of building a full userspace console things
 instead of having this whole things in the kernel. Which would mean to
 write some userspace program using the modesetting API and providing
 the console. I believe there is several advantages (i talk about drawbacks
 latter) for instance you can do a fency console, or have multiple
 console at the same time by dividing screen, or have efficient multi-seat
 console with nice screen  input association. Well many things worth
 having for the XXI century where car flies and robots wash the house and
 do the laundry.

I'm not sure I quite follow on the user-space console thing. Do you mean that
tty0-5 get removed & replaced with ptys? In fact the only real ttys left are
serial ports? A daemon process is created by init which opens the drm & input
devices and creates a pty for each of the 6 or so vts. It then implements a
framebuffer vt102 or something and lets init attach gettys or whatever to the
slave ptys. The daemon watches for Alt+Fx etc. and switches framebuffers &
which pty it sends keyboard input to. Have I understood correctly? It seems
like vts become a completely user-space thing. This is great for console
applications, but what about graphics applications? Is that where this
multi-master thing comes into play?

 Main drawback i see, is for rescue case, ie we likely need to keep a
 minimal console in kernel as in rescue we can't rely on userspace to
 be their or operational. Their is likely others drawback but none of
 importance come to my mind.

There's also the boot splash before the kernel brings up userspace (although
I think userspace is brought up pretty quickly with initramfs). There are also
all the fbdev drivers, although I guess the user-space console could support
the fbdev interface too.

 
 Anyway i believe such things need to be discused on lkml but as the API
 and things like multi-master, DRM2, ... are not yet worked out i believe
 its a bit too early to bring this topic on lkml (given that this might
 proove to lead to some nice flamewar :() Still you might be interested
 in writing a proof of concept user space console. Adapting it to
 API change won't be that hard.

Yeah, you seem to be talking about some fairly major changes to Linux. 
Where is all this discussion going on? Is there a DRI2 mailing list 
somewhere? I'd quite like to follow what's being planned.

As for the userspace console, it seems fairly straightforward. I guess the
easiest thing would be to grab an existing terminal emulator which renders to
a framebuffer & adapt it? Either fbcon or something like rxvt/similar.


Thanks for the info!


Cheers,

Tom

-
This SF.net email is sponsored by the 2008 JavaOne(SM) Conference 
Don't miss this year's exciting event. There's still time to save $100. 
Use priority code J8TL2D2. 
http://ad.doubleclick.net/clk;198757673;13503038;p?http://java.sun.com/javaone


Re: DRM Modesetting fbset tool?

2008-04-30 Thread Tom Cooksey
On Wednesday 30 April 2008 10:59:03 Jerome Glisse wrote:
   For console we had the idea of building a full userspace console things
   instead of having this whole things in the kernel. Which would mean to
   write some userspace program using the modesetting API and providing
   the console. I believe there is several advantages (i talk about drawbacks
   latter) for instance you can do a fency console, or have multiple
   console at the same time by dividing screen, or have efficient multi-seat
   console with nice screen  input association. Well many things worth
   having for the XXI century where car flies and robots wash the house and
   do the laundry.
  
  I'm not sure I quite follow on the user-space console thing. Do you mean 
  that 
  tty0-5 get removed  replaced with ptys? In fact the only real ttys left 
  are 
  serial ports? A daemon process is created by init which opens the drm  
  input 
  devices, creates 6 or so ptys for each vt. It then implements a framebuffer 
  vt102 or something and lets init attach gettys or whatever to the slave 
  ptys.
  The daemon watches for Alt+Fx etc. and switches framebuffers  which pty
  it sends keyboard input to. Have I understood correctly? It seems like vt's
  become a completely user-space thing. This is great for console 
  applications,
  but what about graphics applications? Is that where this multi-master thing
  comes in to play?
 
 Graphic applications like X are just another client and handled the same
 way. Some master will be in charge to switch btw terminal and X or others
 client. What i mean is that the console is a program in itself, different
 from the master and a client of it.

Ok... I think I see what you're getting at...

So init launches this all-powerful master process as root, then launches 6
console processes (one for each virtual console) as a non-root user. Each
console process connects to the master (using sockets? DBus?), and some
authentication goes on to make sure they're allowed to access the DRM. The
console process then opens the DRM, sets up a BO for the scan-out buffer &
creates a pseudo-terminal pair, with the slave end dangling ready to be
attached to getty. The all-powerful master tells the console process it is now
the current vt and can attach its scan-out buffer(s) to any crtc it is allowed
to. When the user initiates a vt-switch, the master tells the console it is no
longer the current vt and should detach its scan-out buffer so the new current
slave can attach its scan-out buffer instead.


   Main drawback i see, is for rescue case, ie we likely need to keep a
   minimal console in kernel as in rescue we can't rely on userspace to
   be their or operational. Their is likely others drawback but none of
   importance come to my mind.
  
  There's also the boot splash before the kernel brings up userspace (although
  I think userspace is brought up pretty quickly with initramfs). There's also
  all the fbdev drivers, although I guess the user-space console could also 
  support the fbdev interface too.
 
 fbdev driver of drm supported card will be removed. Likely only keep a fbdev
 emulation layer in drm but i guess we highly prefer native drm user than
 one using such emulation layer.

I'm just thinking about how this is going to work on non-DRM graphics devices,
where the driver is fbdev, DirectFB, proprietary, etc. Of course if there's a
DRM driver, the DRM modesetting API should be used. But the fbdev driver could
potentially be used if DRM is unavailable. In a perfect world, there would
only be DRM.

Actually I'm thinking about taking a look at writing a drm modesetting driver
for very stupid hardware like an OMAP3 LCD controller. Everything, including
the framebuffer & BOs, would be stored in main memory as that's all these
devices have! I could then look at how to handle video decoded by the OMAP's
DSP in a DRM-friendly way.


Cheers,

Tom





DRM Modesetting fbset tool?

2008-04-29 Thread Tom Cooksey
I've started playing with the modesetting branch of DRM and managed to get it 
to work on my GMA 965 based laptop (after working out I needed to pass 
modeset=1 as a parameter to the i915 module).

On my laptop, I get /dev/fb0 & /dev/fb1, with /dev/fb0 connected to my laptop
screen (LVDS?) and fb1 connected to VGA out. I can successfully run
Qt/Embedded on fb1 (using the normal fbdev interface... I've not started
writing drm modesetting code yet).

What would be nice is to have a tool like fbset which not only sets the mode, 
but also chooses which crtc (correct terminology?) is connected to which 
framebuffer. On the OMAP framebuffer, this can be controlled through a sysfs 
interface.

However, my understanding is that the i915 driver provides a linuxfb emulation
driver which it registers with the kernel during probe? The fbcon then binds
to the (first) fbdev device? So the tool would in fact just configure i915's
linuxfb emulation and not be very useful or portable. Have I understood things
correctly?

I'm getting a bit confused about how things should look inside the kernel
(this is mainly because I'm having a hard time working out how consoles,
virtual terminals & vt-switching fit together... but I'm picking it up
bit-by-bit). It seems to me that a completely new console driver needs to be
written which uses the drm modesetting interface rather than the fbdev
interface? The tool to set modes & change crtcs would then only talk to the
console driver.

User-space applications like X & Qt/Embedded seem pretty straightforward. They
just use the mode setting functions in libdrm. I.e. they provide their own way
of configuring which output goes to which crtc. What about vt-switches? Will
an application still be responsible for re-drawing itself after a vt-switch?
Or will vt-switches now become completely transparent to userspace
applications?


Please let me know what I've got wrong. Eventually, I'd quite like to have a 
go at writing some in-kernel stuff using the drm. If there's any boring 
low-hanging fruit I could start to learn on, let me know (like an fbset-like 
utility).


Cheers,

Tom



Re: GSOC '08 hardware accelerated video decoding

2008-03-28 Thread Tom Cooksey
On Friday 28 March 2008 05:08:38 Younes M wrote:
 Hi,
 
 I recently posted to the Nouveau mailing list about this, but for
 those who don't participate in that one I thought I would also post
 here since it seems to concern DRI as much as Nouveau. I intend to
 submit an application for a project that will attempt to implement
 XvMC in terms of Gallium3D. I've come up with a preliminary proposal
 and was hoping people would be willing to give it a quick read and
 give me some feedback; opinions, corrections, concerns, etc. An HTML
 version is here: http://www.bitblit.org/gsoc/gallium3d_xvmc.shtml and
 a text version is below.

Isn't XvMC going to be deprecated in favor of VAAPI? Not sure if it's of any
use to you, but Qt 4.4.0 has an OpenGL playback widget for the Phonon
GStreamer backend. It uses a shader to do the color-space conversion and is
available under the GPLv2/GPLv3.

If your interest in XvMC is because of client-side support, I may be able to
find someone in Trolltech willing to write a Phonon/GStreamer playback
widget which uses VAAPI. It may not be able to make use of all the VAAPI
features, as many of them need to be supported in the decoder elements
(I think anyway). Ping me if this is of interest.


Cheers,

Tom



Re: DRM QWS

2008-03-27 Thread Tom Cooksey
On Wednesday 26 March 2008 19:32:22 Kristian Høgsberg wrote:
 On Wed, Mar 26, 2008 at 1:50 PM, Tom Cooksey
 [EMAIL PROTECTED] wrote:
 ...
   I guess what I was thinking about was a single API which can be used on 
  3D-less
   (or legacy, if you want) hardware and on modern hardware. If the graphics 
  hardware
   is a simple pointer to a main-memory buffer which is scanned out to the 
  display, then
   your right, you might as well just use user-space shared memory, as we 
  currently do.
   A new API would only be useful for devices with video memory and a 
  hardware blitter.
   There are still new devices coming out with this kind of hardware, the 
  Marvel PXA3x0
   and Freescale i.MX27 for example spring to mind.
 
 I agree with you that it probably doesn't make sense to use
 gallium/mesa on everything everywhere.  There are still small devices
 or early boot scenarios (you mention initramfs) where gallium isn't
 appropriate.  However, there is no need to put this a 2D engine into
 the kernel.  What the drm ttm gives us is a nice abstraction for
 memory management and command buffer submission, and drm modesetting
 builds on this to let the kernel set a graphics mode.  And that's all
 that we need in the kernel.  Building a small userspace library on top
 of this to accelerate blits and fills should be pretty easy.

I had a think about this last night. I think Zack is probably right about
future graphics hardware. There are always going to be devices with simple
graphics, having a framebuffer in main memory and a few registers for
configuration. I think in the future, if more advanced graphics are needed, it
will take the form of programmable 3D hardware. Take the set-top-box example I
gave: while I stand by the fact that a low-power 3D core can't render at
1920×1080, a software-only graphics stack also can't render at this
resolution. I'm just thinking about the problems I've been trying to solve
getting Qt to perform well on the Neo1973 with its 480x640 display and 266MHz
CPU.

So for simple, linear framebuffer devices we have fbdev. For programmable 3D,
we have gallium/DRM. There's still the issue of early boot for 3D devices,
but as Jesse mentioned, the DRM drivers can include an fbdev interface as the
intel driver does already.

Ok, I'm satisfied. Thanks to all. :-)


Cheers,

Tom



Re: DRM QWS

2008-03-26 Thread Tom Cooksey
On Wednesday 19 March 2008 16:26:37 Zack Rusin wrote:
 On Wednesday 19 March 2008 05:21:43 am Tom Cooksey wrote:
2) Sortof related to the above... it would be very cool to have a very
simple drawing API to use on top of the modesetting API. A simple blit
 solid fill would surfice. I've always found it odd that the internal
kernal API for framebuffer devices includes blit and solid fill, but
that API is never exposed to user-space applications - even though it
would be _very_ useful.
  
   I don't think this will be done. Main reason is that newer hw is hard to
   program, no 2d anymore so you have to program the whole 3d pipeline stuff
   and we don't want such code in kernel.
  
   So the idea here is to use one userspace driver like gallium3d. Basicly
   you do a winsys for your use and you can also do a new frontend for
   gallium other than a GL frontend or wait for new frontend to appear :)
 
  Hmm... If you have to use gallium to talk to the hardware, shouldn't fbcon
  be renamed to glcon? :-) Also, while 2D is dissappearing on desktop, it's
  very much alive on embedded, for the moment at least. 
 
 That depends on your definition of embedded. 
 I think what you're referring to are dummy framebuffers or gpu's that were 
 made with some absolutely silly requirements like no known bugs policy 
 which implies that all they have is an underperforming 2D engine. In both of 
 those cases you already lost. So if you're trying to accelerate or design 
 framework based on those than honestly you can just give up and go with an 
 all software framework. If you're referring to actual embedded GPU's the 
 current generation is actually already fully programmable and if you're 
 designing with those in mind than what Jerome said holds.

I was initially thinking about low-end graphics hardware, which is mainly just
dummy framebuffers as you say. However, I've thought some more about this and
there's still set-top-box type hardware here, which needs to decode full
resolution HD video (1920×1080 or even 3840×2160). Typically this is
off-loaded onto a dedicated DSP. E.g. TI's DaVinci platform manages to do
Full-HD resolution h.264 decoding in a ~2W power envelope. I believe the video
is composited with a normal framebuffer (for the UI) in hardware. I don't
think there's any programmable 3D hardware available which can do 1920×1080
resolutions in a 2W power envelope. So even if they replace the linear
framebuffer with a programmable 3D core, that core still needs to render at
[EMAIL PROTECTED] fps without impacting the 2W power draw too much. I guess it
will probably be possible in 5 years or so, but it's not possible now.


  I can't see fbdev 
  going anytime soon if the only replacement is a full-blown programmable 3D
  driver architecture. Perhaps a new, simple API could be created. On desktop
  it would be implemented as a new front-end API for gallium and on embedded
  it would be implemented using a thin user-space wrapper to the kernel
  module? 
 
 I don't think that makes a lot of sense. Gallium3D is an interface to 
 hardware - it models the way modern graphics hardware works. Front-end's in 
 the Gallium3D sense are the state-trackers that are used by the API that 
 you're trying to accelerate. So if your hardware is an actual GPU that you 
 can write a Gallium3D driver for, than front-end for it would be just another 
 api (you could be just using GL at this point).

I guess what I was thinking about was a single API which can be used on
3D-less (or legacy, if you want) hardware and on modern hardware. If the
graphics hardware is a simple pointer to a main-memory buffer which is scanned
out to the display, then you're right, you might as well just use user-space
shared memory, as we currently do. A new API would only be useful for devices
with video memory and a hardware blitter. There are still new devices coming
out with this kind of hardware; the Marvell PXA3x0 and Freescale i.MX27 for
example spring to mind.

I'm still a bit confused about what's meant to be displayed during the boot
process, before the root fs is mounted. Will the gallium libraries & drivers
need to be in the initramfs? If not, what shows the splash screen & provides
single-user access if anything goes wrong in the boot process?


  A bit like what DirectFB started life as (before it started trying 
  to be X).
 
 Well, that's what you end up with when you start adding things that you need 
 across devices. I know that in the beginning when you look at the stack you 
 tend to think this could be a lot smaller!, but then with time you realize 
 that you actually need all of those things but instead of optimizing the 
 parts that were there you went some custom solution and are now stuck with 
 it.

I was referring here to DirectFB's window management, input device abstraction,
audio interface abstraction & video streaming APIs. Personally, I believe there
is
a requirement for a simple

Re: DRM QWS

2008-03-19 Thread Tom Cooksey

  I've had some time to play with the modesetting branch. I am using a laptop 
  with
  an Intel 965GM, is this likely to work? At the moment, when I run 
  tests/modedemo
  I get a hard lock. :-/
 
 Well there is fixes pushed allmost everydays so make sure to use lastest git 
 :)

Yep, I'm pulling every day at the moment. If you think the 965GM is good to
develop on, I'll have a go at debugging what's going wrong.

  I have a few comments/questions from what I've looked at so far:
  
  1) The current libdrm looks to be a very thin wrapper around the ioctls. If 
  this is the
  case and all the code is kernel-side, what are the thoughts of implementing 
  a linuxfb
  driver ontop of this? It would be pretty cool to get fbcon rendering using 
  DRM?
 
 Implementing fbcon in userspace have been on the table as a fun things that 
 we might do.

  2) Sort of related to the above... it would be very cool to have a very
  simple drawing API to use on top of the modesetting API. A simple blit &
  solid fill would suffice. I've always found it odd that the internal kernel
  API for framebuffer devices includes blit and solid fill, but that API is
  never exposed to user-space applications - even though it would be _very_
  useful.
 
 I don't think this will be done. The main reason is that newer hw is hard to
 program - there's no 2D any more, so you have to program the whole 3D
 pipeline, and we don't want such code in the kernel.
 
 So the idea here is to use one userspace driver like Gallium3D. Basically you
 do a winsys for your use, and you can also do a new frontend for Gallium
 other than a GL frontend, or wait for a new frontend to appear :)

Hmm... If you have to use Gallium to talk to the hardware, shouldn't fbcon be
renamed to glcon? :-) Also, while 2D is disappearing on the desktop, it's very
much alive on embedded, for the moment at least. I can't see fbdev going any
time soon if the only replacement is a full-blown programmable 3D driver
architecture. Perhaps a new, simple API could be created. On the desktop it
would be implemented as a new front-end API for Gallium, and on embedded it
would be implemented as a thin user-space wrapper around the kernel module? A
bit like what DirectFB started life as (before it started trying to be X).


  7) The modedemo/demo.c seems to be doing stuff with /dev/fb0. From what I
  can tell, this is just getting the current mode at startup & restoring it
  before exit. Can I assume this is to stop garbled output after the program
  exits and can be safely #defined out (as I'm using a VGA console)?
 
 A lot of the interface is not mature yet, so there are a few hacks to work
 around things. But in your case you shouldn't define this out, as drm
 modesetting should have taken over your vga console (at least this is what I
 do on radeon), so you should be using fbcon.

Alas, I believe the intelfb driver doesn't support the 965GM, at least not in
2.6.24. I think I'll work by logging in over ssh. It doesn't matter then if
the screen gets garbled after exiting. Or do I need to run from a vt?


Cheers,

Tom


-
This SF.net email is sponsored by: Microsoft
Defy all challenges. Microsoft(R) Visual Studio 2008.
http://clk.atdmt.com/MRT/go/vse012070mrt/direct/01/
--
___
Dri-devel mailing list
Dri-devel@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/dri-devel


Re: DRM QWS

2008-03-18 Thread Tom Cooksey
On Friday 07 March 2008 18:35:10 Jesse Barnes wrote:
 On Friday, March 07, 2008 1:21 am Tom Cooksey wrote:
  I'm a developer working on getting OpenGL ES working with QWS - the window
  system built into Qt/Embedded. That is, Trolltech's own windowing system,
  completely independent of X. The typical hardware we're working with is the
  PowerVR MBX, an OpenGL ES 1.1 compliant device. We have also played with
  ATI mobile chipsets. One thing all these devices have in common is rubbish
  (IMO), closed source drivers. The only API we have for them is EGL, and the
  only on-screen surface is the entire display.
 
  While we are continuing development with these devices, I'm very keen to
  develop a proof-of-concept driver using an open source desktop OpenGL
  implementation. I want to show people what can be done with decent (& open)
  drivers.
 
 Great, that's one of the goals we had in mind when changing the DRM recently.
 There's actually some standalone OpenGL code in the Mesa tree that can be
 used as a starting point (EGL & miniglx, two separate ways of doing that).
 
  The first step I'd like to make is to just get something on the screen. I
  was wondering if it's possible to use DRM to just map the framebuffer into
  a user process's address space and use it like we would use the LinuxFB
  device? Or do modern frame buffer drivers use the DRM themselves to do
  this?
 
 Yeah, that should be doable with the current code on Intel & ATI devices.
 You'll have to allocate a new buffer object for your front buffer, then use
 it to set a new mode.
 
 We'd really like to hear any feedback you have about the interfaces and 
 design; given that what you're doing is something we'd really like to 
 support, we want to make sure we get it right before it gets pushed upstream 
 into Linux and set in stone.


I've had some time to play with the modesetting branch. I am using a laptop
with an Intel 965GM - is this likely to work? At the moment, when I run
tests/modedemo I get a hard lock. :-/


I have a few comments/questions from what I've looked at so far:

1) The current libdrm looks to be a very thin wrapper around the ioctls. If
this is the case and all the code is kernel-side, what are the thoughts on
implementing a linuxfb driver on top of this? It would be pretty cool to get
fbcon rendering using DRM.

2) Sort of related to the above... it would be very cool to have a very
simple drawing API to use on top of the modesetting API. A simple blit &
solid fill would suffice. I've always found it odd that the internal kernel
API for framebuffer devices includes blit and solid fill, but that API is
never exposed to user-space applications - even though it would be _very_
useful.

3) The drmBOCreate() looks fun. Could we use this to store pixmaps? Again,
having an API to blit a pixmap created with drmBOCreate() to the framebuffer
would be very nice. Even nicer if Porter-Duff composition modes were
supported, although a simple blit would be a massive leap forward.

4) The API doesn't seem to provide any mechanism for syncing framebuffer
updates to VBLANK. Does this mean the sync is done automatically, i.e. after
unmapping the framebuffer, the contents on the screen aren't actually updated
until the next vblank?

5) Can we implement double-buffering by creating 2 BOs and switching between
them using drmModeAddFB()?

6) What is the plan for this modesetting work? Is it intended to replace fbdev
or supplement it? From what I've seen, there's nothing stopping you creating a
DRM driver for very basic framebuffer-only type hardware?

7) The modedemo/demo.c seems to be doing stuff with /dev/fb0. From what I can
tell, this is just getting the current mode at startup & restoring it before
exit. Can I assume this is to stop garbled output after the program exits and
can be safely #defined out (as I'm using a VGA console)?




Cheers,

Tom




DRM QWS

2008-03-07 Thread Tom Cooksey
Hi,

I'm a developer working on getting OpenGL ES working with QWS - the window
system built into Qt/Embedded. That is, Trolltech's own windowing system,
completely independent of X. The typical hardware we're working with is the
PowerVR MBX, an OpenGL ES 1.1 compliant device. We have also played with ATI
mobile chipsets. One thing all these devices have in common is rubbish (IMO),
closed source drivers. The only API we have for them is EGL, and the only
on-screen surface is the entire display.

While we are continuing development with these devices, I'm very keen to
develop a proof-of-concept driver using an open source desktop OpenGL
implementation. I want to show people what can be done with decent (& open)
drivers.

I'm pretty new to X, DRI & the associated code bases, but have spent the last
few months reading documentation & code, trying to understand how everything
works together. I think I've now got to a stage where I've read everything I
could find and need some help.

The effect I'm looking for is iPhone/Compiz style window composition. We have
this already, but the problem is that the drivers are designed for a single
process accessing the hardware at a time. This is fine if there's only a
single process (in QWS, the window system is a shared library which is loaded
by the first application to be launched). All the windows can be drawn into
off-screen pbuffers and then used as textures to be rendered onto the screen.
The problem comes when there are multiple processes. Our current solution is
to get the client processes to use our raster paint engine to render into
shared memory, which the server then uploads as a texture. As you can
imagine, this is *SLOW*. It also doesn't allow client processes to use OpenGL
themselves - something we really want to have.

What we want to do is use our OpenGL paint engine (or, even better, an OpenVG
paint engine - which maps much better to Qt's painter API) in the client
processes. The client processes render both 2D windows and OpenGL calls to
off-screen buffers, which the server can use as textures. We'd also like
video to be handled in a similar way (VAAPI?).

From what I've read, AIGLX allows compiz to composite OpenGL window surfaces
because it's the X server which does the rendering, i.e. X clients serialize
OpenGL commands and send them to the server via GLX. While we could do this
too (and will probably have to do this for nasty closed source OpenGL ES
drivers), I stumbled upon this:

http://hoegsberg.blogspot.com/2007/08/redirected-direct-rendering.html

What I'm hoping to do is bring together all the very fine work done in the
last few years. What I'm stuck on is how everything is going to hang together.
This is what I have so far (most of which is probably wrong, so please
correct):

Write a QWS driver where the server opens the framebuffer using DRM
modesetting. The server also initializes the DRM. QWS clients render into
off-screen buffers (pbuffers or framebuffer objects?) using OpenGL
(Mesa/Gallium?). The QWS client then magically gets the DRM ID of the
off-screen buffer (is there a 1:1 relationship between a DRM buffer and a
framebuffer object's color buffer?). The clients then send that DRM ID to the
server. The server then somehow magically tells Mesa/Gallium about the
buffer, which is then (also magically) mapped to a texture name/ID and used
as a texture to be drawn into the framebuffer.

Obviously, I still have a lot to learn. :-D

The first step I'd like to make is to just get something on the screen. I was
wondering if it's possible to use DRM to just map the framebuffer into a user
process's address space and use it like we would use the LinuxFB device? Or
do modern frame buffer drivers use the DRM themselves to do this?


Any/all comments, suggestions & insults are welcome. :-)


Cheers,

Tom

