On Wed, Nov 16, 2005 at 12:02:33AM +0100, Sander Stoks wrote:
> Hello all,
> 
> I have a question about mode selection; more specifically about the bit 
> depth of the surface. 
> 
> Looking in systems/fbdev/fbdev.c in primarySetRegion, I see that 
> DirectFB iterates over the available modes and compares the requested 
> width and height, but not the bit depth.  Also, I see that a "priority" 
> is checked, but I can't find anywhere where this priority is actually 
> set.

It's not set anywhere so you can ignore it :)

> I ran into this while investigating the bug reported at http://
> www.directfb.org/mantis/view.php?id=32, and looking at the resulting 
> rgb codes it looks to me like there is some intermediate RGB565 
> conversion going on.  I put some debugging printfs in primarySetRegion 
> (notably printing out highest->bpp), and it seems to pick a [EMAIL PROTECTED] 
> bit screen, which I thought would explain the results seen on a Via 
> CLE266.

The VideoMode's bpp value is also ignored. You will get RGB32 when you 
request it.

> However, when I try the test app included with the bug report on my 
> laptop (Radeon M9), the resulting rgb values _are_ correct - and even 
> there, DirectFB seems to pick a 16 bit screen.

AFAICS neither radeon driver supports LUT8->RGB32 conversion. What you 
see is software rendering.

The r200 driver claims to support LUT8->LUT8 blits but that must be a 
bug.

> Also, adding printfs in dfb_fbdev_set_mode, printing out 
> var.bits_per_pixel and mode->bpp, I see a lot of mode->bpp = 16 (12 
> times), then one var.bits_per_pixel = 16, then the (*) DirectFB/Core/
> WM: Default 0.2 (Convergence GmbH) line, then one more 
> var.bits_per_pixel = 16, then 5 times var.bits_per_pixel = 32, then I 
> see my primarySetRegion printf (saying a [EMAIL PROTECTED] mode is selected), 
> then finally one more var.bits_per_pixel = 32.

The only place where you are guaranteed to see correct results is in 
dfb_fbdev_var_to_mode().

> Since the code attached to the bug report reads data back from the 
> frame buffer as 32 bit RGB values, apparently the buffer _is_ set to a 
> 32 bit mode.

It should be. Otherwise there is a bug somewhere.

> Looking at /etc/fb.modes, I notice that there are no 32 bit modes 
> listed.  So I guess I have the following questions:
> 
> - Am I supposed to be able to select 32 bit frame buffer modes at all?

Yes. There are several ways to select the format of the primary surface.

1. Specify pixelformat in DFBSurfaceDescription
2. SetVideoMode()
3. pixelformat= option
4. depth= option
5. current layer format
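
As a rough sketch, option 1 looks like this (error checking omitted; 
this assumes a working DirectFB installation, and DSPF_RGB32 as the 
requested format):

```c
#include <directfb.h>

int main(int argc, char *argv[])
{
     IDirectFB             *dfb;
     IDirectFBSurface      *primary;
     DFBSurfaceDescription  dsc;

     DirectFBInit(&argc, &argv);
     DirectFBCreate(&dfb);
     dfb->SetCooperativeLevel(dfb, DFSCL_FULLSCREEN);

     /* Request a 32 bit primary surface explicitly via the
        surface description (option 1 above). */
     dsc.flags       = DSDESC_CAPS | DSDESC_PIXELFORMAT;
     dsc.caps        = DSCAPS_PRIMARY;
     dsc.pixelformat = DSPF_RGB32;

     dfb->CreateSurface(dfb, &dsc, &primary);

     /* ... use the surface ... */

     primary->Release(primary);
     dfb->Release(dfb);
     return 0;
}
```

You can check what you actually got with primary->GetPixelFormat(). 
Options 3 and 4 are the equivalent command line / directfbrc route, 
e.g. "--dfb:pixelformat=RGB32".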

> - Is SetVideoMode supposed to fall back to a lower bit depth when the 
> requested one is not available?

No.

> - Is it possible that the unichrome driver somehow uses 16bit palette 
> lookups when doing a LUT8 blit to a 32bit surface?

Yes. It looks like the palette is loaded with 32bit values but I think 
it just ignores the lower bits.

It's probably quite common to handle the texture palette as RGB16. 
Matrox cards work that way, but there you actually load the palette 
with 16bit values.

-- 
Ville Syrjälä
[EMAIL PROTECTED]
http://www.sci.fi/~syrjala/

_______________________________________________
directfb-users mailing list
[email protected]
http://mail.directfb.org/cgi-bin/mailman/listinfo/directfb-users