Hi !

> The trickiest bit is that Y and UV values which make up each on screen pixel
> are stored in two separate (but contiguous) planar framebuffers. Ie
> 720*576*8bit pixels of Y values, followed by 720*576*8bit UV values. 

O.K. - do I get it right that UV is subsampled 2:1 in the x direction,
as usual with video signals?

So basically the layout is

YYYYYY.............YYYY [720 Ys]
.
.
.
YYYYYY.............YYYY [line 576]
UVUVUV.............UVUV [360 Us, 360 Vs]
.
.
.
UVUVUV.............UVUV [line 576]

right ?
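
If so, the plane offsets would work out like this (just a sketch using
your 720x576 numbers):

#define WIDTH  720
#define HEIGHT 576

/* Y for pixel (x,y): one byte per pixel in the first plane. */
#define Y_OFFSET(x,y)  ((y)*WIDTH + (x))

/* The UV plane starts right after the Y plane; each pair of pixels
   shares one U byte and one V byte, U first. */
#define UV_BASE        (WIDTH*HEIGHT)
#define U_OFFSET(x,y)  (UV_BASE + (y)*WIDTH + ((x)&~1))
#define V_OFFSET(x,y)  (UV_BASE + (y)*WIDTH + ((x)&~1) + 1)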

O.K. - when we attack this, we should consider the other formats as well.

My card uses something like UYVYUYVY, which is a linear buffer, but with
a strange property:

Two neighbouring pixels are interdependent. Y (brightness) is adjustable
independently, while U and V (color) can only be set for each pair of pixels.

How should we handle that ? IMHO we should just let the "last one win".

I.e. we report a size that corresponds to the finest-grain controllable
part (Y), and every access re-sets both the Y and the UV parts. This will
cause slight color distortions for already-touched pixels, but that can't
be helped. O.K. - one could average them somehow, but that can cause
even stranger effects unless you keep a backing store of all pixels at
full resolution.

For the implementation, I'd suggest writing another color-mapping
library that does RGB<->YUV and builds pixel values like this: 0xYYUUVV
(Y in the top byte, then U, then V).
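
A minimal sketch of such a mapping (using the common integer
approximation of the ITU-R 601 coefficients; the function name is made
up, nothing GGI has yet):

#include <stdint.h>

/* Sketch: pack an RGB triple into a 0xYYUUVV pixel value. */
static uint32_t rgb2yuv(uint8_t r, uint8_t g, uint8_t b)
{
        int y = ( 66*r + 129*g +  25*b + 128) / 256 +  16;
        int u = (-38*r -  74*g + 112*b + 128) / 256 + 128;
        int v = (112*r -  94*g -  18*b + 128) / 256 + 128;

        return ((uint32_t)y << 16) | ((uint32_t)u << 8) | (uint32_t)v;
}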

Then we make a set of rendering libraries. One does:

*(base +  x     + y*720  ) =  value >> 16;         /* Y: 1 byte/pixel   */
*(base2+ (x&~1) + y*720  ) = (value >>  8) & 0xff; /* U: shared by pair */
*(base2+ (x&~1) + y*720+1) =  value        & 0xff; /* V: shared by pair */

for the card you described (base being the Y plane, base2 the UV plane),
while another one does

*(base+1+  x    *2 + 720*2*y) =  value >> 16;         /* Y */
*(base  + (x&~1)*2 + 720*2*y) = (value >>  8) & 0xff; /* U */
*(base+2+ (x&~1)*2 + 720*2*y) =  value        & 0xff; /* V */

for mine.
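
The matching getpixels would just read the same locations back (again
only a sketch, same 0xYYUUVV packing assumed):

static uint32_t getpixel_planar(uint8_t *base, uint8_t *base2, int x, int y)
{
        return ((uint32_t)*(base +  x     + y*720  ) << 16)
             | ((uint32_t)*(base2+ (x&~1) + y*720  ) <<  8)
             |  (uint32_t)*(base2+ (x&~1) + y*720+1);
}

static uint32_t getpixel_uyvy(uint8_t *base, int x, int y)
{
        return ((uint32_t)*(base+1+  x    *2 + 720*2*y) << 16)
             | ((uint32_t)*(base  + (x&~1)*2 + 720*2*y) <<  8)
             |  (uint32_t)*(base+2+ (x&~1)*2 + 720*2*y);
}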

> We will need dev/fb to be in this bi planar format which means GGI 
> framebuffer/pixel operations will need operate on both Y & UV planes 
> simultaneously as well as converting the (RGB <-> YUV) formats. 

See above for a way to do this. I'll see if I can hack it in some day.

> Does/could GGI support any such multi plane operations through one 
> single config point? 

Basically it is supported, but the renderers have to be written. That,
however, is no big deal.

> Ie will I need to implement custom fb read/write procedures for lots of
> framebuffer access routines or maybe just putpixel & getpixel.

You only _need_ putpixel & getpixel. However, for performance, implementing
other functions might be a good idea.
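
E.g. a specialized hline for the bi-planar case could set the whole Y
run with one memset and touch each shared UV pair only once (a sketch;
name and parameters are made up):

#include <string.h>
#include <stdint.h>

/* Sketch: horizontal line of w pixels for the bi-planar layout. */
static void hline_planar(uint8_t *base, uint8_t *base2,
                         int x, int y, int w, uint32_t value)
{
        int i;

        /* One memset for the whole Y run ... */
        memset(base + x + y*720, value >> 16, w);

        /* ... and one U/V write per pixel pair. */
        for (i = x & ~1; i < x + w; i += 2) {
                *(base2 + i + y*720    ) = (value >> 8) & 0xff;
                *(base2 + i + y*720 + 1) =  value       & 0xff;
        }
}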

> As long as the format has packed pixels, and doesn't do "weird" stuff,
> like writing one pixel affects the colors of adjacent pixels, 

It does for all video stuff I know of. The reason is the encoding of the
video signal, which gives significantly less bandwidth to the color
information.

> We might also have to add a new buffer type to the DirectBuffer
> structure, but that is no problem.

Yeah, though it's a bit weird ...

CU, ANdy

-- 
= Andreas Beck                    |  Email :  <[EMAIL PROTECTED]> =
