Hi Mark, thanks for the detailed reply.

On Tue, 2006-08-29 at 15:19 +0100, Mark Adams wrote:
> > I wish to render two video streams in a picture-in-picture style on the
> > TV o/p (720x576 PAL interlaced).
> 
> For video, you really need to use the video overlay, layer 1, as only
> that layer supports YUV pixel formats and the FIELD_PARITY option that
> you'll need to display interlaced video correctly full screen via a TV
> encoder.

OK, I have that working, although I'm unsure about the interlacing
options as they seem to make little difference. I currently have the
layer set as interlaced and have set the field parity option. I guess
the SetField() function has to be called to toggle fields between each
frame blit?

> Is this a VIA EPIA board you're using?  Is it MPEG-2 video?

Yes, that's the one. But the video is decoded from MPEG-4, so the
hardware decoder is no use to us :-(

> The unichrome driver currently supports acceleration of YV12 only for
> 2D operations.  You will not get hardware acceleration for a
> StretchBlit of YV12.

Ah, that explains why the enlarged video stream runs slowly. I have
switched to scaling the main video with layer 1, which works well with
very low CPU effort. The video is a bit "blocky", but I can experiment
with increasing the resolution and dithering a little using some of the
saved CPU cycles.

> Instead, you can use the overlay layer to do the scaling for you.  The
> overlay layer supports scaling and positioning.  However, as soon as
> you apply a vertical scaling factor, the picture won't look too good
> because of interlace effects.

Yep, I have that now.

> Render directly to layer 1's surface, configured with FIELD_PARITY and
> BACKVIDEO or TRIPLE buffering.

Buffering seems to make little difference, and the best result (nearest
to real time) comes with FRONTONLY and a direct copy between surface
Lock/Unlock pairs.

> It supports other pixelformats too but YV12 will be the best option
> for efficient display of video.

That fits our decoded format, so no problems there.

> I suggest you avoid using windows for the video -- for efficiency you
> want to avoid copying the frames so write to the overlay layer
> directly.

I realise that now. Writing directly to the front buffer of layer 1,
which has been set to the exact width/height of the decoded frames,
gives a full-screen image that looks OK. I suppose the aspect ratio of
the frames being the same as the TV's is saving me here from the effects
of vertical scaling.

> You'll get extremely poor performance if you've got surfaces in video
> memory and you're using operations that aren't hardware accelerated.
> If you need to do a YV12 stretch blit, you'll need the surfaces in
> system memory and it won't be accelerated.

> The video provider interface is just a nice interface for things that
> provide sources of video.  If you've already solved the video decoding
> part of the problem, it won't help you.

OK, I now see that the raw YUV format cannot be understood by the video
provider from a stream source.

> In all the above, I've just talked about one video window.  For your
> second, you've got fewer options.  The CLE266 hardware does support a
> second video overlay (known as V3) but this is not currently supported
> by the DirectFB driver (which only supports V1).  I did do an
> experimental version of the driver with support for only V3 and
> someone with the time could fairly easily add V3 support properly.
> That would give you two independently scalable video windows.
> However, note that V3 does not support YV12 directly which opens up
> another whole issue of the HQV blitter for which, again, there is no
> support in DirectFB.  However, V3 would support YUY2 video.  Also note
> that later VIA chipsets don't have support for V1.

I think the later chipsets can decode MPEG-4 directly, so with those we
would make use of the decoder. As for V3 support, that would be ideal if
there were an easy transform from YV12 to YUY2, but I suspect there
isn't.

> Another option for your second video window is to use a window on the
> primary layer (note you can put the primary layer in front of the
> video layer if you want and make it transparent except where there is
> something to display).  The primary supports various RGB modes but no
> YUV modes.  However, RGB StretchBlits are supported.

I'm trying this route, as the inset video is shrunk to a small size and
I think a YV12-to-ARGB conversion might be a CPU hit we can afford,
given the efficiency of the layer scaling approach above, especially if
the stretch blit is hardware accelerated.

I am struggling with the alpha settings on the primary layer's surface
(layer 0) to render the inset video on top of layer 1's surface, but I'm
sure I'll work it out eventually :-)

> BTW, I have hardware accelerated MPEG-2 video decoding using the
> CLE266's MPEG-2 decoder via a Xine plug-in working now.  I don't know
> if this is of use.  It's not 100% error free though.  Doesn't help
> with your picture-in-picture problem either!

Unfortunately it's no help for us, but it's good to see it working, as
it makes the EPIA mobos quite a potent setup for MPEG-2 streams etc.

> Regards,
> 
> Mark

Thanks again for the high-quality answers and help; keep up the great
work with the drivers and library. I hope I can contribute something
back once I've got the hang of this video stuff!

Bill Somerville



_______________________________________________
directfb-users mailing list
[email protected]
http://mail.directfb.org/cgi-bin/mailman/listinfo/directfb-users
