On Jun 1, 2009, at 12:47 AM, Sriram Neelakandan wrote:

On Sat, May 30, 2009 at 9:22 PM, Simon Fraser <[email protected]> wrote:


Something else you should consider here is ways that the video can be clipped
and transformed.

Thanks Simon, I had forgotten about that.


Any solution which naively puts a hardware surface over some rect where the
video is supposed to be will be broken in many cases.


Thinking aloud: the only option seems to be to convert the YUV output of
the decoder to an RGB surface, which is then mapped into a Cairo surface;
that way WebKit is free to apply those crazy CSS transforms.
But YUV-to-RGB conversion will kill the processor (unless done in HW),
and for any embedded core running at around 300~400 MHz this will never
work even for d...@25 fps (forget HD).
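
As a rough illustration of that software path (not WebKit or decoder code;
Yuv420Frame and yuv420_to_cairo_surface() are made-up names, the cairo calls
are the standard image-surface API, and the coefficients assume video-range
BT.601), this is roughly what every frame would have to go through on the CPU:

// Minimal sketch of the software YUV->RGB path described above.
// Assumes a planar I420 frame and video-range BT.601 coefficients.
// Yuv420Frame and yuv420_to_cairo_surface() are illustrative names,
// not decoder or WebKit API; the cairo calls are the normal image-surface API.
#include <cairo.h>
#include <stdint.h>
#include <algorithm>

struct Yuv420Frame {
    const uint8_t* y; const uint8_t* u; const uint8_t* v;
    int yStride, uStride, vStride;
    int width, height;
};

static inline uint8_t clamp8(int v) { return (uint8_t)std::min(255, std::max(0, v)); }

cairo_surface_t* yuv420_to_cairo_surface(const Yuv420Frame& f)
{
    cairo_surface_t* surface =
        cairo_image_surface_create(CAIRO_FORMAT_RGB24, f.width, f.height);
    cairo_surface_flush(surface);
    uint8_t* dst = cairo_image_surface_get_data(surface);
    int dstStride = cairo_image_surface_get_stride(surface);

    for (int row = 0; row < f.height; ++row) {
        uint32_t* out = (uint32_t*)(dst + row * dstStride);
        const uint8_t* yRow = f.y + row * f.yStride;
        const uint8_t* uRow = f.u + (row / 2) * f.uStride;
        const uint8_t* vRow = f.v + (row / 2) * f.vStride;
        for (int col = 0; col < f.width; ++col) {
            // Fixed-point BT.601: C = Y-16, D = U-128, E = V-128.
            int c = yRow[col] - 16;
            int d = uRow[col / 2] - 128;
            int e = vRow[col / 2] - 128;
            int r = clamp8((298 * c + 409 * e + 128) >> 8);
            int g = clamp8((298 * c - 100 * d - 208 * e + 128) >> 8);
            int b = clamp8((298 * c + 516 * d + 128) >> 8);
            out[col] = (r << 16) | (g << 8) | b;  // CAIRO_FORMAT_RGB24 is xRGB
        }
    }
    cairo_surface_mark_dirty(surface);
    return surface;
}

Even with fixed-point math, that per-pixel inner loop on every frame is exactly
the cost being worried about above; the usual answer is to let a blitter or the
display controller do the colour conversion.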

Now, how can we accelerate this?

I assume a clip will always translate to a paint / setSize call, so that
can be handled in HW. Is that right?

Clipping in hardware is tricky. You have to consider that there may be CSS transforms between the video and its clipping container, so it's not as simple as just some rect of
the video being exposed.
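
To make that concrete (an illustrative sketch only, not WebKit code): once any
rotation or skew sits between the video and its clipping ancestor, the clip
maps to an arbitrary quadrilateral in the video's coordinate space, which an
overlay that can only crop to an axis-aligned rect cannot reproduce.

// Illustrative only (not WebKit code): map the corners of an ancestor's
// clip rect through an intervening 2D transform. After a rotation the
// clip is a quadrilateral in the video's local space.
#include <cmath>
#include <cstdio>

struct Point { double x, y; };
struct Affine { double a, b, c, d, tx, ty; };  // same layout as CSS matrix(a,b,c,d,tx,ty)

Point map(const Affine& m, Point p)
{
    return { m.a * p.x + m.c * p.y + m.tx,
             m.b * p.x + m.d * p.y + m.ty };
}

int main()
{
    // Clip rect (0,0)-(200,100) with a 30-degree rotation between the
    // clipping container and the video.
    const double r = 30.0 * 3.14159265358979 / 180.0;
    Affine rotate = { std::cos(r), std::sin(r), -std::sin(r), std::cos(r), 0.0, 0.0 };
    Point corners[4] = { {0, 0}, {200, 0}, {200, 100}, {0, 100} };
    for (Point& p : corners) {
        p = map(rotate, p);
        std::printf("(%.1f, %.1f)\n", p.x, p.y);  // no longer axis-aligned
    }
    return 0;
}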

Is there a way to decently accelerate all the video-layer CSS transforms?
Is it possible to send the transform down to RenderVideo and then to
the MediaPlayer?
Some chips do provide HW transform functions.

The ACCELERATED_COMPOSITING code path is really what you want.



This is exactly what the ACCELERATED_COMPOSITING code path does. There is
currently a Core Animation backend for Mac (GraphicsLayerCA.mm); you'd have
to write a backend for your compositing system if you wish to use this code
path.
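
For flavour, here is a rough sketch of the kind of per-port backend that means.
It is not the real WebCore interface (GraphicsLayerCA.mm implements the
GraphicsLayer abstraction; check the actual header for the real methods), and
every name below is hypothetical:

// Rough sketch only -- not the real WebCore GraphicsLayer interface.
// The shape of the job: WebCore hands the layer its size, transform and
// (for video) the decoder's surface, and the port-specific backend forwards
// them to the native compositor (overlay plane, GLES scene graph, ...).
#include <cstdio>

struct FloatSize { float width, height; };
struct TransformationMatrix { double m[16]; };  // stand-in for a 4x4 matrix

// Stand-in for whatever handle the platform compositor exposes.
struct NativeLayerHandle { int id; };

class GraphicsLayerMyPort {
public:
    explicit GraphicsLayerMyPort(NativeLayerHandle native) : m_native(native) {}

    // Geometry and transform updates arrive from the render tree; pushing
    // them to the compositor keeps clipping and 2D/3D transforms in HW.
    void setSize(const FloatSize& size)
    {
        std::printf("layer %d: resize to %gx%g\n", m_native.id, size.width, size.height);
    }
    void setTransform(const TransformationMatrix&)
    {
        std::printf("layer %d: upload 4x4 transform\n", m_native.id);
    }

    // Video: attach the decoder's output surface directly (ideally still
    // YUV, converted by the display controller or 3D core), so the CPU
    // never copies or converts pixels.
    void setContentsToVideo(NativeLayerHandle videoSurface)
    {
        std::printf("layer %d: attach video surface %d\n", m_native.id, videoSurface.id);
    }

private:
    NativeLayerHandle m_native;
};

The point is that size, transform and video-surface changes flow from the
render tree into the backend, which hands them to the native compositor, so
the expensive work stays off the CPU.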

Does this handle the CSS transforms as well?

Yes. It also allows for 3D transforms, and hardware-accelerated transitions and
animations of certain CSS properties.

Simon

_______________________________________________
webkit-dev mailing list
[email protected]
http://lists.webkit.org/mailman/listinfo.cgi/webkit-dev
