On Wed, Jan 9, 2013 at 11:49 AM, Arvind R <arvin...@gmail.com> wrote:

> Hi all,
>
> My understanding is that emotion gets the video backend to render RGBA
> to the evas canvas that is then displayed by the ecore-evas backend.
> Correct?
>

Actually it also outputs YUV, which is converted to RGB by the CPU
(MMX/SSE) or the GPU (OpenGL).
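For reference, the CPU path boils down to a per-pixel conversion along these lines (a simplified scalar sketch using BT.601 full-range coefficients; the real MMX/SSE and GPU-shader paths vectorize this same math and the exact coefficients depend on the colorspace in use):

```c
#include <stdint.h>

/* Clamp an intermediate value to the 0..255 byte range. */
static int clamp_u8(double v)
{
    if (v < 0) return 0;
    if (v > 255) return 255;
    return (int)v;
}

/* Convert one YUV (BT.601, full-range) pixel to RGB.
 * A scalar sketch of the per-pixel math the SIMD/GPU paths batch up. */
static void yuv_to_rgb(int y, int u, int v, int *r, int *g, int *b)
{
    *r = clamp_u8(y + 1.402 * (v - 128));
    *g = clamp_u8(y - 0.344 * (u - 128) - 0.714 * (v - 128));
    *b = clamp_u8(y + 1.772 * (u - 128));
}
```

Doing this on every frame is exactly the cost you avoid when the video stays in YUV on a HW plane.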


> If so, would it be possible, for instance, using the xine backend to
> render directly to screen using whatever HW acceleration is available
> to it, and have the evas canvas as an 'underlay' to the video screen in
> order to trap events? This would mean modifying the emotion-xine
> module to be an interceptor in the xine pipeline instead of being a
> video_output driver.
>
> Feasible?
>

Yes, Cedric did this for GStreamer. Evas_Object_Image supports this via
Evas_Video_Surface, which you can use to hook in and change the underlying
backend. At the Evas level it draws an empty hole in the image region,
leaving it to the HW plane above or below to draw the video.
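In outline, the hookup looks like this (a sketch, not compiled here: the struct and `evas_object_image_video_surface_set()` are the Evas >= 1.1 video-surface API, but the `_plane_*` callbacks are hypothetical stand-ins for your xine/overlay code):

```c
#include <Evas.h>

/* Hypothetical callbacks that drive the HW overlay plane. */
static void _plane_move(void *data, Evas_Object *obj,
                        const Evas_Video_Surface *surf,
                        Evas_Coord x, Evas_Coord y)
{
   /* reposition the overlay to track the canvas object */
}

static void _plane_resize(void *data, Evas_Object *obj,
                          const Evas_Video_Surface *surf,
                          Evas_Coord w, Evas_Coord h)
{
   /* resize the overlay to match the canvas object */
}

static void _plane_show(void *data, Evas_Object *obj,
                        const Evas_Video_Surface *surf)
{
   /* unhide the overlay */
}

static void _plane_hide(void *data, Evas_Object *obj,
                        const Evas_Video_Surface *surf)
{
   /* hide the overlay */
}

static void _plane_update_pixels(void *data, Evas_Object *obj,
                                 const Evas_Video_Surface *surf)
{
   /* fallback: copy the current frame into the image's pixel buffer */
}

static void _setup_video_surface(Evas_Object *img)
{
   static Evas_Video_Surface surf;

   surf.version       = EVAS_VIDEO_SURFACE_VERSION;
   surf.move          = _plane_move;
   surf.resize        = _plane_resize;
   surf.show          = _plane_show;
   surf.hide          = _plane_hide;
   surf.update_pixels = _plane_update_pixels;
   surf.parent        = NULL;
   surf.data          = NULL; /* your backend context would go here */

   evas_object_image_video_surface_set(img, &surf);
}
```

The image object then still receives events as usual, while the actual pixels come from the plane your callbacks manage.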


-- 
Gustavo Sverzut Barbieri
http://profusion.mobi embedded systems
--------------------------------------
MSN: barbi...@gmail.com
Skype: gsbarbieri
Mobile: +55 (19) 9225-2202
_______________________________________________
enlightenment-users mailing list
enlightenment-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/enlightenment-users
