Hi all,

My understanding is that emotion has the video backend decode frames and
render them as RGBA into the evas canvas, which is then displayed by the
ecore-evas backend. Correct?
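
In code, I picture the pipeline roughly like this (a minimal sketch from
memory, so the exact calls may be off; "movie.avi" is a placeholder and
the "xine" module name assumes the xine backend is installed):

  #include <Ecore.h>
  #include <Ecore_Evas.h>
  #include <Evas.h>
  #include <Emotion.h>

  int main(void)
  {
     ecore_evas_init();

     /* ecore-evas owns the native window and the canvas */
     Ecore_Evas *ee = ecore_evas_new(NULL, 0, 0, 640, 480, NULL);
     Evas *canvas = ecore_evas_get(ee);

     /* emotion object: the video backend decodes into this, and
      * evas composites it like any other canvas object */
     Evas_Object *video = emotion_object_add(canvas);
     emotion_object_init(video, "xine");          /* or "gstreamer" */
     emotion_object_file_set(video, "movie.avi"); /* placeholder */
     evas_object_move(video, 0, 0);
     evas_object_resize(video, 640, 480);
     evas_object_show(video);
     emotion_object_play_set(video, EINA_TRUE);

     ecore_evas_show(ee);
     ecore_main_loop_begin();

     ecore_evas_shutdown();
     return 0;
  }

If that is right, every decoded frame takes the RGBA-into-canvas path
before it ever reaches the screen.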

If so, would it be possible, for instance, to have the xine backend
render directly to the screen using whatever HW acceleration is
available to it, with the evas canvas acting as an 'underlay' to the
video window in order to trap events? This would mean modifying the
emotion-xine module to be an interceptor in the xine pipeline instead
of being a video_output driver.
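
At the xine-lib level, the direct-to-screen path I have in mind looks
roughly like the usual x11_visual_t setup below. This is only a sketch
of plain xine-lib usage, not the proposed emotion-xine change; "xv" and
"movie.avi" are placeholders, and the callbacks are a guess at the
minimum the visual needs:

  #include <X11/Xlib.h>
  #include <xine.h>
  #include <unistd.h>

  static void dest_size_cb(void *data, int vw, int vh, double vpa,
                           int *dw, int *dh, double *dpa)
  {
     *dw = vw; *dh = vh; *dpa = vpa;
  }

  static void frame_output_cb(void *data, int vw, int vh, double vpa,
                              int *dx, int *dy, int *dw, int *dh,
                              double *dpa, int *wx, int *wy)
  {
     *dx = *dy = *wx = *wy = 0;
     *dw = vw; *dh = vh; *dpa = vpa;
  }

  int main(void)
  {
     XInitThreads(); /* xine decodes on its own threads */
     Display *dpy = XOpenDisplay(NULL);
     Window win = XCreateSimpleWindow(dpy, DefaultRootWindow(dpy),
                                      0, 0, 640, 480, 0, 0, 0);
     XMapWindow(dpy, win);
     XSync(dpy, False);

     xine_t *xine = xine_new();
     xine_init(xine);

     /* hand xine the drawable; the vo driver blits into the window
      * itself (HW-accelerated), no RGBA copy into a canvas */
     x11_visual_t vis = {
        .display = dpy, .screen = DefaultScreen(dpy), .d = win,
        .dest_size_cb = dest_size_cb,
        .frame_output_cb = frame_output_cb,
     };
     xine_video_port_t *vo =
        xine_open_video_driver(xine, "xv", XINE_VISUAL_TYPE_X11, &vis);
     xine_audio_port_t *ao = xine_open_audio_driver(xine, NULL, NULL);

     xine_stream_t *stream = xine_stream_new(xine, ao, vo);
     xine_open(stream, "movie.avi"); /* placeholder MRL */
     xine_play(stream, 0, 0);

     pause(); /* keep playing until killed */
     return 0;
  }

The question is whether emotion could drive something like that while
the evas canvas sits underneath the video window just to catch events.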

Feasible?

Arvind R.
