On Sun, Oct 6, 2013 at 9:04 AM, Chris Michael <devilho...@comcast.net> wrote:
> On 10/06/13 06:11, Cedric BAIL wrote:
>> On Sat, Oct 5, 2013 at 12:05 AM, Rafael Antognolli <antogno...@gmail.com> 
>> wrote:
>>> Example usage of what I have just committed (fixes and improvements
>>> for Evas_Video_Surface, and added Ecore_Wl_Subsurf) here:
>>>
>>> https://github.com/antognolli/buffer_object
>>>
>>> This is a helper, or a skeleton, for creating and setting up the image
>>> object that would be used with the buffers. It can be made more
>>> generic if necessary, thus allowing to use both Wayland buffers or X
>>> stuff. The code itself is inside buffer_object.c. Sample usage is
>>> inside main.c.
>> That's exactly the direction I wanted to take this code. Really
>> nice patch, thanks. The next improvement I was looking for was to
>> somehow use the pixel buffer directly when using OpenGL (a zero-copy
>> scheme); looking at your code, I think that in compositing mode we
>> are still doing a copy. Am I right?
>>
>>> Anyway, this can be added somewhere in EFL, I just don't know exactly
>>> where would be the best place... ideas?
>> That is indeed a good question. I guess the first place to use this is
>> somewhere in Emotion's gstreamer backend. I would even prefer to see
>> that feature working with the VLC backend, but I don't think there is a
>> way to make VLC output its pixels into a Wayland surface.
> Not currently :( And I would not count on one anytime soon :(
>
> https://trac.videolan.org/vlc/ticket/7936
>
> Although, if we had the pixels (I don't know the VLC code too well) then
> we should be able to slap those into a surface...

Well, in the generic backend, we create the pixel buffer where VLC is
going to decode the video, right? So it's basically the same thing: we
could create a wl_buffer, let VLC write into it, and then display it
as a subsurface if possible.

I was also thinking that if we add YUV as a buffer format for
wl_buffer (it's missing so far), then VLC could write the pixels in
YUV and we could leave the composition of the subsurface + main
surface to the compositor. That would speed things up, wouldn't it?

>>   Also the
>> gstreamer backend is easier to integrate, as it doesn't require
>> communicating with another process to get the pixels (not really a win
>> in my opinion, but in this case it will make life easier).

A wl_buffer can be (maybe *must* be) a shm buffer, so that should be
easy to handle even in VLC, I think.

>> Also I have been starting to think that maybe we should have a simpler
>> layer than Emotion that does all this buffer management and is used
>> by Emotion. That's just a thought right now.
>
>
> ------------------------------------------------------------------------------
> October Webinars: Code for Performance
> Free Intel webinars can help you accelerate application performance.
> Explore tips for MPI, OpenMP, advanced profiling, and more. Get the most from
> the latest Intel processors and coprocessors. See abstracts and register >
> http://pubads.g.doubleclick.net/gampad/clk?id=60134791&iu=/4140/ostg.clktrk
> _______________________________________________
> enlightenment-devel mailing list
> enlightenment-devel@lists.sourceforge.net
> https://lists.sourceforge.net/lists/listinfo/enlightenment-devel

-- 
Rafael Antognolli

