If you use the 'pixel data' of the wl_buffer when the evas image isn't visible, that data has to be mapped from a GPU memory address to a CPU memory address, and the performance will be bad.
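For illustration, the slow path being described would look roughly like this on the EFL side. This is a hypothetical sketch (the function name and ARGB frame format are assumptions) in which "pixels" has already been mapped or copied out of GPU memory, which is exactly the expensive step:

    #include <Evas.h>

    /* Push a raw ARGB frame into an evas image object.  The evas calls
     * themselves are cheap; producing "pixels" in CPU-addressable
     * memory is what hurts. */
    static Evas_Object *show_frame(Evas *evas, void *pixels, int w, int h)
    {
       Evas_Object *img = evas_object_image_filled_add(evas);

       evas_object_image_colorspace_set(img, EVAS_COLORSPACE_ARGB8888);
       evas_object_image_size_set(img, w, h);
       evas_object_image_data_set(img, pixels);
       evas_object_image_data_update_add(img, 0, 0, w, h);
       evas_object_resize(img, w, h);
       evas_object_show(img);
       return img;
    }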
If you look at the Weston code, it creates an EGLImage from the wl_buffer and then a sibling texture from the EGLImage, so the pixel data stays inside GPU memory without a copy. However, I realized that it is not possible to create an EGLImage from a wl_buffer on the Wayland client side. So we need another way to export the video surface for texture use, something like the EGL dma_buf extension (a sketch of this import path appears at the end of the thread):
http://cgit.freedesktop.org/mesa/mesa/commit/?id=0de013b61930505bbeaf094d079b566df18a0cf7

-----Original Message-----
From: Antognolli, Rafael
Sent: Tuesday, October 29, 2013 7:46 PM
To: Zhao, Halley; [email protected]
Subject: RE: subsurfaces and EFL

Hmm... I am not sure what you mean by "update by texture"... can you explain it a little more, please?

________________________________________
From: Zhao, Halley
Sent: Monday, October 21, 2013 12:38 AM
To: Antognolli, Rafael; [email protected]
Subject: RE: subsurfaces and EFL

When the evas image isn't visible, is it possible to update the evas image with a texture instead of raw data? The texture can be bound from the wl_buffer (as Weston does). Otherwise, the solution becomes much more complicated and less efficient: mapping video data out of GPU memory is usually not a good idea.

-----Original Message-----
From: Antognolli, Rafael
Sent: Friday, October 18, 2013 9:18 PM
To: Zhao, Halley; [email protected]
Subject: subsurfaces and EFL

Hi Halley,

The patches for subsurface support in EFL are integrated, and they should appear in the next image built for Tizen:IVI. So the example (buffer_object) that I sent you earlier should compile just fine; let me know if you have any problems.

Now, notice that in that example I use the wl_buffer (an shm buffer) in two ways, depending on whether the subsurface is visible or not: either attaching it to the subsurface, or setting its pixel data directly on the evas image object.

I was trying to modify your example to use this, but I am not sure whether I can get direct access to the pixel data of the wl_buffer that you provide from the gst plugin. Is that possible? That's part of what I need to have full control over the subsurface.

Another option would be to modify the example to attach 2 elements to the gstreamer pipeline that receive the same buffers through a "tee" component, and to switch which of them receives the buffers by adding "valve" components to both (see the pipeline sketch at the end of this thread). This way, when the subsurface is created, we set it as the window handle and turn on that part of the pipeline. When the subsurface is destroyed, we remove it as the window handle, turn off the video overlay element, and turn on another element that just decodes the video into a normal buffer, which can be used to feed the pixels to the image object.

Well, take a look at my example, see if you understand the way that subsurfaces can be used in EFL now, and we can discuss it more on Monday.

Regards,
Rafael
_______________________________________________
IVI mailing list
[email protected]
https://lists.tizen.org/listinfo/ivi
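For reference, here is a minimal sketch of the dma_buf route mentioned at the top of the thread: import an exported dma_buf fd as an EGLImage via EGL_EXT_image_dma_buf_import (the extension added in the Mesa commit linked above), then bind a sibling texture from it, which is the same EGLImage-plus-sibling-texture pattern Weston uses, but one a client is allowed to use. The function name and parameters are hypothetical, a single-plane format is assumed, and extension checks and error handling are omitted:

    #include <stdint.h>
    #include <EGL/egl.h>
    #include <EGL/eglext.h>
    #include <GLES2/gl2.h>
    #include <GLES2/gl2ext.h>

    static GLuint texture_from_dmabuf(EGLDisplay dpy, int dmabuf_fd,
                                      int width, int height, int stride,
                                      uint32_t drm_fourcc)
    {
       PFNEGLCREATEIMAGEKHRPROC create_image =
          (PFNEGLCREATEIMAGEKHRPROC)eglGetProcAddress("eglCreateImageKHR");
       PFNGLEGLIMAGETARGETTEXTURE2DOESPROC image_target_texture =
          (PFNGLEGLIMAGETARGETTEXTURE2DOESPROC)
             eglGetProcAddress("glEGLImageTargetTexture2DOES");

       /* Wrap the exported dma_buf fd in an EGLImage; the pixels stay
        * in GPU memory, no copy to a CPU address happens here. */
       const EGLint attribs[] = {
          EGL_WIDTH, width,
          EGL_HEIGHT, height,
          EGL_LINUX_DRM_FOURCC_EXT, (EGLint)drm_fourcc,
          EGL_DMA_BUF_PLANE0_FD_EXT, dmabuf_fd,
          EGL_DMA_BUF_PLANE0_OFFSET_EXT, 0,
          EGL_DMA_BUF_PLANE0_PITCH_EXT, stride,
          EGL_NONE
       };
       EGLImageKHR image = create_image(dpy, EGL_NO_CONTEXT,
                                        EGL_LINUX_DMA_BUF_EXT, NULL, attribs);

       /* Create the "sibling" texture backed by the same buffer. */
       GLuint tex;
       glGenTextures(1, &tex);
       glBindTexture(GL_TEXTURE_2D, tex);
       image_target_texture(GL_TEXTURE_2D, (GLeglImageOES)image);
       return tex;
    }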

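Finally, a minimal sketch of the tee/valve pipeline Rafael proposes above, written in GStreamer 1.0 syntax. The URI, the valve names, and the choice of waylandsink/appsink for the two branches are assumptions, and the window-handle plumbing (GstVideoOverlay on the sink) is omitted:

    #include <gst/gst.h>

    int main(int argc, char **argv)
    {
       GError *err = NULL;

       gst_init(&argc, &argv);

       /* One decoded stream split by a tee into two branches: a Wayland
        * video sink for the subsurface case, and an appsink that would
        * hand raw frames to the application for the evas image case.
        * Exactly one valve is open ("drop" is false) at a time. */
       GstElement *pipeline = gst_parse_launch(
          "uridecodebin uri=file:///tmp/video.webm ! tee name=t "
          "t. ! queue ! valve name=overlay-valve drop=false ! waylandsink "
          "t. ! queue ! valve name=raw-valve drop=true ! videoconvert ! "
          "appsink name=raw-sink", &err);
       if (!pipeline) {
          g_printerr("failed to build pipeline: %s\n", err->message);
          return 1;
       }

       gst_element_set_state(pipeline, GST_STATE_PLAYING);

       /* When the subsurface is destroyed, switch branches by toggling
        * the valves (and the reverse when it is created again). */
       GstElement *ov = gst_bin_get_by_name(GST_BIN(pipeline), "overlay-valve");
       GstElement *rv = gst_bin_get_by_name(GST_BIN(pipeline), "raw-valve");
       g_object_set(ov, "drop", TRUE, NULL);
       g_object_set(rv, "drop", FALSE, NULL);

       g_main_loop_run(g_main_loop_new(NULL, FALSE));
       return 0;
    }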