On 05/25/16 at 09:45am, Víctor M. Jáquez L. wrote:
> On 05/25/16 at 01:22am, Anon wrote:
> > Hi All,
> >
> > I want to implement the following use case on an Intel Haswell CPU with HD
> > graphics:
> >
> > offscreen OpenGL rendering to an OpenGL 2D texture -> use
> > EGL_MESA_image_dma_buf_export to export the texture as a prime fd ->
> > vaCreateSurfaces from the prime fd -> use VA-API to hardware-accelerate
> > H.264 encoding
> >
> > Is this supported by the latest libva and libva-intel-driver release
> > (i.e., 1.7.0)? If so, is there any example I can start with?
> >
> > I did read the sample application h264encode.c and thought it might be a
> > good starting point. However, the first major issue I encountered is that
> > when creating a context through vaCreateContext, a number of pre-allocated
> > VA surfaces are required and thus statically associated with the new VA
> > context. With a prime fd, however, the fd changes, so a new VA surface is
> > created for every new prime fd. I don't see any API that can be used to
> > dynamically add/remove VA surfaces after the context is created. Can you
> > please give me some suggestions?
>
> You can look at the gstreamer-vaapi[1] code, which supports it, though right
> now it is under discussion and being rewritten.
>
> 1. https://cgit.freedesktop.org/gstreamer/gstreamer-vaapi
Ugh... morning coffee is not kicking in yet.

This specific pipeline is not supported out of the box; perhaps by hacking the
gldownload element to export dmabufs from a texture grabbed by glsrcbin. I
don't know the details.

Anyway, from the VA side, AFAIK, encoders/decoders need to pre-allocate the
output surfaces, not the input ones. You can associate the prime fd with a
surface and encode it. Just don't release the fd until the surface has been
processed.

vmjl

_______________________________________________
Libva mailing list
[email protected]
https://lists.freedesktop.org/mailman/listinfo/libva
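[Editor's note] The advice above ("associate the prime fd with a surface and encode it") maps onto libva's DRM PRIME memory-type import in vaCreateSurfaces. The following is a minimal sketch, not a tested implementation: it assumes an NV12 buffer, and the function name `import_prime_fd` plus the `pitch`/`offset_uv` parameters are placeholders for values the EGL/dmabuf export side must supply. Error checking is omitted for brevity, and the code requires a working VA display and driver to actually run.

```c
#include <stdint.h>
#include <va/va.h>

/* Wrap an exported dmabuf (prime fd) in a VASurface so the encoder can
 * read from it as an input picture. Assumes a 2-plane NV12 layout whose
 * pitch and UV-plane offset come from the exporter's metadata. */
VASurfaceID import_prime_fd(VADisplay dpy, int prime_fd,
                            unsigned width, unsigned height,
                            unsigned pitch, unsigned offset_uv)
{
    uintptr_t buffer = (uintptr_t)prime_fd;

    VASurfaceAttribExternalBuffers ext = {
        .pixel_format = VA_FOURCC_NV12,
        .width        = width,
        .height       = height,
        .data_size    = pitch * height * 3 / 2,   /* NV12: Y + interleaved UV */
        .num_planes   = 2,
        .pitches      = { pitch, pitch },
        .offsets      = { 0, offset_uv },
        .buffers      = &buffer,
        .num_buffers  = 1,
    };

    VASurfaceAttrib attribs[2] = {
        { .type  = VASurfaceAttribMemoryType,
          .flags = VA_SURFACE_ATTRIB_SETTABLE,
          .value = { .type = VAGenericValueTypeInteger,
                     .value.i = VA_SURFACE_ATTRIB_MEM_TYPE_DRM_PRIME } },
        { .type  = VASurfaceAttribExternalBufferDescriptor,
          .flags = VA_SURFACE_ATTRIB_SETTABLE,
          .value = { .type = VAGenericValueTypePointer,
                     .value.p = &ext } },
    };

    VASurfaceID surface = VA_INVALID_SURFACE;
    vaCreateSurfaces(dpy, VA_RT_FORMAT_YUV420, width, height,
                     &surface, 1, attribs, 2);

    /* Per the advice above: keep prime_fd open until the surface has been
     * processed (e.g. after vaSyncSurface), then vaDestroySurfaces(). */
    return surface;
}
```

Since the fd changes every frame, one approach is to create such a wrapper surface per incoming fd and destroy it after encoding, while the vaCreateContext render targets remain the separately pre-allocated output/reference surfaces the reply mentions.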
