On 14/06/16 06:02, Gwenole Beauchesne wrote:
> Hi,
>
> 2016-06-10 10:49 GMT+02:00 Mark Thompson <[email protected]>:
>> On 10/06/16 08:06, Anton Khirnov wrote:
>>> Quoting Mark Thompson (2016-06-10 01:25:58)
>>>> Adds a general mechanism for setting driver quirks, and detects the
>>>> driver name to set them.
>>>>
>>>> When the i965 driver is being used, destroy all parameter buffers
>>>> explicitly after use despite passing them to vaRenderPicture().
>>>> ---
>>>> An alternative approach to solving the same problem as the previous
>>>> patch. Maybe this is actually better once we start detecting the
>>>> driver anyway?
>>>
>>> Yes, this looks much better to me.
>>
>> Hmm, ok. Thinking a bit further, maybe this should be available a bit
>> more generally, because
>> <https://git.libav.org/?p=libav.git;a=blob;f=libavcodec/vaapi.c;hb=HEAD#l180>
>> should also use it - maybe a property of the device rather than inside
>> the encoder?
>>
>> A flags field in AVVAAPIDeviceContext would seem to do the right thing,
>> but it is unclear who would initialise it - the user, or inside
>> av_hwdevice_ctx_init().
>
> Since most VA drivers implemented the incorrect behaviour, IMHO a better
> approach is to use that flag to actually enable the correct behaviour.
> That way, you can automatically flag correct VA drivers, after testing,
> and let users set the flag themselves if they have or develop a correct
> driver. I believe the gallium VA driver implements the correct
> behaviour, though.
I don't think this is appropriate. If a new implementer turns up with a
correct implementation as documented, they shouldn't need to add themselves
to a special list of correct implementations to make this work.

>> I should probably acquire a suitable AMD card so that I can actually
>> test this on more than one target.

I did; I will investigate all of this with the gallium driver soon.

>>> Btw, do the intel driver devs know about this problem?
>>
>> Yes. I don't know if it's been considered recently, but see
>> <https://lists.freedesktop.org/archives/libva/2012-July/000961.html>
>> for some hand-wringing a while ago.
>
> Updates: that was further discussed during the Linux "port" of the
> proprietary VA driver. It sounded like the trend was at least to make
> the VA buffer live until vaEndPicture(). A rewording of
> vaRenderPicture() was also considered. There are also VA buffers that
> could be used for statistics, so an implicit destroy of everything was
> not very useful, as it would have required users to know whether each
> kind of buffer is destroyed explicitly or implicitly. That was not a
> convincing approach.

Is whether or not the buffer stays live until vaEndPicture() really
relevant? My reading is that the user hands the buffer off to libva with
vaRenderPicture() and should never touch it again after that point,
regardless of what actually happens internally. Buffers which explicitly
want to bring information back (such as statistics) then want to be
referred to by some other route, like coded_buf or macroblock_info in the
H.264 encode parameters.

>> Also compare Intel's libyami behaviour:
>> <https://github.com/01org/libyami/blob/0aef8be5122f7ef917258aafcdb44a22bdb0c1c8/vaapi/vaapipicture.cpp#L65>
>> <https://github.com/01org/libyami/blob/0aef8be5122f7ef917258aafcdb44a22bdb0c1c8/vaapi/VaapiBuffer.cpp#L86>
>
> The reference and original implementation remains libgstvaapi. And VA
> buffers were destroyed just after vaRenderPicture() because that was the
> least dirty solution, and worked for all drivers, at that time.

As long as everything is single-threaded, sure.

Thank you for your thoughts on this matter :)

- Mark
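
For concreteness, here is a rough sketch of the sort of quirk detection and
explicit cleanup being discussed. It is not the actual patch: the flag name
VAAPI_QUIRK_DESTROY_PARAM_BUFFERS, the helper functions and the exact
vendor-string match are all made up for illustration.

    #include <string.h>
    #include <va/va.h>

    /* Hypothetical quirk flag: the driver does not take ownership of
     * parameter buffers passed to vaRenderPicture(), so the client has
     * to destroy them explicitly afterwards. */
    #define VAAPI_QUIRK_DESTROY_PARAM_BUFFERS (1 << 0)

    static unsigned int detect_driver_quirks(VADisplay display)
    {
        unsigned int quirks = 0;
        const char *vendor = vaQueryVendorString(display);

        /* The i965 driver can be recognised from its vendor string;
         * matching on "i965" is an assumption here. */
        if (vendor && strstr(vendor, "i965"))
            quirks |= VAAPI_QUIRK_DESTROY_PARAM_BUFFERS;

        return quirks;
    }

    static VAStatus render_param_buffers(VADisplay display,
                                         VAContextID context,
                                         VABufferID *buffers,
                                         int nb_buffers,
                                         unsigned int quirks)
    {
        VAStatus vas = vaRenderPicture(display, context,
                                       buffers, nb_buffers);

        /* Per the documented behaviour the driver owns the buffers from
         * this point on and the client never touches them again; with
         * the quirk set, destroy them here so they are not leaked. */
        if (quirks & VAAPI_QUIRK_DESTROY_PARAM_BUFFERS) {
            int i;
            for (i = 0; i < nb_buffers; i++)
                vaDestroyBuffer(display, buffers[i]);
        }

        return vas;
    }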
