On 2019-05-12 19:28, Mark Thompson wrote:
> On 09/05/2019 20:38, Jonas Karlman wrote:
>> Hello,
>>
>> When a multi-layer AVDRMFrameDescriptor is used to describe a frame,
>> the overall frame format is missing, and applications need to deduce
>> the frame's DRM_FORMAT_* from sw_format or the layers' formats.
>>
>> This patchset adds an AVDRMFrameDescriptor.format field to remove any
>> ambiguity about which frame format a multi-layer descriptor describes.
>>
>> Kodi has until now only supported single-layer AVDRMFrameDescriptors.
>> When adding support for multi-layer frame descriptors [1], we did not
>> want to try to deduce the frame format ourselves, hence this patchset.
>>
>> [1] https://github.com/xbmc/xbmc/pull/16102
>>
>> Patch 1 adds a new field, format, to the AVDRMFrameDescriptor struct.
>> Patches 2-4 add code to set the new format field.
>>
>> Regards,
>> Jonas
>>
>> ---
>>
>> Jonas Karlman (4):
>>   hwcontext_drm: Add AVDRMFrameDescriptor.format field
>>   hwcontext_vaapi: Set AVDRMFrameDescriptor.format in map_from
>>   rkmppdec: Set AVDRMFrameDescriptor.format
>>   kmsgrab: Set AVDRMFrameDescriptor.format
>>
>>  doc/APIchanges              |  3 +++
>>  libavcodec/rkmppdec.c       |  1 +
>>  libavdevice/kmsgrab.c       |  1 +
>>  libavutil/hwcontext_drm.h   |  4 ++++
>>  libavutil/hwcontext_vaapi.c | 38 +++++++++++++++++++++++++++++++++++++
>>  libavutil/version.h         |  4 ++--
>>  6 files changed, 49 insertions(+), 2 deletions(-)
> Can you argue why this case should be put in FFmpeg rather than constructing 
> the format you want in the client code?
>
> The intent of the existing format information is that each layer is 
> definitely usable as the specific format stated if the device supports that 
> format and format modifier.  That isn't true for the top-level format - some 
> devices enforce additional constraints which aren't visible.  For example, if 
> you take an R8 + GR88 frame from an AMD device, it probably won't work as 
> NV12 with Intel video hardware because there the whole frame is required to 
> be in one object (well, not quite - actually the offset from the luma plane 
> to the chroma plane just has some relatively small limit; in practice this 
> gets enforced as single-object, though), but it will work perfectly well as 
> R8 and GR88 planes.

The reason I wanted to offload this to FFmpeg is that the top-level format 
is already known there, while the application would have to guess or calculate 
the correct format to use when importing the video buffer into a DRM plane.

The main issue we are facing is that the kernel API does not distinguish 
between single- and multi-layer/object buffers when importing a video buffer 
into a framebuffer: drmModeAddFB2WithModifiers expects the top-level format 
regardless of whether the planes come from multiple objects or not.
(The kernel driver may still enforce additional constraints, e.g. on Rockchip 
the luma plane must be contiguous after the chroma plane, and Allwinner has 
limits similar to Intel's: the chroma and luma planes must be in close 
proximity.)
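
As a rough sketch of what that import looks like from the application side 
(the structs are simplified stand-ins for the ones in 
libavutil/hwcontext_drm.h, and the object index stands in for the GEM handle 
that drmPrimeFDToHandle() would return for the object's fd):

```c
#include <stdint.h>

/* Simplified stand-ins for the AVDRMFrameDescriptor family of structs
 * in libavutil/hwcontext_drm.h; real code includes that header. */
typedef struct { int object_index; uint32_t offset, pitch; } Plane;
typedef struct { uint32_t format; int nb_planes; Plane planes[4]; } Layer;
typedef struct { int nb_layers; Layer layers[4]; } Frame;

/* drmModeAddFB2WithModifiers() takes one flat set of up to four
 * handles/pitches/offsets plus a single top-level format, no matter how
 * many layers or objects the descriptor has, so the application must
 * flatten the per-layer planes first.  Returns the number of planes
 * filled, or -1 on overflow. */
static int flatten_for_addfb2(const Frame *f, uint32_t handles[4],
                              uint32_t pitches[4], uint32_t offsets[4])
{
    int n = 0;
    for (int i = 0; i < f->nb_layers; i++)
        for (int j = 0; j < f->layers[i].nb_planes; j++) {
            if (n == 4)
                return -1;
            /* Real code maps object_index to a GEM handle via
             * drmPrimeFDToHandle() on the object's dma-buf fd. */
            handles[n] = (uint32_t)f->layers[i].planes[j].object_index;
            pitches[n] = f->layers[i].planes[j].pitch;
            offsets[n] = f->layers[i].planes[j].offset;
            n++;
        }
    return n;
}
```

The flat handles/pitches/offsets arrays plus the single top-level format are 
then exactly what drmModeAddFB2WithModifiers() expects, whether the planes 
came from one object or several.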

In order to support HDR video using a framebuffer tied to a DRM plane on 
Intel, the top-level format passed to drmModeAddFB2WithModifiers must be P010, 
even though vaExportSurfaceHandle exports the video buffer as two layers 
(R16 + RG1616).
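
For illustration, the deduction an application is left doing today is roughly 
the following (fourcc values spelled out locally rather than including 
<drm_fourcc.h>, and only the two layer pairs mentioned in this thread are 
handled):

```c
#include <stdint.h>

/* Local copy of the drm_fourcc.h fourcc macro so the sketch stands
 * alone; real code uses the DRM_FORMAT_* constants directly. */
#define FOURCC(a, b, c, d) \
    ((uint32_t)(a) | ((uint32_t)(b) << 8) | \
     ((uint32_t)(c) << 16) | ((uint32_t)(d) << 24))

#define FMT_R8     FOURCC('R', '8', ' ', ' ')
#define FMT_GR88   FOURCC('G', 'R', '8', '8')
#define FMT_R16    FOURCC('R', '1', '6', ' ')
#define FMT_RG1616 FOURCC('R', 'G', '3', '2')
#define FMT_NV12   FOURCC('N', 'V', '1', '2')
#define FMT_P010   FOURCC('P', '0', '1', '0')

/* Deduce the top-level format to pass to drmModeAddFB2WithModifiers()
 * from the per-layer formats of a luma + chroma descriptor.
 * Returns 0 when the pair is not recognized. */
static uint32_t deduce_toplevel_format(uint32_t luma, uint32_t chroma)
{
    if (luma == FMT_R8  && chroma == FMT_GR88)   return FMT_NV12;
    if (luma == FMT_R16 && chroma == FMT_RG1616) return FMT_P010;
    return 0;
}
```

Every client importing multi-layer frames would have to carry a table like 
this; the proposed format field moves it into FFmpeg once.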

Changing vaapi_map_to_drm_esh in hwcontext_vaapi to try 
VA_EXPORT_SURFACE_COMPOSED_LAYERS first and fall back to 
VA_EXPORT_SURFACE_SEPARATE_LAYERS could also work for our use case; the Intel 
VAAPI driver supports both export methods, while AMD only supports the 
separate-layers method.
From what I understand, support for composed layers won't be added to AMD, as 
the buffer technically consists of multiple objects. I am not even sure 
whether AMD supports NV12 as a DRM plane format.
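
A minimal sketch of that fallback logic (flag values copied from va/va.h; 
export_fn and the two driver stubs are hypothetical stand-ins for 
vaExportSurfaceHandle() and real driver behaviour, so the logic can be 
exercised without a VADisplay):

```c
#include <stdint.h>

/* Export flag values as defined in va/va.h. */
#define VA_STATUS_SUCCESS                 0x00000000
#define VA_EXPORT_SURFACE_SEPARATE_LAYERS 0x0004
#define VA_EXPORT_SURFACE_COMPOSED_LAYERS 0x0008

/* Stand-in for vaExportSurfaceHandle(), reduced to the parts that
 * matter for the fallback decision. */
typedef int (*export_fn)(uint32_t flags, void *descriptor);

/* Try the composed-layers export first; if the driver rejects it,
 * retry with separate layers.  Reports which method succeeded. */
static int export_with_fallback(export_fn export, void *desc,
                                uint32_t *used_flags)
{
    *used_flags = VA_EXPORT_SURFACE_COMPOSED_LAYERS;
    int vas = export(*used_flags, desc);
    if (vas != VA_STATUS_SUCCESS) {
        *used_flags = VA_EXPORT_SURFACE_SEPARATE_LAYERS;
        vas = export(*used_flags, desc);
    }
    return vas;
}

/* Hypothetical drivers: one supporting both export methods (as the
 * Intel driver does) and one supporting only separate layers (as
 * AMD's does). */
static int stub_both(uint32_t flags, void *desc)
{
    (void)flags; (void)desc;
    return VA_STATUS_SUCCESS;
}

static int stub_separate_only(uint32_t flags, void *desc)
{
    (void)desc;
    return flags == VA_EXPORT_SURFACE_SEPARATE_LAYERS
        ? VA_STATUS_SUCCESS : -1;
}
```

With this shape, applications on Intel would get the composed layout and its 
top-level format for free, while AMD would transparently keep working through 
the separate-layers path.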

Regards,
Jonas

>
> Thanks,
>
> - Mark

_______________________________________________
ffmpeg-devel mailing list
ffmpeg-devel@ffmpeg.org
https://ffmpeg.org/mailman/listinfo/ffmpeg-devel

To unsubscribe, visit link above, or email
ffmpeg-devel-requ...@ffmpeg.org with subject "unsubscribe".
