Re: [Libav-user] HW Decode d3d11va Failing for VP9/AV1 Videos

2024-03-13 Thread Mark Thompson

On 05/03/2024 15:45, Sebastiaan Geus via Libav-user wrote:

Hi,

I am using d3d11va hardware acceleration for decoding videos in Unity. I use 
the d3d11 pix_fmt to copy the output texture directly to Unity.

This works fine for h264/h265 videos, but both vp9 and av1 videos produce: 
"Static surface pool size exceeded." and "get_buffer() failed" on decode.

Here is a log saved from my application; lines with a @ prefix are from 
the av_log_set_callback.

Query available decoder id.

Decoder does not exist.

Using d3d11va HWDecoder

config loading success.

USE_TCP=false

BUFF_VIDEO_MAX=18

BUFF_AUDIO_MAX=128

SEEK_ANY=false

@ Opening 'C:\FFmpeg\bin\Download\out9.webm' for reading

@ Setting default whitelist 'file,crypto,data'

@ Probing h263 score:25 size:2048

@ Probing matroska,webm score:100 size:2048

@ Format matroska,webm probed with size=2048 and score=100

@ st:0 removing common factor 100 from timebase

Opened file: (C:\FFmpeg\bin\Download\out9.webm).

@ Before avformat_find_stream_info() pos: 319 bytes read:32768 seeks:0 
nb_streams:1

@ Format yuv420p chosen by get_format().

@ All info found

@ stream 0: start_time: 0 duration: NOPTS

@ format: start_time: 0 duration: 9.92 (estimate from stream) bitrate=144 kb/s

@ After avformat_find_stream_info() pos: 5130 bytes read:32768 seeks:0 frames:1

duration= 992.00

decoding video using vp9

Using d3d11 as the HW pixel format

Using d3d11va as the HW Device Type

HW Decoder initializing!

@ Selecting d3d11va adapter 0

@ Using device 10de:1e07 (NVIDIA GeForce RTX 2080 Ti).

audio stream not found.

Initialization of decoders complete!

nativeIsVideoEnabled: true

nativeIsAudioEnabled: false

@ Format d3d11 chosen by get_format().

@ Format d3d11 requires hwaccel initialisation.

@ Decoder GUIDs reported as supported:

@ {86695f12-340e-4f04-9fd3-9253dd327460}@  103@  106@

@ {ee27417f-5e28-4e65-beea-1d26b508adc9}@  103@  106@

@ {6f3ec719-3735-42cc-8063-65cc3cb36616}@  103@  106@

@ {1b81bea4-a0c7-11d3-b984-00c04f2e73c5}@  103@  106@

@ {1b81bea3-a0c7-11d3-b984-00c04f2e73c5}@  103@  106@

@ {32fcfe3f-de46-4a49-861b-ac71110649d5}@  103@  106@

@ {d79be8da-0cf1-4c81-b82a-69a4e236f43d}@  103@  106@

@ {f9aaccbb-c2b6-4cfc-8779-5707b1760552}@  103@  106@

@ {1b81be68-a0c7-11d3-b984-00c04f2e73c5}@  103@  106@

@ {5b11d51b-2f4c-4452-bcc3-09f2a1160cc0}@  103@  106@

@ {107af0e0-ef1a-4d19-aba8-67a163073d13}@  104@  106@

@ {20bb8b0a-97aa-4571-8e99-64e60606c1a6}@  104@  106@

@ {15df9b21-06c4-47f1-841e-a67c97d7f312}@  103@  106@

@ {efd64d74-c9e8-41d7-a5e9-e9b0e39fa319}@  103@  106@

@ {ed418a9f-010d-4eda-9ae3-9a65358d8d2e}@  103@  106@

@ {9947ec6f-689b-11dc-a320-0019dbbc4184}@  103@  106@

@ {33fcfe41-de46-4a49-861b-ac71110649d5}@  103@  106@  107@

@ {463707f8-a1d0-4585-876d-83aa6d60b89e}@  103@  106@

@ {a4c749ef-6ecf-48aa-8448-50a7a1165ff7}@  104@  106@

@ {dda19dc7-93b5-49f5-a9b3-2bda28a2ce6e}@  104@  106@

@ {6affd11e-1d96-42b1-a215-93a31f09a53d}@  103@  106@

@ {914c84a3-4078-4fa9-984c-e2f262cb5c9c}@  103@  106@

@ {8a1a1031-29bc-46d0-a007-e9b092ca6767}@  103@  106@

@ Static surface pool size exceeded.

@ get_buffer() failed

Sendpacket produced Cannot allocate memory

Av Error

@ Static surface pool size exceeded.

@ get_buffer() failed

Sendpacket produced Cannot allocate memory

Av Error

@ Static surface pool size exceeded.

The errors persist with the latest shared build and with release 6.1.

Can someone point me in the direction of how to fix this?


D3D11 decoding requires that output is an array texture with a size fixed in 
advance.  It sounds like you have not told the decoder that you are going to 
hold on to multiple frames in the output, so it has made the array texture too 
small.

You probably want to set AVCodecContext.extra_hw_frames to tell the decoder how 
many frames you are going to hold on to.
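For illustration, a minimal sketch of that setup (assuming `decoder` and `dec_ctx` are the AVCodec and AVCodecContext already created elsewhere; the count of 10 is an illustrative guess, not a recommendation - size it to however many output frames the application actually holds):

```c
/* Reserve extra surfaces in the decoder's fixed-size D3D11 array
 * texture for frames the application keeps after decoding
 * (error handling omitted). */
AVBufferRef *hw_device_ctx = NULL;
av_hwdevice_ctx_create(&hw_device_ctx, AV_HWDEVICE_TYPE_D3D11VA,
                       NULL, NULL, 0);
dec_ctx->hw_device_ctx   = av_buffer_ref(hw_device_ctx);
dec_ctx->extra_hw_frames = 10;    /* must be set before avcodec_open2() */
avcodec_open2(dec_ctx, decoder, NULL);
```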

Thanks,

- Mark
___
Libav-user mailing list
Libav-user@ffmpeg.org
https://ffmpeg.org/mailman/listinfo/libav-user

To unsubscribe, visit link above, or email
libav-user-requ...@ffmpeg.org with subject "unsubscribe".


Re: [Libav-user] Format requires hwaccel initialisation

2020-02-09 Thread Mark Thompson
On 08/02/2020 22:51, Philippe Noël wrote:
> Hello,
> 
> I'm trying to do the vaapi/d3d11va/videotoolbox hardware accel for 
> AV_CODEC_ID_H264. I'm following the libav example, but when I init it gives 
> me "requires waccel initialisation" although I am creating the hw device w/ 
> av_hwdevice_ctx_create. What am I missing?
> 
> Format videotoolbox_vld chosen by get_format().
> 
> Error found
> 
> Format videotoolbox_vld requires hwaccel initialisation.

This isn't an error - it's a debug message noting that the decoder needs some 
system-specific initialisation to use the hardware, which is true.

Whatever is going wrong must be somewhere after that.

- Mark

Re: [Libav-user] AMD AMF decode?

2020-02-01 Thread Mark Thompson
On 28/01/2020 05:02, Philippe Noël wrote:
> Hello,
> 
> I read from the FFmpeg wiki that AMD AMF is only supported for encode, not 
> decode. Any idea if there are plans for this to be supported at all 
> eventually?

Unlikely.  All of the relevant hardware decode features and interop are fully 
supported on AMD devices via the more standard platform APIs (DXVA/VAAPI), so 
there isn't any reason to add special code for the proprietary AMF interface.

- Mark

Re: [Libav-user] maxhap: Set SEI unregistered user message using H264MetadataContext

2018-12-20 Thread Mark Thompson
On 19/12/2018 16:21, Max Ashton wrote:
> Hi all and thanks for reading!
> 
> I'm trying to set the SEI unregistered user message on a per video frame 
> basis. I notice that libav has a H264MetadataContext structure which contains 
> a const char *sei_user_data field. This seems to be exactly what I'm looking 
> for. After poking around in the ffmpeg code I notice this structure is 
> wrapped within the private data of the AVBSFContext. My knowledge is 
> extremely limited so at this point I'm looking for an example or explanation 
> on how to correctly access the H264MetadataContext structure. I presume I 
> need a bit stream filter (based on the naming), but can't find any examples 
> of setting the H264MetadataContext.
> 
> Can anyone help me with an explanation, code snippet or point me to an 
> example I might have missed/overlooked?
> 
> Any general advice would also be appreciated. I have checked the few similar 
> questions on stackoverflow, they don't seem to have any solid answers or 
> explanations though (maybe due to my lack of understanding).

You're looking at internal code inside the h264_metadata BSF.  It only supports 
inserting the same thing on all IRAP access units - the intended use-case is 
adding global metadata (such as fixing up odd bugs like the x264 4:4:4 8x8 
problem).  If you only want to use it for that, then you can make an instance 
of the h264_metadata BSF, set the sei_user_data option on it, and pass packets 
through it with the external BSF API.
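As a sketch, the external-BSF path looks something like this (error checking omitted; the UUID and payload string are illustrative, and `stream` is assumed to be the AVStream whose packets will be filtered):

```c
const AVBitStreamFilter *f = av_bsf_get_by_name("h264_metadata");
AVBSFContext *bsf = NULL;

av_bsf_alloc(f, &bsf);
avcodec_parameters_copy(bsf->par_in, stream->codecpar);
bsf->time_base_in = stream->time_base;

/* sei_user_data takes "UUID+string": 32 hex digits (hyphens allowed),
 * then '+', then the payload text */
av_opt_set(bsf->priv_data, "sei_user_data",
           "086f3693-b7b3-4f2c-9653-21492feee5b8+hello world", 0);
av_bsf_init(bsf);

/* per packet: av_bsf_send_packet(bsf, pkt);
 *             while (av_bsf_receive_packet(bsf, pkt) == 0) { ... } */
```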

If you want something more, there isn't any external API to do it.  The 
internal CBS API does offer complete decomposition of the coded bitstream, 
including the SEI user data, and can be used to implement whatever you like 
there (indeed, the h264_metadata BSF uses it).  That would require editing the 
FFmpeg source, though, and there is no guarantee of API stability for internals 
like that.  If you want to go that way then the easiest approach is probably to 
hack something into the h264_metadata BSF, though making another BSF or just 
adding new external API calls are equally possible.

- Mark

Re: [Libav-user] customize d3d11va decoder

2018-12-17 Thread Mark Thompson
On 14/12/2018 21:31, Benjamin de la Fuente Ranea wrote:
> Hello,
> 
> I'm using ffmpeg to decode mp4 videos in hwaccel mode with d3d11va and I'm 
> trying to draw the decoded surface (NV12 format) to the screen using DX11, 
> without copying the surface to CPU and upload again to another texture. For 
> this I need to add D3D11_BIND_SHADER_RESOURCE to 
> hw_frames_ctx->hwctx->BindFlags. These flags are used to create the internal 
> Texture2DArray with all the surfaces needed to decode by ffmpeg, but I can't 
> find a way to do it without modifying ffmpeg source code.
> 
> The way I modify ffmpeg is removing the const qualifier to the declaration 
> and definition of the ff_hwcontext_type_d3d11va struct in hwcontext_d3d11va.c 
> and hwcontext_internal.h, so in my user code I can change all the hooks, in 
> particular "frames_init" function pointer to a custom function that add the 
> D3D11_BIND_SHADER_RESOURCE to the BindFlags and then calls the original 
> d3d11va_frames_init function.
> 
> I'm not sure if there is any other better way to do this. Does anyone knows 
> something that can be useful?

Presumably you are currently using the decoder by setting only the 
hw_device_ctx field in AVCodecContext to a suitable device, and letting it do 
allocation internally?

If you need to allocate with some non-default parameters then you want to set 
the hw_frames_ctx field instead.

If you only want to set the BindFlags, this is provided in the frames context 
creation through the field you note above in AVD3D11VAFramesContext.  Generally 
the way to do this is to create a new hwframe context with the desired 
parameters (including the hwctx structure with your BindFlags) inside the 
get_format callback, then set that as hw_frames_ctx for the decoder to use.
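A sketch of such a get_format callback (the pool size and sw_format are illustrative assumptions; real code needs error handling and should size the pool for the decoder's reference frames plus any frames the application holds):

```c
static enum AVPixelFormat get_format(AVCodecContext *avctx,
                                     const enum AVPixelFormat *fmts)
{
    for (const enum AVPixelFormat *p = fmts; *p != AV_PIX_FMT_NONE; p++) {
        if (*p != AV_PIX_FMT_D3D11)
            continue;
        AVBufferRef *frames_ref = av_hwframe_ctx_alloc(avctx->hw_device_ctx);
        AVHWFramesContext *fc   = (AVHWFramesContext *)frames_ref->data;
        AVD3D11VAFramesContext *hwctx = fc->hwctx;

        fc->format            = AV_PIX_FMT_D3D11;
        fc->sw_format         = AV_PIX_FMT_NV12;
        fc->width             = avctx->coded_width;
        fc->height            = avctx->coded_height;
        fc->initial_pool_size = 20;   /* decoder refs + frames you hold */
        hwctx->BindFlags = D3D11_BIND_DECODER | D3D11_BIND_SHADER_RESOURCE;

        if (av_hwframe_ctx_init(frames_ref) < 0) {
            av_buffer_unref(&frames_ref);
            break;
        }
        avctx->hw_frames_ctx = frames_ref;
        return AV_PIX_FMT_D3D11;
    }
    return AV_PIX_FMT_NONE;
}
```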

That still makes the array texture internally with the flags you specify - if 
you want even more control, you can make the array texture yourself with 
arbitrary parameters and set it in hwctx so that it gets wrapped by the frames 
context.  (Though note that some parameters can make the textures unusable for 
decoding on particular hardware, so care is needed in this setup.)

- Mark

Re: [Libav-user] Is it possible to preallocate memory for AVFrame data when calling avcodec_receive_frame?

2018-12-17 Thread Mark Thompson
On 17/12/2018 18:45, ggeng wrote:
> Hi,
> 
> I am using libav to decode a video (following the demo code available
> online), and I am trying to pre-allocate memory that the decoded frame data
> will be written to. Ultimately, I would like to have pFrame->data[0] to be
> in a region of memory I pre-allocate.
> 
> Following examples online, I am able to set pFrame->data as so:
> 
> // Determine required buffer size and allocate buffer
> int numBytes = av_image_get_buffer_size(pixFmt, width, height, 1);
> uint8_t *dataBuff = (uint8_t *)malloc(numBytes);
> 
> // Assign buffer to image planes in pFrame
> av_image_fill_arrays(frame->data, frame->linesize, dataBuff, pixFmt, width,
> height, 1);
> 
> However, I found that this call:
> 
> ret = avcodec_receive_frame(pCodecContext, pFrame)
> 
> will always write the decoded frame data to be somewhere else, and allocate
> its own memory for pFrame->data, completely ignoring the dataBuffer I set to
> pFrame->data before.
> 
> Is there a way to get around this? Either by decoding video without a call
> to avcodec_receive_frame or a way to set the memory for the data?

Have a look at the AVCodecContext.get_buffer2 callback.
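For illustration, a custom get_buffer2 is the supported way to control where decoded data lands. A sketch (my_pool_get() and my_pool_release() are hypothetical hooks standing in for your own preallocated region; real code must also respect the codec's alignment requirements and handle errors):

```c
static void my_pool_release(void *opaque, uint8_t *data)
{
    /* return `data` to your pool instead of freeing it */
}

static int my_get_buffer2(AVCodecContext *avctx, AVFrame *frame, int flags)
{
    int size = av_image_get_buffer_size(frame->format, frame->width,
                                        frame->height, 32);
    uint8_t *mem = my_pool_get(size);   /* hypothetical allocator */
    frame->buf[0] = av_buffer_create(mem, size, my_pool_release, NULL, 0);
    return av_image_fill_arrays(frame->data, frame->linesize, mem,
                                frame->format, frame->width,
                                frame->height, 32);
}

/* installation, before avcodec_open2() and only valid for decoders
 * with AV_CODEC_CAP_DR1:
 *     pCodecContext->get_buffer2 = my_get_buffer2; */
```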

- Mark

Re: [Libav-user] hardware encoding on Raspberry Pi using Openmax has issue of time stamp

2018-11-19 Thread Mark Thompson
On 19/11/18 18:14, Jinbo Li wrote:
> Hello,
> 
> Hello, I have an issue on implementing hardware encoding on raspberry pi by 
> using openmax. The problem is that I always get wrong time stamp in my 
> AVpacket even right after it gets initialized, the time stamp was always 
> assigned to be 634 (seems to be the numerator of time base) and it does not 
> change overtime. I have run this code on my another Ubuntu laptop, it doesn't 
> have this issue for the printf.
> 
> code:
> 
> AVPacket pkt;
> printf("0-time stamp = %ld, enc = %d/%d st = %d/%d\n", pkt.pts,
> encoder_ctx->time_base.num,encoder_ctx->time_base.den,
> fmt_encoder_ctx->streams[video_stream]->time_base.num,
> fmt_encoder_ctx->streams[video_stream]->time_base.den);
> printf("avframe time stamp = %ld\n", sw_frame->pts);
> av_init_packet(&pkt);
> printf("1-time stamp = %ld, enc = %d/%d st = %d/%d\n", pkt.pts,
> encoder_ctx->time_base.num,encoder_ctx->time_base.den,
> fmt_encoder_ctx->streams[video_stream]->time_base.num,
> fmt_encoder_ctx->streams[video_stream]->time_base.den);
> 
> result:
> 
> 0-time stamp = 634, enc = 1907363872/0 st = 634/19001
> 1-time stamp = 634, enc = 0/-2147483648 st = 634/19001
> ...(the printed time stamp is always 634 below)

The timestamp is not a long, so you're invoking undefined behaviour and the 
printed numbers are meaningless.  You want PRId64 for printing timestamps.

- Mark


Re: [Libav-user] av_hwframe_transfer_data does not succeed from NV12 to YUV420P - How to get YUV420P from cuvid?

2018-06-06 Thread Mark Thompson
On 06/06/18 08:46, Sergio Basurco wrote:
> On 05/06/2018 22:23, Mark Thompson wrote:
>> On 05/06/18 11:15, Sergio Basurco wrote:
>>> I'm decoding h264 with ffmpeg. I want to use the hwaccel decoders. I'm 
>>> using the cuvid decoder via API. In the fftools code there's a function 
>>> "hwaccel_retrieve_data" that is supposed to convert the decoded frame 
>>> (NV12) into any other format, I'm trying YUV420P.
>>>
>>> The conversion does not return any error, however the resulting data is not 
>>> correct. Here's the original NV12 frame:
>>>
>>> https://imgur.com/a/jAb8h12
>>>
>>> And here's the conversion to YUV420P (only avframe->data[0] and 
>>> avframe->data[1] have any data, data[2] is expected to have the V data but 
>>> it is missing).
>>>
>>> https://imgur.com/a/ihQeJ0M
>>>
>>>
>>> I think I'm on the right track, based on the code in hwcontext_cuda.c 
>>> apparently YUV420P is a supported format, but I cannot get my head around how I 
>>> can tell the decoder to convert from NV12 to YUV420P though.
>>>
>>> Any tips will be appreciated, I'll update if I find anything.
>> See the documentation for av_hwframe_transfer_data() 
>> (<http://ffmpeg.org/doxygen/trunk/hwcontext_8h.html#abf1b1664b8239d953ae2cac8b643815a>),
>>  in particular:
>>
>> "If src has an AVHWFramesContext attached, then the format of dst (if set) 
>> must use one of the formats returned by av_hwframe_transfer_get_formats(src, 
>> AV_HWFRAME_TRANSFER_DIRECTION_FROM). If dst has an AVHWFramesContext 
>> attached, then the format of src must use one of the formats returned by 
>> av_hwframe_transfer_get_formats(dst, AV_HWFRAME_TRANSFER_DIRECTION_TO)"
>>
>> For CUDA, no conversion during transfer is supported so the only usable 
>> output format returned by av_hwframe_transfer_get_formats() is the same 
>> format as the GPU-side frame itself (NV12 in your case): 
>> <http://git.videolan.org/?p=ffmpeg.git;a=blob;f=libavutil/hwcontext_cuda.c#l176>.
> 
> Hi Mark,
> 
> I did misread that bit of documentation, thanks!
> 
> I'm trying to understand the supported_formats array found in 
> hwcontext_cuda.c 
> <https://github.com/FFmpeg/FFmpeg/blob/master/libavutil/hwcontext_cuda.c#L35>.
>  Can the hw context be initialized so that the decoder returns YUV420P 
> instead of NV12?

The code there is saying that YUV420P is supported as a format for CUDA frames, 
but that doesn't mean that it is also supported for decoding to - what formats 
are supported there is dependent on the decoder hardware.

Looking at the decoder code it looks like it only supports NV12 for 8-bit 
decoding, so YUV420P in CUDA therefore can't be used there (on the other hand, 
it can be used for encoding and some filter operations).
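So a common pattern is to transfer the NV12 frame to system memory and then convert in software, e.g. with libswscale. A sketch (assumes `nv12` is the frame already downloaded with av_hwframe_transfer_data() and `yuv` is an allocated YUV420P AVFrame of the same size; error handling omitted):

```c
struct SwsContext *sws = sws_getContext(nv12->width, nv12->height,
                                        AV_PIX_FMT_NV12,
                                        nv12->width, nv12->height,
                                        AV_PIX_FMT_YUV420P,
                                        SWS_BILINEAR, NULL, NULL, NULL);
sws_scale(sws, (const uint8_t * const *)nv12->data, nv12->linesize,
          0, nv12->height, yuv->data, yuv->linesize);
sws_freeContext(sws);
```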

- Mark


Re: [Libav-user] av_hwframe_transfer_data does not succeed from NV12 to YUV420P - How to get YUV420P from cuvid?

2018-06-05 Thread Mark Thompson
On 05/06/18 11:15, Sergio Basurco wrote:
> I'm decoding h264 with ffmpeg. I want to use the hwaccel decoders. I'm using 
> the cuvid decoder via API. In the fftools code there's a function 
> "hwaccel_retrieve_data" that is supposed to convert the decoded frame (NV12) 
> into any other format, I'm trying YUV420P.
> 
> The conversion does not return any error, however the resulting data is not 
> correct. Here's the original NV12 frame:
> 
> https://imgur.com/a/jAb8h12
> 
> And here's the conversion to YUV420P (only avframe->data[0] and 
> avframe->data[1] have any data, data[2] is expected to have the V data but it 
> is missing).
> 
> https://imgur.com/a/ihQeJ0M
> 
> 
> I think I'm on the right track, based on the code in hwcontext_cuda.c 
> apparently YUV420P is a supported format, but I cannot get my head around how I 
> can tell the decoder to convert from NV12 to YUV420P though.
> 
> Any tips will be appreciated, I'll update if I find anything.

See the documentation for av_hwframe_transfer_data() 
(<http://ffmpeg.org/doxygen/trunk/hwcontext_8h.html#abf1b1664b8239d953ae2cac8b643815a>), 
in particular:

"If src has an AVHWFramesContext attached, then the format of dst (if set) must 
use one of the formats returned by av_hwframe_transfer_get_formats(src, 
AV_HWFRAME_TRANSFER_DIRECTION_FROM). If dst has an AVHWFramesContext attached, 
then the format of src must use one of the formats returned by 
av_hwframe_transfer_get_formats(dst, AV_HWFRAME_TRANSFER_DIRECTION_TO)"

For CUDA, no conversion during transfer is supported, so the only usable output 
format returned by av_hwframe_transfer_get_formats() is the same format as the 
GPU-side frame itself (NV12 in your case): 
<http://git.videolan.org/?p=ffmpeg.git;a=blob;f=libavutil/hwcontext_cuda.c#l176>.
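A sketch of querying that at runtime, rather than assuming a format works (assumes `hw_frame` is a decoded frame with hw_frames_ctx set; error handling omitted):

```c
enum AVPixelFormat *formats = NULL;
if (av_hwframe_transfer_get_formats(hw_frame->hw_frames_ctx,
                                    AV_HWFRAME_TRANSFER_DIRECTION_FROM,
                                    &formats, 0) == 0) {
    /* for CUDA this lists only the GPU-side format, e.g. NV12 */
    for (enum AVPixelFormat *p = formats; *p != AV_PIX_FMT_NONE; p++)
        printf("transferable: %s\n", av_get_pix_fmt_name(*p));
    av_freep(&formats);
}
```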

- Mark


Re: [Libav-user] How to use rkmpp decoder?

2018-04-16 Thread Mark Thompson
On 16/04/18 07:58, Anton Prikazchikov wrote:
> 
>> Can you give more detail about your setup and the input stream?
> 
> I have firefly RK3399.
> 
> OS:
> Linux localhost 4.4.120 #2 SMP Mon Apr 9 10:08:10 +04 2018 aarch64 aarch64 
> aarch64 GNU/Linux
> No LSB modules are available.
> Distributor ID:    Ubuntu
> Description:    Ubuntu 16.04.3 LTS
> Release:    16.04
> Codename:    xenial
> 
> I have installed rockchip kernel from git master branch.
> 
> ffmpeg:
> ffmpeg version N-90453-g44000b7 Copyright (c) 2000-2018 the FFmpeg developers
>   built with gcc 5.4.0 (Ubuntu/Linaro 5.4.0-6ubuntu1~16.04.9) 20160609
>   configuration: --prefix=/usr/local --enable-libx264 --enable-libdrm 
> --enable-rkmpp --enable-version3 --enable-gpl
>   libavutil  56. 12.100 / 56. 12.100
>   libavcodec 58. 16.100 / 58. 16.100
>   libavformat    58. 10.100 / 58. 10.100
>   libavdevice    58.  2.100 / 58.  2.100
>   libavfilter 7. 13.100 /  7. 13.100
>   libswscale  5.  0.102 /  5.  0.102
>   libswresample   3.  0.101 /  3.  0.101
>   libpostproc    55.  0.100 / 55.  0.100
> 
> I tried this command:
> sudo ffmpeg -v 55 -c:v h264_rkmpp -i ./4k.h264 -an -frames:v 100  out.h264
> 
> And received:
> ...
> [h264_rkmpp @ 0x1024c380] Initializing RKMPP decoder.
> mpi: mpp version: 5849089 author: Herman Chen [mpp]: Add temporally patch for 
> blocking issue
> hal_h264d_api: hal_h264d_init mpp_buffer_group_get_internal used ion In
> mpp_rt: NOT found ion allocator
> mpp_rt: found drm allocator
> [h264_rkmpp @ 0x1024c380] RKMPP decoder initialized successfully.
> Stream mapping:
>   Stream #0:0 -> #0:0 (h264 (h264_rkmpp) -> h264 (libx264))
> Press [q] to stop, [?] for help
> cur_dts is invalid (this is harmless if it occurs once at the start per 
> stream)
> [AVBSFContext @ 0x102ddf30] The input looks like it is Annex B already
> [h264_rkmpp @ 0x1024c380] Wrote 35 bytes to decoder
> [h264_rkmpp @ 0x1024c380] Wrote 43028 bytes to decoder
> cur_dts is invalid (this is harmless if it occurs once at the start per 
> stream)
> [h264_rkmpp @ 0x1024c380] Wrote 99 bytes to decoder
> cur_dts is invalid (this is harmless if it occurs once at the start per 
> stream)
> [h264_rkmpp @ 0x1024c380] Wrote 40 bytes to decoder
> [h264_rkmpp @ 0x1024c380] Timeout when trying to get a frame from MPP
> cur_dts is invalid (this is harmless if it occurs once at the start per 
> stream)
> [h264_rkmpp @ 0x1024c380] Wrote 126 bytes to decoder
> cur_dts is invalid (this is harmless if it occurs once at the start per 
> stream)
> [h264_rkmpp @ 0x1024c380] Wrote 32 bytes to decoder
> cur_dts is invalid (this is harmless if it occurs once at the start per 
> stream)
> [h264_rkmpp @ 0x1024c380] Wrote 50 bytes to decoder
> cur_dts is invalid (this is harmless if it occurs once at the start per 
> stream)
> [h264_rkmpp @ 0x1024c380] Wrote 32 bytes to decoder
> cur_dts is invalid (this is harmless if it occurs once at the start per 
> stream)
> [h264_rkmpp @ 0x1024c380] Wrote 49 bytes to decoder
> cur_dts is invalid (this is harmless if it occurs once at the start per 
> stream)
> [h264_rkmpp @ 0x1024c380] Wrote 33 bytes to decoder
> cur_dts is invalid (this is harmless if it occurs once at the start per 
> stream)
> [h264_rkmpp @ 0x1024c380] Wrote 77 bytes to decoder
> cur_dts is invalid (this is harmless if it occurs once at the start per 
> stream)
> [h264_rkmpp @ 0x1024c380] Wrote 41 bytes to decoder
> cur_dts is invalid (this is harmless if it occurs once at the start per 
> stream)
> [h264_rkmpp @ 0x1024c380] Wrote 64 bytes to decoder
> cur_dts is invalid (this is harmless if it occurs once at the start per 
> stream)
> [h264_rkmpp @ 0x1024c380] Wrote 33 bytes to decoder
> cur_dts is invalid (this is harmless if it occurs once at the start per 
> stream)
> [h264_rkmpp @ 0x1024c380] Wrote 48 bytes to decoder
> cur_dts is invalid (this is harmless if it occurs once at the start per 
> stream)
> [h264_rkmpp @ 0x1024c380] Wrote 33 bytes to decoder
> cur_dts is invalid (this is harmless if it occurs once at the start per 
> stream)
> [h264_rkmpp @ 0x1024c380] Wrote 52 bytes to decoder
> [h264_rkmpp @ 0x1024c380] Timeout when trying to get a frame from MPP
> cur_dts is invalid (this is harmless if it occurs once at the start per 
> stream)
> [h264_rkmpp @ 0x1024c380] Wrote 37 bytes to decoder
> ...
> 
> Also i have checked again my built mpp. The command for the test:
> sudo ./mpi_dec_test -i ./4k.h264 -o 1080.yuv -w 1920 -h 1080 -t 7
> 
> And have received good video file without errors.

I don't see anything funny there.  Does it fail in the same way with all H.264 
files (including ones in a proper container with timestamps)?  What about 
H.265, VP8 or VP9 (which I think should all be supported on RK3399)?

> Maybe I need to add "--extra-ldflags='-L/usr/local/lib 
> -lmali-midgard-r13p0-fbdev'"?

Shouldn't be relevant - that's me forcing it to link against a specific 
graphics driver binary so that OpenCL interop works.

- Mark

Re: [Libav-user] Regarding jpeg_qsv support in ffmpeg

2018-04-13 Thread Mark Thompson
On 13/04/18 07:39, Vivekanand wrote:
> Dear Team,
> 
> Intel Media SDK provides support for hardware encoding and decoding of
> JPEG/H264 data.
> I want to use this functionality using ffmpeg. I checked ffmpeg and found
> h264_qsv for H264 but didn't find anything for JPEG.
> 
> Is there anything like "jpeg_qsv" exists in ffmpeg ?
> Is there a way in ffmpeg using which I can use hardware encoding/decoding
> for JPEG with intel hardware.

It's supported via VAAPI with the i965 driver.

For decoding, use the VAAPI hwaccel mode (supports 4:0:0, 4:2:0, 4:1:1, 4:2:2, 
4:4:0, 4:4:4).  For encoding, use the mjpeg_vaapi encoder (supports 4:2:0 only).

- Mark


Re: [Libav-user] How to use rkmpp decoder?

2018-04-13 Thread Mark Thompson
On 13/04/18 07:08, Anton Prikazchikov wrote:
>> So what is the issue? It seems to be working?
> 
> Technically it works, but not completely...
> 
> I try to decode 50 packets (I think each contains one frame) and I can't 
> receive a single decoded frame.
> 
> I looked at the code of rkmppdec.c. The decoder should print "Received a 
> frame." to the log, but it doesn't.
> 
> If I change the codec from "h264_rkmpp" to "h264" with the same code, I 
> receive all 50 decoded frames.
> 
> I ran examples from mpp with the same video file and decoding works normally, 
> but in ffmpeg it doesn't.

Works for me on RK3288 with the same MPP version as you have:

$ ./ffmpeg_g -v 55 -c:v h264_rkmpp -i ~/test/bbb_1080_264.mp4 -an -frames:v 100 
-f null -
ffmpeg version N-90683-g37d46dc21d Copyright (c) 2000-2018 the FFmpeg developers
  built with gcc 6.3.0 (Debian 6.3.0-18+deb9u1) 20170516
  configuration: --enable-debug --enable-opencl --enable-libdrm --enable-rkmpp 
--enable-gpl --enable-version3 --enable-libx264 
--extra-ldflags='-L/usr/local/lib -lmali-midgard-r13p0-fbdev'
...
[h264_rkmpp @ 0x80b91830] Initializing RKMPP decoder.
mpi: mpp version: 5849089 author: Herman Chen [mpp]: Add temporally patch for 
blocking issue
hal_h264d_api: hal_h264d_init mpp_buffer_group_get_internal used ion In
mpp_rt: NOT found ion allocator
mpp_rt: found drm allocator
[h264_rkmpp @ 0x80b91830] RKMPP decoder initialized successfully.
Stream mapping:
  Stream #0:0 -> #0:0 (h264 (h264_rkmpp) -> wrapped_avframe (native))
Press [q] to stop, [?] for help
cur_dts is invalid (this is harmless if it occurs once at the start per stream)
[h264_rkmpp @ 0x80b91830] Wrote 43 bytes to decoder
[h264_rkmpp @ 0x80b91830] Wrote 1162 bytes to decoder
cur_dts is invalid (this is harmless if it occurs once at the start per stream)
[h264_rkmpp @ 0x80b91830] Wrote 69 bytes to decoder
cur_dts is invalid (this is harmless if it occurs once at the start per stream)
[h264_rkmpp @ 0x80b91830] Wrote 11423 bytes to decoder
cur_dts is invalid (this is harmless if it occurs once at the start per stream)
[h264_rkmpp @ 0x80b91830] Wrote 8407 bytes to decoder
cur_dts is invalid (this is harmless if it occurs once at the start per stream)
[h264_rkmpp @ 0x80b91830] Wrote 18558 bytes to decoder
[h264_rkmpp @ 0x80b91830] Decoder noticed an info change (1920x1080), format=0
[h264_rkmpp @ 0x80b91830] Received a frame.
...
[h264_rkmpp @ 0x80b91830] Wrote 881 bytes to decoder
[h264_rkmpp @ 0x80b91830] Received a frame.
[h264_rkmpp @ 0x80b91830] Wrote 165 bytes to decoder
[h264_rkmpp @ 0x80b91830] Received a frame.
[h264_rkmpp @ 0x80b91830] Wrote 251 bytes to decoder
[h264_rkmpp @ 0x80b91830] Received a frame.
[h264_rkmpp @ 0x80b91830] Wrote 194 bytes to decoder
[h264_rkmpp @ 0x80b91830] Received a frame.
[h264_rkmpp @ 0x80b91830] Wrote 170 bytes to decoder
[h264_rkmpp @ 0x80b91830] Wrote 6744 bytes to decoder
[h264_rkmpp @ 0x80b91830] Wrote 1603 bytes to decoder
[h264_rkmpp @ 0x80b91830] Received a frame.
[h264_rkmpp @ 0x80b91830] Wrote 725 bytes to decoder

etc.

Can you give more detail about your setup and the input stream?

- Mark


Re: [Libav-user] How to use rkmpp decoder?

2018-04-12 Thread Mark Thompson
On 12/04/18 10:24, Anton Prikazchikov wrote:
>>Did you already implement software decoding with libavcodec?
>>Once this works, you should only have to switch from the "h264"
>>decoder to the "h264_rkmpp" decoder to make it work.
> 
> I rebuilt my ffmpeg and now the output is:
> 
> Codec is initializated
> [NULL @ 0x20f35030] Opening '../4k.h264' for reading
> [file @ 0x20f356e0] Setting default whitelist 'file,crypto'
> [h264 @ 0x20f35030] Format h264 probed with size=2048 and score=51
> [h264 @ 0x20f35030] Before avformat_find_stream_info() pos: 0 bytes 
> read:32768 seeks:0 nb_streams:1
> [AVBSFContext @ 0x20f3cd20] nal_unit_type: 6, nal_ref_idc: 0
> [AVBSFContext @ 0x20f3cd20] nal_unit_type: 7, nal_ref_idc: 3
> [AVBSFContext @ 0x20f3cd20] nal_unit_type: 8, nal_ref_idc: 3
> [AVBSFContext @ 0x20f3cd20] nal_unit_type: 5, nal_ref_idc: 3
> [h264 @ 0x20f36100] nal_unit_type: 6, nal_ref_idc: 0
> [h264 @ 0x20f36100] nal_unit_type: 7, nal_ref_idc: 3
> [h264 @ 0x20f36100] nal_unit_type: 8, nal_ref_idc: 3
> [h264 @ 0x20f36100] nal_unit_type: 5, nal_ref_idc: 3
> [h264 @ 0x20f36100] Format yuv420p chosen by get_format().
> [h264 @ 0x20f36100] Reinit context to 1280x544, pix_fmt: yuv420p
> [h264 @ 0x20f36100] no picture
> [h264 @ 0x20f35030] max_analyze_duration 500 reached at 5017667 
> microseconds st:0
> [h264 @ 0x20f35030] After avformat_find_stream_info() pos: 63488 bytes 
> read:65536 seeks:0 frames:123
> [h264_rkmpp @ 0x20f37760] Initializing RKMPP decoder.
> mpi: mpp version: 5849089 author: Herman Chen [mpp]: Add temporally patch for 
> blocking issue
> hal_h264d_api: hal_h264d_init mpp_buffer_group_get_internal used ion In
> mpp_rt: NOT found ion allocator
> mpp_rt: NOT found drm allocator

The ffmpeg implementation requires DRM.  Make sure that mpp is built with 
-DHAVE_DRM and that your user can access the DRM devices (/dev/dri/?).

- Mark


Re: [Libav-user] Are there some VDPAU decode and present tutorials using FFMPEG?

2017-07-31 Thread Mark Thompson
On 31/07/17 12:11, jing zhang wrote:
> I'm trying to use FFMPEG with VDPAU to decode HEVC streams.
> There is no hevc_vdpau decoder in the 3.3.2 version of FFMPEG.
> How can I use the VDPAU hwaccel to do the decoding?

There is a hardware decode example which works for VDPAU.

> Are there some VDPAU decode and present tutorials using FFMPEG?

That example downloads the output frames from the GPU and writes them to a 
file.  Presentation on the screen isn't really in the scope of ffmpeg - you 
could look at the source code of one of the media players (e.g. mpv or vlc) for 
that.

- Mark


Re: [Libav-user] vaapi decoding support status in master branch of ffmpeg

2017-02-12 Thread Mark Thompson
On 12/02/17 15:01, Anton Sviridenko wrote:
> I've noticed recently that changes related to removal of requirement
> for "struct vaapi_context"  were pushed to master. But they did not
> appear in latest release 3.2.4.
> 
> Is current way of initializing VAAPI decoding without vaapi_context
> production ready?

It should be.  For a non-ffmpeg user of it through lavc, see mpv.

> Also I am interested what min. version of libva is required to make things 
> work?

It requires libva >= 1.2.0 (which introduced surface attributes and encoding).  
There may be bugs in older versions of drivers, so generally using the most 
recent version of your driver is a good idea.

> VAAPI decoding works for me on libva 1.7.3, but popular Linux distros
> (like Ubuntu 16.04LTS) have libva 1.7.0 in their repos (or older) and
> decoding acceleration does not work there. get_format() callback just
> is not called with AV_PIX_FMT_VAAPI

Maybe your build with the older version is not configured correctly?  Whether 
AV_PIX_FMT_VAAPI is offered by the get_format() callback is completely 
determined by the build parameters (for H.264, no VAAPI code has actually run 
at that point).

> I suspect that shipping application with latest version of libva could
> help. Is it safe to use git master branch of libva?

The version of libva doesn't really matter beyond enabling some support (e.g. 
1.6.0 required for VP9 decode), because it's really just a thin wrapper around 
a dynamically-loaded driver.  The driver itself matters more - in my experience 
they are generally ok, but whether to use a non-release version is really a 
matter for yourself and whoever makes the driver you want to use.

- Mark


Re: [Libav-user] VAAPI constraints - Failed to query surface attributes: 4 (invalid VAConfigID)

2017-02-07 Thread Mark Thompson
On 07/02/17 18:54, Anton Sviridenko wrote:
> {
> AVVAAPIHWConfig *hwconfig = NULL;
> AVHWFramesConstraints *constraints = NULL;
> 
> hwconfig = (AVVAAPIHWConfig
> *)av_hwdevice_hwconfig_alloc(m_hw_device_ctx);
> constraints =
> av_hwdevice_get_hwframe_constraints(m_hw_device_ctx, hwconfig);
> 
> if (!constraints)
> {
> qDebug() << "Failed to query VAAPI constraints";
> av_freep(&hwconfig);
> return;
> }
> 
> qDebug() << "VAAPI frame constraints: \n"
>  << "min_width " << constraints->min_width
>  << "min_height " << constraints->min_height
>  << "max_width " << constraints->max_width
>  << "max_height " << constraints->max_height;
> 
> av_freep(&hwconfig);
> av_hwframe_constraints_free(&constraints);
> }

You need to fill the config_id field of AVVAAPIHWConfig.  The 
av_hwdevice_get_hwframe_constraints() function returns constraints on hwframes 
/for the given processing configuration/, which you have to supply.  
Alternatively, you can pass a null pointer in place of the hwconfig argument to 
get any global constraints which exist on image sizes or formats (in your case 
above for VAAPI that wouldn't say much of use because there are no global 
constraints on image sizes, only local ones to each processing configuration).
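For illustration, the missing step looks something like this (the H.264 decode profile/entrypoint are illustrative choices; error handling omitted):

```c
AVHWDeviceContext    *devctx = (AVHWDeviceContext *)m_hw_device_ctx->data;
AVVAAPIDeviceContext *vactx  = devctx->hwctx;
AVVAAPIHWConfig *hwconfig =
    (AVVAAPIHWConfig *)av_hwdevice_hwconfig_alloc(m_hw_device_ctx);

/* create the processing configuration whose constraints we want,
 * and put its ID in the hwconfig before querying */
vaCreateConfig(vactx->display, VAProfileH264Main, VAEntrypointVLD,
               NULL, 0, &hwconfig->config_id);

AVHWFramesConstraints *constraints =
    av_hwdevice_get_hwframe_constraints(m_hw_device_ctx, hwconfig);
/* ... use constraints ... */
av_hwframe_constraints_free(&constraints);
vaDestroyConfig(vactx->display, hwconfig->config_id);
av_freep(&hwconfig);
```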

For some examples, see the ffmpeg source code (one place making a scaler, 
another making a decoder).

- Mark


Re: [Libav-user] vaapi decoding, av_hwframe_transfer_data

2016-05-20 Thread Mark Thompson
On 20/05/16 11:27, Nikita Orlov wrote:
> Hello!
> 
> I am decoding using the latest ffmpeg with vaapi support (libva for intel).
> 
> For decoding I am handling my own pool of VASurfaces, and I set pointers to my 
> own get_frame, get_buffer and release_buffer for AVCodecContext.
> So when I try to transfer a hw-decoded frame to memory using 
> av_hwframe_transfer_data(), it assumes that the src AVFrame has a pointer to 
> hw_frames_ctx, but mine is null, so it segfaults.
> 
> During encoding, I am creating AVHWDeviceContext, AVVAAPIDeviceContext and 
> AVHWFramesContext, so ffmpeg inits VASurfaces on its own.
> 
> Question is, how do I use this for decoding (h.264)? How do I set proper 
> get_frame, get_buffer and release_buffer for AVCodecContext?

The frame has to come from a buffer pool associated with an AVHWFramesContext: 
as you observe, it falls over if this requirement is violated.

If you are happy for libav* to manage the surfaces for you, then you can use 
code like that in ffmpeg_vaapi.c to set up the buffer pool.

If you want to manage the surfaces yourself, then you will need to make your 
own AVBufferPool containing the surfaces and associate it with the 
AVHWFramesContext.  The documentation for this is in the doxygen, though it 
might be clearer just to read the libavutil/hwcontext.h headers directly.

- Mark
