[FFmpeg-devel] [PATCH] avformat/smacker: add better seeking support

2022-04-06 Thread Paul B Mahol
Signed-off-by: Paul B Mahol 
---
 libavformat/smacker.c | 35 ++-
 1 file changed, 26 insertions(+), 9 deletions(-)

diff --git a/libavformat/smacker.c b/libavformat/smacker.c
index 80d36f2f40..eac50040d7 100644
--- a/libavformat/smacker.c
+++ b/libavformat/smacker.c
@@ -94,6 +94,7 @@ static int smacker_read_header(AVFormatContext *s)
 AVStream *st;
 AVCodecParameters *par;
 uint32_t magic, width, height, flags, treesize;
+int64_t pos;
 int i, ret, pts_inc;
 int tbase;
 
@@ -211,8 +212,13 @@ static int smacker_read_header(AVFormatContext *s)
 smk->frm_flags = (void*)(smk->frm_size + smk->frames);
 
 /* read frame info */
+pos = 0;
 for (i = 0; i < smk->frames; i++) {
 smk->frm_size[i] = avio_rl32(pb);
+if ((ret = av_add_index_entry(st, pos, i, smk->frm_size[i], 0,
+  (i == 0 || (smk->frm_size[i] & 1)) ? AVINDEX_KEYFRAME : 0)) < 0)
+return ret;
+pos += smk->frm_size[i];
 }
 if ((ret = ffio_read_size(pb, smk->frm_flags, smk->frames)) < 0 ||
 /* load trees to extradata, they will be unpacked by decoder */
@@ -335,7 +341,7 @@ static int smacker_read_packet(AVFormatContext *s, AVPacket *pkt)
 if ((ret = av_new_packet(pkt, smk->frame_size + 769)) < 0)
 goto next_frame;
 flags = smk->new_palette;
-if (smk->frm_size[smk->cur_frame] & 1)
+if ((smk->frm_size[smk->cur_frame] & 1) || smk->cur_frame == 0)
 flags |= 2;
 pkt->data[0] = flags;
 memcpy(pkt->data + 1, smk->pal, 768);
@@ -344,6 +350,9 @@ static int smacker_read_packet(AVFormatContext *s, AVPacket *pkt)
 goto next_frame;
 pkt->stream_index = smk->videoindex;
 pkt->pts  = smk->cur_frame;
+pkt->duration = 1;
+if (flags & 2)
+pkt->flags |= AV_PKT_FLAG_KEY;
 smk->next_audio_index = 0;
 smk->new_palette = 0;
 smk->cur_frame++;
@@ -359,20 +368,28 @@ next_frame:
 static int smacker_read_seek(AVFormatContext *s, int stream_index,
  int64_t timestamp, int flags)
 {
+AVStream *st = s->streams[stream_index];
 SmackerContext *smk = s->priv_data;
-int64_t ret;
+int64_t pos;
+int ret;
 
-/* only rewinding to start is supported */
-if (timestamp != 0) {
-av_log(s, AV_LOG_ERROR,
-   "Random seeks are not supported (can only seek to start).\n");
+if (!(s->pb->seekable & AVIO_SEEKABLE_NORMAL))
+return -1;
+
+if (timestamp < 0 || timestamp >= smk->frames)
 return AVERROR(EINVAL);
-}
 
-if ((ret = avio_seek(s->pb, ffformatcontext(s)->data_offset, SEEK_SET)) < 0)
+ret = av_index_search_timestamp(st, timestamp, flags);
+if (ret < 0)
 return ret;
 
-smk->cur_frame = 0;
+pos  = ffformatcontext(s)->data_offset;
+pos += ffstream(st)->index_entries[ret].pos;
+pos  = avio_seek(s->pb, pos, SEEK_SET);
+if (pos < 0)
+return pos;
+
+smk->cur_frame = ret;
 smk->next_audio_index = 0;
 smk->new_palette = 0;
 memset(smk->pal, 0, sizeof(smk->pal));
-- 
2.35.1
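
For anyone who wants to try the new seek path locally, a hypothetical invocation
(file name purely illustrative); with the patch applied, seeking in ffplay should
land on the nearest palette/keyframe entry instead of hitting the old "can only
seek to start" error:

ffplay -ss 5 sample.smk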



Re: [FFmpeg-devel] [RFC] Switching ffmpeg.c to a threaded architecture

2022-04-06 Thread Soft Works



> -Original Message-
> From: ffmpeg-devel  On Behalf Of
> Kieran Kunhya
> Sent: Wednesday, April 6, 2022 7:57 PM
> To: FFmpeg development discussions and patches  de...@ffmpeg.org>
> Subject: Re: [FFmpeg-devel] [RFC] Switching ffmpeg.c to a threaded
> architecture
> 
> >
> > Not gonna happen, not gonna block progress because of whim of single
> random
> > contributor.
> >
> 
> Agreed, this patch series is as important as buffer referencing. We
> can't
> let holdouts block progress.

RFC Patch => Opinionated Response != Blocking


Re: [FFmpeg-devel] [RFC] Switching ffmpeg.c to a threaded architecture

2022-04-06 Thread Kieran Kunhya
>
> Not gonna happen, not gonna block progress because of whim of single random
> contributor.
>

Agreed, this patch series is as important as buffer referencing. We can't
let holdouts block progress.

Kieran


Re: [FFmpeg-devel] [RFC] Switching ffmpeg.c to a threaded architecture

2022-04-06 Thread Paul B Mahol
On Wed, Apr 6, 2022 at 6:30 PM Soft Works  wrote:

>
>
> > -Original Message-
> > From: ffmpeg-devel  On Behalf Of
> > Anton Khirnov
> > Sent: Wednesday, April 6, 2022 10:42 AM
> > To: FFmpeg development discussions and patches  > de...@ffmpeg.org>
> > Subject: Re: [FFmpeg-devel] [RFC] Switching ffmpeg.c to a threaded
> > architecture
> >
> > Quoting Soft Works (2022-04-05 23:05:57)
> > > do I understand it right that there won't be a single-thread
> > > operation mode that replicates/corresponds the current behavior?
> >
> > Correct, maintaining a single-threaded mode would require massive
> > amounts of extra effort for questionable gain.
>
> The gain is not to be seen in having an alternate run-mode in
> longer-term term perspective. It is about avoiding a single
> point-of-no-return change which may have fundamental consequences
> and impose debt on other developers when it would no longer be possible
> to compare to the previous mode of operation.
>
> > If I understand correctly what you're suggesting then I don't believe
> > this approach is feasible. The goal is not "add threading to improve
> > performance", keeping everything else intact as much as possible. The
> > goal is "improve architecture to make the code easier to
> > understand/maintain/extend", threads are a means towards that goal.
> > The
> > fact that this should also improve throughput is more of a nice side
> > effect than anything else.
> >
> > This patchset already changes behaviour in certain cases, making the
> > output more predictable and consistent. Reordering it somehow to
> > separate "semantically neutral" patches would require vast amounts of
> > extra work. Note that progressing at all without obviously breaking
> > anything is already quite hard --- I've been working on this since
> > November and this is just the first step. I really do not want to make
> > my work 10x harder for the vague benefit of maybe making some
> > debugging
> > slightly easier.
>
> I understand that, but I'm not talking about re-development. Let me try
> explain it in a different way:
>
> What I mean is to go through your patches one after another but apply
> only those parts that do not affect the current single-threaded execution
> flow - effectively separating out those parts. Then, you go through
> the remaining changes and make corresponding "similar" changes to the
> working code, making it get as close as possible to your original code.
> It's an iterative process. At the end you should have just a small set
> of changes left which make up the difference between the working code
> (still following the traditional flow) and the threaded execution flow.
> That last set of differences can be finally applied in a way that it
> can be activated/deactivated by an option.
>
> When you have been working on this for so long already, then this would
> make up just a small part of the total work.
>
>
Not gonna happen, not gonna block progress because of whim of single random
contributor.


> Regards,
> softworkz


Re: [FFmpeg-devel] [RFC] Switching ffmpeg.c to a threaded architecture

2022-04-06 Thread Soft Works



> -Original Message-
> From: ffmpeg-devel  On Behalf Of
> Anton Khirnov
> Sent: Wednesday, April 6, 2022 10:42 AM
> To: FFmpeg development discussions and patches  de...@ffmpeg.org>
> Subject: Re: [FFmpeg-devel] [RFC] Switching ffmpeg.c to a threaded
> architecture
> 
> Quoting Soft Works (2022-04-05 23:05:57)
> > do I understand it right that there won't be a single-thread
> > operation mode that replicates/corresponds the current behavior?
> 
> Correct, maintaining a single-threaded mode would require massive
> amounts of extra effort for questionable gain.

The gain is not to be seen in having an alternate run-mode in a
longer-term perspective. It is about avoiding a single
point-of-no-return change which may have fundamental consequences 
and impose debt on other developers when it would no longer be possible 
to compare to the previous mode of operation.

> If I understand correctly what you're suggesting then I don't believe
> this approach is feasible. The goal is not "add threading to improve
> performance", keeping everything else intact as much as possible. The
> goal is "improve architecture to make the code easier to
> understand/maintain/extend", threads are a means towards that goal.
> The
> fact that this should also improve throughput is more of a nice side
> effect than anything else.
> 
> This patchset already changes behaviour in certain cases, making the
> output more predictable and consistent. Reordering it somehow to
> separate "semantically neutral" patches would require vast amounts of
> extra work. Note that progressing at all without obviously breaking
> anything is already quite hard --- I've been working on this since
> November and this is just the first step. I really do not want to make
> my work 10x harder for the vague benefit of maybe making some
> debugging
> slightly easier.

I understand that, but I'm not talking about re-development. Let me try to
explain it in a different way:

What I mean is to go through your patches one after another but apply 
only those parts that do not affect the current single-threaded execution
flow - effectively separating out those parts. Then, you go through 
the remaining changes and make corresponding "similar" changes to the
working code, making it get as close as possible to your original code.
It's an iterative process. At the end you should have just a small set 
of changes left which make up the difference between the working code
(still following the traditional flow) and the threaded execution flow.
That last set of differences can be finally applied in a way that it 
can be activated/deactivated by an option.

When you have been working on this for so long already, then this would
make up just a small part of the total work.

Regards,
softworkz


Re: [FFmpeg-devel] [RFC] Switching ffmpeg.c to a threaded architecture

2022-04-06 Thread Soft Works



> -Original Message-
> From: ffmpeg-devel  On Behalf Of Paul
> B Mahol
> Sent: Wednesday, April 6, 2022 1:17 PM
> To: FFmpeg development discussions and patches  de...@ffmpeg.org>
> Subject: Re: [FFmpeg-devel] [RFC] Switching ffmpeg.c to a threaded
> architecture
> 
> On Tue, Apr 5, 2022 at 11:20 PM Soft Works 
> wrote:
> 
> >
> >
> > > -Original Message-
> > > From: ffmpeg-devel  On Behalf Of
> Paul
> > > B Mahol
> > > Sent: Tuesday, April 5, 2022 11:19 PM
> > > To: FFmpeg development discussions and patches  > > de...@ffmpeg.org>
> > > Subject: Re: [FFmpeg-devel] [RFC] Switching ffmpeg.c to a threaded
> > > architecture
> > >
> > > On Tue, Apr 5, 2022 at 11:06 PM Soft Works 
> > > wrote:
> > >
> > > >
> > > >
> > > > > -Original Message-
> > > > > From: ffmpeg-devel  On Behalf
> Of
> > > > > Anton Khirnov
> > > > > Sent: Tuesday, April 5, 2022 9:46 PM
> > > > > To: FFmpeg development discussions and patches  > > > > de...@ffmpeg.org>
> > > > > Subject: Re: [FFmpeg-devel] [RFC] Switching ffmpeg.c to a
> threaded
> > > > > architecture
> > > > >
> > > > > Quoting Michael Niedermayer (2022-04-05 21:15:42)
> > > > > > On Mon, Apr 04, 2022 at 01:29:48PM +0200, Anton Khirnov
> wrote:
> > > > > > > Hi,
> > > > > > > this WIP patchset is the first major part of my ongoing
> work
> > > to
> > > > > change
> > > > > > > ffmpeg.c architecture such that every
> > > > > > > - demuxer
> > > > > > > - decoder
> > > > > > > - filtergraph
> > > > > > > - encoder
> > > > > > > - muxer
> > > > > > > lives in its own thread. The advantages of doing this,
> beyond
> > > > > increased
> > > > > > > throughput, would be enforced separation between these
> > > components,
> > > > > > > making the code more local and easier to reason about.
> > > > > > >
> > > > > > > This set implements threading for muxers. My tentative
> plan is
> > > to
> > > > > > > continue with encoders and then filters. The patches still
> > > need
> > > > > some
> > > > > > > polishing, especially the last one. Two FATE tests do not
> yet
> > > > > pass, this
> > > > > > > will be fixed in later iterations.
> > > > > > >
> > > > > > > Meanwhile, comments on the overall approach are especially
> > > > > welcome.
> > > > > >
> > > > > > I agree that cleanup/modularization to make the code easier
> to
> > > > > > understand is a good idea!
> > > > > > Didnt really look at the patchset yet.
> > > > > > I assume these changes have no real disadvantage ?
> > > > >
> > > > > Playing the devil's advocate, I can think of the following:
> > > > > 1) ffmpeg.c will hard-depend on threads
> > > > > 2) execution flow will become non-deterministic
> > > > > 3) overall resource usage will likely go up due to inter-
> thread
> > > > >synchronization and overhead related to new objects
> > > > > 4) large-scale code changes always carry a higher risk of
> > > regressions
> > > > >
> > > > > re 1): should not be a problem for any serious system
> > > > > re 2): I spent a lot of effort to ensure the _output_ remains
> > > > >deterministic (it actually becomes more predictable for
> > > some
> > > > >cases)
> > > > > re 3): I expect the impact to be small and negligible,
> > > respectively,
> > > > > but
> > > > >would have to be measured once the conversion is
> complete
> > > > > re 4): the only way to avoid this completely would be to stop
> > > > >development
> > > > >
> > > > > Overall, I believe the advantages far outweigh the potential
> > > > > negatives.
> > > >
> > > > Hi,
> > > >
> > > > do I understand it right that there won't be a single-thread
> > > > operation mode that replicates/corresponds the current behavior?
> > > >
> > > > Not that I wouldn't welcome the performance improvements, but
> one
> > > > concern I have is debugging filtergraph operations. This is
> already
> > > > a pretty tedious task in itself, because many relevant decisions
> > > > are made in sub-sub-sub-sub-sub-functions, spread over many
> places.
> > > > When adding an additional - not even deterministic - part to the
> > > > game, it won't make things easier. It could even create
> situations
> > > > where it could no longer be possible to replicate an error in a
> > > > debugger - in case the existence of a debugger would cause a
> > > variance
> > > > within the constraints of the non-determinism range.
> > > >
> > > >
> > > Can you elaborate more?, otherwise this is PEBKAC.
> >
> > You mean like WKOFAIT?
> >
> 
> You failed to provide useful facts to backup your claims above.
> 
> So I can not take your inputs seriously at this time.

I was just wondering What Kind Of F..funny Acronym Is That?
(knowing you won't find it, after being too lazy to lookup yours..)

Getting serious again - I will answer your question, but please 
give me some time until I'm back to work in this area, then I'll 
explain in detail and provide the callstacks that I meant.

Thanks,
softworkz

[FFmpeg-devel] [PATCH 2/2] avcodec/hevc_sei: Don't use GetBit-API for byte-aligned reads

2022-04-06 Thread Andreas Rheinhardt
Instead use the bytestream2-API.
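
For context, a minimal sketch of the read pattern being switched to (illustrative
fragment only, assuming libavcodec/bytestream.h; the field names are taken from the
diff below). The up-front size check is what makes the unchecked *_be16u reads safe:

GetByteContext gb;
bytestream2_init(&gb, payload, payload_size);
if (bytestream2_get_bytes_left(&gb) < 4)
    return AVERROR_INVALIDDATA;
s->max_content_light_level     = bytestream2_get_be16u(&gb);
s->max_pic_average_light_level = bytestream2_get_be16u(&gb);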

Signed-off-by: Andreas Rheinhardt 
---
 libavcodec/hevc_sei.c | 176 ++
 1 file changed, 91 insertions(+), 85 deletions(-)

diff --git a/libavcodec/hevc_sei.c b/libavcodec/hevc_sei.c
index f49264217e..2557117500 100644
--- a/libavcodec/hevc_sei.c
+++ b/libavcodec/hevc_sei.c
@@ -23,25 +23,26 @@
  */
 
 #include "atsc_a53.h"
+#include "bytestream.h"
 #include "dynamic_hdr10_plus.h"
 #include "dynamic_hdr_vivid.h"
 #include "golomb.h"
 #include "hevc_ps.h"
 #include "hevc_sei.h"
 
-static int decode_nal_sei_decoded_picture_hash(HEVCSEIPictureHash *s, 
GetBitContext *gb)
+static int decode_nal_sei_decoded_picture_hash(HEVCSEIPictureHash *s,
+   GetByteContext *gb)
 {
-int cIdx, i;
+int cIdx;
 uint8_t hash_type;
 //uint16_t picture_crc;
 //uint32_t picture_checksum;
-hash_type = get_bits(gb, 8);
+hash_type = bytestream2_get_byte(gb);
 
 for (cIdx = 0; cIdx < 3/*((s->sps->chroma_format_idc == 0) ? 1 : 3)*/; 
cIdx++) {
 if (hash_type == 0) {
 s->is_md5 = 1;
-for (i = 0; i < 16; i++)
-s->md5[cIdx][i] = get_bits(gb, 8);
+bytestream2_get_buffer(gb, s->md5[cIdx], sizeof(s->md5[cIdx]));
 } else if (hash_type == 1) {
 // picture_crc = get_bits(gb, 16);
 } else if (hash_type == 2) {
@@ -51,25 +52,26 @@ static int 
decode_nal_sei_decoded_picture_hash(HEVCSEIPictureHash *s, GetBitCont
 return 0;
 }
 
-static int decode_nal_sei_mastering_display_info(HEVCSEIMasteringDisplay *s, 
GetBitContext *gb, int size)
+static int decode_nal_sei_mastering_display_info(HEVCSEIMasteringDisplay *s,
+ GetByteContext *gb)
 {
 int i;
 
-if (size < 24)
+if (bytestream2_get_bytes_left(gb) < 24)
 return AVERROR_INVALIDDATA;
 
 // Mastering primaries
 for (i = 0; i < 3; i++) {
-s->display_primaries[i][0] = get_bits(gb, 16);
-s->display_primaries[i][1] = get_bits(gb, 16);
+s->display_primaries[i][0] = bytestream2_get_be16u(gb);
+s->display_primaries[i][1] = bytestream2_get_be16u(gb);
 }
 // White point (x, y)
-s->white_point[0] = get_bits(gb, 16);
-s->white_point[1] = get_bits(gb, 16);
+s->white_point[0] = bytestream2_get_be16u(gb);
+s->white_point[1] = bytestream2_get_be16u(gb);
 
 // Max and min luminance of mastering display
-s->max_luminance = get_bits_long(gb, 32);
-s->min_luminance = get_bits_long(gb, 32);
+s->max_luminance = bytestream2_get_be32u(gb);
+s->min_luminance = bytestream2_get_be32u(gb);
 
 // As this SEI message comes before the first frame that references it,
 // initialize the flag to 2 and decrement on IRAP access unit so it
@@ -79,14 +81,15 @@ static int 
decode_nal_sei_mastering_display_info(HEVCSEIMasteringDisplay *s, Get
 return 0;
 }
 
-static int decode_nal_sei_content_light_info(HEVCSEIContentLight *s, 
GetBitContext *gb, int size)
+static int decode_nal_sei_content_light_info(HEVCSEIContentLight *s,
+ GetByteContext *gb)
 {
-if (size < 4)
+if (bytestream2_get_bytes_left(gb) < 4)
 return AVERROR_INVALIDDATA;
 
 // Max and average light levels
-s->max_content_light_level = get_bits(gb, 16);
-s->max_pic_average_light_level = get_bits(gb, 16);
+s->max_content_light_level = bytestream2_get_be16u(gb);
+s->max_pic_average_light_level = bytestream2_get_be16u(gb);
 // As this SEI message comes before the first frame that references it,
 // initialize the flag to 2 and decrement on IRAP access unit so it
 // persists for the coded video sequence (e.g., between two IRAPs)
@@ -127,8 +130,8 @@ static int 
decode_nal_sei_display_orientation(HEVCSEIDisplayOrientation *s, GetB
 return 0;
 }
 
-static int decode_nal_sei_pic_timing(HEVCSEI *s, GetBitContext *gb, const 
HEVCParamSets *ps,
- void *logctx, int size)
+static int decode_nal_sei_pic_timing(HEVCSEI *s, GetBitContext *gb,
+ const HEVCParamSets *ps, void *logctx)
 {
 HEVCSEIPictureTiming *h = &s->picture_timing;
 HEVCSPS *sps;
@@ -158,23 +161,24 @@ static int decode_nal_sei_pic_timing(HEVCSEI *s, 
GetBitContext *gb, const HEVCPa
 return 0;
 }
 
-static int decode_registered_user_data_closed_caption(HEVCSEIA53Caption *s, 
GetBitContext *gb,
-  int size)
+static int decode_registered_user_data_closed_caption(HEVCSEIA53Caption *s,
+  GetByteContext *gb)
 {
 int ret;
 
-ret = ff_parse_a53_cc(&s->buf_ref, gb->buffer + get_bits_count(gb) / 8, size);
-
+ret = ff_parse_a53_cc(&s->buf_ref, gb->buffer,
+  bytestream2_get_bytes_left(gb));
 if (ret < 0)
 return 

[FFmpeg-devel] [PATCH 1/2] avcodec/hevc_sei: Fix parsing SEI messages

2022-04-06 Thread Andreas Rheinhardt
SEI messages are naturally byte-aligned by adding padding bits
to achieve byte-alignment. The parsing code in libavcodec/hevc_sei.c
nevertheless uses a GetBitContext to read it. When doing so, parsing
the next SEI message starts exactly at the position where reading
the last message (if any) ended.

This means that one would have to handle both the payload extension data
(which makes most SEI messages extensible structs) as well as the
padding bits for byte-alignment. Yet our SEI parsing code in
libavcodec/hevc_sei.c does not read these at all. Instead several of
the functions used for parsing specific SEI messages use
skip_bits_long(); some don't use it at all, in which case it is possible
for the GetBitContext to not be byte-aligned at the start of the next
SEI message (the parsing code for several types of SEI messages relies
on byte-alignment).

Fix this by always using a dedicated GetBitContext per SEI message;
skipping the necessary amount of bytes in the NALU context
is done at a higher level. This also allows removing unnecessary
parsing code that only existed in order to skip enough bytes.
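
For readers following along, a rough sketch of the per-message pattern described
above (illustrative fragment only, not the actual patch; it assumes
libavcodec/get_bits.h and hypothetical payload/payload_size variables provided by
the caller):

static int parse_one_sei_message(void *logctx, const uint8_t *payload, int payload_size)
{
    GetBitContext gb;
    int ret = init_get_bits8(&gb, payload, payload_size);
    if (ret < 0)
        return ret;
    /* dispatch on the payload type and parse only from this local context,
     * so a parser can neither overread its own message nor leave a shared
     * context misaligned for the next message */
    return 0;
}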

Signed-off-by: Andreas Rheinhardt 
---
 libavcodec/hevc_sei.c | 50 +++
 1 file changed, 12 insertions(+), 38 deletions(-)

diff --git a/libavcodec/hevc_sei.c b/libavcodec/hevc_sei.c
index ec3036f932..f49264217e 100644
--- a/libavcodec/hevc_sei.c
+++ b/libavcodec/hevc_sei.c
@@ -44,10 +44,8 @@ static int 
decode_nal_sei_decoded_picture_hash(HEVCSEIPictureHash *s, GetBitCont
 s->md5[cIdx][i] = get_bits(gb, 8);
 } else if (hash_type == 1) {
 // picture_crc = get_bits(gb, 16);
-skip_bits(gb, 16);
 } else if (hash_type == 2) {
 // picture_checksum = get_bits_long(gb, 32);
-skip_bits(gb, 32);
 }
 }
 return 0;
@@ -72,14 +70,12 @@ static int 
decode_nal_sei_mastering_display_info(HEVCSEIMasteringDisplay *s, Get
 // Max and min luminance of mastering display
 s->max_luminance = get_bits_long(gb, 32);
 s->min_luminance = get_bits_long(gb, 32);
-size -= 24;
 
 // As this SEI message comes before the first frame that references it,
 // initialize the flag to 2 and decrement on IRAP access unit so it
 // persists for the coded video sequence (e.g., between two IRAPs)
 s->present = 2;
 
-skip_bits_long(gb, 8 * size);
 return 0;
 }
 
@@ -91,13 +87,11 @@ static int 
decode_nal_sei_content_light_info(HEVCSEIContentLight *s, GetBitConte
 // Max and average light levels
 s->max_content_light_level = get_bits(gb, 16);
 s->max_pic_average_light_level = get_bits(gb, 16);
-size -= 4;
 // As this SEI message comes before the first frame that references it,
 // initialize the flag to 2 and decrement on IRAP access unit so it
 // persists for the coded video sequence (e.g., between two IRAPs)
 s->present = 2;
 
-skip_bits_long(gb, 8 * size);
 return  0;
 }
 
@@ -114,15 +108,7 @@ static int 
decode_nal_sei_frame_packing_arrangement(HEVCSEIFramePacking *s, GetB
 // spatial_flipping_flag, frame0_flipped_flag, field_views_flag
 skip_bits(gb, 3);
 s->current_frame_is_frame0_flag = get_bits1(gb);
-// frame0_self_contained_flag, frame1_self_contained_flag
-skip_bits(gb, 2);
-
-if (!s->quincunx_subsampling && s->arrangement_type != 5)
-skip_bits(gb, 16);  // frame[01]_grid_position_[xy]
-skip_bits(gb, 8);   // frame_packing_arrangement_reserved_byte
-skip_bits1(gb); // frame_packing_arrangement_persistence_flag
 }
-skip_bits1(gb); // upsampled_aspect_ratio_flag
 return 0;
 }
 
@@ -135,7 +121,7 @@ static int 
decode_nal_sei_display_orientation(HEVCSEIDisplayOrientation *s, GetB
 s->vflip = get_bits1(gb); // ver_flip
 
 s->anticlockwise_rotation = get_bits(gb, 16);
-skip_bits1(gb); // display_orientation_persistence_flag
+// skip_bits1(gb); // display_orientation_persistence_flag
 }
 
 return 0;
@@ -167,12 +153,7 @@ static int decode_nal_sei_pic_timing(HEVCSEI *s, 
GetBitContext *gb, const HEVCPa
 av_log(logctx, AV_LOG_DEBUG, "Frame/Field Tripling\n");
 h->picture_struct = HEVC_SEI_PIC_STRUCT_FRAME_TRIPLING;
 }
-get_bits(gb, 2);   // source_scan_type
-get_bits(gb, 1);   // duplicate_flag
-skip_bits1(gb);
-size--;
 }
-skip_bits_long(gb, 8 * size);
 
 return 0;
 }
@@ -187,8 +168,6 @@ static int 
decode_registered_user_data_closed_caption(HEVCSEIA53Caption *s, GetB
 if (ret < 0)
 return ret;
 
-skip_bits_long(gb, size * 8);
-
 return 0;
 }
 
@@ -241,8 +220,6 @@ static int 
decode_registered_user_data_dynamic_hdr_plus(HEVCSEIDynamicHDRPlus *s
 return AVERROR(ENOMEM);
 }
 
-skip_bits_long(gb, size * 8);
-
 return 0;
 }
 
@@ 

Re: [FFmpeg-devel] [PATCH v12 1/1] avformat: Add IPFS protocol support.

2022-04-06 Thread Michael Niedermayer
On Tue, Apr 05, 2022 at 11:27:12PM +0200, Mark Gaiser wrote:
[...]
> >
> >
> > [...]
> > > +// Populate c->gateway_buffer with whatever is in c->gateway
> > > +if (c->gateway != NULL) {
> > > +if (snprintf(c->gateway_buffer, sizeof(c->gateway_buffer), "%s",
> > > + c->gateway) >= sizeof(c->gateway_buffer)) {
> > > +av_log(h, AV_LOG_WARNING, "The -gateway parameter is too
> > long. "
> > > +  "We allow a max of %zu
> > characters\n",
> > > +   sizeof(c->gateway_buffer));
> > > +ret = AVERROR(EINVAL);
> > > +goto err;
> > > +}
> > > +} else {
> > > +// Populate the IPFS gateway if we have any.
> > > +// If not, inform the user how to properly set one.
> > > +ret = populate_ipfs_gateway(h);
> > > +
> > > +if (ret < 1) {
> > > +// We fallback on dweb.link (managed by Protocol Labs).
> > > +snprintf(c->gateway_buffer, sizeof(c->gateway_buffer), "https://dweb.link");
> > > +
> > > +av_log(h, AV_LOG_WARNING, "IPFS does not appear to be
> > running. "
> > > +  "You’re now using the public
> > gateway at dweb.link.\n");
> > > +av_log(h, AV_LOG_INFO, "Installing IPFS locally is
> > recommended to "
> > > +   "improve performance and
> > reliability, "
> > > +   "and not share all your activity
> > with a single IPFS gateway.\n"
> > > +   "There are multiple options to define this
> > gateway.\n"
> > > +   "1. Call ffmpeg with a gateway param, "
> > > +   "without a trailing slash: -gateway
> > .\n"
> > > +   "2. Define an $IPFS_GATEWAY environment variable
> > with the "
> > > +   "full HTTP URL to the gateway "
> > > +   "without trailing forward slash.\n"
> > > +   "3. Define an $IPFS_PATH environment variable "
> > > +   "and point it to the IPFS data path "
> > > +   "- this is typically ~/.ipfs\n");
> > > +}
> > > +}
> >
> > This will print the warning every time a ipfs url is opened. Not just once
> > is that intended ?
> >
> 
> Yes.
> 
> Or rather, I don't see how to make it persistent in a nice intuitive way.
> By nice intuitive I mean showing it for, lets say, 10 times you use ffmpeg
> to be "sure" you've seen it before it can stop annoying the user about it.
> 
> Adding complexity for that doesn't seem to be worth it to me.

my concern was a use case like image2 formats which open a protocol connection
per image. So a ipfs://.../image%d.jpeg would produce a warning per image
iam not sure this use case makes sense though. 

thx


[...]
-- 
Michael GnuPG fingerprint: 9FF2128B147EF6730BADF133611EC787040B0FAB

Freedom in capitalist society always remains about the same as it was in
ancient Greek republics: Freedom for slave owners. -- Vladimir Lenin




Re: [FFmpeg-devel] [PATCH] avcodec/ituh263enc: Add AV_CODEC_CAP_SLICE_THREADS to old H.263

2022-04-06 Thread Michael Niedermayer
On Wed, Apr 06, 2022 at 03:05:07AM +0200, Andreas Rheinhardt wrote:
> Michael Niedermayer:
> > On Tue, Apr 05, 2022 at 05:07:22PM +0200, Andreas Rheinhardt wrote:
> >> Michael Niedermayer:
> >>> It is supported by the H.263+ AVCodec already
> >>>
> >>> Is there any case where this does not work ?
> >>>
> >>> Fixes regression of some command lines
> >>>
> >>> Signed-off-by: Michael Niedermayer 
> >>> ---
> >>>  libavcodec/ituh263enc.c | 1 +
> >>>  1 file changed, 1 insertion(+)
> >>>
> >>> diff --git a/libavcodec/ituh263enc.c b/libavcodec/ituh263enc.c
> >>> index db7cdf1fcb..82dce05e36 100644
> >>> --- a/libavcodec/ituh263enc.c
> >>> +++ b/libavcodec/ituh263enc.c
> >>> @@ -908,6 +908,7 @@ const FFCodec ff_h263_encoder = {
> >>>  .p.id   = AV_CODEC_ID_H263,
> >>>  .p.pix_fmts = (const enum AVPixelFormat[]){AV_PIX_FMT_YUV420P, 
> >>> AV_PIX_FMT_NONE},
> >>>  .p.priv_class   = _class,
> >>> +.p.capabilities = AV_CODEC_CAP_SLICE_THREADS,
> >>>  .caps_internal  = FF_CODEC_CAP_INIT_THREADSAFE | 
> >>> FF_CODEC_CAP_INIT_CLEANUP,
> >>>  .priv_data_size = sizeof(MpegEncContext),
> >>>  .init   = ff_mpv_encode_init,
> >>
> >> 1. If you claim that there is a regression, you should mention the
> >> commit that introduced them in the commit message (it's obviously
> >> 8ca4b515e73079cda068e253853654db394b8171 in this case).
> >> 2. What command lines regressed exactly? The only command lines that
> >> should be affected by said commit are command lines that set the slices
> >> option to a value > 1.
> >> 3. As the commit message of 8ca4b515e73079cda068e253853654db394b8171
> >> explains, this was intentional, as the H.263 encoder produces broken
> >> files with multiple slices (whether with slice-threading or not). One
> >> gets all kinds of error messages when decoding such a file: "I cbpy
> >> damaged at 1 7", "Error at MB: 316", "illegal ac vlc code at 0x29",
> >> "slice end not reached but screenspace end (7 left 80, score=
> >> -125)", "run overflow at 0x7 i:1". Of course, there are visual
> >> artifacts, too.
> >> 4. With this patch, this encoder will by default (at least, by the
> >> defaults of the ffmpeg command line tool) produce broken files.
> >> 5. "Is there any case where this does not work ?": Is there any where it
> >> works?
> > 
> > The testcases i had where these:
> > ./ffmpeg -threads 1 -i fate-suite/svq3/Vertical400kbit.sorenson3.mov -t 1 
> > -bitexact -qscale 2 -slices 1 -y -threads 1 -vcodec h263 -s 352x288 -an 
> > /tmp/file-h263-s1t1.h263
> > ./ffmpeg -threads 1 -i fate-suite/svq3/Vertical400kbit.sorenson3.mov -t 1 
> > -bitexact -qscale 2 -slices 2 -y -threads 1 -vcodec h263 -s 352x288 -an 
> > /tmp/file-h263-s2t1.h263
> > ./ffmpeg -threads 1 -i fate-suite/svq3/Vertical400kbit.sorenson3.mov -t 1 
> > -bitexact -qscale 2 -slices 1 -y -threads 2 -vcodec h263 -s 352x288 -an 
> > /tmp/file-h263-s1t2.h263
> > ./ffmpeg -threads 1 -i fate-suite/svq3/Vertical400kbit.sorenson3.mov -t 1 
> > -bitexact -qscale 2 -slices 2 -y -threads 2 -vcodec h263 -s 352x288 -an 
> > /tmp/file-h263-s2t2.h263
> > 
> > The files seem to play fine
> > i did not try to find a case that fails
> > 
> 
> ./ffmpeg -threads 1 -i fate-suite/svq3/Vertical400kbit.sorenson3.mov -t
> 1 -bitexact -qscale 2 -slices 4 -y -threads 4 -vcodec h263 -s 1408x1152
> -an /tmp/file-h263-s2t2.h263
> 
> produces garbage; also with -s 704x576. slices < 4 seem fine. If one
> uses too many slices with smaller resolutions, the file is no longer
> correctly probed, but can be correctly decoded with -f h263.
> I don't know what is wrong with the bigger resolutions and too many
> slices; I don't know H.263 at all. My first (and admittedly only) test
> for whether using multiple slices with a single thread works produced
> garbage, so I put this codec in the "multiple slices not supported" box.

It's a while ago but H263+ has a nice slice structured mode; H263 lacks that
and uses a more restricted mode, see ff_h263_encode_gob_header()
I would wildly guess these restrictions are what causes some cases not
to work but i didnt check

thx

[...]
-- 
Michael GnuPG fingerprint: 9FF2128B147EF6730BADF133611EC787040B0FAB

Does the universe only have a finite lifespan? No, its going to go on
forever, its just that you wont like living in it. -- Hiranya Peiri




Re: [FFmpeg-devel] [PATCH] ffmpeg: document -d option

2022-04-06 Thread Michael Niedermayer
On Tue, Apr 05, 2022 at 11:27:08PM +0200, Stefano Sabatini wrote:
> On date Tuesday 2022-04-05 07:23:27 +0200, Anton Khirnov wrote:
> > Quoting Stefano Sabatini (2022-04-03 17:27:06)
> > > Option was added in commit 39aafa5ee90e10382e.
> > > 
> > > Fix trac issue: http://trac.ffmpeg.org/ticket/1698
> > > ---
> > >  doc/ffmpeg.texi  | 12 
> > >  fftools/ffmpeg_opt.c |  3 +++
> > >  2 files changed, 15 insertions(+)
> > 
> > Does this option do anything useful? Shouldn't it rather be removed?
> 
> Works for me.
> 
> Do we have a use case for this? This basically disables logs and
> detaches ffmpeg from the terminal.
> 
> @Michael, can you comment about this (this was added by you)?

i have a few udp tests that used it but i tried now and they work fine
if i remove it so iam not aware of a current use case

thx

[...]

-- 
Michael GnuPG fingerprint: 9FF2128B147EF6730BADF133611EC787040B0FAB

Dictatorship naturally arises out of democracy, and the most aggravated
form of tyranny and slavery out of the most extreme liberty. -- Plato




Re: [FFmpeg-devel] [PATCH] avfilter/f_ebur128: multiply in integer first, before dividing in float

2022-04-06 Thread Paul B Mahol
ok, if other fates are not regressed.


Re: [FFmpeg-devel] [PATCH] libavformat/rtsp: pkt_size option is not honored in rtsp

2022-04-06 Thread Zhao Zhili

> On Apr 6, 2022, at 9:49 PM, Yubo Xie wrote:
> 
> Yes, I've removed it already.

Sorry I missed that. LGTM.

> 
> From: ffmpeg-devel  on behalf of 
> "zhilizhao(赵志立)" 
> Sent: Wednesday, April 6, 2022 6:28 AM
> To: FFmpeg development discussions and patches 
> Subject: Re: [FFmpeg-devel] [PATCH] libavformat/rtsp: pkt_size option is not 
> honored in rtsp
> 
> 
>> On Apr 6, 2022, at 8:52 PM, Yubo Xie  wrote:
>> 
>> Signed-off-by: xyb 
>> ---
>> libavformat/rtsp.c| 4 ++--
>> libavformat/rtsp.h| 1 -
>> libavformat/rtspenc.c | 2 +-
>> 3 files changed, 3 insertions(+), 4 deletions(-)
>> 
> […]
>> 
>> diff --git a/libavformat/rtspenc.c b/libavformat/rtspenc.c
>> index 2a00b3e18d..5c7e0b4e8b 100644
>> --- a/libavformat/rtspenc.c
>> +++ b/libavformat/rtspenc.c
>> @@ -174,7 +174,7 @@ int ff_rtsp_tcp_write_packet(AVFormatContext *s, 
>> RTSPStream *rtsp_st)
>>size -= packet_len;
>>}
>>av_free(buf);
>> -return ffio_open_dyn_packet_buf(&rt->pb, RTSP_TCP_MAX_PACKET_SIZE);
>> +return ffio_open_dyn_packet_buf(&rt->pb, rt->pkt_size);
> 
> There is no more reference to RTSP_TCP_MAX_PACKET_SIZE now, it would
> be better be removed.


Re: [FFmpeg-devel] [PATCH] libavformat/rtsp: pkt_size option is not honored in rtsp

2022-04-06 Thread Yubo Xie
Yes, I've removed it already.

From: ffmpeg-devel  on behalf of 
"zhilizhao(赵志立)" 
Sent: Wednesday, April 6, 2022 6:28 AM
To: FFmpeg development discussions and patches 
Subject: Re: [FFmpeg-devel] [PATCH] libavformat/rtsp: pkt_size option is not 
honored in rtsp


> On Apr 6, 2022, at 8:52 PM, Yubo Xie  wrote:
>
> Signed-off-by: xyb 
> ---
> libavformat/rtsp.c| 4 ++--
> libavformat/rtsp.h| 1 -
> libavformat/rtspenc.c | 2 +-
> 3 files changed, 3 insertions(+), 4 deletions(-)
>
[…]
>
> diff --git a/libavformat/rtspenc.c b/libavformat/rtspenc.c
> index 2a00b3e18d..5c7e0b4e8b 100644
> --- a/libavformat/rtspenc.c
> +++ b/libavformat/rtspenc.c
> @@ -174,7 +174,7 @@ int ff_rtsp_tcp_write_packet(AVFormatContext *s, 
> RTSPStream *rtsp_st)
> size -= packet_len;
> }
> av_free(buf);
> -return ffio_open_dyn_packet_buf(&rt->pb, RTSP_TCP_MAX_PACKET_SIZE);
> +return ffio_open_dyn_packet_buf(&rt->pb, rt->pkt_size);

There is no more reference to RTSP_TCP_MAX_PACKET_SIZE now, it would
be better be removed.


Re: [FFmpeg-devel] [PATCH] libavformat/rtsp: pkt_size option is not honored in rtsp

2022-04-06 Thread zhilizhao(赵志立)

> On Apr 6, 2022, at 8:52 PM, Yubo Xie  wrote:
> 
> Signed-off-by: xyb 
> ---
> libavformat/rtsp.c| 4 ++--
> libavformat/rtsp.h| 1 -
> libavformat/rtspenc.c | 2 +-
> 3 files changed, 3 insertions(+), 4 deletions(-)
> 
[…]
> 
> diff --git a/libavformat/rtspenc.c b/libavformat/rtspenc.c
> index 2a00b3e18d..5c7e0b4e8b 100644
> --- a/libavformat/rtspenc.c
> +++ b/libavformat/rtspenc.c
> @@ -174,7 +174,7 @@ int ff_rtsp_tcp_write_packet(AVFormatContext *s, 
> RTSPStream *rtsp_st)
> size -= packet_len;
> }
> av_free(buf);
> -return ffio_open_dyn_packet_buf(&rt->pb, RTSP_TCP_MAX_PACKET_SIZE);
> +return ffio_open_dyn_packet_buf(&rt->pb, rt->pkt_size);

There is no more reference to RTSP_TCP_MAX_PACKET_SIZE now, it would
be better be removed.


Re: [FFmpeg-devel] Sponsoring R12L Decklink out

2022-04-06 Thread lance . lmwang
On Tue, Apr 05, 2022 at 10:22:54AM -0700, Alan Latteri wrote:
> Hello,
> 
> I am interested in sponsoring the addition of R12L format output via
> Decklink. Currently Decklink can only output up to v210, a 10-bit
> YUV 4:2:2 format which is neither full-range video nor color accurate.
> 
> bmdFormat12BitRGB= (* 'R12B' *) 
> $52313242;// Big-endian RGB 12-bit per component with full range 
> (0-4095). Packed as 12-bit per component

I can consider providing support. Please clarify how much the sponsorship fee
is and how it will be paid.


thanks.

> 
> Thank you.
> Alan
> 
> 

-- 
Thanks,
Limin Wang


[FFmpeg-devel] [PATCH] libavformat/rtsp: pkt_size option is not honored in rtsp

2022-04-06 Thread Yubo Xie
Signed-off-by: xyb 
---
 libavformat/rtsp.c| 4 ++--
 libavformat/rtsp.h| 1 -
 libavformat/rtspenc.c | 2 +-
 3 files changed, 3 insertions(+), 4 deletions(-)

diff --git a/libavformat/rtsp.c b/libavformat/rtsp.c
index e22b744535..88e9ef5226 100644
--- a/libavformat/rtsp.c
+++ b/libavformat/rtsp.c
@@ -77,7 +77,7 @@
 #define COMMON_OPTS() \
 { "reorder_queue_size", "set number of packets to buffer for handling of 
reordered packets", OFFSET(reordering_queue_size), AV_OPT_TYPE_INT, { .i64 = -1 
}, -1, INT_MAX, DEC }, \
 { "buffer_size","Underlying protocol send/receive buffer size",
  OFFSET(buffer_size),   AV_OPT_TYPE_INT, { .i64 = -1 }, 
-1, INT_MAX, DEC|ENC }, \
-{ "pkt_size",   "Underlying protocol send packet size",
  OFFSET(pkt_size),  AV_OPT_TYPE_INT, { .i64 = -1 }, 
-1, INT_MAX, ENC } \
+{ "pkt_size",   "Underlying protocol send packet size",
  OFFSET(pkt_size),  AV_OPT_TYPE_INT, { .i64 = 1472 }, 
-1, INT_MAX, ENC } \
 
 
 const AVOption ff_rtsp_options[] = {
@@ -843,7 +843,7 @@ int ff_rtsp_open_transport_ctx(AVFormatContext *s, 
RTSPStream *rtsp_st)
 if (CONFIG_RTSP_MUXER && s->oformat && st) {
int ret = ff_rtp_chain_mux_open((AVFormatContext **)&rtsp_st->transport_priv,
 s, st, rtsp_st->rtp_handle,
-RTSP_TCP_MAX_PACKET_SIZE,
+rt->pkt_size,
 rtsp_st->stream_index);
 /* Ownership of rtp_handle is passed to the rtp mux context */
 rtsp_st->rtp_handle = NULL;
diff --git a/libavformat/rtsp.h b/libavformat/rtsp.h
index 3133bf61c1..6e500fd56a 100644
--- a/libavformat/rtsp.h
+++ b/libavformat/rtsp.h
@@ -74,7 +74,6 @@ enum RTSPControlTransport {
 #define RTSP_DEFAULT_PORT   554
 #define RTSPS_DEFAULT_PORT  322
 #define RTSP_MAX_TRANSPORTS 8
-#define RTSP_TCP_MAX_PACKET_SIZE 1472
 #define RTSP_DEFAULT_AUDIO_SAMPLERATE 44100
 #define RTSP_RTP_PORT_MIN 5000
 #define RTSP_RTP_PORT_MAX 65000
diff --git a/libavformat/rtspenc.c b/libavformat/rtspenc.c
index 2a00b3e18d..5c7e0b4e8b 100644
--- a/libavformat/rtspenc.c
+++ b/libavformat/rtspenc.c
@@ -174,7 +174,7 @@ int ff_rtsp_tcp_write_packet(AVFormatContext *s, RTSPStream 
*rtsp_st)
 size -= packet_len;
 }
 av_free(buf);
-return ffio_open_dyn_packet_buf(&rt->pb, RTSP_TCP_MAX_PACKET_SIZE);
+return ffio_open_dyn_packet_buf(&rt->pb, rt->pkt_size);
 }
 
 static int rtsp_write_packet(AVFormatContext *s, AVPacket *pkt)
-- 
2.25.1
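
For what it's worth, a hypothetical command line exercising the now-honored option
(input, URL and value purely illustrative); with the patch, the 1200 ends up both in
ff_rtp_chain_mux_open() and in the interleaved-TCP dynamic buffer instead of the
hard-coded 1472:

ffmpeg -re -i input.mp4 -c copy -f rtsp -pkt_size 1200 rtsp://127.0.0.1:8554/stream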



Re: [FFmpeg-devel] [PATCH v3 1/2] libavcodec/qsvdec: Add more pixel format support to qsvdec

2022-04-06 Thread Xiang, Haihao
On Wed, 2022-04-06 at 16:48 +0800, Wenbin Chen wrote:
> Qsv decoder only supports directly output nv12 and p010 to system
> memory. For other format, we need to download frame from qsv format
> to system memory. Now add other supported format to qsvdec.
> 
> Signed-off-by: Wenbin Chen 
> ---
>  libavcodec/qsv.c  | 36 
>  libavcodec/qsv_internal.h |  3 +++
>  libavcodec/qsvdec.c   | 23 +--
>  3 files changed, 56 insertions(+), 6 deletions(-)
> 
> diff --git a/libavcodec/qsv.c b/libavcodec/qsv.c
> index 67d0e3934a..b86c20b153 100644
> --- a/libavcodec/qsv.c
> +++ b/libavcodec/qsv.c
> @@ -244,6 +244,42 @@ int ff_qsv_map_pixfmt(enum AVPixelFormat format, uint32_t
> *fourcc)
>  }
>  }
>  
> +int ff_qsv_map_frame_to_surface(const AVFrame *frame, mfxFrameSurface1
> *surface)
> +{
> +switch (frame->format) {
> +case AV_PIX_FMT_NV12:
> +case AV_PIX_FMT_P010:
> +surface->Data.Y  = frame->data[0];
> +surface->Data.UV = frame->data[1];
> +/* The SDK checks Data.V when using system memory for VP9 encoding */
> +surface->Data.V  = surface->Data.UV + 1;
> +break;
> +case AV_PIX_FMT_X2RGB10LE:
> +case AV_PIX_FMT_BGRA:
> +surface->Data.B = frame->data[0];
> +surface->Data.G = frame->data[0] + 1;
> +surface->Data.R = frame->data[0] + 2;
> +surface->Data.A = frame->data[0] + 3;
> +break;
> +case AV_PIX_FMT_YUYV422:
> +surface->Data.Y = frame->data[0];
> +surface->Data.U = frame->data[0] + 1;
> +surface->Data.V = frame->data[0] + 3;
> +break;
> +
> +case AV_PIX_FMT_Y210:
> +surface->Data.Y16 = (mfxU16 *)frame->data[0];
> +surface->Data.U16 = (mfxU16 *)frame->data[0] + 1;
> +surface->Data.V16 = (mfxU16 *)frame->data[0] + 3;
> +break;
> +default:
> +return AVERROR(ENOSYS);
> +}
> +surface->Data.PitchLow  = frame->linesize[0];
> +
> +return 0;
> +}
> +
>  int ff_qsv_find_surface_idx(QSVFramesContext *ctx, QSVFrame *frame)
>  {
>  int i;
> diff --git a/libavcodec/qsv_internal.h b/libavcodec/qsv_internal.h
> index 58186ea7ca..e2aecdcbd6 100644
> --- a/libavcodec/qsv_internal.h
> +++ b/libavcodec/qsv_internal.h
> @@ -147,4 +147,7 @@ int ff_qsv_find_surface_idx(QSVFramesContext *ctx,
> QSVFrame *frame);
>  void ff_qsv_frame_add_ext_param(AVCodecContext *avctx, QSVFrame *frame,
>  mfxExtBuffer *param);
>  
> +int ff_qsv_map_frame_to_surface(const AVFrame *frame, mfxFrameSurface1
> *surface);
> +
> +
>  #endif /* AVCODEC_QSV_INTERNAL_H */
> diff --git a/libavcodec/qsvdec.c b/libavcodec/qsvdec.c
> index de4af1754d..c4296f80d7 100644
> --- a/libavcodec/qsvdec.c
> +++ b/libavcodec/qsvdec.c
> @@ -132,21 +132,28 @@ static int qsv_get_continuous_buffer(AVCodecContext
> *avctx, AVFrame *frame,
>  frame->linesize[0] = FFALIGN(avctx->width, 128);
>  break;
>  case AV_PIX_FMT_P010:
> +case AV_PIX_FMT_YUYV422:
>  frame->linesize[0] = 2 * FFALIGN(avctx->width, 128);
>  break;
> +case AV_PIX_FMT_Y210:
> +frame->linesize[0] = 4 * FFALIGN(avctx->width, 128);
> +break;
>  default:
>  av_log(avctx, AV_LOG_ERROR, "Unsupported pixel format.\n");
>  return AVERROR(EINVAL);
>  }
>  
> -frame->linesize[1] = frame->linesize[0];
>  frame->buf[0]  = av_buffer_pool_get(pool);
>  if (!frame->buf[0])
>  return AVERROR(ENOMEM);
>  
>  frame->data[0] = frame->buf[0]->data;
> -frame->data[1] = frame->data[0] +
> -frame->linesize[0] * FFALIGN(avctx->height, 64);
> +if (avctx->pix_fmt == AV_PIX_FMT_NV12 ||
> +avctx->pix_fmt == AV_PIX_FMT_P010) {
> +frame->linesize[1] = frame->linesize[0];
> +frame->data[1] = frame->data[0] +
> +frame->linesize[0] * FFALIGN(avctx->height, 64);
> +}
>  
>  ret = ff_attach_decode_data(frame);
>  if (ret < 0)
> @@ -426,9 +433,11 @@ static int alloc_frame(AVCodecContext *avctx, QSVContext
> *q, QSVFrame *frame)
>  if (frame->frame->format == AV_PIX_FMT_QSV) {
>  frame->surface = *(mfxFrameSurface1*)frame->frame->data[3];
>  } else {
> -frame->surface.Data.PitchLow = frame->frame->linesize[0];
> -frame->surface.Data.Y= frame->frame->data[0];
> -frame->surface.Data.UV   = frame->frame->data[1];
> +ret = ff_qsv_map_frame_to_surface(frame->frame, &frame->surface);
> +if (ret < 0) {
> +av_log(avctx, AV_LOG_ERROR, "map frame to surface failed.\n");
> +return ret;
> +}
>  }
>  
>  frame->surface.Info = q->frame_info;
> @@ -992,6 +1001,8 @@ const FFCodec ff_##x##_qsv_decoder = { \
>  .p.priv_class   = &x##_qsv_class, \
>  .p.pix_fmts = (const enum AVPixelFormat[]){ AV_PIX_FMT_NV12, \
>  

[FFmpeg-devel] [PATCH v13 1/1] avformat: Add IPFS protocol support.

2022-04-06 Thread Mark Gaiser
This patch adds support for:
- ffplay ipfs://
- ffplay ipns://

IPFS data can be played from so called "ipfs gateways".
A gateway is essentially a webserver that gives access to the
distributed IPFS network.

This protocol support (ipfs and ipns) therefore translates
ipfs:// and ipns:// to an http:// URL. The resulting URL is
then handled by the http protocol. It could also be https
depending on the gateway provided.
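
Conceptually the translation amounts to something like the following (illustrative
sketch only, not the actual code; buffer and variable names are assumptions):

/* gateway is e.g. "http://localhost:8080" (no trailing slash),
 * cid is everything after "ipfs://" in the user's URL */
char url[2048];
int len = snprintf(url, sizeof(url), "%s/ipfs/%s", gateway, cid);
if (len < 0 || len >= (int)sizeof(url))
    return AVERROR(EINVAL); /* gateway + CID would not fit */
/* 'url' is then opened through the regular http/https protocol handler */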

To use this protocol, a gateway must be provided.
If you do nothing it will try to find it in your
$HOME/.ipfs/gateway file. The ways to set it manually are:
1. Define a -gateway  to the gateway.
2. Define $IPFS_GATEWAY with the full http link to the gateway.
3. Define $IPFS_PATH and point it to the IPFS data path.
4. Have IPFS running in your local user folder (under $HOME/.ipfs).

Signed-off-by: Mark Gaiser 
---
 configure |   2 +
 doc/protocols.texi|  30 
 libavformat/Makefile  |   2 +
 libavformat/ipfsgateway.c | 339 ++
 libavformat/protocols.c   |   2 +
 5 files changed, 375 insertions(+)
 create mode 100644 libavformat/ipfsgateway.c

diff --git a/configure b/configure
index e4d36aa639..55af90957a 100755
--- a/configure
+++ b/configure
@@ -3579,6 +3579,8 @@ udp_protocol_select="network"
 udplite_protocol_select="network"
 unix_protocol_deps="sys_un_h"
 unix_protocol_select="network"
+ipfs_protocol_select="https_protocol"
+ipns_protocol_select="https_protocol"
 
 # external library protocols
 libamqp_protocol_deps="librabbitmq"
diff --git a/doc/protocols.texi b/doc/protocols.texi
index d207df0b52..90a9eefde0 100644
--- a/doc/protocols.texi
+++ b/doc/protocols.texi
@@ -2025,5 +2025,35 @@ decoding errors.
 
 @end table
 
+@section ipfs
+
+InterPlanetary File System (IPFS) protocol support. One can access files stored
+on the IPFS network through so called gateways. Those are http(s) endpoints.
+This protocol wraps the IPFS native protocols (ipfs:// and ipns://) to be sent
+to such a gateway. Users can (and should) host their own node which means this
+protocol will use your local gateway to access files on the IPFS network.
+
+If a user doesn't have a node of their own then the public gateway dweb.link is
+used by default.
+
+You can use this protocol in 2 ways. Using IPFS:
+@example
+ffplay ipfs://QmbGtJg23skhvFmu9mJiePVByhfzu5rwo74MEkVDYAmF5T
+@end example
+
+Or the IPNS protocol (IPNS is mutable IPFS):
+@example
+ffplay ipns://QmbGtJg23skhvFmu9mJiePVByhfzu5rwo74MEkVDYAmF5T
+@end example
+
+You can also change the gateway to be used:
+
+@table @option
+
+@item gateway
+Defines the gateway to use. When nothing is provided the protocol will first 
try
+your local gateway. If that fails dweb.link will be used.
+
+@end table
 
 @c man end PROTOCOLS
diff --git a/libavformat/Makefile b/libavformat/Makefile
index d7182d6bd8..e3233fd7ac 100644
--- a/libavformat/Makefile
+++ b/libavformat/Makefile
@@ -660,6 +660,8 @@ OBJS-$(CONFIG_SRTP_PROTOCOL) += srtpproto.o 
srtp.o
 OBJS-$(CONFIG_SUBFILE_PROTOCOL)  += subfile.o
 OBJS-$(CONFIG_TEE_PROTOCOL)  += teeproto.o tee_common.o
 OBJS-$(CONFIG_TCP_PROTOCOL)  += tcp.o
+OBJS-$(CONFIG_IPFS_PROTOCOL) += ipfsgateway.o
+OBJS-$(CONFIG_IPNS_PROTOCOL) += ipfsgateway.o
 TLS-OBJS-$(CONFIG_GNUTLS)+= tls_gnutls.o
 TLS-OBJS-$(CONFIG_LIBTLS)+= tls_libtls.o
 TLS-OBJS-$(CONFIG_MBEDTLS)   += tls_mbedtls.o
diff --git a/libavformat/ipfsgateway.c b/libavformat/ipfsgateway.c
new file mode 100644
index 00..05fd37bbc5
--- /dev/null
+++ b/libavformat/ipfsgateway.c
@@ -0,0 +1,339 @@
+/*
+ * IPFS and IPNS protocol support through IPFS Gateway.
+ * Copyright (c) 2022 Mark Gaiser
+ *
+ * This file is part of FFmpeg.
+ *
+ * FFmpeg is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU Lesser General Public
+ * License as published by the Free Software Foundation; either
+ * version 2.1 of the License, or (at your option) any later version.
+ *
+ * FFmpeg is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+ * Lesser General Public License for more details.
+ *
+ * You should have received a copy of the GNU Lesser General Public
+ * License along with FFmpeg; if not, write to the Free Software
+ * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
+ */
+
+#include "libavutil/avstring.h"
+#include "libavutil/opt.h"
+#include 
+#include "os_support.h"
+#include "url.h"
+
+typedef struct IPFSGatewayContext {
+AVClass *class;
+URLContext *inner;
+// Is filled by the -gateway argument and not changed after.
+char *gateway;
+// If the above gateway is non null, it will be copied into this buffer.
+// Else this buffer will contain the auto detected gateway.
+// In either case, the 

[FFmpeg-devel] [PATCH v13 0/1] Add IPFS protocol support.

2022-04-06 Thread Mark Gaiser
Hi,

This patch series adds support for IPFS.
V13:
- Apply clang-format on the changes
- Remove trailing whitespace
V12:
- Removed last ifdef, we only need stat for "file exists" purposes.
- To be sure, added back os_support.h as it does change stat to _stati64 on 
  windows.
V11:
- Cleaned up the headers. What's there is actually needed now.
- Some more strict checking (namely on fgets)
- Merged long log in one log entry.
- Another allocation check (this time for "fulluri")
- Lots of formatting changes (not visual) to be more in line with the soft 
  80 char limit.
V10:
- Removed free on c->gateway in ipfs_close to fix a double free.
V9:
- dweb.link as fallback gateway. This is managed by Protocol Labs (like IPFS).
- Change all errors to warnings as not having a gateway still gives you a 
  working video playback.
- Changed the console output to be more clear.
V8:
- Removed unnecessary change to set the first gateway_buffer character to 0.
  It made no sense as the buffer is always overwritten in the function context.
- Change %li to %zu (it's intended to print the sizeof in all cases)
V7:
- Removed sanitize_ipfs_gateway. Only the http/https check stayed and that's
  now in translate_ipfs_to_http.
- Added a check for ipfs_cid. It's only to show an error if someone happens to
  provide `ffplay ipfs://` without a cid.
- All snprintf usages are now checked.
- Adding a / to a gateway if it didn't end with it is now done in the same line
  that composes the full resulting url.
- And a couple more minor things.
V6:
- Moved the gateway buffer (now called gateway_buffer) to IPFSGatewayContext
- Changed nearly all PATH_MAX uses to sizeof(...) uses for future flexibility
- The rest is relatively minor feedback changes
V5:
- "c->gateway" is now not modified anymore
- Moved most variables to the stack
- Even more strict checks with the auto detection logic
- Errors are now AVERROR :)
- Added more logging and changed some debug ones to info ones as they are 
  valuable to aid debugging as a user when something goes wrong.
V3 (V4):
- V4: title issue from V3..
- A lot of style changes
- Made url checks a lot more strict
- av_asprintf leak fixes
- So many changes that a diff to v2 is again not sensible.
V2:
- Squashed and changed so much that a diff to v1 was not sensible.

The following is a short summary. In the IPFS ecosystem you access its content
by a "Content IDentifier" (CID). This CID is, in simplified terms, a hash of 
the content. IPFS itself is a distributed network where any user can run a node
to be part of the network and access files by their CID. If any reachable node 
within that network has the CID, you can get it.

IPFS (as a technology) has two protocols, ipfs and ipns.
The ipfs protocol is the immutable way to access content.
The ipns protocol is a mutable layer on top of it. It's essentially a new CID 
that points to a ipfs CID. This "pointer" if you will can be changed to point 
to something else.
Much more information on how this technology works can be found here [1].

This patch series allows to interact natively with IPFS. That means being able
to access files like:
- ffplay ipfs://
- ffplay ipns://

There are multiple ways to access files on the IPFS network. This patch series
uses the gateway driven way. An IPFS node - by default - exposes a local 
gateway (say http://localhost:8080) which is then used to get content from IPFS.
The gateway functionality on the IPFS side contains optimizations to
be as well suited to streaming data as possible, optimizations that the http protocol
in ffmpeg also has and which are thus reused for free in this approach.

A note on other "more appropriate" ways, as I received some feedback on that.
For ffmpeg purposes the gateway approach is ideal! There is a "libipfs" but
that would spin up an ipfs node with the overhead of:
- bootstrapping
- connecting to nodes
- finding other nodes to connect too
- finally finding your file

This alternative approach could take minutes before a file is played. The
gateway approach immediately connects to an already running node and thus delivers
the file the fastest.

Much of the logic in this patch series is to find that gateway and essentially 
rewrite:

"ipfs://"

to:

"http://localhost:8080/ipfs/"

Once that's found it's forwarded to the protocol handler where eventually the
http protocol is going to handle it. Note that it could also be https. There's 
enough flexibility in the implementation to allow the user to provide a 
gateway. There are also public https gateways which can be used just as well.
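
As a rough illustration only (the real patch differs in details such as buffer
handling and logging; the signature below is an assumption, though the
translate_ipfs_to_http name comes from the V7 notes above), the rewrite boils
down to something like this:

/* Sketch of the ipfs://<cid> -> http(s) gateway rewrite described above. */
#include <stdio.h>
#include <string.h>
#include "libavutil/error.h"

static int translate_ipfs_to_http(const char *url, const char *gateway,
                                  char *out, size_t out_size)
{
    const char *proto = "ipfs/";
    const char *cid;

    if (!strncmp(url, "ipfs://", 7)) {
        cid = url + 7;
    } else if (!strncmp(url, "ipns://", 7)) {
        proto = "ipns/";
        cid = url + 7;
    } else
        return AVERROR(EINVAL);

    if (!*cid || !*gateway)
        return AVERROR(EINVAL); /* no cid given, or no gateway to rewrite to */

    /* e.g. "http://localhost:8080" + "/" + "ipfs/" + <cid> */
    if (snprintf(out, out_size, "%s%s%s%s", gateway,
                 gateway[strlen(gateway) - 1] == '/' ? "" : "/",
                 proto, cid) >= (int)out_size)
        return AVERROR(EINVAL); /* resulting url too long for the buffer */

    return 0;
}

The resulting http(s) url is then handed to the existing http protocol handler,
so all of its streaming behaviour applies unchanged.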

After this patch is accepted, I'll work on getting IPFS supported in:
- mpv (requires this ffmpeg patch)
- vlc (prefers this patch but can be made to work without this patch)
- kodi (requires this ffmpeg patch)

Best regards,
Mark Gaiser

[1] https://docs.ipfs.io/concepts/

Mark Gaiser (1):
  avformat: Add IPFS protocol support.

 configure |   2 +
 doc/protocols.texi|  30 
 libavformat/Makefile  |   2 +
 

Re: [FFmpeg-devel] [PATCH v2 3/3] libavcodec/qsvdec: using suggested num to set init_pool_size

2022-04-06 Thread Xiang, Haihao
On Sat, 2022-04-02 at 08:52 +, Xiang, Haihao wrote:
> On Mon, 2022-03-28 at 02:26 +, Xiang, Haihao wrote:
> > On Fri, 2022-03-18 at 07:40 +, Soft Works wrote:
> > > > -Original Message-
> > > > From: ffmpeg-devel  On Behalf Of
> > > > Wenbin Chen
> > > > Sent: Friday, March 18, 2022 7:25 AM
> > > > To: ffmpeg-devel@ffmpeg.org
> > > > Subject: [FFmpeg-devel] [PATCH v2 3/3] libavcodec/qsvdec: using
> > > > suggested num to set init_pool_size
> > > > 
> > > > The init_pool_size is set to be 64 and it is too many.
> > > > Use IOSurfQuery to get NumFrameSuggest which is the suggested
> > > > number of frame that needed to be allocated when initializing the
> > > > decoder.
> > > > Considering that the hevc_qsv encoder uses the  most frame buffer,
> > > > async is 4 (default) and max_b_frames is 8 (default) and decoder
> > > > may followed by VPP, use NumFrameSuggest + 16 to set init_pool_size.
> > > > 
> > > > Signed-off-by: Wenbin Chen 
> > > > Signed-off-by: Guangxin Xu 
> > > > ---
> > > >  libavcodec/qsvdec.c | 14 --
> > > >  1 file changed, 12 insertions(+), 2 deletions(-)
> > > > 
> > > > diff --git a/libavcodec/qsvdec.c b/libavcodec/qsvdec.c
> > > > index 210bd0c1d5..9875d3d632 100644
> > > > --- a/libavcodec/qsvdec.c
> > > > +++ b/libavcodec/qsvdec.c
> > > > @@ -88,7 +88,7 @@ typedef struct QSVContext {
> > > >  uint32_t fourcc;
> > > >  mfxFrameInfo frame_info;
> > > >  AVBufferPool *pool;
> > > > -
> > > > +int suggest_pool_size;
> > > >  int initialized;
> > > > 
> > > >  // options set by the caller
> > > > @@ -275,7 +275,7 @@ static int qsv_decode_preinit(AVCodecContext
> > > > *avctx, QSVContext *q, enum AVPixel
> > > >  hwframes_ctx->height= FFALIGN(avctx-
> > > > > coded_height, 32);
> > > > 
> > > >  hwframes_ctx->format= AV_PIX_FMT_QSV;
> > > >  hwframes_ctx->sw_format = avctx->sw_pix_fmt;
> > > > -hwframes_ctx->initial_pool_size = 64 + avctx-
> > > > > extra_hw_frames;
> > > > 
> > > > +hwframes_ctx->initial_pool_size = q->suggest_pool_size + 16 +
> > > > avctx->extra_hw_frames;
> > > >  frames_hwctx->frame_type=
> > > > MFX_MEMTYPE_VIDEO_MEMORY_DECODER_TARGET;
> > > > 
> > > >  ret = av_hwframe_ctx_init(avctx->hw_frames_ctx);
> > > > @@ -793,6 +793,9 @@ static int qsv_process_data(AVCodecContext *avctx,
> > > > QSVContext *q,
> > > >  }
> > > > 
> > > >  if (q->reinit_flag || !q->session || !q->initialized) {
> > > > +mfxFrameAllocRequest request;
> > > > +memset(&request, 0, sizeof(request));
> > > > +
> > > >  q->reinit_flag = 0;
> > > > +ret = qsv_decode_header(avctx, q, pkt, pix_fmt, &param);
> > > >  if (ret < 0) {
> > > > @@ -802,12 +805,19 @@ static int qsv_process_data(AVCodecContext
> > > > *avctx, QSVContext *q,
> > > >  av_log(avctx, AV_LOG_ERROR, "Error decoding
> > > > header\n");
> > > >  goto reinit_fail;
> > > >  }
> > > > +param.IOPattern = q->iopattern;
> > > > 
> > > >  q->orig_pix_fmt = avctx->pix_fmt = pix_fmt =
> > > > ff_qsv_map_fourcc(param.mfx.FrameInfo.FourCC);
> > > > 
> > > >  avctx->coded_width  = param.mfx.FrameInfo.Width;
> > > >  avctx->coded_height = param.mfx.FrameInfo.Height;
> > > > 
> > > > +ret = MFXVideoDECODE_QueryIOSurf(q->session, &param,
> > > > &request);
> > > > +if (ret < 0)
> > > > +return ff_qsv_print_error(avctx, ret, "Error querying IO
> > > > surface");
> > > > +
> > > > +q->suggest_pool_size = request.NumFrameSuggested;
> > > > +
> > > >  ret = qsv_decode_preinit(avctx, q, pix_fmt, );
> > > >  if (ret < 0)
> > > >  goto reinit_fail;
> > > > --
> > > 
> > > Thanks for the patch! I have that on my list for quite a while.
> > > Will look at it shortly.
> > 
> > Hi Softworz,
> > 
> > This patchset LGTM and works well, do you have any comment ? 
> 
> Ping, I'll apply this patchset in a few days if no more comment.
> 

Applied, thx

-Haihao

___
ffmpeg-devel mailing list
ffmpeg-devel@ffmpeg.org
https://ffmpeg.org/mailman/listinfo/ffmpeg-devel

To unsubscribe, visit link above, or email
ffmpeg-devel-requ...@ffmpeg.org with subject "unsubscribe".


Re: [FFmpeg-devel] [PATCH 09/49] fftools/ffmpeg: store output format separately from the muxer context

2022-04-06 Thread James Almer

On 4/4/2022 8:29 AM, Anton Khirnov wrote:

Allows accessing it without going through the muxer context. This will
be useful in the following commits, where the muxer context will be
hidden.
---
  fftools/ffmpeg.c | 18 ++
  fftools/ffmpeg.h |  2 ++
  fftools/ffmpeg_opt.c |  1 +
  3 files changed, 13 insertions(+), 8 deletions(-)


Patches 4 to 9 look ok.
___
ffmpeg-devel mailing list
ffmpeg-devel@ffmpeg.org
https://ffmpeg.org/mailman/listinfo/ffmpeg-devel

To unsubscribe, visit link above, or email
ffmpeg-devel-requ...@ffmpeg.org with subject "unsubscribe".


Re: [FFmpeg-devel] [PATCH v2] libavcodec/qsvenc: enable LowDelayBRC and MaxFrameSizeI/MaxFrameSizeP for more accurate bitrate control

2022-04-06 Thread Xiang, Haihao
On Mon, 2022-03-21 at 08:33 +, He, Fan F wrote:
> Feature introduction of LowDelayBRC, MaxFrameSizeI and MaxFrameSizeP could be
> found here:
> https://github.com/Intel-Media-SDK/MediaSDK/blob/master/doc/mediasdk-man.md
> 
> Signed-off-by: Dmitry Ermilov 
> Signed-off-by: Fan F He 
> ---
>  doc/encoders.texi   | 26 ++
>  libavcodec/qsvenc.c | 17 +
>  libavcodec/qsvenc.h | 10 --
>  3 files changed, 51 insertions(+), 2 deletions(-)
> 
> diff --git a/doc/encoders.texi b/doc/encoders.texi
> index 1bd38671ca..47c8577e09 100644
> --- a/doc/encoders.texi
> +++ b/doc/encoders.texi
> @@ -3264,6 +3264,14 @@ Enable rate distortion optimization.
>  @item @var{max_frame_size}
>  Maximum encoded frame size in bytes.
>  
> +@item @var{max_frame_size_i}
> +Maximum encoded frame size for I frames in bytes. If this value is set as
> larger
> +than zero, then for I frames the value set by max_frame_size is ignored.
> +
> +@item @var{max_frame_size_p}
> +Maximum encoded frame size for P frames in bytes. If this value is set as
> larger
> +than zero, then for P frames the value set by max_frame_size is ignored.
> +
>  @item @var{max_slice_size}
>  Maximum encoded slice size in bytes.
>  
> @@ -3280,6 +3288,11 @@ Setting this flag enables macroblock level bitrate
> control that generally
>  improves subjective visual quality. Enabling this flag may have negative
> impact
>  on performance and objective visual quality metric.
>  
> +@item @var{low_delay_brc}
> +Setting this flag turns on or off LowDelayBRC feautre in qsv plugin, which
> provides
> +more accurate bitrate control to minimize the variance of bitstream size
> frame
> +by frame. Value: -1-default 0-off 1-on
> +
>  @item @var{adaptive_i}
>  This flag controls insertion of I frames by the QSV encoder. Turn ON this
> flag
>  to allow changing of frame type from P and B to I.
> @@ -3392,6 +3405,14 @@ Enable rate distortion optimization.
>  @item @var{max_frame_size}
>  Maximum encoded frame size in bytes.
>  
> +@item @var{max_frame_size_i}
> +Maximum encoded frame size for I frames in bytes. If this value is set as
> larger
> +than zero, then for I frames the value set by max_frame_size is ignored.
> +
> +@item @var{max_frame_size_p}
> +Maximum encoded frame size for P frames in bytes. If this value is set as
> larger
> +than zero, then for P frames the value set by max_frame_size is ignored.
> +
>  @item @var{max_slice_size}
>  Maximum encoded slice size in bytes.
>  
> @@ -3400,6 +3421,11 @@ Setting this flag enables macroblock level bitrate
> control that generally
>  improves subjective visual quality. Enabling this flag may have negative
> impact
>  on performance and objective visual quality metric.
>  
> +@item @var{low_delay_brc}
> +Setting this flag turns on or off LowDelayBRC feautre in qsv plugin, which
> provides
> +more accurate bitrate control to minimize the variance of bitstream size
> frame
> +by frame. Value: -1-default 0-off 1-on
> +
>  @item @var{p_strategy}
>  Enable P-pyramid: 0-default 1-simple 2-pyramid(bf need to be set to 0).
>  
> diff --git a/libavcodec/qsvenc.c b/libavcodec/qsvenc.c
> index 55ce3d2499..d7441ac447 100644
> --- a/libavcodec/qsvenc.c
> +++ b/libavcodec/qsvenc.c
> @@ -376,6 +376,13 @@ static void dump_video_param(AVCodecContext *avctx,
> QSVEncContext *q,
>  #if QSV_VERSION_ATLEAST(1, 16)
>  av_log(avctx, AV_LOG_VERBOSE, "IntRefCycleDist: %"PRId16"\n", co3-
> >IntRefCycleDist);
>  #endif
> +#if QSV_VERSION_ATLEAST(1, 23)
> +av_log(avctx, AV_LOG_VERBOSE, "LowDelayBRC: %s\n", print_threestate(co3-
> >LowDelayBRC));
> +#endif
> +#if QSV_VERSION_ATLEAST(1, 19)
> +av_log(avctx, AV_LOG_VERBOSE, "MaxFrameSizeI: %d; ", co3->MaxFrameSizeI);
> +av_log(avctx, AV_LOG_VERBOSE, "MaxFrameSizeP: %d\n", co3->MaxFrameSizeP);
> +#endif
>  }
>  
>  static void dump_video_vp9_param(AVCodecContext *avctx, QSVEncContext *q,
> @@ -990,6 +997,16 @@ static int init_video_param(AVCodecContext *avctx,
> QSVEncContext *q)
>  #if QSV_VERSION_ATLEAST(1, 16)
>  if (q->int_ref_cycle_dist >= 0)
>  q->extco3.IntRefCycleDist = q->int_ref_cycle_dist;
> +#endif
> +#if QSV_VERSION_ATLEAST(1, 23)
> +if (q->low_delay_brc >= 0)
> +q->extco3.LowDelayBRC = q->low_delay_brc ?
> MFX_CODINGOPTION_ON : MFX_CODINGOPTION_OFF;
> +#endif
> +#if QSV_VERSION_ATLEAST(1, 19)
> +if (q->max_frame_size_p >= 0)
> +q->extco3.MaxFrameSizeI = q->max_frame_size_i;
> +if (q->max_frame_size_p >= 0)
> +q->extco3.MaxFrameSizeP = q->max_frame_size_p;
>  #endif
>  }
>  
> diff --git a/libavcodec/qsvenc.h b/libavcodec/qsvenc.h
> index 2bda858427..cb84723dfa 100644
> --- a/libavcodec/qsvenc.h
> +++ b/libavcodec/qsvenc.h
> @@ -90,8 +90,10 @@
>  { "slower",  NULL, 0, AV_OPT_TYPE_CONST, { .i64 =
> MFX_TARGETUSAGE_2  },INT_MIN, INT_MAX, VE, "preset"
> },\

Re: [FFmpeg-devel] [PATCH] libavutil/hwcontext_vaapi: Re-enable support for libva v1

2022-04-06 Thread Xiang, Haihao
On Thu, 2022-03-31 at 15:26 +, Xiang, Haihao wrote:
> On Thu, 2022-03-31 at 14:58 +, Xiang, Haihao wrote:
> > On Tue, 2022-03-29 at 14:37 +, Xiang, Haihao wrote:
> > > On Fri, 2022-03-11 at 13:24 +0100, Ingo Brückl wrote:
> > > > Commit e050959103f375e6494937fa28ef2c4d2d15c9ef implemented passing in
> > > > modifiers by using the PRIME_2 memory type, which only exists in v2 of
> > > > the library.
> > > > 
> > > > To still support v1 of the library, conditionally compile using
> > > > VA_CHECK_VERSION() for both the new code and the old code before
> > > > the commit.
> > > > ---
> > > >  libavutil/hwcontext_vaapi.c | 57 -
> > > >  1 file changed, 56 insertions(+), 1 deletion(-)
> > > > 
> > > > diff --git a/libavutil/hwcontext_vaapi.c b/libavutil/hwcontext_vaapi.c
> > > > index 994b744e4d..799490442e 100644
> > > > --- a/libavutil/hwcontext_vaapi.c
> > > > +++ b/libavutil/hwcontext_vaapi.c
> > > > @@ -1026,7 +1026,12 @@ static void
> > > > vaapi_unmap_from_drm(AVHWFramesContext
> > > > *dst_fc,
> > > >  static int vaapi_map_from_drm(AVHWFramesContext *src_fc, AVFrame *dst,
> > > >const AVFrame *src, int flags)
> > > >  {
> > > > +#if VA_CHECK_VERSION(2, 0, 0)
> > > >  VAAPIFramesContext *src_vafc = src_fc->internal->priv;
> > > > +int use_prime2;
> > > > +#else
> > > > +int k;
> > > > +#endif
> > > >  AVHWFramesContext  *dst_fc =
> > > >  (AVHWFramesContext*)dst->hw_frames_ctx->data;
> > > >  AVVAAPIDeviceContext  *dst_dev = dst_fc->device_ctx->hwctx;
> > > > @@ -1034,10 +1039,28 @@ static int vaapi_map_from_drm(AVHWFramesContext
> > > > *src_fc, AVFrame *dst,
> > > >  const VAAPIFormatDescriptor *format_desc;
> > > >  VASurfaceID surface_id;
> > > >  VAStatus vas = VA_STATUS_SUCCESS;
> > > > -int use_prime2;
> > > >  uint32_t va_fourcc;
> > > >  int err, i, j;
> > > >  
> > > > +#if !VA_CHECK_VERSION(2, 0, 0)
> > > > +unsigned long buffer_handle;
> > > > +VASurfaceAttribExternalBuffers buffer_desc;
> > > > +VASurfaceAttrib attrs[2] = {
> > > > +{
> > > > +.type  = VASurfaceAttribMemoryType,
> > > > +.flags = VA_SURFACE_ATTRIB_SETTABLE,
> > > > +.value.type= VAGenericValueTypeInteger,
> > > > +.value.value.i = VA_SURFACE_ATTRIB_MEM_TYPE_DRM_PRIME,
> > > > +},
> > > > +{
> > > > +.type  = VASurfaceAttribExternalBufferDescriptor,
> > > > +.flags = VA_SURFACE_ATTRIB_SETTABLE,
> > > > +.value.type= VAGenericValueTypePointer,
> > > > +.value.value.p = &buffer_desc,
> > > > +}
> > > > +};
> > > > +#endif
> > > > +
> > > >  desc = (AVDRMFrameDescriptor*)src->data[0];
> > > >  
> > > >  if (desc->nb_objects != 1) {
> > > > @@ -1072,6 +1095,7 @@ static int vaapi_map_from_drm(AVHWFramesContext
> > > > *src_fc,
> > > > AVFrame *dst,
> > > >  format_desc = vaapi_format_from_fourcc(va_fourcc);
> > > >  av_assert0(format_desc);
> > > >  
> > > > +#if VA_CHECK_VERSION(2, 0, 0)
> > > >  use_prime2 = !src_vafc->prime_2_import_unsupported &&
> > > >   desc->objects[0].format_modifier !=
> > > > DRM_FORMAT_MOD_INVALID;
> > > >  if (use_prime2) {
> > > > @@ -1183,6 +1207,37 @@ static int vaapi_map_from_drm(AVHWFramesContext
> > > > *src_fc, AVFrame *dst,
> > > > &surface_id, 1,
> > > > buffer_attrs,
> > > > FF_ARRAY_ELEMS(buffer_attrs));
> > > >  }
> > > > +#else
> > > > +buffer_handle = desc->objects[0].fd;
> > > > +buffer_desc.pixel_format = va_fourcc;
> > > > +buffer_desc.width= src_fc->width;
> > > > +buffer_desc.height   = src_fc->height;
> > > > +buffer_desc.data_size= desc->objects[0].size;
> > > > +buffer_desc.buffers  = &buffer_handle;
> > > > +buffer_desc.num_buffers  = 1;
> > > > +buffer_desc.flags= 0;
> > > > +
> > > > +k = 0;
> > > > +for (i = 0; i < desc->nb_layers; i++) {
> > > > +for (j = 0; j < desc->layers[i].nb_planes; j++) {
> > > > +buffer_desc.pitches[k] = desc->layers[i].planes[j].pitch;
> > > > +buffer_desc.offsets[k] = desc->layers[i].planes[j].offset;
> > > > +++k;
> > > > +}
> > > > +}
> > > > +buffer_desc.num_planes = k;
> > > > +
> > > > +if (format_desc->chroma_planes_swapped &&
> > > > +buffer_desc.num_planes == 3) {
> > > > +FFSWAP(uint32_t, buffer_desc.pitches[1],
> > > > buffer_desc.pitches[2]);
> > > > +FFSWAP(uint32_t, buffer_desc.offsets[1],
> > > > buffer_desc.offsets[2]);
> > > > +}
> > > > +
> > > > +vas = vaCreateSurfaces(dst_dev->display, format_desc->rt_format,
> > > > +   src->width, src->height,
> > > > +   &surface_id, 1,
> > > > +   attrs, FF_ARRAY_ELEMS(attrs));
> > > > +#endif
> > 

Re: [FFmpeg-devel] [PATCH v12 1/1] avformat: Add IPFS protocol support.

2022-04-06 Thread Mark Gaiser
On Wed, Apr 6, 2022 at 4:18 AM "zhilizhao(赵志立)" 
wrote:

>
>
> > On Apr 6, 2022, at 5:34 AM, Mark Gaiser  wrote:
> >
> > On Tue, Apr 5, 2022 at 11:27 PM Mark Gaiser  wrote:
> >
> >>
> >>
> >> On Tue, Apr 5, 2022 at 11:01 PM Michael Niedermayer <
> >> mich...@niedermayer.cc> wrote:
> >>
> >>> On Mon, Apr 04, 2022 at 12:38:25AM +0200, Mark Gaiser wrote:
>  This patch adds support for:
>  - ffplay ipfs://
>  - ffplay ipns://
> 
>  IPFS data can be played from so called "ipfs gateways".
>  A gateway is essentially a webserver that gives access to the
>  distributed IPFS network.
> 
>  This protocol support (ipfs and ipns) therefore translates
>  ipfs:// and ipns:// to a http:// url. This resulting url is
>  then handled by the http protocol. It could also be https
>  depending on the gateway provided.
> 
>  To use this protocol, a gateway must be provided.
>  If you do nothing it will try to find it in your
>  $HOME/.ipfs/gateway file. The ways to set it manually are:
>  1. Define a -gateway  to the gateway.
>  2. Define $IPFS_GATEWAY with the full http link to the gateway.
>  3. Define $IPFS_PATH and point it to the IPFS data path.
>  4. Have IPFS running in your local user folder (under $HOME/.ipfs).
> 
> >>> [...]
> >>>
>  +goto err;
>  +}
>  +
>  +// Read a single line (fgets stops at new line mark).
>  +if (!fgets(c->gateway_buffer, sizeof(c->gateway_buffer) - 1,
> >>> gateway_file)) {
>  +  av_log(h, AV_LOG_WARNING, "Unable to read from file (full uri:
> >>> %s).\n",
>  + ipfs_gateway_file);
>  +  ret = AVERROR(ENOENT);
>  +  goto err;
>  +}
> >>>
> >>> The indention is not consistent
> >>>
> >>
> >> What's the intended indentation here?
> >> In my editor (QtCreator, it's set to 2 spaces for tabs) the
> >> "ipfs_gateway_file" is aligned directly underneath the first argument of
> >> av_log.
> >> That is as it should be, right?
> >>
> >> For this and your other comments, I see no issue on my side. Also no
> >> trailing whitespace.
> >>
> >> Here's an image of what i see with spaces visualizes:
> >> https://i.imgur.com/37k68RH.png
> >> Is there something wrong on my end?
> >>
> >
> > Just checked patchwork:
> >
> https://patchwork.ffmpeg.org/project/ffmpeg/patch/20220403223825.26764-2-mark...@gmail.com/
> > It also shows the indentation as I intended which should be according to
> > the ffmpeg coding style guidelines.
>
> Indent size is 4. I use git log -p to do one more check before sending
> patches.
>

Ah right.
Well, I did have a clang-format I used very early on but never since.

That along with git log -p helped :)
V13 coming up!


> >
> > Same with:
> > https://ffmpeg.org/pipermail/ffmpeg-devel/2022-April/294932.html
> >
> >
> >>
> >>
> >>>
> >>>
> >>> [...]
>  +// Populate c->gateway_buffer with whatever is in c->gateway
>  +if (c->gateway != NULL) {
>  +if (snprintf(c->gateway_buffer, sizeof(c->gateway_buffer),
> >>> "%s",
>  + c->gateway) >= sizeof(c->gateway_buffer)) {
>  +av_log(h, AV_LOG_WARNING, "The -gateway parameter is too
> >>> long. "
>  +  "We allow a max of %zu
> >>> characters\n",
>  +   sizeof(c->gateway_buffer));
>  +ret = AVERROR(EINVAL);
>  +goto err;
>  +}
>  +} else {
>  +// Populate the IPFS gateway if we have any.
>  +// If not, inform the user how to properly set one.
>  +ret = populate_ipfs_gateway(h);
>  +
>  +if (ret < 1) {
>  +// We fallback on dweb.link (managed by Protocol Labs).
>  +snprintf(c->gateway_buffer, sizeof(c->gateway_buffer), "
> >>> https://dweb.link;);
>  +
>  +av_log(h, AV_LOG_WARNING, "IPFS does not appear to be
> >>> running. "
>  +  "You’re now using the public
> >>> gateway at dweb.link.\n");
>  +av_log(h, AV_LOG_INFO, "Installing IPFS locally is
> >>> recommended to "
>  +   "improve performance and
> >>> reliability, "
>  +   "and not share all your activity
> >>> with a single IPFS gateway.\n"
>  +   "There are multiple options to define this
> >>> gateway.\n"
>  +   "1. Call ffmpeg with a gateway param, "
>  +   "without a trailing slash:
> -gateway
> >>> .\n"
>  +   "2. Define an $IPFS_GATEWAY environment variable
> >>> with the "
>  +   "full HTTP URL to the gateway "
>  +   "without trailing forward
> slash.\n"
>  +   "3. Define an $IPFS_PATH environment variable "
>  +  

Re: [FFmpeg-devel] [PATCH] avfilter/alimiter: Add "flush_buffer" option to flush the remaining valid data to the output

2022-04-06 Thread Paul B Mahol
On Tue, Apr 5, 2022 at 8:57 PM Wang Cao 
wrote:

> On Mon, Apr 4, 2022 at 3:28 PM Marton Balint  wrote:
>
> >
> >
> > On Mon, 4 Apr 2022, Paul B Mahol wrote:
> >
> > > On Sun, Mar 27, 2022 at 11:41 PM Marton Balint  wrote:
> > >
> > >>
> > >>
> > >> On Sat, 26 Mar 2022, Wang Cao wrote:
> > >>
> > >>> The change in the commit will add some samples to the end of the
> audio
> > >>> stream. The intention is to add a "zero_delay" option eventually to
> not
> > >>> have the delay in the begining the output from alimiter due to
> > >>> lookahead.
> > >>
> > >> I was very much suprised to see that the alimiter filter actually
> delays
> > >> the audio - as in extra samples are inserted in the beginning and some
> > >> samples are cut in the end. This trashes A-V sync, so it is a bug
> IMHO.
> > >>
> > >> So unless somebody has some valid usecase for the legacy way of
> > operation
> > >> I'd just simply change it to be "zero delay" without any additional
> user
> > >> option, in a single patch.
> > >>
> > >
> > >
> > > This is done by this patch in very complicated way and also it really
> > > should be optional.
> >
> > But why does it make sense to keep the current (IMHO buggy) operational
> > mode which adds silence in the beginning and trims the end? I understand
> > that the original implementation worked like this, but libavfilter has
> > packet timestamps and N:M filtering so there is absolutely no reason to
> > use an 1:1 implementation and live with its limitations.
> >
> Hello Paul and Marton, thank you so much for taking time to review my
> patch.
> I totally understand that my patch may seem a little bit complicated but I
> can
> show with a FATE test that if we set the alimiter to behave as a
> passthrough filter,
> the output frames will be the same from "framecrc" with my patch. The
> existing
> behavior will not work for all gapless audio processing.
>
> The complete patch to fix this issue is at
>
> https://patchwork.ffmpeg.org/project/ffmpeg/patch/20220330210314.2055201-1-wang...@google.com/
>
> Regarding Paul's concern, I personally don't have any preference whether to
> put
> the patch as an extra option or not. With respect to the implementation,
> the patch
> is the best I can think of by preserving as much information as possible
> from input
> frames. I also understand it may break concept that "filter_frame" outputs
> one frame
> at a time. For alimiter with my patch, depending on the size of the
> lookahead buffer,
> it may take a few frames before one output frame can be generated. This is
> inevitable
> to compensate for the delay of the lookahead buffer.
>
> Thanks again for reviewing my patch and I'm looking forward to hearing from
> you :)
>

Better (because it's no longer 1 frame X nb_samples in, 1 frame X
nb_samples out) to replace .filter_frame/.request_frame with .activate
logic.

And make this output delay compensation filtering optional.

In this process make sure that output frame PTS timestamps are unchanged
from the input ones, by keeping references to the needed frames in a filter queue.

Look how speechnorm/dynaudnorm does it.
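
For reference, a rough sketch of such an activate() callback (using the public
helpers from libavfilter/filters.h; queue_frame(), can_output() and
produce_frame() are hypothetical placeholders for the limiter's lookahead
bookkeeping, not existing functions, and EOF flushing of the queued tail is
omitted for brevity):

static int activate(AVFilterContext *ctx)
{
    AVFilterLink *inlink  = ctx->inputs[0];
    AVFilterLink *outlink = ctx->outputs[0];
    AVFrame *in = NULL;
    int ret;

    FF_FILTER_FORWARD_STATUS_BACK(outlink, inlink);

    ret = ff_inlink_consume_frame(inlink, &in);
    if (ret < 0)
        return ret;
    if (ret > 0) {
        /* keep the frame (and its pts) referenced until enough samples have
         * been seen to compensate for the lookahead delay */
        ret = queue_frame(ctx, in);
        if (ret < 0)
            return ret;
    }

    if (can_output(ctx)) {
        /* pts copied from the corresponding queued input frame */
        AVFrame *out = produce_frame(ctx);
        if (!out)
            return AVERROR(ENOMEM);
        return ff_filter_frame(outlink, out);
    }

    FF_FILTER_FORWARD_STATUS(inlink, outlink);
    FF_FILTER_FORWARD_WANTED(outlink, inlink);

    return FFERROR_NOT_READY;
}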


> --
>
> Wang Cao |  Software Engineer |  wang...@google.com |  650-203-7807
> <(650)%20203-7807>
> ___
> ffmpeg-devel mailing list
> ffmpeg-devel@ffmpeg.org
> https://ffmpeg.org/mailman/listinfo/ffmpeg-devel
>
> To unsubscribe, visit link above, or email
> ffmpeg-devel-requ...@ffmpeg.org with subject "unsubscribe".
>
___
ffmpeg-devel mailing list
ffmpeg-devel@ffmpeg.org
https://ffmpeg.org/mailman/listinfo/ffmpeg-devel

To unsubscribe, visit link above, or email
ffmpeg-devel-requ...@ffmpeg.org with subject "unsubscribe".


Re: [FFmpeg-devel] [PATCH 3/3] lavc/encode: pick a sane default for bits_per_raw_sample if it's not set

2022-04-06 Thread Paul B Mahol
Ping for this. Current state is imho bad.
___
ffmpeg-devel mailing list
ffmpeg-devel@ffmpeg.org
https://ffmpeg.org/mailman/listinfo/ffmpeg-devel

To unsubscribe, visit link above, or email
ffmpeg-devel-requ...@ffmpeg.org with subject "unsubscribe".


Re: [FFmpeg-devel] [PATCH 02/49] fftools/ffmpeg: move a comment to a more appropriate place

2022-04-06 Thread James Almer




On 4/4/2022 8:29 AM, Anton Khirnov wrote:

---
  fftools/ffmpeg.c | 10 +-
  1 file changed, 5 insertions(+), 5 deletions(-)

diff --git a/fftools/ffmpeg.c b/fftools/ffmpeg.c
index afa1b012a6..13be32f0cf 100644
--- a/fftools/ffmpeg.c
+++ b/fftools/ffmpeg.c
@@ -1238,6 +1238,11 @@ static void do_video_out(OutputFile *of,
  }
  }
  
+/*

+ * For video, number of frames in == number of packets out.
+ * But there may be reordering, so we can't throw away frames on encoder
+ * flush, we need to limit them here, before they go into encoder.
+ */
  nb_frames = FFMIN(nb_frames, ost->max_frames - ost->frame_number);
  nb0_frames = FFMIN(nb0_frames, nb_frames);
  
@@ -1392,11 +1397,6 @@ static void do_video_out(OutputFile *of,

  }
  }
  ost->sync_opts++;
-/*
- * For video, number of frames in == number of packets out.
- * But there may be reordering, so we can't throw away frames on 
encoder
- * flush, we need to limit them here, before they go into encoder.
- */
  ost->frame_number++;
  
  if (vstats_filename && frame_size)


Patches 1 and 2 are trivial and lgtm.
___
ffmpeg-devel mailing list
ffmpeg-devel@ffmpeg.org
https://ffmpeg.org/mailman/listinfo/ffmpeg-devel

To unsubscribe, visit link above, or email
ffmpeg-devel-requ...@ffmpeg.org with subject "unsubscribe".


Re: [FFmpeg-devel] [PATCH 03/49] fftools/ffmpeg: stop using OutputStream.frame_number for streamcopy

2022-04-06 Thread James Almer

On 4/4/2022 8:29 AM, Anton Khirnov wrote:

This field is currently used by checks
- skipping packets before the first keyframe
- skipping packets before start time
to test whether any packets have been output already. But since
frame_number is incremented after the bitstream filters are applied
(which may involve delay), this use is incorrect. The keyframe check
works around this by adding an extra flag, the start-time check does
not.

Simplify both checks by replacing the seen_kf flag with a flag tracking
whether any packets have been output by do_streamcopy().
---
  fftools/ffmpeg.c | 10 +-
  fftools/ffmpeg.h |  2 +-
  2 files changed, 6 insertions(+), 6 deletions(-)

diff --git a/fftools/ffmpeg.c b/fftools/ffmpeg.c
index 13be32f0cf..29b01f9d93 100644
--- a/fftools/ffmpeg.c
+++ b/fftools/ffmpeg.c
@@ -894,8 +894,6 @@ static void output_packet(OutputFile *of, AVPacket *pkt,
  
  /* apply the output bitstream filters */

  if (ost->bsf_ctx) {
-if (pkt->flags & AV_PKT_FLAG_KEY)
-ost->seen_kf = 1;
  ret = av_bsf_send_packet(ost->bsf_ctx, eof ? NULL : pkt);
  if (ret < 0)
  goto finish;
@@ -2043,11 +2041,11 @@ static void do_streamcopy(InputStream *ist, 
OutputStream *ost, const AVPacket *p
  return;
  }
  
-if ((!ost->frame_number && !(pkt->flags & AV_PKT_FLAG_KEY)) &&

-!ost->copy_initial_nonkeyframes && !ost->seen_kf)
+if (!ost->streamcopy_started && !(pkt->flags & AV_PKT_FLAG_KEY) &&
+!ost->copy_initial_nonkeyframes)
  return;
  
-if (!ost->frame_number && !ost->copy_prior_start) {

+if (!ost->streamcopy_started && !ost->copy_prior_start) {
  int64_t comp_start = start_time;
  if (copy_ts && f->start_time != AV_NOPTS_VALUE)
  comp_start = FFMAX(start_time, f->start_time + f->ts_offset);
@@ -2101,6 +2099,8 @@ static void do_streamcopy(InputStream *ist, OutputStream 
*ost, const AVPacket *p
  ost->sync_opts += opkt->duration;
  
  output_packet(of, opkt, ost, 0);

+
+ost->streamcopy_started = 1;
  }
  
  int guess_input_channel_layout(InputStream *ist)

diff --git a/fftools/ffmpeg.h b/fftools/ffmpeg.h
index 1e14bf9fa9..04369df139 100644
--- a/fftools/ffmpeg.h
+++ b/fftools/ffmpeg.h
@@ -536,7 +536,7 @@ typedef struct OutputStream {
  int inputs_done;
  
  const char *attachment_filename;

-int seen_kf;
+int streamcopy_started;
  int copy_initial_nonkeyframes;
  int copy_prior_start;
  char *disposition;


fate-ffmpeg-setts-bsf, which depends on this code, still passes, so lgtm.
___
ffmpeg-devel mailing list
ffmpeg-devel@ffmpeg.org
https://ffmpeg.org/mailman/listinfo/ffmpeg-devel

To unsubscribe, visit link above, or email
ffmpeg-devel-requ...@ffmpeg.org with subject "unsubscribe".


Re: [FFmpeg-devel] [RFC] Switching ffmpeg.c to a threaded architecture

2022-04-06 Thread Paul B Mahol
On Tue, Apr 5, 2022 at 11:20 PM Soft Works  wrote:

>
>
> > -Original Message-
> > From: ffmpeg-devel  On Behalf Of Paul
> > B Mahol
> > Sent: Tuesday, April 5, 2022 11:19 PM
> > To: FFmpeg development discussions and patches  > de...@ffmpeg.org>
> > Subject: Re: [FFmpeg-devel] [RFC] Switching ffmpeg.c to a threaded
> > architecture
> >
> > On Tue, Apr 5, 2022 at 11:06 PM Soft Works 
> > wrote:
> >
> > >
> > >
> > > > -Original Message-
> > > > From: ffmpeg-devel  On Behalf Of
> > > > Anton Khirnov
> > > > Sent: Tuesday, April 5, 2022 9:46 PM
> > > > To: FFmpeg development discussions and patches  > > > de...@ffmpeg.org>
> > > > Subject: Re: [FFmpeg-devel] [RFC] Switching ffmpeg.c to a threaded
> > > > architecture
> > > >
> > > > Quoting Michael Niedermayer (2022-04-05 21:15:42)
> > > > > On Mon, Apr 04, 2022 at 01:29:48PM +0200, Anton Khirnov wrote:
> > > > > > Hi,
> > > > > > this WIP patchset is the first major part of my ongoing work
> > to
> > > > change
> > > > > > ffmpeg.c architecture such that every
> > > > > > - demuxer
> > > > > > - decoder
> > > > > > - filtergraph
> > > > > > - encoder
> > > > > > - muxer
> > > > > > lives in its own thread. The advantages of doing this, beyond
> > > > increased
> > > > > > throughput, would be enforced separation between these
> > components,
> > > > > > making the code more local and easier to reason about.
> > > > > >
> > > > > > This set implements threading for muxers. My tentative plan is
> > to
> > > > > > continue with encoders and then filters. The patches still
> > need
> > > > some
> > > > > > polishing, especially the last one. Two FATE tests do not yet
> > > > pass, this
> > > > > > will be fixed in later iterations.
> > > > > >
> > > > > > Meanwhile, comments on the overall approach are especially
> > > > welcome.
> > > > >
> > > > > I agree that cleanup/modularization to make the code easier to
> > > > > understand is a good idea!
> > > > > Didnt really look at the patchset yet.
> > > > > I assume these changes have no real disadvantage ?
> > > >
> > > > Playing the devil's advocate, I can think of the following:
> > > > 1) ffmpeg.c will hard-depend on threads
> > > > 2) execution flow will become non-deterministic
> > > > 3) overall resource usage will likely go up due to inter-thread
> > > >synchronization and overhead related to new objects
> > > > 4) large-scale code changes always carry a higher risk of
> > regressions
> > > >
> > > > re 1): should not be a problem for any serious system
> > > > re 2): I spent a lot of effort to ensure the _output_ remains
> > > >deterministic (it actually becomes more predictable for
> > some
> > > >cases)
> > > > re 3): I expect the impact to be small and negligible,
> > respectively,
> > > > but
> > > >would have to be measured once the conversion is complete
> > > > re 4): the only way to avoid this completely would be to stop
> > > >development
> > > >
> > > > Overall, I believe the advantages far outweigh the potential
> > > > negatives.
> > >
> > > Hi,
> > >
> > > do I understand it right that there won't be a single-thread
> > > operation mode that replicates/corresponds the current behavior?
> > >
> > > Not that I wouldn't welcome the performance improvements, but one
> > > concern I have is debugging filtergraph operations. This is already
> > > a pretty tedious task in itself, because many relevant decisions
> > > are made in sub-sub-sub-sub-sub-functions, spread over many places.
> > > When adding an additional - not even deterministic - part to the
> > > game, it won't make things easier. It could even create situations
> > > where it could no longer be possible to replicate an error in a
> > > debugger - in case the existence of a debugger would cause a
> > variance
> > > within the constraints of the non-determinism range.
> > >
> > >
> > Can you elaborate more?, otherwise this is PEBKAC.
>
> You mean like WKOFAIT?
>

You failed to provide useful facts to back up your claims above.

So I can not take your inputs seriously at this time.

___
> ffmpeg-devel mailing list
> ffmpeg-devel@ffmpeg.org
> https://ffmpeg.org/mailman/listinfo/ffmpeg-devel
>
> To unsubscribe, visit link above, or email
> ffmpeg-devel-requ...@ffmpeg.org with subject "unsubscribe".
>
___
ffmpeg-devel mailing list
ffmpeg-devel@ffmpeg.org
https://ffmpeg.org/mailman/listinfo/ffmpeg-devel

To unsubscribe, visit link above, or email
ffmpeg-devel-requ...@ffmpeg.org with subject "unsubscribe".


[FFmpeg-devel] [PATCH v3] libavutil/hwcontext_qsv: Align width and heigh when download qsv frame

2022-04-06 Thread Wenbin Chen
The width and height of a qsv frame to be downloaded need to be
aligned to 16. Add the alignment operation.
Now the following command works:
ffmpeg -hwaccel qsv -f rawvideo -s 1920x1080 -pix_fmt yuv420p -i \
input.yuv -vf "hwupload=extra_hw_frames=16,format=qsv,hwdownload, \
format=nv12" -f null -

Signed-off-by: Wenbin Chen 
---
 libavutil/hwcontext_qsv.c | 47 ++-
 1 file changed, 42 insertions(+), 5 deletions(-)

diff --git a/libavutil/hwcontext_qsv.c b/libavutil/hwcontext_qsv.c
index 95f8071abe..66c0e38955 100644
--- a/libavutil/hwcontext_qsv.c
+++ b/libavutil/hwcontext_qsv.c
@@ -91,7 +91,8 @@ typedef struct QSVFramesContext {
 
 mfxExtOpaqueSurfaceAlloc opaque_alloc;
 mfxExtBuffer *ext_buffers[1];
-AVFrame realigned_tmp_frame;
+AVFrame realigned_upload_frame;
+AVFrame realigned_download_frame;
 } QSVFramesContext;
 
 static const struct {
@@ -303,7 +304,8 @@ static void qsv_frames_uninit(AVHWFramesContext *ctx)
 av_freep(&s->surface_ptrs);
 av_freep(&s->surfaces_internal);
 av_freep(&s->handle_pairs_internal);
-av_frame_unref(&s->realigned_tmp_frame);
+av_frame_unref(&s->realigned_upload_frame);
+av_frame_unref(&s->realigned_download_frame);
 av_buffer_unref(&s->child_frames_ref);
 }
 
@@ -1058,21 +1060,46 @@ static int qsv_transfer_data_from(AVHWFramesContext 
*ctx, AVFrame *dst,
 mfxSyncPoint sync = NULL;
 mfxStatus err;
 int ret = 0;
+/* download to temp frame if the output is not padded as libmfx requires */
+AVFrame *tmp_frame = &s->realigned_download_frame;
+AVFrame *dst_frame;
+int realigned = 0;
 
 ret = qsv_internal_session_check_init(ctx, 0);
 if (ret < 0)
 return ret;
 
+/* According to MSDK spec for mfxframeinfo, "Width must be a multiple of 
16.
+ * Height must be a multiple of 16 for progressive frame sequence and a
+ * multiple of 32 otherwise.", so align all frames to 16 before 
downloading. */
+if (dst->height & 15 || dst->linesize[0] & 15) {
+realigned = 1;
+if (tmp_frame->format != dst->format ||
+tmp_frame->width  != FFALIGN(dst->linesize[0], 16) ||
+tmp_frame->height != FFALIGN(dst->height, 16)) {
+av_frame_unref(tmp_frame);
+
+tmp_frame->format = dst->format;
+tmp_frame->width  = FFALIGN(dst->linesize[0], 16);
+tmp_frame->height = FFALIGN(dst->height, 16);
+ret = av_frame_get_buffer(tmp_frame, 0);
+if (ret < 0)
+return ret;
+}
+}
+
+dst_frame = realigned ? tmp_frame : dst;
+
 if (!s->session_download) {
 if (s->child_frames_ref)
-return qsv_transfer_data_child(ctx, dst, src);
+return qsv_transfer_data_child(ctx, dst_frame, src);
 
 av_log(ctx, AV_LOG_ERROR, "Surface download not possible\n");
 return AVERROR(ENOSYS);
 }
 
 out.Info = in->Info;
-map_frame_to_surface(dst, &out);
+map_frame_to_surface(dst_frame, &out);
 
 do {
 err = MFXVideoVPP_RunFrameVPPAsync(s->session_download, in, &out, 
NULL, &sync);
@@ -1093,6 +1120,16 @@ static int qsv_transfer_data_from(AVHWFramesContext 
*ctx, AVFrame *dst,
 return AVERROR_UNKNOWN;
 }
 
+if (realigned) {
+tmp_frame->width  = dst->width;
+tmp_frame->height = dst->height;
+ret = av_frame_copy(dst, tmp_frame);
+tmp_frame->width  = FFALIGN(dst->linesize[0], 16);
+tmp_frame->height = FFALIGN(dst->height, 16);
+if (ret < 0)
+return ret;
+}
+
 return 0;
 }
 
@@ -1108,7 +1145,7 @@ static int qsv_transfer_data_to(AVHWFramesContext *ctx, 
AVFrame *dst,
 mfxStatus err;
 int ret = 0;
 /* make a copy if the input is not padded as libmfx requires */
-AVFrame *tmp_frame = &s->realigned_tmp_frame;
+AVFrame *tmp_frame = &s->realigned_upload_frame;
 const AVFrame *src_frame;
 int realigned = 0;
 
-- 
2.32.0

___
ffmpeg-devel mailing list
ffmpeg-devel@ffmpeg.org
https://ffmpeg.org/mailman/listinfo/ffmpeg-devel

To unsubscribe, visit link above, or email
ffmpeg-devel-requ...@ffmpeg.org with subject "unsubscribe".


Re: [FFmpeg-devel] [RFC] Switching ffmpeg.c to a threaded architecture

2022-04-06 Thread Anton Khirnov
To clarify further, the [RFC] tag applies mainly to patches 34-49/49,
which will certainly require some changes, possibly substantial ones.

Previous patches should IMO be acceptable for master as they are. I
would appreciate reviews for those, so I can push them sooner rather
than later and thus reduce the number of future rebase conflicts.

-- 
Anton Khirnov
___
ffmpeg-devel mailing list
ffmpeg-devel@ffmpeg.org
https://ffmpeg.org/mailman/listinfo/ffmpeg-devel

To unsubscribe, visit link above, or email
ffmpeg-devel-requ...@ffmpeg.org with subject "unsubscribe".


[FFmpeg-devel] [PATCH v3 2/2] libavcodec/qsvenc: Add more pixel format support to qsvenc

2022-04-06 Thread Wenbin Chen
Qsv encoder only supports P010 and nv12 input directly from system
memory. For other formats, we need to upload the frame to device memory
and feed the qsv format to the encoder. Now add support for other system
memory formats to the qsv encoder.

Signed-off-by: Wenbin Chen 
---
 libavcodec/qsvenc.c  | 30 --
 libavcodec/qsvenc_hevc.c |  2 ++
 2 files changed, 6 insertions(+), 26 deletions(-)

diff --git a/libavcodec/qsvenc.c b/libavcodec/qsvenc.c
index 55ce3d2499..345df0190c 100644
--- a/libavcodec/qsvenc.c
+++ b/libavcodec/qsvenc.c
@@ -1606,32 +1606,10 @@ static int submit_frame(QSVEncContext *q, const AVFrame 
*frame,
 else if (frame->repeat_pict == 4)
 qf->surface.Info.PicStruct |= MFX_PICSTRUCT_FRAME_TRIPLING;
 
-qf->surface.Data.PitchLow  = qf->frame->linesize[0];
-qf->surface.Data.Y = qf->frame->data[0];
-qf->surface.Data.UV= qf->frame->data[1];
-
-/* The SDK checks Data.V when using system memory for VP9 encoding */
-switch (frame->format) {
-case AV_PIX_FMT_NV12:
-qf->surface.Data.V = qf->surface.Data.UV + 1;
-break;
-
-case AV_PIX_FMT_P010:
-qf->surface.Data.V = qf->surface.Data.UV + 2;
-break;
-
-case AV_PIX_FMT_X2RGB10:
-case AV_PIX_FMT_BGRA:
-qf->surface.Data.B = qf->frame->data[0];
-qf->surface.Data.G = qf->frame->data[0] + 1;
-qf->surface.Data.R = qf->frame->data[0] + 2;
-qf->surface.Data.A = qf->frame->data[0] + 3;
-break;
-
-default:
-/* should not reach here */
-av_assert0(0);
-break;
+ret = ff_qsv_map_frame_to_surface(qf->frame, &qf->surface);
+if (ret < 0) {
+av_log(q->avctx, AV_LOG_ERROR, "map frame to surface failed.\n");
+return ret;
 }
 }
 qf->surface.Data.TimeStamp = av_rescale_q(frame->pts, q->avctx->time_base, 
(AVRational){1, 9});
diff --git a/libavcodec/qsvenc_hevc.c b/libavcodec/qsvenc_hevc.c
index 36c2d484ad..3a2d50e332 100644
--- a/libavcodec/qsvenc_hevc.c
+++ b/libavcodec/qsvenc_hevc.c
@@ -303,6 +303,8 @@ const FFCodec ff_hevc_qsv_encoder = {
 .p.capabilities = AV_CODEC_CAP_DELAY | AV_CODEC_CAP_HYBRID,
 .p.pix_fmts = (const enum AVPixelFormat[]){ AV_PIX_FMT_NV12,
 AV_PIX_FMT_P010,
+AV_PIX_FMT_YUYV422,
+AV_PIX_FMT_Y210,
 AV_PIX_FMT_QSV,
 #if QSV_VERSION_ATLEAST(1, 17)
 AV_PIX_FMT_BGRA,
-- 
2.32.0

___
ffmpeg-devel mailing list
ffmpeg-devel@ffmpeg.org
https://ffmpeg.org/mailman/listinfo/ffmpeg-devel

To unsubscribe, visit link above, or email
ffmpeg-devel-requ...@ffmpeg.org with subject "unsubscribe".


[FFmpeg-devel] [PATCH v3 1/2] libavcodec/qsvdec: Add more pixel format support to qsvdec

2022-04-06 Thread Wenbin Chen
Qsv decoder only supports outputting nv12 and p010 directly to system
memory. For other formats, we need to download the frame from qsv format
to system memory. Now add other supported formats to qsvdec.

Signed-off-by: Wenbin Chen 
---
 libavcodec/qsv.c  | 36 
 libavcodec/qsv_internal.h |  3 +++
 libavcodec/qsvdec.c   | 23 +--
 3 files changed, 56 insertions(+), 6 deletions(-)

diff --git a/libavcodec/qsv.c b/libavcodec/qsv.c
index 67d0e3934a..b86c20b153 100644
--- a/libavcodec/qsv.c
+++ b/libavcodec/qsv.c
@@ -244,6 +244,42 @@ int ff_qsv_map_pixfmt(enum AVPixelFormat format, uint32_t 
*fourcc)
 }
 }
 
+int ff_qsv_map_frame_to_surface(const AVFrame *frame, mfxFrameSurface1 
*surface)
+{
+switch (frame->format) {
+case AV_PIX_FMT_NV12:
+case AV_PIX_FMT_P010:
+surface->Data.Y  = frame->data[0];
+surface->Data.UV = frame->data[1];
+/* The SDK checks Data.V when using system memory for VP9 encoding */
+surface->Data.V  = surface->Data.UV + 1;
+break;
+case AV_PIX_FMT_X2RGB10LE:
+case AV_PIX_FMT_BGRA:
+surface->Data.B = frame->data[0];
+surface->Data.G = frame->data[0] + 1;
+surface->Data.R = frame->data[0] + 2;
+surface->Data.A = frame->data[0] + 3;
+break;
+case AV_PIX_FMT_YUYV422:
+surface->Data.Y = frame->data[0];
+surface->Data.U = frame->data[0] + 1;
+surface->Data.V = frame->data[0] + 3;
+break;
+
+case AV_PIX_FMT_Y210:
+surface->Data.Y16 = (mfxU16 *)frame->data[0];
+surface->Data.U16 = (mfxU16 *)frame->data[0] + 1;
+surface->Data.V16 = (mfxU16 *)frame->data[0] + 3;
+break;
+default:
+return AVERROR(ENOSYS);
+}
+surface->Data.PitchLow  = frame->linesize[0];
+
+return 0;
+}
+
 int ff_qsv_find_surface_idx(QSVFramesContext *ctx, QSVFrame *frame)
 {
 int i;
diff --git a/libavcodec/qsv_internal.h b/libavcodec/qsv_internal.h
index 58186ea7ca..e2aecdcbd6 100644
--- a/libavcodec/qsv_internal.h
+++ b/libavcodec/qsv_internal.h
@@ -147,4 +147,7 @@ int ff_qsv_find_surface_idx(QSVFramesContext *ctx, QSVFrame 
*frame);
 void ff_qsv_frame_add_ext_param(AVCodecContext *avctx, QSVFrame *frame,
 mfxExtBuffer *param);
 
+int ff_qsv_map_frame_to_surface(const AVFrame *frame, mfxFrameSurface1 
*surface);
+
+
 #endif /* AVCODEC_QSV_INTERNAL_H */
diff --git a/libavcodec/qsvdec.c b/libavcodec/qsvdec.c
index de4af1754d..c4296f80d7 100644
--- a/libavcodec/qsvdec.c
+++ b/libavcodec/qsvdec.c
@@ -132,21 +132,28 @@ static int qsv_get_continuous_buffer(AVCodecContext 
*avctx, AVFrame *frame,
 frame->linesize[0] = FFALIGN(avctx->width, 128);
 break;
 case AV_PIX_FMT_P010:
+case AV_PIX_FMT_YUYV422:
 frame->linesize[0] = 2 * FFALIGN(avctx->width, 128);
 break;
+case AV_PIX_FMT_Y210:
+frame->linesize[0] = 4 * FFALIGN(avctx->width, 128);
+break;
 default:
 av_log(avctx, AV_LOG_ERROR, "Unsupported pixel format.\n");
 return AVERROR(EINVAL);
 }
 
-frame->linesize[1] = frame->linesize[0];
 frame->buf[0]  = av_buffer_pool_get(pool);
 if (!frame->buf[0])
 return AVERROR(ENOMEM);
 
 frame->data[0] = frame->buf[0]->data;
-frame->data[1] = frame->data[0] +
-frame->linesize[0] * FFALIGN(avctx->height, 64);
+if (avctx->pix_fmt == AV_PIX_FMT_NV12 ||
+avctx->pix_fmt == AV_PIX_FMT_P010) {
+frame->linesize[1] = frame->linesize[0];
+frame->data[1] = frame->data[0] +
+frame->linesize[0] * FFALIGN(avctx->height, 64);
+}
 
 ret = ff_attach_decode_data(frame);
 if (ret < 0)
@@ -426,9 +433,11 @@ static int alloc_frame(AVCodecContext *avctx, QSVContext 
*q, QSVFrame *frame)
 if (frame->frame->format == AV_PIX_FMT_QSV) {
 frame->surface = *(mfxFrameSurface1*)frame->frame->data[3];
 } else {
-frame->surface.Data.PitchLow = frame->frame->linesize[0];
-frame->surface.Data.Y= frame->frame->data[0];
-frame->surface.Data.UV   = frame->frame->data[1];
+ret = ff_qsv_map_frame_to_surface(frame->frame, &frame->surface);
+if (ret < 0) {
+av_log(avctx, AV_LOG_ERROR, "map frame to surface failed.\n");
+return ret;
+}
 }
 
 frame->surface.Info = q->frame_info;
@@ -992,6 +1001,8 @@ const FFCodec ff_##x##_qsv_decoder = { \
 .p.priv_class   = ##_qsv_class, \
 .p.pix_fmts = (const enum AVPixelFormat[]){ AV_PIX_FMT_NV12, \
 AV_PIX_FMT_P010, \
+AV_PIX_FMT_YUYV422, \
+AV_PIX_FMT_Y210, \
 AV_PIX_FMT_QSV, \
 AV_PIX_FMT_NONE }, \
 

Re: [FFmpeg-devel] movenc: add write_btrt option

2022-04-06 Thread zhilizhao(赵志立)


> On Apr 3, 2022, at 1:07 PM, Eran Kornblau  wrote:
> 
> Trying my luck in a new thread…
> 
> This patch is in continuation to this discussion –
> https://ffmpeg.org/pipermail/ffmpeg-devel/2022-March/294623.html
> 
> supports forcing or disabling the writing of the btrt atom.
> the default behavior is to write the atom only for mp4 mode.
> ---
>  libavformat/movenc.c | 30 +++---
>  libavformat/movenc.h |  1 +
>  2 files changed, 20 insertions(+), 11 deletions(-)
> 
> diff --git a/libavformat/movenc.c b/libavformat/movenc.c
> index 4c868919ae..b75f1c6909 100644
> --- a/libavformat/movenc.c
> +++ b/libavformat/movenc.c
> 
[…]
>  
> -if (track->mode == MODE_MP4 &&
> -((ret = mov_write_btrt_tag(pb, track)) < 0))
> -return ret;
> +if ((mov->write_btrt == -1 && track->mode == MODE_MP4) || 
> mov->write_btrt == 1) {
> +if ((ret = mov_write_btrt_tag(pb, track)) < 0) {
> +return ret;
> +}
> +}

I prefer to handle the auto mode (mov->write_btrt == -1) in a single place,
so we don’t need to change multiple lines if the condition changed, e.g.,
enable btrt for MODE_MOV. Please correct me if I’m wrong, mov_init() has all
of the contexts to overwrite mov->write_btrt.
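
Something along these lines, as a sketch only (assuming the option keeps the
-1/0/1 semantics from the posted patch; mov->mode and MODE_MP4 already exist
in movenc, the exact placement is up to the author):

/* in mov_init(): resolve the auto value once */
if (mov->write_btrt < 0)
    mov->write_btrt = mov->mode == MODE_MP4;

so the per-track write path only needs:

if (mov->write_btrt &&
    (ret = mov_write_btrt_tag(pb, track)) < 0)
    return ret;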
___
ffmpeg-devel mailing list
ffmpeg-devel@ffmpeg.org
https://ffmpeg.org/mailman/listinfo/ffmpeg-devel

To unsubscribe, visit link above, or email
ffmpeg-devel-requ...@ffmpeg.org with subject "unsubscribe".


Re: [FFmpeg-devel] [RFC] Switching ffmpeg.c to a threaded architecture

2022-04-06 Thread Anton Khirnov
Quoting Soft Works (2022-04-05 23:05:57)
> do I understand it right that there won't be a single-thread
> operation mode that replicates/corresponds the current behavior?

Correct, maintaining a single-threaded mode would require massive
amounts of extra effort for questionable gain.

> 
> Not that I wouldn't welcome the performance improvements, but one
> concern I have is debugging filtergraph operations. This is already 
> a pretty tedious task in itself, because many relevant decisions 
> are made in sub-sub-sub-sub-sub-functions, spread over many places.
> When adding an additional - not even deterministic - part to the 
> game, it won't make things easier. It could even create situations
> where it could no longer be possible to replicate an error in a 
> debugger - in case the existence of a debugger would cause a variance
> within the constraints of the non-determinism range. 

I don't think debugging filtergraph internals will get significantly
harder, it might even become slightly easier because you will have a
thread entirely dedicated to filtering, with nothing else going on in
it.

> 
> From another point of view, this is a change, so fundamental like
> ffmpeg(.c) hasn't seen in a long time.
> I would at least suppose that this could cause issues at many ends,
> and from experience, there may be additional ends where it's rather
> unexpected to  have effects.
> 
> In that context, I think that doing a change of such a wide scope
> in an irreversible way like this, would impose quite a burden on
> many other developers, because sooner or later, other developers
> will run into situations where something is no longer working like 
> before and you'll regularly wonder whether this might be a consequence
> of ffmpeg.c threading change or caused by other changes.
> But then, you won't be able anymore to bisect on that suspicion,
> because the threading change can't be reverted and (as long as it's
> not shortly after the change) there might have been too many other 
> changes to easily port them back to a state before the threading
> change.
> 
> I wonder whether this couldn't be done in a way that the current
> behavior can be preserved and activated by option?
> 
> Wouldn't it be possible to follow an approach like this:
> 
> - Assuming the code would be fine and it would mark the desired 
>   end result
> - Put it aside and start over from the current HEAD
> - Iteratively morph the code current code in a (possibly) long
>   sequence of refactoring operations where every single one
>   (and hence in sum) are semantically neutral - until the code
>   is turned more and more into what has already been developed
> - eventually, only few differences will be left, and these can 
>   be made switchable by an option - as a result, both - old and
>   new operation modes would be available.

If I understand correctly what you're suggesting then I don't believe
this approach is feasible. The goal is not "add threading to improve
performance", keeping everything else intact as much as possible. The
goal is "improve architecture to make the code easier to
understand/maintain/extend", threads are a means towards that goal. The
fact that this should also improve throughput is more of a nice side
effect than anything else.

This patchset already changes behaviour in certain cases, making the
output more predictable and consistent. Reordering it somehow to
separate "semantically neutral" patches would require vast amounts of
extra work. Note that progressing at all without obviously breaking
anything is already quite hard --- I've been working on this since
November and this is just the first step. I really do not want to make
my work 10x harder for the vague benefit of maybe making some debugging
slightly easier.

-- 
Anton Khirnov
___
ffmpeg-devel mailing list
ffmpeg-devel@ffmpeg.org
https://ffmpeg.org/mailman/listinfo/ffmpeg-devel

To unsubscribe, visit link above, or email
ffmpeg-devel-requ...@ffmpeg.org with subject "unsubscribe".


Re: [FFmpeg-devel] [PATCH v5 1/4] lavc/vaapi_encode_h265: Add GPB frame support for hevc_vaapi

2022-04-06 Thread Xiang, Haihao
On Tue, 2022-03-29 at 12:44 +, Xiang, Haihao wrote:
> On Thu, 2022-03-17 at 14:41 +0800, Fei Wang wrote:
> > From: Linjie Fu 
> > 
> > Use GPB frames to replace regular P/B frames if backend driver does not
> > support it.
> > 
> > - GPB:
> > Generalized P and B picture. Regular P/B frames replaced by B
> > frames with previous-predict only, L0 == L1. Normal B frames
> > still have 2 different ref_lists and allow bi-prediction
> > 
> > Signed-off-by: Linjie Fu 
> > Signed-off-by: Fei Wang 
> > ---
> > update:
> > 1. fall back logic that implemented in v3.
> > 2. refine debug message.
> > 
> >  libavcodec/vaapi_encode.c  | 67 +++---
> >  libavcodec/vaapi_encode.h  |  1 +
> >  libavcodec/vaapi_encode_h265.c | 15 
> >  3 files changed, 78 insertions(+), 5 deletions(-)
> > 
> > diff --git a/libavcodec/vaapi_encode.c b/libavcodec/vaapi_encode.c
> > index 3bf379b1a0..081eb475a3 100644
> > --- a/libavcodec/vaapi_encode.c
> > +++ b/libavcodec/vaapi_encode.c
> > @@ -1827,6 +1827,7 @@ static av_cold int
> > vaapi_encode_init_gop_structure(AVCodecContext *avctx)
> >  VAStatus vas;
> >  VAConfigAttrib attr = { VAConfigAttribEncMaxRefFrames };
> >  uint32_t ref_l0, ref_l1;
> > +int prediction_pre_only;
> >  
> >  vas = vaGetConfigAttributes(ctx->hwctx->display,
> >  ctx->va_profile,
> > @@ -1845,6 +1846,51 @@ static av_cold int
> > vaapi_encode_init_gop_structure(AVCodecContext *avctx)
> >  ref_l1 = attr.value >> 16 & 0x;
> >  }
> >  
> > +ctx->p_to_gpb = 0;
> > +prediction_pre_only = 0;
> > +
> > +#if VA_CHECK_VERSION(1, 9, 0)
> > +if (!(ctx->codec->flags & FLAG_INTRA_ONLY ||
> > +avctx->gop_size <= 1)) {
> > +attr = (VAConfigAttrib) { VAConfigAttribPredictionDirection };
> > +vas = vaGetConfigAttributes(ctx->hwctx->display,
> > +ctx->va_profile,
> > +ctx->va_entrypoint,
> > +&attr, 1);
> > +if (vas != VA_STATUS_SUCCESS) {
> > +av_log(avctx, AV_LOG_WARNING, "Failed to query prediction
> > direction "
> > +   "attribute: %d (%s).\n", vas, vaErrorStr(vas));
> > +return AVERROR_EXTERNAL;
> > +} else if (attr.value == VA_ATTRIB_NOT_SUPPORTED) {
> > +av_log(avctx, AV_LOG_VERBOSE, "Driver does not report any
> > additional "
> > +   "prediction constraints.\n");
> > +} else {
> > +if (((ref_l0 > 0 || ref_l1 > 0) && !(attr.value &
> > VA_PREDICTION_DIRECTION_PREVIOUS)) ||
> > +((ref_l1 == 0) && (attr.value &
> > (VA_PREDICTION_DIRECTION_FUTURE | VA_PREDICTION_DIRECTION_BI_NOT_EMPTY)))) {
> > +av_log(avctx, AV_LOG_ERROR, "Driver report incorrect
> > prediction "
> > +   "direction attribute.\n");
> > +return AVERROR_EXTERNAL;
> > +}
> > +
> > +if (!(attr.value & VA_PREDICTION_DIRECTION_FUTURE)) {
> > +if (ref_l0 > 0 && ref_l1 > 0) {
> > +prediction_pre_only = 1;
> > +av_log(avctx, AV_LOG_VERBOSE, "Driver only support same
> > reference "
> > +   "lists for B-frames.\n");
> > +}
> > +}
> > +
> > +if (attr.value & VA_PREDICTION_DIRECTION_BI_NOT_EMPTY) {
> > +if (ref_l0 > 0 && ref_l1 > 0) {
> > +ctx->p_to_gpb = 1;
> > +av_log(avctx, AV_LOG_VERBOSE, "Driver does not support
> > P-
> > frames, "
> > +   "replacing them with B-frames.\n");
> > +}
> > +}
> > +}
> > +}
> > +#endif
> > +
> >  if (ctx->codec->flags & FLAG_INTRA_ONLY ||
> >  avctx->gop_size <= 1) {
> >  av_log(avctx, AV_LOG_VERBOSE, "Using intra frames only.\n");
> > @@ -1854,15 +1900,26 @@ static av_cold int
> > vaapi_encode_init_gop_structure(AVCodecContext *avctx)
> > "reference frames.\n");
> >  return AVERROR(EINVAL);
> >  } else if (!(ctx->codec->flags & FLAG_B_PICTURES) ||
> > -   ref_l1 < 1 || avctx->max_b_frames < 1) {
> > -av_log(avctx, AV_LOG_VERBOSE, "Using intra and P-frames "
> > -   "(supported references: %d / %d).\n", ref_l0, ref_l1);
> > +   ref_l1 < 1 || avctx->max_b_frames < 1 ||
> > +   prediction_pre_only) {
> > +if (ctx->p_to_gpb)
> > +   av_log(avctx, AV_LOG_VERBOSE, "Using intra and B-frames "
> > +  "(supported references: %d / %d).\n",
> > +  ref_l0, ref_l1);
> > +else
> > +av_log(avctx, AV_LOG_VERBOSE, "Using intra and P-frames "
> > +   "(supported references: %d / %d).\n", ref_l0, ref_l1);
> >  ctx->gop_size = avctx->gop_size;
> >  ctx->p_per_i  = 

Re: [FFmpeg-devel] [PATCH v2 1/2] libavcodec/qsvdec: Add more pixel format support to qsvdec

2022-04-06 Thread Chen, Wenbin
> On Sat, 2022-04-02 at 17:35 +0800, Wenbin Chen wrote:
> > Qsv decoder only supports directly output nv12 and p010 to system
> > memory. For other format, we need to download frame from qsv format
> > to system memory. Now add other supported format to qsvdec.
> >
> > Signed-off-by: Wenbin Chen 
> > ---
> >  libavcodec/qsv.c  | 36 
> >  libavcodec/qsv_internal.h |  3 +++
> >  libavcodec/qsvdec.c   | 23 +--
> >  3 files changed, 56 insertions(+), 6 deletions(-)
> >
> > diff --git a/libavcodec/qsv.c b/libavcodec/qsv.c
> > index b75877e698..cc1352aa2a 100644
> > --- a/libavcodec/qsv.c
> > +++ b/libavcodec/qsv.c
> > @@ -244,6 +244,42 @@ int ff_qsv_map_pixfmt(enum AVPixelFormat
> format, uint32_t
> > *fourcc)
> >  }
> >  }
> >
> > +int ff_qsv_map_frame_to_surface(const AVFrame *frame,
> mfxFrameSurface1
> > *surface)
> > +{
> > +switch (frame->format) {
> > +case AV_PIX_FMT_NV12:
> > +case AV_PIX_FMT_P010:
> > +surface->Data.Y  = frame->data[0];
> > +surface->Data.UV = frame->data[1];
> > +/* The SDK checks Data.V when using system memory for VP9
> encoding */
> > +surface->Data.V  = surface->Data.UV + 1;
> > +break;
> > +case AV_PIX_FMT_X2RGB10LE:
> > +case AV_PIX_FMT_BGRA:
> > +surface->Data.B = frame->data[0];
> > +surface->Data.G = frame->data[0] + 1;
> > +surface->Data.R = frame->data[0] + 2;
> > +surface->Data.A = frame->data[0] + 3;
> > +break;
> > +case AV_PIX_FMT_YUYV422:
> > +surface->Data.Y = frame->data[0];
> > +surface->Data.U = frame->data[0] + 1;
> > +surface->Data.V = frame->data[0] + 3;
> > +break;
> > +
> > +case AV_PIX_FMT_Y210:
> > +surface->Data.Y16 = (mfxU16 *)frame->data[0];
> > +surface->Data.U16 = (mfxU16 *)frame->data[0] + 1;
> > +surface->Data.V16 = (mfxU16 *)frame->data[0] + 3;
> > +break;
> > +default:
> > +return AVERROR(ENOSYS);
> > +}
> > +surface->Data.PitchLow  = frame->linesize[0];
> > +
> > +return 0;
> > +}
> > +
> >  int ff_qsv_find_surface_idx(QSVFramesContext *ctx, QSVFrame *frame)
> >  {
> >  int i;
> > diff --git a/libavcodec/qsv_internal.h b/libavcodec/qsv_internal.h
> > index 58186ea7ca..e2aecdcbd6 100644
> > --- a/libavcodec/qsv_internal.h
> > +++ b/libavcodec/qsv_internal.h
> > @@ -147,4 +147,7 @@ int ff_qsv_find_surface_idx(QSVFramesContext
> *ctx,
> > QSVFrame *frame);
> >  void ff_qsv_frame_add_ext_param(AVCodecContext *avctx, QSVFrame
> *frame,
> >  mfxExtBuffer *param);
> >
> > +int ff_qsv_map_frame_to_surface(const AVFrame *frame,
> mfxFrameSurface1
> > *surface);
> > +
> > +
> >  #endif /* AVCODEC_QSV_INTERNAL_H */
> > diff --git a/libavcodec/qsvdec.c b/libavcodec/qsvdec.c
> > index 6236391357..f1d56b2af3 100644
> > --- a/libavcodec/qsvdec.c
> > +++ b/libavcodec/qsvdec.c
> > @@ -129,21 +129,28 @@ static int
> qsv_get_continuous_buffer(AVCodecContext
> > *avctx, AVFrame *frame,
> >  frame->linesize[0] = FFALIGN(avctx->width, 128);
> >  break;
> >  case AV_PIX_FMT_P010:
> > +case AV_PIX_FMT_YUYV422:
> >  frame->linesize[0] = 2 * FFALIGN(avctx->width, 128);
> >  break;
> > +case AV_PIX_FMT_Y210:
> > +frame->linesize[0] = 4 * FFALIGN(avctx->width, 128);
> > +break;
> >  default:
> >  av_log(avctx, AV_LOG_ERROR, "Unsupported pixel format.\n");
> >  return AVERROR(EINVAL);
> >  }
> >
> > -frame->linesize[1] = frame->linesize[0];
> >  frame->buf[0]  = av_buffer_pool_get(pool);
> >  if (!frame->buf[0])
> >  return AVERROR(ENOMEM);
> >
> >  frame->data[0] = frame->buf[0]->data;
> > -frame->data[1] = frame->data[0] +
> > -frame->linesize[0] * FFALIGN(avctx->height, 
> > 64);
> > +if (avctx->pix_fmt == AV_PIX_FMT_NV12 ||
> > +avctx->pix_fmt == AV_PIX_FMT_P010) {
> > +frame->linesize[1] = frame->linesize[0];
> > +frame->data[1] = frame->data[0] +
> > +frame->linesize[0] * FFALIGN(avctx->height, 64);
> > +}
> >
> >  ret = ff_attach_decode_data(frame);
> >  if (ret < 0)
> > @@ -423,9 +430,11 @@ static int alloc_frame(AVCodecContext *avctx, QSVContext *q, QSVFrame *frame)
> >  if (frame->frame->format == AV_PIX_FMT_QSV) {
> >  frame->surface = *(mfxFrameSurface1*)frame->frame->data[3];
> >  } else {
> > -frame->surface.Data.PitchLow = frame->frame->linesize[0];
> > -frame->surface.Data.Y= frame->frame->data[0];
> > -frame->surface.Data.UV   = frame->frame->data[1];
> > +ret = ff_qsv_map_frame_to_surface(frame->frame, &frame->surface);
> > +if (ret < 0) {
> > +av_log(avctx, AV_LOG_ERROR, "map frame to surface failed.\n");
> > +return ret;
> > +}
> >  }
> >
> >  
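For reference, a minimal standalone sketch (not part of the patch) of the stride and plane-offset arithmetic the qsv_get_continuous_buffer() hunk above ends up with for the supported formats; the 128/64 alignment values come from the quoted code, while the 1920x1080 size and the printed layout are invented for illustration:

/* Illustrative only: per-format linesize and data[1] offset matching the
 * quoted qsv_get_continuous_buffer() hunk (width aligned to 128, height
 * to 64).  Only the semi-planar formats keep a second plane. */
#include <stdio.h>

#define FFALIGN(x, a) (((x) + (a) - 1) & ~((a) - 1))

int main(void)
{
    int width = 1920, height = 1080;
    int aligned_w = FFALIGN(width, 128), aligned_h = FFALIGN(height, 64);

    int ls_nv12 = aligned_w;      /* 1 byte per luma sample                  */
    int ls_p010 = 2 * aligned_w;  /* 2 bytes per luma sample                 */
    int ls_yuyv = 2 * aligned_w;  /* packed 4:2:2, 2 bytes per pixel         */
    int ls_y210 = 4 * aligned_w;  /* packed 4:2:2, 10-in-16 bits, 4 B/pixel  */

    printf("nv12:    linesize=%d, data[1] offset=%d\n", ls_nv12, ls_nv12 * aligned_h);
    printf("p010:    linesize=%d, data[1] offset=%d\n", ls_p010, ls_p010 * aligned_h);
    printf("yuyv422: linesize=%d (single packed plane, no data[1])\n", ls_yuyv);
    printf("y210:    linesize=%d (single packed plane, no data[1])\n", ls_y210);
    return 0;
}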

Re: [FFmpeg-devel] [PATCH v2 1/2] libavcodec/qsvdec: Add more pixel format support to qsvdec

2022-04-06 Thread Xiang, Haihao
On Sat, 2022-04-02 at 17:35 +0800, Wenbin Chen wrote:
> Qsv decoder only supports directly output nv12 and p010 to system
> memory. For other format, we need to download frame from qsv format
> to system memory. Now add other supported format to qsvdec.
> 
> Signed-off-by: Wenbin Chen 
> ---
>  libavcodec/qsv.c  | 36 
>  libavcodec/qsv_internal.h |  3 +++
>  libavcodec/qsvdec.c   | 23 +--
>  3 files changed, 56 insertions(+), 6 deletions(-)
> 
> diff --git a/libavcodec/qsv.c b/libavcodec/qsv.c
> index b75877e698..cc1352aa2a 100644
> --- a/libavcodec/qsv.c
> +++ b/libavcodec/qsv.c
> @@ -244,6 +244,42 @@ int ff_qsv_map_pixfmt(enum AVPixelFormat format, uint32_t *fourcc)
>  }
>  }
>  
> +int ff_qsv_map_frame_to_surface(const AVFrame *frame, mfxFrameSurface1 *surface)
> +{
> +switch (frame->format) {
> +case AV_PIX_FMT_NV12:
> +case AV_PIX_FMT_P010:
> +surface->Data.Y  = frame->data[0];
> +surface->Data.UV = frame->data[1];
> +/* The SDK checks Data.V when using system memory for VP9 encoding */
> +surface->Data.V  = surface->Data.UV + 1;
> +break;
> +case AV_PIX_FMT_X2RGB10LE:
> +case AV_PIX_FMT_BGRA:
> +surface->Data.B = frame->data[0];
> +surface->Data.G = frame->data[0] + 1;
> +surface->Data.R = frame->data[0] + 2;
> +surface->Data.A = frame->data[0] + 3;
> +break;
> +case AV_PIX_FMT_YUYV422:
> +surface->Data.Y = frame->data[0];
> +surface->Data.U = frame->data[0] + 1;
> +surface->Data.V = frame->data[0] + 3;
> +break;
> +
> +case AV_PIX_FMT_Y210:
> +surface->Data.Y16 = (mfxU16 *)frame->data[0];
> +surface->Data.U16 = (mfxU16 *)frame->data[0] + 1;
> +surface->Data.V16 = (mfxU16 *)frame->data[0] + 3;
> +break;
> +default:
> +return AVERROR(ENOSYS);
> +}
> +surface->Data.PitchLow  = frame->linesize[0];
> +
> +return 0;
> +}
> +
>  int ff_qsv_find_surface_idx(QSVFramesContext *ctx, QSVFrame *frame)
>  {
>  int i;
> diff --git a/libavcodec/qsv_internal.h b/libavcodec/qsv_internal.h
> index 58186ea7ca..e2aecdcbd6 100644
> --- a/libavcodec/qsv_internal.h
> +++ b/libavcodec/qsv_internal.h
> @@ -147,4 +147,7 @@ int ff_qsv_find_surface_idx(QSVFramesContext *ctx, QSVFrame *frame);
>  void ff_qsv_frame_add_ext_param(AVCodecContext *avctx, QSVFrame *frame,
>  mfxExtBuffer *param);
>  
> +int ff_qsv_map_frame_to_surface(const AVFrame *frame, mfxFrameSurface1 *surface);
> +
> +
>  #endif /* AVCODEC_QSV_INTERNAL_H */
> diff --git a/libavcodec/qsvdec.c b/libavcodec/qsvdec.c
> index 6236391357..f1d56b2af3 100644
> --- a/libavcodec/qsvdec.c
> +++ b/libavcodec/qsvdec.c
> @@ -129,21 +129,28 @@ static int qsv_get_continuous_buffer(AVCodecContext *avctx, AVFrame *frame,
>  frame->linesize[0] = FFALIGN(avctx->width, 128);
>  break;
>  case AV_PIX_FMT_P010:
> +case AV_PIX_FMT_YUYV422:
>  frame->linesize[0] = 2 * FFALIGN(avctx->width, 128);
>  break;
> +case AV_PIX_FMT_Y210:
> +frame->linesize[0] = 4 * FFALIGN(avctx->width, 128);
> +break;
>  default:
>  av_log(avctx, AV_LOG_ERROR, "Unsupported pixel format.\n");
>  return AVERROR(EINVAL);
>  }
>  
> -frame->linesize[1] = frame->linesize[0];
>  frame->buf[0]  = av_buffer_pool_get(pool);
>  if (!frame->buf[0])
>  return AVERROR(ENOMEM);
>  
>  frame->data[0] = frame->buf[0]->data;
> -frame->data[1] = frame->data[0] +
> -frame->linesize[0] * FFALIGN(avctx->height, 64);
> +if (avctx->pix_fmt == AV_PIX_FMT_NV12 ||
> +avctx->pix_fmt == AV_PIX_FMT_P010) {
> +frame->linesize[1] = frame->linesize[0];
> +frame->data[1] = frame->data[0] +
> +frame->linesize[0] * FFALIGN(avctx->height, 64);
> +}
>  
>  ret = ff_attach_decode_data(frame);
>  if (ret < 0)
> @@ -423,9 +430,11 @@ static int alloc_frame(AVCodecContext *avctx, QSVContext *q, QSVFrame *frame)
>  if (frame->frame->format == AV_PIX_FMT_QSV) {
>  frame->surface = *(mfxFrameSurface1*)frame->frame->data[3];
>  } else {
> -frame->surface.Data.PitchLow = frame->frame->linesize[0];
> -frame->surface.Data.Y= frame->frame->data[0];
> -frame->surface.Data.UV   = frame->frame->data[1];
> +ret = ff_qsv_map_frame_to_surface(frame->frame, &frame->surface);
> +if (ret < 0) {
> +av_log(avctx, AV_LOG_ERROR, "map frame to surface failed.\n");
> +return ret;
> +}
>  }
>  
>  frame->surface.Info = q->frame_info;
> @@ -990,6 +999,8 @@ const AVCodec ff_##x##_qsv_decoder = { \
>  .priv_class = &x##_qsv_class, \
>  .pix_fmts   = (const enum AVPixelFormat[]){ AV_PIX_FMT_NV12, \
>  
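For reference, a tiny standalone check (not part of the patch) of the packed layouts the new ff_qsv_map_frame_to_surface() cases rely on: YUYV422 stores a pixel pair as the bytes Y0 U Y1 V, BGRA stores B G R A, and Y210 follows the YUYV422 ordering in 16-bit words, which is why Data.U/Data.V sit at offsets 1 and 3 from data[0]. The sample values below are invented for illustration.

/* Illustrative only: why the U/V pointers in the new mapping cases sit at
 * byte offsets 1 and 3 (or 16-bit offsets 1 and 3 for Y210) from data[0]. */
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    /* one YUYV422 pixel pair laid out as [Y0][U][Y1][V] */
    uint8_t yuyv[4] = { 0x10, 0x20, 0x30, 0x40 };
    uint8_t *y = yuyv, *u = yuyv + 1, *v = yuyv + 3;
    printf("YUYV422: Y0=%#x U=%#x Y1=%#x V=%#x\n", y[0], u[0], y[2], v[0]);

    /* one BGRA pixel laid out as [B][G][R][A] */
    uint8_t bgra[4] = { 1, 2, 3, 4 };
    printf("BGRA:    B=%u G=%u R=%u A=%u\n", bgra[0], bgra[1], bgra[2], bgra[3]);

    /* one Y210 pixel pair: same ordering as YUYV422, 16 bits per sample */
    uint16_t y210[4] = { 100, 200, 300, 400 };
    uint16_t *y16 = y210, *u16 = y210 + 1, *v16 = y210 + 3;
    printf("Y210:    Y0=%u U=%u Y1=%u V=%u\n", y16[0], u16[0], y16[2], v16[0]);
    return 0;
}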

Re: [FFmpeg-devel] [PATCH 3/3] tests: add README.md file with simple instructions

2022-04-06 Thread Thilo Borgmann

On 05.04.22 at 23:40, Stefano Sabatini wrote:

On date Monday 2022-04-04 10:30:27 +0200, Thilo Borgmann wrote:

Hi,

On 03.04.22 at 15:59, Stefano Sabatini wrote:

---
   tests/README.md | 48 
   1 file changed, 48 insertions(+)
   create mode 100644 tests/README.md

diff --git a/tests/README.md b/tests/README.md
new file mode 100644
index 00..4bcae0b403
--- /dev/null
+++ b/tests/README.md


currently we got part of that in doc/fate.texi. Doesn't it make more sense to 
add that there?


Makes sense, totally missed it, updated.


Pushed, thanks!

-Thilo
___
ffmpeg-devel mailing list
ffmpeg-devel@ffmpeg.org
https://ffmpeg.org/mailman/listinfo/ffmpeg-devel

To unsubscribe, visit link above, or email
ffmpeg-devel-requ...@ffmpeg.org with subject "unsubscribe".


Re: [FFmpeg-devel] [PATCH] lavc/flacdec: Increase residual limit from INT_MAX to UINT_MAX

2022-04-06 Thread Martijn van Beurden
On Tue 5 Apr 2022 at 15:37, Martijn van Beurden wrote:
>
> ---
>  libavcodec/flacdec.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/libavcodec/flacdec.c b/libavcodec/flacdec.c
> index dd6026f9de..cb32d7cae8 100644
> --- a/libavcodec/flacdec.c
> +++ b/libavcodec/flacdec.c
> @@ -260,7 +260,7 @@ static int decode_residuals(FLACContext *s, int32_t *decoded, int pred_order)
>  for (; i < samples; i++)
>  *decoded++ = get_sbits_long(&gb, tmp);
>  } else {
> -int real_limit = tmp ? (INT_MAX >> tmp) + 2 : INT_MAX;
> +int real_limit = (tmp > 1) ? (INT_MAX >> (tmp - 1)) + 2 : INT_MAX;
>  for (; i < samples; i++) {
>  int v = get_sr_golomb_flac(&gb, tmp, real_limit, 1);
>  if (v == 0x80000000){
> --
> 2.30.2
>

A file needing this patch to decode properly can be found here:
https://github.com/ktmf01/flac-test-files/blob/main/subset/63%20-%20predictor%20overflow%20check%2C%2024-bit.flac

Kind regards, Martijn van Beurden
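
For reference, a rough standalone sketch (not part of the patch) of how the two real_limit expressions compare for a few Rice parameters k; the new expression roughly doubles the admissible folded residual, matching the move from an INT_MAX-sized to a UINT_MAX-sized range in the subject line.

/* Illustrative only: old vs. new real_limit from the patch, for a few
 * Rice parameters k.  The new limit is roughly twice the old one. */
#include <limits.h>
#include <stdio.h>

int main(void)
{
    for (int k = 0; k <= 5; k++) {
        int old_limit = k       ? (INT_MAX >> k)       + 2 : INT_MAX;
        int new_limit = (k > 1) ? (INT_MAX >> (k - 1)) + 2 : INT_MAX;
        printf("k=%d: old=%d new=%d\n", k, old_limit, new_limit);
    }
    return 0;
}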
___
ffmpeg-devel mailing list
ffmpeg-devel@ffmpeg.org
https://ffmpeg.org/mailman/listinfo/ffmpeg-devel

To unsubscribe, visit link above, or email
ffmpeg-devel-requ...@ffmpeg.org with subject "unsubscribe".