Re: [FFmpeg-devel] [PATCH v3 2/2] lavc/vulkan_av1: port to the new stable API

2024-03-18 Thread Dave Airlie
  >
>  > -/* Workaround for a spec issue.
>  > - *Can be removed once no longer needed, and threading can be enabled. */
>  > +/* TODO: investigate if this can be removed to make decoding completely
>  > + * independent. */
>  >  FFVulkanDecodeContext  *dec;
>
> Can you explain what the id_alloc_mask thing is doing which needs this?  (The 
> 32 size in particular seems suspicious.)

This is for DPB slot management; 32 is the wrong limit. I think I just
picked uint32_t and bit-shifting as a quick option. We should limit
this to maxDpbSlots, I suspect, but probably still use a uint32_t
bitmask to track it.
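
Roughly what I have in mind, as a sketch only (assuming maxDpbSlots
comes from VkVideoCapabilitiesKHR; the names are illustrative, not the
actual lavc code):

    static int alloc_dpb_slot(uint32_t *id_alloc_mask, uint32_t max_dpb_slots)
    {
        /* hand out the lowest free slot, but never beyond what the driver
         * reports in VkVideoCapabilitiesKHR.maxDpbSlots */
        for (uint32_t i = 0; i < max_dpb_slots && i < 32; i++) {
            if (!(*id_alloc_mask & (1u << i))) {
                *id_alloc_mask |= 1u << i;
                return (int)i;
            }
        }
        return -1; /* no free slot */
    }

    static void free_dpb_slot(uint32_t *id_alloc_mask, int slot)
    {
        if (slot >= 0 && slot < 32)
            *id_alloc_mask &= ~(1u << slot);
    }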

Dave.
___
ffmpeg-devel mailing list
ffmpeg-devel@ffmpeg.org
https://ffmpeg.org/mailman/listinfo/ffmpeg-devel

To unsubscribe, visit link above, or email
ffmpeg-devel-requ...@ffmpeg.org with subject "unsubscribe".


[FFmpeg-devel] [PATCH] avformat/hlsenc: Only prevent init_time from being used when splitting by time

2023-11-06 Thread Dave Johansen
---
 libavformat/hlsenc.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/libavformat/hlsenc.c b/libavformat/hlsenc.c
index 4ef84c05c1..3548299770 100644
--- a/libavformat/hlsenc.c
+++ b/libavformat/hlsenc.c
@@ -3090,7 +3090,7 @@ static int hls_init(AVFormatContext *s)
 if (hls->flags & HLS_APPEND_LIST) {
 parse_playlist(s, vs->m3u8_name, vs);
 vs->discontinuity = 1;
-if (hls->init_time > 0) {
+if ((hls->flags & HLS_SPLIT_BY_TIME) && (hls->init_time > 0)) {
 av_log(s, AV_LOG_WARNING, "append_list mode does not support 
hls_init_time,"
" hls_init_time value will have no effect\n");
 hls->init_time = 0;
-- 
2.39.2 (Apple Git-143)
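
For context, the case this targets looks roughly like (illustrative
command, option names as in the current muxer docs):

    ffmpeg -i in.ts -c copy -f hls -hls_flags append_list -hls_init_time 2 -hls_time 6 out.m3u8

i.e. with append_list but without split_by_time, hls_init_time is now
kept instead of being reset to 0 with a warning.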

___
ffmpeg-devel mailing list
ffmpeg-devel@ffmpeg.org
https://ffmpeg.org/mailman/listinfo/ffmpeg-devel

To unsubscribe, visit link above, or email
ffmpeg-devel-requ...@ffmpeg.org with subject "unsubscribe".


[FFmpeg-devel] [PATCH] Use existing function to parse time

2023-11-06 Thread Dave Johansen
---
 libavformat/hlsenc.c | 32 +++-
 1 file changed, 7 insertions(+), 25 deletions(-)

diff --git a/libavformat/hlsenc.c b/libavformat/hlsenc.c
index e1714d4eed..1baefe852f 100644
--- a/libavformat/hlsenc.c
+++ b/libavformat/hlsenc.c
@@ -35,6 +35,7 @@
 #include "libavutil/intreadwrite.h"
 #include "libavutil/opt.h"
 #include "libavutil/log.h"
+#include "libavutil/parseutils.h"
 #include "libavutil/random_seed.h"
 #include "libavutil/time.h"
 #include "libavutil/time_internal.h"
@@ -1251,27 +1252,6 @@ static int hls_append_segment(struct AVFormatContext *s, 
HLSContext *hls,
 return 0;
 }
 
-static double parse_iso8601(const char *ptr) {
-struct tm program_date_time;
-int y,M,d,h,m;
-double s;
-int num_scanned = sscanf(ptr, "%d-%d-%dT%d:%d:%lf", &y, &M, &d, &h, &m, &s);
-
-if (num_scanned < 6) {
-return -1;
-}
-
-program_date_time.tm_year = y - 1900;
-program_date_time.tm_mon = M - 1;
-program_date_time.tm_mday = d;
-program_date_time.tm_hour = h;
-program_date_time.tm_min = m;
-program_date_time.tm_sec = 0;
-program_date_time.tm_isdst = -1;
-
-return mktime(&program_date_time) + s;
-}
-
 static int parse_playlist(AVFormatContext *s, const char *url, VariantStream 
*vs)
 {
 HLSContext *hls = s->priv_data;
@@ -1281,6 +1261,7 @@ static int parse_playlist(AVFormatContext *s, const char 
*url, VariantStream *vs
 char line[MAX_URL_SIZE];
 const char *ptr;
 const char *end;
+int64_t parsed_time;
 double discont_program_date_time = 0;
 
 if ((ret = ffio_open_whitelist(&in, url, AVIO_FLAG_READ,
@@ -1337,11 +1318,11 @@ static int parse_playlist(AVFormatContext *s, const 
char *url, VariantStream *vs
 }
 }
 } else if (av_strstart(line, "#EXT-X-PROGRAM-DATE-TIME:", &ptr)) {
-discont_program_date_time = parse_iso8601(ptr);
-if (discont_program_date_time < 0) {
+if (av_parse_time(&parsed_time, ptr, 0)) {
 ret = AVERROR_INVALIDDATA;
 goto fail;
 }
+discont_program_date_time = parsed_time / 1000000.0;
 } else if (av_strstart(line, "#", NULL)) {
 continue;
 } else if (line[0]) {
@@ -2933,6 +2914,7 @@ static int hls_init(AVFormatContext *s)
 int i = 0;
 int j = 0;
 HLSContext *hls = s->priv_data;
+int64_t parsed_time;
 const char *pattern;
 VariantStream *vs = NULL;
 const char *vtt_pattern = hls->flags & HLS_SINGLE_FILE ? ".vtt" : "%d.vtt";
@@ -2942,11 +2924,11 @@ static int hls_init(AVFormatContext *s)
 double initial_program_date_time;
 
 if (hls->init_program_date_time) {
-initial_program_date_time = parse_iso8601(hls->init_program_date_time);
-if (initial_program_date_time < 0) {
+if (av_parse_time(&parsed_time, hls->init_program_date_time, 0)) {
 av_log(s, AV_LOG_ERROR, "Invalid init_program_date_time\n");
 return AVERROR(EINVAL);
 }
+initial_program_date_time = parsed_time / 1000000.0;
 } else {
 initial_program_date_time = av_gettime() / 1000000.0;
 }
-- 
2.39.2 (Apple Git-143)
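
For reviewers unfamiliar with it: av_parse_time() returns 0 on success
and fills an int64_t with microseconds (an absolute timestamp when the
last argument is 0), so the conversion to the double "seconds" value
hlsenc works with is simply a divide. A minimal sketch, with an
illustrative date string:

    int64_t t_us;
    if (av_parse_time(&t_us, "2023-11-06T12:00:00.5", 0) < 0)
        return AVERROR(EINVAL);                  /* not a parsable date/time */
    double program_date_time = t_us / 1000000.0; /* seconds, as used by hlsenc */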

___
ffmpeg-devel mailing list
ffmpeg-devel@ffmpeg.org
https://ffmpeg.org/mailman/listinfo/ffmpeg-devel

To unsubscribe, visit link above, or email
ffmpeg-devel-requ...@ffmpeg.org with subject "unsubscribe".


[FFmpeg-devel] [PATCH] avformat/hlsenc: Allow setting master_pl_publish_rate to a negative value to have it write immediately

2023-11-03 Thread Dave Johansen
---
 libavformat/hlsenc.c | 8 +---
 1 file changed, 5 insertions(+), 3 deletions(-)

diff --git a/libavformat/hlsenc.c b/libavformat/hlsenc.c
index 4ef84c05c1..76d8094de6 100644
--- a/libavformat/hlsenc.c
+++ b/libavformat/hlsenc.c
@@ -245,7 +245,7 @@ typedef struct HLSContext {
 char *var_stream_map; /* user specified variant stream map string */
 char *cc_stream_map; /* user specified closed caption streams map string */
 char *master_pl_name;
-unsigned int master_publish_rate;
+int master_publish_rate;
 int http_persistent;
 AVIOContext *m3u8_out;
 AVIOContext *sub_m3u8_out;
@@ -1388,7 +1388,9 @@ static int create_master_playlist(AVFormatContext *s,
 char temp_filename[MAX_URL_SIZE];
 
 input_vs->m3u8_created = 1;
-if (!hls->master_m3u8_created) {
+if (hls->master_publish_rate < 0) {
+hls->master_publish_rate = 0;
+} else if (!hls->master_m3u8_created) {
 /* For the first time, wait until all the media playlists are created 
*/
 for (i = 0; i < hls->nb_varstreams; i++)
 if (!hls->var_streams[i].m3u8_created)
@@ -3162,7 +3164,7 @@ static const AVOption options[] = {
 {"var_stream_map", "Variant stream map string", OFFSET(var_stream_map), 
AV_OPT_TYPE_STRING, {.str = NULL},  0, 0,E},
 {"cc_stream_map", "Closed captions stream map string", 
OFFSET(cc_stream_map), AV_OPT_TYPE_STRING, {.str = NULL},  0, 0,E},
 {"master_pl_name", "Create HLS master playlist with this name", 
OFFSET(master_pl_name), AV_OPT_TYPE_STRING, {.str = NULL},  0, 0,E},
-{"master_pl_publish_rate", "Publish master play list every after this many 
segment intervals", OFFSET(master_publish_rate), AV_OPT_TYPE_INT, {.i64 = 0}, 
0, UINT_MAX, E},
+{"master_pl_publish_rate", "Publish master play list every after this many 
segment intervals", OFFSET(master_publish_rate), AV_OPT_TYPE_INT, {.i64 = 0}, 
INT_MIN, INT_MAX, E},
 {"http_persistent", "Use persistent HTTP connections", 
OFFSET(http_persistent), AV_OPT_TYPE_BOOL, {.i64 = 0 }, 0, 1, E },
 {"timeout", "set timeout for socket I/O operations", OFFSET(timeout), 
AV_OPT_TYPE_DURATION, { .i64 = -1 }, -1, INT_MAX, .flags = E },
 {"ignore_io_errors", "Ignore IO errors for stable long-duration runs with 
network output", OFFSET(ignore_io_errors), AV_OPT_TYPE_BOOL, { .i64 = 0 }, 0, 
1, E },
-- 
2.39.2 (Apple Git-143)
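
Usage-wise the negative value is the new part (illustrative command,
the other option names are unchanged):

    ffmpeg -i in.ts -c copy -f hls -master_pl_name master.m3u8 -master_pl_publish_rate -1 out.m3u8

i.e. a negative rate asks for the master playlist to be written
immediately, instead of waiting until every media playlist has been
created first.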

___
ffmpeg-devel mailing list
ffmpeg-devel@ffmpeg.org
https://ffmpeg.org/mailman/listinfo/ffmpeg-devel

To unsubscribe, visit link above, or email
ffmpeg-devel-requ...@ffmpeg.org with subject "unsubscribe".


[FFmpeg-devel] [PATCH] avformat/hlsenc: Only append postfix to fmp4 init filename if not in the subdir

2023-11-02 Thread Dave Johansen
---
 libavformat/hlsenc.c | 26 +-
 1 file changed, 25 insertions(+), 1 deletion(-)

diff --git a/libavformat/hlsenc.c b/libavformat/hlsenc.c
index 4ef84c05c1..dd1a461cce 100644
--- a/libavformat/hlsenc.c
+++ b/libavformat/hlsenc.c
@@ -1931,6 +1931,30 @@ fail:
 return ret;
 }
 
+static int validate_subdir(const char *fn)
+{
+const char *subdir_name;
+char *fn_dup = NULL;
+int ret = 0;
+
+if (!fn)
+return AVERROR(EINVAL);
+
+fn_dup = av_strdup(fn);
+if (!fn_dup)
+return AVERROR(ENOMEM);
+subdir_name = av_dirname(fn_dup);
+
+if (!av_stristr(subdir_name, "%v")) {
+ret = AVERROR(EINVAL);
+goto fail;
+}
+
+fail:
+av_freep(&fn_dup);
+return ret;
+}
+
 static int format_name(const char *buf, char **s, int index, const char 
*varname)
 {
 const char *proto, *dir;
@@ -3019,7 +3043,7 @@ static int hls_init(AVFormatContext *s)
 av_freep(&vs->fmp4_init_filename);
 ret = format_name(hls->fmp4_init_filename,
   &vs->fmp4_init_filename, i, vs->varname);
-} else {
+} else if (validate_subdir(s->url) < 0) {
 ret = append_postfix(vs->fmp4_init_filename, 
fmp4_init_filename_len, i);
 }
 if (ret < 0)
-- 
2.39.2 (Apple Git-143)
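
The case being handled is roughly (illustrative command): when the
output path itself carries the %v, e.g.

    ffmpeg ... -f hls -var_stream_map "v:0 v:1" -hls_segment_type fmp4 -hls_segment_filename 'stream_%v/seg_%d.m4s' 'stream_%v/index.m3u8'

each variant already gets its own subdirectory, so the numeric postfix
on the fmp4 init filename is unnecessary and is now skipped.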

___
ffmpeg-devel mailing list
ffmpeg-devel@ffmpeg.org
https://ffmpeg.org/mailman/listinfo/ffmpeg-devel

To unsubscribe, visit link above, or email
ffmpeg-devel-requ...@ffmpeg.org with subject "unsubscribe".


[FFmpeg-devel] [PATCH] avformat/hlsenc: Move lrint outside of loop

2023-10-27 Thread Dave Johansen
---
 libavformat/hlsenc.c | 8 
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/libavformat/hlsenc.c b/libavformat/hlsenc.c
index 4ef84c05c1..e59a38b497 100644
--- a/libavformat/hlsenc.c
+++ b/libavformat/hlsenc.c
@@ -1538,7 +1538,7 @@ static int hls_window(AVFormatContext *s, int last, 
VariantStream *vs)
 {
 HLSContext *hls = s->priv_data;
 HLSSegment *en;
-int target_duration = 0;
+double target_duration = 0;
 int ret = 0;
 char temp_filename[MAX_URL_SIZE];
 char temp_vtt_filename[MAX_URL_SIZE];
@@ -1589,12 +1589,12 @@ static int hls_window(AVFormatContext *s, int last, 
VariantStream *vs)
 
 for (en = vs->segments; en; en = en->next) {
 if (target_duration <= en->duration)
-target_duration = lrint(en->duration);
+target_duration = en->duration;
 }
 
 vs->discontinuity_set = 0;
 ff_hls_write_playlist_header(byterange_mode ? hls->m3u8_out : vs->out, 
hls->version, hls->allowcache,
- target_duration, sequence, hls->pl_type, 
hls->flags & HLS_I_FRAMES_ONLY);
+ lrint(target_duration), sequence, 
hls->pl_type, hls->flags & HLS_I_FRAMES_ONLY);
 
 if ((hls->flags & HLS_DISCONT_START) && sequence==hls->start_sequence && 
vs->discontinuity_set==0) {
 avio_printf(byterange_mode ? hls->m3u8_out : vs->out, 
"#EXT-X-DISCONTINUITY\n");
@@ -1643,7 +1643,7 @@ static int hls_window(AVFormatContext *s, int last, 
VariantStream *vs)
 goto fail;
 }
 ff_hls_write_playlist_header(hls->sub_m3u8_out, hls->version, 
hls->allowcache,
- target_duration, sequence, 
PLAYLIST_TYPE_NONE, 0);
+ lrint(target_duration), sequence, 
PLAYLIST_TYPE_NONE, 0);
 for (en = vs->segments; en; en = en->next) {
 ret = ff_hls_write_file_entry(hls->sub_m3u8_out, 0, byterange_mode,
   en->duration, 0, en->size, en->pos,
-- 
2.39.2 (Apple Git-143)

___
ffmpeg-devel mailing list
ffmpeg-devel@ffmpeg.org
https://ffmpeg.org/mailman/listinfo/ffmpeg-devel

To unsubscribe, visit link above, or email
ffmpeg-devel-requ...@ffmpeg.org with subject "unsubscribe".


[FFmpeg-devel] [PATCH] avformat/hlsenc: Handle when fractional seconds not set and error out when init_program_date_time can't be parsed

2023-10-27 Thread Dave Johansen
---
 libavformat/hlsenc.c | 26 +++---
 1 file changed, 19 insertions(+), 7 deletions(-)

diff --git a/libavformat/hlsenc.c b/libavformat/hlsenc.c
index f613e35984..e1714d4eed 100644
--- a/libavformat/hlsenc.c
+++ b/libavformat/hlsenc.c
@@ -1253,9 +1253,11 @@ static int hls_append_segment(struct AVFormatContext *s, 
HLSContext *hls,
 
 static double parse_iso8601(const char *ptr) {
 struct tm program_date_time;
-int y,M,d,h,m,s;
-double ms;
-if (sscanf(ptr, "%d-%d-%dT%d:%d:%d.%lf", &y, &M, &d, &h, &m, &s, &ms) != 7) {
+int y,M,d,h,m;
+double s;
+int num_scanned = sscanf(ptr, "%d-%d-%dT%d:%d:%lf", &y, &M, &d, &h, &m, &s);
+
+if (num_scanned < 6) {
 return -1;
 }
 
@@ -1264,10 +1266,10 @@ static double parse_iso8601(const char *ptr) {
 program_date_time.tm_mday = d;
 program_date_time.tm_hour = h;
 program_date_time.tm_min = m;
-program_date_time.tm_sec = s;
+program_date_time.tm_sec = 0;
 program_date_time.tm_isdst = -1;
 
-return mktime(&program_date_time) + (double)(ms / 1000);
+return mktime(&program_date_time) + s;
 }
 
 static int parse_playlist(AVFormatContext *s, const char *url, VariantStream 
*vs)
@@ -2937,7 +2939,17 @@ static int hls_init(AVFormatContext *s)
 char *p = NULL;
 int http_base_proto = ff_is_http_proto(s->url);
 int fmp4_init_filename_len = strlen(hls->fmp4_init_filename) + 1;
-double initial_program_date_time = hls->init_program_date_time ? 
parse_iso8601(hls->init_program_date_time) : av_gettime() / 1000000.0;
+double initial_program_date_time;
+
+if (hls->init_program_date_time) {
+initial_program_date_time = parse_iso8601(hls->init_program_date_time);
+if (initial_program_date_time < 0) {
+av_log(s, AV_LOG_ERROR, "Invalid init_program_date_time\n");
+return AVERROR(EINVAL);
+}
+} else {
+initial_program_date_time = av_gettime() / 1000000.0;
+}
 
 if (hls->use_localtime) {
 pattern = get_default_pattern_localtime_fmt(s);
@@ -3216,7 +3228,7 @@ static const AVOption options[] = {
 {"split_by_time", "split the hls segment by time which user set by 
hls_time", 0, AV_OPT_TYPE_CONST, {.i64 = HLS_SPLIT_BY_TIME }, 0, UINT_MAX,   E, 
"flags"},
 {"append_list", "append the new segments into old hls segment list", 0, 
AV_OPT_TYPE_CONST, {.i64 = HLS_APPEND_LIST }, 0, UINT_MAX,   E, "flags"},
 {"program_date_time", "add EXT-X-PROGRAM-DATE-TIME", 0, AV_OPT_TYPE_CONST, 
{.i64 = HLS_PROGRAM_DATE_TIME }, 0, UINT_MAX,   E, "flags"},
-{"init_program_date_time", "Time to start program date time at", 
OFFSET(init_program_date_time), AV_OPT_TYPE_STRING, .flags = E},
+{"init_program_date_time", "Time to start program date time at (must be 
%Y-%m-%dT%H:%M:%S and timezone is ignored)", OFFSET(init_program_date_time), 
AV_OPT_TYPE_STRING, .flags = E},
 {"second_level_segment_index", "include segment index in segment filenames 
when use_localtime", 0, AV_OPT_TYPE_CONST, {.i64 = 
HLS_SECOND_LEVEL_SEGMENT_INDEX }, 0, UINT_MAX,   E, "flags"},
 {"second_level_segment_duration", "include segment duration in segment 
filenames when use_localtime", 0, AV_OPT_TYPE_CONST, {.i64 = 
HLS_SECOND_LEVEL_SEGMENT_DURATION }, 0, UINT_MAX,   E, "flags"},
 {"second_level_segment_size", "include segment size in segment filenames 
when use_localtime", 0, AV_OPT_TYPE_CONST, {.i64 = 
HLS_SECOND_LEVEL_SEGMENT_SIZE }, 0, UINT_MAX,   E, "flags"},
-- 
2.39.2 (Apple Git-143)
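
With this change both of the following are accepted (illustrative
values), and anything that does not match %Y-%m-%dT%H:%M:%S now fails
with an error instead of silently producing a bogus start time:

    -init_program_date_time 2023-10-27T12:00:00
    -init_program_date_time 2023-10-27T12:00:00.500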

___
ffmpeg-devel mailing list
ffmpeg-devel@ffmpeg.org
https://ffmpeg.org/mailman/listinfo/ffmpeg-devel

To unsubscribe, visit link above, or email
ffmpeg-devel-requ...@ffmpeg.org with subject "unsubscribe".


[FFmpeg-devel] [PATCH 4/4] avformat/hlsenc: Add second_level_segment_microsecond for using %%f to specify microseconds of time in segment filename

2023-10-26 Thread Dave Johansen
---
 libavformat/hlsenc.c | 51 +---
 1 file changed, 48 insertions(+), 3 deletions(-)

diff --git a/libavformat/hlsenc.c b/libavformat/hlsenc.c
index 93c47b631b..f613e35984 100644
--- a/libavformat/hlsenc.c
+++ b/libavformat/hlsenc.c
@@ -103,6 +103,7 @@ typedef enum HLSFlags {
 HLS_SECOND_LEVEL_SEGMENT_INDEX = (1 << 8), // include segment index in 
segment filenames when use_localtime  e.g.: %%03d
 HLS_SECOND_LEVEL_SEGMENT_DURATION = (1 << 9), // include segment duration 
(microsec) in segment filenames when use_localtime  e.g.: %%09t
 HLS_SECOND_LEVEL_SEGMENT_SIZE = (1 << 10), // include segment size (bytes) 
in segment filenames when use_localtime  e.g.: %%014s
+HLS_SECOND_LEVEL_SEGMENT_MICROSECOND = (1 << 15), // include microseconds 
of localtime in segment filenames when use_localtime  e.g.: %%f
 HLS_TEMP_FILE = (1 << 11),
 HLS_PERIODIC_REKEY = (1 << 12),
 HLS_INDEPENDENT_SEGMENTS = (1 << 13),
@@ -496,7 +497,7 @@ static int replace_str_data_in_filename(char **s, const 
char *filename, char pla
 return found_count;
 }
 
-static int replace_int_data_in_filename(char **s, const char *filename, char 
placeholder, int64_t number)
+static int replace_int_data_in_filename_forced(char **s, const char *filename, 
char placeholder, int64_t number, int forced_digits)
 {
 const char *p;
 char c;
@@ -521,6 +522,9 @@ static int replace_int_data_in_filename(char **s, const 
char *filename, char pla
 nd = nd * 10 + *(p + addchar_count) - '0';
 addchar_count++;
 }
+if (forced_digits > nd) {
+nd = forced_digits;
+}
 
 if (*(p + addchar_count) == placeholder) {
 av_bprintf(&buf, "%0*"PRId64, (number < 0) ? nd : nd++, number);
@@ -544,6 +548,11 @@ static int replace_int_data_in_filename(char **s, const 
char *filename, char pla
 return found_count;
 }
 
+static int replace_int_data_in_filename(char **s, const char *filename, char 
placeholder, int64_t number)
+{
+return replace_int_data_in_filename_forced(s, filename, placeholder, 
number, 0);
+}
+
 static void write_styp(AVIOContext *pb)
 {
 avio_wb32(pb, 24);
@@ -1020,6 +1029,20 @@ static int sls_flags_filename_process(struct 
AVFormatContext *s, HLSContext *hls
 }
 ff_format_set_url(vs->avf, filename);
 }
+if (hls->flags & HLS_SECOND_LEVEL_SEGMENT_MICROSECOND) {
+char *filename = NULL;
+double mod_res;
+if (replace_int_data_in_filename_forced(&filename, vs->avf->url,
+ 'f',  1000000 * modf(vs->curr_prog_date_time, &mod_res), 6) < 1) {
+av_log(hls, AV_LOG_ERROR,
+   "Invalid second level segment filename template '%s', "
+   "you can try to remove second_level_segment_microsecond 
flag\n",
+   vs->avf->url);
+av_freep(&filename);
+return AVERROR(EINVAL);
+}
+ff_format_set_url(vs->avf, filename);
+}
 }
 return 0;
 }
@@ -1043,6 +1066,11 @@ static int sls_flag_check_duration_size_index(HLSContext 
*hls)
"second_level_segment_index hls_flag requires strftime to be 
true\n");
 ret = AVERROR(EINVAL);
 }
+if (hls->flags & HLS_SECOND_LEVEL_SEGMENT_MICROSECOND) {
+av_log(hls, AV_LOG_ERROR,
+   "second_level_segment_microsecond hls_flag requires strftime to 
be true\n");
+ret = AVERROR(EINVAL);
+}
 
 return ret;
 }
@@ -1063,12 +1091,17 @@ static int sls_flag_check_duration_size(HLSContext 
*hls, VariantStream *vs)
"second_level_segment_size hls_flag works only with file 
protocol segment names\n");
 ret = AVERROR(EINVAL);
 }
+if ((hls->flags & HLS_SECOND_LEVEL_SEGMENT_MICROSECOND) && 
!segment_renaming_ok) {
+av_log(hls, AV_LOG_ERROR,
+   "second_level_segment_microsecond hls_flag works only with file 
protocol segment names\n");
+ret = AVERROR(EINVAL);
+}
 
 return ret;
 }
 
 static void sls_flag_file_rename(HLSContext *hls, VariantStream *vs, char 
*old_filename) {
-if ((hls->flags & (HLS_SECOND_LEVEL_SEGMENT_SIZE | 
HLS_SECOND_LEVEL_SEGMENT_DURATION)) &&
+if ((hls->flags & (HLS_SECOND_LEVEL_SEGMENT_SIZE | 
HLS_SECOND_LEVEL_SEGMENT_DURATION | HLS_SECOND_LEVEL_SEGMENT_MICROSECOND)) &&
 strlen(vs->current_segment_final_filename_fmt)) {
 ff_rename(old_filename, vs->avf->url, hls);
 }
@@ -1088,7 +1121,7 @@ static int 
sls_flag_use_localtime_filename(AVFormatContext *oc, HLSContext *c, V
 }
 ff_format_set_url(oc, filename);
 }
-if (c->flags & (HLS_SECOND_LEVEL_SEGMENT_SIZE | 
HLS_SECOND_LEVEL_SEGMENT_DURATION)) {
+if (c->flags & (HLS_SECOND_LEVEL_SEGMENT_SIZE | 
HLS_SECOND_LEVEL_SEGMENT_DURATION | HLS_SECOND_LEVEL_SEGMENT_MICROSECOND)) {
 

[FFmpeg-devel] [PATCH 3/4] avformat/hlsenc: Fix name of flag in error message

2023-10-26 Thread Dave Johansen
---
 libavformat/hlsenc.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/libavformat/hlsenc.c b/libavformat/hlsenc.c
index 24a0304f78..93c47b631b 100644
--- a/libavformat/hlsenc.c
+++ b/libavformat/hlsenc.c
@@ -1013,7 +1013,7 @@ static int sls_flags_filename_process(struct 
AVFormatContext *s, HLSContext *hls
  't',  (int64_t)round(duration * 
HLS_MICROSECOND_UNIT)) < 1) {
 av_log(hls, AV_LOG_ERROR,
"Invalid second level segment filename template '%s', "
-   "you can try to remove second_level_segment_time 
flag\n",
+   "you can try to remove second_level_segment_duration 
flag\n",
vs->avf->url);
 av_freep(&filename);
 return AVERROR(EINVAL);
@@ -1106,7 +1106,7 @@ static int 
sls_flag_use_localtime_filename(AVFormatContext *oc, HLSContext *c, V
 char *filename = NULL;
 if (replace_int_data_in_filename(, oc->url, 't', 0) < 1) {
 av_log(c, AV_LOG_ERROR, "Invalid second level segment filename 
template '%s', "
-"you can try to remove second_level_segment_time 
flag\n",
+"you can try to remove second_level_segment_duration 
flag\n",
oc->url);
 av_freep(&filename);
 return AVERROR(EINVAL);
-- 
2.39.2 (Apple Git-143)

___
ffmpeg-devel mailing list
ffmpeg-devel@ffmpeg.org
https://ffmpeg.org/mailman/listinfo/ffmpeg-devel

To unsubscribe, visit link above, or email
ffmpeg-devel-requ...@ffmpeg.org with subject "unsubscribe".


[FFmpeg-devel] [PATCH 2/4] avformat/hlsenc: Add strftime_prog for using PROGRAM-DATE-TIME in the segment filename

2023-10-26 Thread Dave Johansen
---
 libavformat/hlsenc.c | 36 ++--
 1 file changed, 30 insertions(+), 6 deletions(-)

diff --git a/libavformat/hlsenc.c b/libavformat/hlsenc.c
index 5dfff6b2b6..24a0304f78 100644
--- a/libavformat/hlsenc.c
+++ b/libavformat/hlsenc.c
@@ -159,6 +159,7 @@ typedef struct VariantStream {
 char *m3u8_name;
 
 double initial_prog_date_time;
+double curr_prog_date_time;
 char current_segment_final_filename_fmt[MAX_URL_SIZE]; // when renaming 
segments
 
 char *fmp4_init_filename;
@@ -208,6 +209,7 @@ typedef struct HLSContext {
 
 int use_localtime;  ///< flag to expand filename with localtime
 int use_localtime_mkdir;///< flag to mkdir dirname in timebased filename
+int use_localtime_prog; ///< flag to expand filename with prog date time
 int allowcache;
 int64_t recording_time;
 int64_t max_seg_size; // every segment file max size
@@ -259,10 +261,9 @@ typedef struct HLSContext {
 int has_video_m3u8; /* has video stream m3u8 list */
 } HLSContext;
 
-static int strftime_expand(const char *fmt, char **dest)
+static int strftime_expand_time_t(const char *fmt, const time_t *value, char 
**dest)
 {
 int r = 1;
-time_t now0;
 struct tm *tm, tmpbuf;
 char *buf;
 
@@ -270,8 +271,7 @@ static int strftime_expand(const char *fmt, char **dest)
 if (!buf)
 return AVERROR(ENOMEM);
 
-time(&now0);
-tm = localtime_r(&now0, &tmpbuf);
+tm = localtime_r(value, &tmpbuf);
 r = strftime(buf, MAX_URL_SIZE, fmt, tm);
 if (!r) {
 av_free(buf);
@@ -282,6 +282,19 @@ static int strftime_expand(const char *fmt, char **dest)
 return r;
 }
 
+static int strftime_expand(const char *fmt, char **dest)
+{
+time_t now0;
+time(&now0);
+return strftime_expand_time_t(fmt, &now0, dest);
+}
+
+static int strftime_expand_prog(const char *fmt, const double prog_date_time, 
char **dest)
+{
+time_t value = (time_t)prog_date_time;
+return strftime_expand_time_t(fmt, &value, dest);
+}
+
 static int hlsenc_io_open(AVFormatContext *s, AVIOContext **pb, const char 
*filename,
   AVDictionary **options)
 {
@@ -1721,7 +1734,11 @@ static int hls_start(AVFormatContext *s, VariantStream 
*vs)
 int r;
 char *expanded = NULL;
 
-r = strftime_expand(vs->basename, &expanded);
+if (c->use_localtime_prog) {
+r = strftime_expand_prog(vs->basename, vs->curr_prog_date_time, &expanded);
+} else {
+r = strftime_expand(vs->basename, &expanded);
+}
 if (r < 0) {
 av_log(oc, AV_LOG_ERROR, "Could not get segment filename with 
strftime\n");
 return r;
@@ -2615,6 +2632,7 @@ static int hls_write_packet(AVFormatContext *s, AVPacket 
*pkt)
 if (vs->start_pos || hls->segment_type != SEGMENT_TYPE_FMP4) {
 double cur_duration =  (double)(pkt->pts - vs->end_pts) * 
st->time_base.num / st->time_base.den;
 ret = hls_append_segment(s, hls, vs, cur_duration, vs->start_pos, 
vs->size);
+vs->curr_prog_date_time += cur_duration;
 vs->end_pts = pkt->pts;
 vs->duration = 0;
 if (ret < 0) {
@@ -2971,6 +2989,7 @@ static int hls_init(AVFormatContext *s)
 vs->end_pts   = AV_NOPTS_VALUE;
 vs->current_segment_final_filename_fmt[0] = '\0';
 vs->initial_prog_date_time = initial_program_date_time;
+vs->curr_prog_date_time = initial_program_date_time;
 
 for (j = 0; j < vs->nb_streams; j++) {
 vs->has_video += vs->streams[j]->codecpar->codec_type == 
AVMEDIA_TYPE_VIDEO;
@@ -3038,7 +3057,11 @@ static int hls_init(AVFormatContext *s)
 int r;
 char *expanded = NULL;
 
-r = strftime_expand(vs->fmp4_init_filename, &expanded);
+if (hls->use_localtime_prog) {
+r = strftime_expand_prog(vs->fmp4_init_filename, vs->curr_prog_date_time, &expanded);
+} else {
+r = strftime_expand(vs->fmp4_init_filename, &expanded);
+}
 if (r < 0) {
 av_log(s, AV_LOG_ERROR, "Could not get segment 
filename with strftime\n");
 return r;
@@ -3158,6 +3181,7 @@ static const AVOption options[] = {
 {"iframes_only", "add EXT-X-I-FRAMES-ONLY, whenever applicable", 0, 
AV_OPT_TYPE_CONST, { .i64 = HLS_I_FRAMES_ONLY }, 0, UINT_MAX, E, "flags"},
 {"strftime", "set filename expansion with strftime at segment creation", 
OFFSET(use_localtime), AV_OPT_TYPE_BOOL, {.i64 = 0 }, 0, 1, E },
 {"strftime_mkdir", "create last directory component in strftime-generated 
filename", OFFSET(use_localtime_mkdir), AV_OPT_TYPE_BOOL, {.i64 = 0 }, 0, 1, E 
},
+{"strftime_prog", "set filename expansion with program date time", 
OFFSET(use_localtime_prog), AV_OPT_TYPE_BOOL, {.i64 = 0 }, 0, 1, E },
 {"hls_playlist_type", "set the HLS playlist type", 

[FFmpeg-devel] [PATCH 1/4] avformat/hlsenc: Add init_program_date_time so start time can be specified

2023-10-26 Thread Dave Johansen
---
 doc/muxers.texi  |  3 +++
 libavformat/hlsenc.c | 41 +
 2 files changed, 28 insertions(+), 16 deletions(-)

diff --git a/doc/muxers.texi b/doc/muxers.texi
index f6071484ff..87c19a5cb9 100644
--- a/doc/muxers.texi
+++ b/doc/muxers.texi
@@ -1086,6 +1086,9 @@ seeking. This flag should be used with the 
@code{hls_time} option.
 @item program_date_time
 Generate @code{EXT-X-PROGRAM-DATE-TIME} tags.
 
+@item init_program_date_time
+Time to start program date time at.
+
 @item second_level_segment_index
 Makes it possible to use segment indexes as %%d in hls_segment_filename 
expression
 besides date/time values when strftime is on.
diff --git a/libavformat/hlsenc.c b/libavformat/hlsenc.c
index 4ef84c05c1..5dfff6b2b6 100644
--- a/libavformat/hlsenc.c
+++ b/libavformat/hlsenc.c
@@ -212,6 +212,8 @@ typedef struct HLSContext {
 int64_t recording_time;
 int64_t max_seg_size; // every segment file max size
 
+char *init_program_date_time;
+
 char *baseurl;
 char *vtt_format_options_str;
 char *subtitle_filename;
@@ -1192,6 +1194,25 @@ static int hls_append_segment(struct AVFormatContext *s, 
HLSContext *hls,
 return 0;
 }
 
+static double parse_iso8601(const char *ptr) {
+struct tm program_date_time;
+int y,M,d,h,m,s;
+double ms;
+if (sscanf(ptr, "%d-%d-%dT%d:%d:%d.%lf", &y, &M, &d, &h, &m, &s, &ms) != 7) {
+return -1;
+}
+
+program_date_time.tm_year = y - 1900;
+program_date_time.tm_mon = M - 1;
+program_date_time.tm_mday = d;
+program_date_time.tm_hour = h;
+program_date_time.tm_min = m;
+program_date_time.tm_sec = s;
+program_date_time.tm_isdst = -1;
+
+return mktime(&program_date_time) + (double)(ms / 1000);
+}
+
 static int parse_playlist(AVFormatContext *s, const char *url, VariantStream 
*vs)
 {
 HLSContext *hls = s->priv_data;
@@ -1257,24 +1278,11 @@ static int parse_playlist(AVFormatContext *s, const 
char *url, VariantStream *vs
 }
 }
 } else if (av_strstart(line, "#EXT-X-PROGRAM-DATE-TIME:", &ptr)) {
-struct tm program_date_time;
-int y,M,d,h,m,s;
-double ms;
-if (sscanf(ptr, "%d-%d-%dT%d:%d:%d.%lf", &y, &M, &d, &h, &m, &s, &ms) != 7) {
+discont_program_date_time = parse_iso8601(ptr);
+if (discont_program_date_time < 0) {
 ret = AVERROR_INVALIDDATA;
 goto fail;
 }
-
-program_date_time.tm_year = y - 1900;
-program_date_time.tm_mon = M - 1;
-program_date_time.tm_mday = d;
-program_date_time.tm_hour = h;
-program_date_time.tm_min = m;
-program_date_time.tm_sec = s;
-program_date_time.tm_isdst = -1;
-
-discont_program_date_time = mktime(&program_date_time);
-discont_program_date_time += (double)(ms / 1000);
 } else if (av_strstart(line, "#", NULL)) {
 continue;
 } else if (line[0]) {
@@ -2867,7 +2875,7 @@ static int hls_init(AVFormatContext *s)
 char *p = NULL;
 int http_base_proto = ff_is_http_proto(s->url);
 int fmp4_init_filename_len = strlen(hls->fmp4_init_filename) + 1;
-double initial_program_date_time = av_gettime() / 1000000.0;
+double initial_program_date_time = hls->init_program_date_time ? 
parse_iso8601(hls->init_program_date_time) : av_gettime() / 1000000.0;
 
 if (hls->use_localtime) {
 pattern = get_default_pattern_localtime_fmt(s);
@@ -3141,6 +3149,7 @@ static const AVOption options[] = {
 {"split_by_time", "split the hls segment by time which user set by 
hls_time", 0, AV_OPT_TYPE_CONST, {.i64 = HLS_SPLIT_BY_TIME }, 0, UINT_MAX,   E, 
"flags"},
 {"append_list", "append the new segments into old hls segment list", 0, 
AV_OPT_TYPE_CONST, {.i64 = HLS_APPEND_LIST }, 0, UINT_MAX,   E, "flags"},
 {"program_date_time", "add EXT-X-PROGRAM-DATE-TIME", 0, AV_OPT_TYPE_CONST, 
{.i64 = HLS_PROGRAM_DATE_TIME }, 0, UINT_MAX,   E, "flags"},
+{"init_program_date_time", "Time to start program date time at", 
OFFSET(init_program_date_time), AV_OPT_TYPE_STRING, .flags = E},
 {"second_level_segment_index", "include segment index in segment filenames 
when use_localtime", 0, AV_OPT_TYPE_CONST, {.i64 = 
HLS_SECOND_LEVEL_SEGMENT_INDEX }, 0, UINT_MAX,   E, "flags"},
 {"second_level_segment_duration", "include segment duration in segment 
filenames when use_localtime", 0, AV_OPT_TYPE_CONST, {.i64 = 
HLS_SECOND_LEVEL_SEGMENT_DURATION }, 0, UINT_MAX,   E, "flags"},
 {"second_level_segment_size", "include segment size in segment filenames 
when use_localtime", 0, AV_OPT_TYPE_CONST, {.i64 = 
HLS_SECOND_LEVEL_SEGMENT_SIZE }, 0, UINT_MAX,   E, "flags"},
-- 
2.39.2 (Apple Git-143)

___
ffmpeg-devel mailing list
ffmpeg-devel@ffmpeg.org
https://ffmpeg.org/mailman/listinfo/ffmpeg-devel

To unsubscribe, visit link above, or email

[FFmpeg-devel] [PATCH] avformat/hlsenc: Add CHANNELS to EXT-X-MEDIA for Audio

2023-10-26 Thread Dave Johansen
---
 libavformat/dashenc.c | 3 ++-
 libavformat/hlsenc.c  | 8 +++-
 libavformat/hlsplaylist.c | 5 -
 libavformat/hlsplaylist.h | 2 +-
 4 files changed, 14 insertions(+), 4 deletions(-)

diff --git a/libavformat/dashenc.c b/libavformat/dashenc.c
index 96f4a5fbdf..15f700acbc 100644
--- a/libavformat/dashenc.c
+++ b/libavformat/dashenc.c
@@ -1284,7 +1284,8 @@ static int write_manifest(AVFormatContext *s, int final)
 continue;
 get_hls_playlist_name(playlist_file, sizeof(playlist_file), 
NULL, i);
 ff_hls_write_audio_rendition(c->m3u8_out, audio_group,
- playlist_file, NULL, i, 
is_default);
+ playlist_file, NULL, i, 
is_default,
+ 
s->streams[i]->codecpar->ch_layout.nb_channels);
 max_audio_bitrate = FFMAX(st->codecpar->bit_rate +
   os->muxer_overhead, 
max_audio_bitrate);
 if (!av_strnstr(audio_codec_str, os->codec_str, 
sizeof(audio_codec_str))) {
diff --git a/libavformat/hlsenc.c b/libavformat/hlsenc.c
index 4ef84c05c1..7dfb8d0a9f 100644
--- a/libavformat/hlsenc.c
+++ b/libavformat/hlsenc.c
@@ -1386,6 +1386,7 @@ static int create_master_playlist(AVFormatContext *s,
 int is_file_proto = proto && !strcmp(proto, "file");
 int use_temp_file = is_file_proto && ((hls->flags & HLS_TEMP_FILE) || 
hls->master_publish_rate);
 char temp_filename[MAX_URL_SIZE];
+int nb_channels;
 
 input_vs->m3u8_created = 1;
 if (!hls->master_m3u8_created) {
@@ -1434,8 +1435,13 @@ static int create_master_playlist(AVFormatContext *s,
 av_log(s, AV_LOG_ERROR, "Unable to find relative URL\n");
 goto fail;
 }
+nb_channels = 0;
+for (j = 0; j < vs->nb_streams; j++)
+if (vs->streams[j]->codecpar->codec_type == AVMEDIA_TYPE_AUDIO)
+if (vs->streams[j]->codecpar->ch_layout.nb_channels > 
nb_channels)
+nb_channels = 
vs->streams[j]->codecpar->ch_layout.nb_channels;
 
-ff_hls_write_audio_rendition(hls->m3u8_out, vs->agroup, m3u8_rel_name, 
vs->language, i, hls->has_default_key ? vs->is_default : 1);
+ff_hls_write_audio_rendition(hls->m3u8_out, vs->agroup, m3u8_rel_name, 
vs->language, i, hls->has_default_key ? vs->is_default : 1, nb_channels);
 }
 
 /* For variant streams with video add #EXT-X-STREAM-INF tag with 
attributes*/
diff --git a/libavformat/hlsplaylist.c b/libavformat/hlsplaylist.c
index 2bf05f3c7c..4f35d0388f 100644
--- a/libavformat/hlsplaylist.c
+++ b/libavformat/hlsplaylist.c
@@ -39,7 +39,7 @@ void ff_hls_write_playlist_version(AVIOContext *out, int 
version)
 
 void ff_hls_write_audio_rendition(AVIOContext *out, const char *agroup,
   const char *filename, const char *language,
-  int name_id, int is_default)
+  int name_id, int is_default, int nb_channels)
 {
 if (!out || !agroup || !filename)
 return;
@@ -49,6 +49,9 @@ void ff_hls_write_audio_rendition(AVIOContext *out, const 
char *agroup,
 if (language) {
 avio_printf(out, "LANGUAGE=\"%s\",", language);
 }
+if (nb_channels) {
+avio_printf(out, "CHANNELS=\"%d\",", nb_channels);
+}
 avio_printf(out, "URI=\"%s\"\n", filename);
 }
 
diff --git a/libavformat/hlsplaylist.h b/libavformat/hlsplaylist.h
index 1928fe787d..c2744c227c 100644
--- a/libavformat/hlsplaylist.h
+++ b/libavformat/hlsplaylist.h
@@ -38,7 +38,7 @@ typedef enum {
 void ff_hls_write_playlist_version(AVIOContext *out, int version);
 void ff_hls_write_audio_rendition(AVIOContext *out, const char *agroup,
   const char *filename, const char *language,
-  int name_id, int is_default);
+  int name_id, int is_default, int 
nb_channels);
 void ff_hls_write_subtitle_rendition(AVIOContext *out, const char *sgroup,
  const char *filename, const char 
*language,
  int name_id, int is_default);
-- 
2.39.2 (Apple Git-143)
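
The net effect on the master playlist is an extra attribute on the
audio rendition line, along these lines (example output; the exact
GROUP-ID/NAME values depend on the stream setup):

    #EXT-X-MEDIA:TYPE=AUDIO,GROUP-ID="group_audio",NAME="audio_0",DEFAULT=YES,CHANNELS="2",URI="media_0.m3u8"

where the channel count is the largest ch_layout.nb_channels among the
variant's audio streams.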

___
ffmpeg-devel mailing list
ffmpeg-devel@ffmpeg.org
https://ffmpeg.org/mailman/listinfo/ffmpeg-devel

To unsubscribe, visit link above, or email
ffmpeg-devel-requ...@ffmpeg.org with subject "unsubscribe".


[FFmpeg-devel] [PATCH] Add init_program_date_time so start time can be specified

2023-10-26 Thread Dave Johansen
---
 doc/muxers.texi  |  3 +++
 libavformat/hlsenc.c | 41 +
 2 files changed, 28 insertions(+), 16 deletions(-)

diff --git a/doc/muxers.texi b/doc/muxers.texi
index f6071484ff..87c19a5cb9 100644
--- a/doc/muxers.texi
+++ b/doc/muxers.texi
@@ -1086,6 +1086,9 @@ seeking. This flag should be used with the 
@code{hls_time} option.
 @item program_date_time
 Generate @code{EXT-X-PROGRAM-DATE-TIME} tags.
 
+@item init_program_date_time
+Time to start program date time at.
+
 @item second_level_segment_index
 Makes it possible to use segment indexes as %%d in hls_segment_filename 
expression
 besides date/time values when strftime is on.
diff --git a/libavformat/hlsenc.c b/libavformat/hlsenc.c
index 4ef84c05c1..5dfff6b2b6 100644
--- a/libavformat/hlsenc.c
+++ b/libavformat/hlsenc.c
@@ -212,6 +212,8 @@ typedef struct HLSContext {
 int64_t recording_time;
 int64_t max_seg_size; // every segment file max size
 
+char *init_program_date_time;
+
 char *baseurl;
 char *vtt_format_options_str;
 char *subtitle_filename;
@@ -1192,6 +1194,25 @@ static int hls_append_segment(struct AVFormatContext *s, 
HLSContext *hls,
 return 0;
 }
 
+static double parse_iso8601(const char *ptr) {
+struct tm program_date_time;
+int y,M,d,h,m,s;
+double ms;
+if (sscanf(ptr, "%d-%d-%dT%d:%d:%d.%lf", &y, &M, &d, &h, &m, &s, &ms) != 7) {
+return -1;
+}
+
+program_date_time.tm_year = y - 1900;
+program_date_time.tm_mon = M - 1;
+program_date_time.tm_mday = d;
+program_date_time.tm_hour = h;
+program_date_time.tm_min = m;
+program_date_time.tm_sec = s;
+program_date_time.tm_isdst = -1;
+
+return mktime(&program_date_time) + (double)(ms / 1000);
+}
+
 static int parse_playlist(AVFormatContext *s, const char *url, VariantStream 
*vs)
 {
 HLSContext *hls = s->priv_data;
@@ -1257,24 +1278,11 @@ static int parse_playlist(AVFormatContext *s, const 
char *url, VariantStream *vs
 }
 }
 } else if (av_strstart(line, "#EXT-X-PROGRAM-DATE-TIME:", &ptr)) {
-struct tm program_date_time;
-int y,M,d,h,m,s;
-double ms;
-if (sscanf(ptr, "%d-%d-%dT%d:%d:%d.%lf", &y, &M, &d, &h, &m, &s, &ms) != 7) {
+discont_program_date_time = parse_iso8601(ptr);
+if (discont_program_date_time < 0) {
 ret = AVERROR_INVALIDDATA;
 goto fail;
 }
-
-program_date_time.tm_year = y - 1900;
-program_date_time.tm_mon = M - 1;
-program_date_time.tm_mday = d;
-program_date_time.tm_hour = h;
-program_date_time.tm_min = m;
-program_date_time.tm_sec = s;
-program_date_time.tm_isdst = -1;
-
-discont_program_date_time = mktime(&program_date_time);
-discont_program_date_time += (double)(ms / 1000);
 } else if (av_strstart(line, "#", NULL)) {
 continue;
 } else if (line[0]) {
@@ -2867,7 +2875,7 @@ static int hls_init(AVFormatContext *s)
 char *p = NULL;
 int http_base_proto = ff_is_http_proto(s->url);
 int fmp4_init_filename_len = strlen(hls->fmp4_init_filename) + 1;
-double initial_program_date_time = av_gettime() / 1000000.0;
+double initial_program_date_time = hls->init_program_date_time ? 
parse_iso8601(hls->init_program_date_time) : av_gettime() / 1000000.0;
 
 if (hls->use_localtime) {
 pattern = get_default_pattern_localtime_fmt(s);
@@ -3141,6 +3149,7 @@ static const AVOption options[] = {
 {"split_by_time", "split the hls segment by time which user set by 
hls_time", 0, AV_OPT_TYPE_CONST, {.i64 = HLS_SPLIT_BY_TIME }, 0, UINT_MAX,   E, 
"flags"},
 {"append_list", "append the new segments into old hls segment list", 0, 
AV_OPT_TYPE_CONST, {.i64 = HLS_APPEND_LIST }, 0, UINT_MAX,   E, "flags"},
 {"program_date_time", "add EXT-X-PROGRAM-DATE-TIME", 0, AV_OPT_TYPE_CONST, 
{.i64 = HLS_PROGRAM_DATE_TIME }, 0, UINT_MAX,   E, "flags"},
+{"init_program_date_time", "Time to start program date time at", 
OFFSET(init_program_date_time), AV_OPT_TYPE_STRING, .flags = E},
 {"second_level_segment_index", "include segment index in segment filenames 
when use_localtime", 0, AV_OPT_TYPE_CONST, {.i64 = 
HLS_SECOND_LEVEL_SEGMENT_INDEX }, 0, UINT_MAX,   E, "flags"},
 {"second_level_segment_duration", "include segment duration in segment 
filenames when use_localtime", 0, AV_OPT_TYPE_CONST, {.i64 = 
HLS_SECOND_LEVEL_SEGMENT_DURATION }, 0, UINT_MAX,   E, "flags"},
 {"second_level_segment_size", "include segment size in segment filenames 
when use_localtime", 0, AV_OPT_TYPE_CONST, {.i64 = 
HLS_SECOND_LEVEL_SEGMENT_SIZE }, 0, UINT_MAX,   E, "flags"},
-- 
2.39.2 (Apple Git-143)

___
ffmpeg-devel mailing list
ffmpeg-devel@ffmpeg.org
https://ffmpeg.org/mailman/listinfo/ffmpeg-devel

To unsubscribe, visit link above, or email

[FFmpeg-devel] [PATCH] Add init_program_date_time so start time can be specified

2023-10-17 Thread Dave Johansen
---
 doc/muxers.texi  | 3 +++
 libavformat/hlsenc.c | 7 ++-
 2 files changed, 9 insertions(+), 1 deletion(-)

diff --git a/doc/muxers.texi b/doc/muxers.texi
index f6071484ff..87c19a5cb9 100644
--- a/doc/muxers.texi
+++ b/doc/muxers.texi
@@ -1086,6 +1086,9 @@ seeking. This flag should be used with the 
@code{hls_time} option.
 @item program_date_time
 Generate @code{EXT-X-PROGRAM-DATE-TIME} tags.
 
+@item init_program_date_time
+Time to start program date time at.
+
 @item second_level_segment_index
 Makes it possible to use segment indexes as %%d in hls_segment_filename 
expression
 besides date/time values when strftime is on.
diff --git a/libavformat/hlsenc.c b/libavformat/hlsenc.c
index 4ef84c05c1..474322cc21 100644
--- a/libavformat/hlsenc.c
+++ b/libavformat/hlsenc.c
@@ -28,6 +28,8 @@
 #include <unistd.h>
 #endif
 
+#include "float.h"
+
 #include "libavutil/avassert.h"
 #include "libavutil/mathematics.h"
 #include "libavutil/avstring.h"
@@ -212,6 +214,8 @@ typedef struct HLSContext {
 int64_t recording_time;
 int64_t max_seg_size; // every segment file max size
 
+double init_program_date_time;
+
 char *baseurl;
 char *vtt_format_options_str;
 char *subtitle_filename;
@@ -2867,7 +2871,7 @@ static int hls_init(AVFormatContext *s)
 char *p = NULL;
 int http_base_proto = ff_is_http_proto(s->url);
 int fmp4_init_filename_len = strlen(hls->fmp4_init_filename) + 1;
-double initial_program_date_time = av_gettime() / 1000000.0;
+double initial_program_date_time = hls->init_program_date_time ? 
hls->init_program_date_time : av_gettime() / 1000000.0;
 
 if (hls->use_localtime) {
 pattern = get_default_pattern_localtime_fmt(s);
@@ -3141,6 +3145,7 @@ static const AVOption options[] = {
 {"split_by_time", "split the hls segment by time which user set by 
hls_time", 0, AV_OPT_TYPE_CONST, {.i64 = HLS_SPLIT_BY_TIME }, 0, UINT_MAX,   E, 
"flags"},
 {"append_list", "append the new segments into old hls segment list", 0, 
AV_OPT_TYPE_CONST, {.i64 = HLS_APPEND_LIST }, 0, UINT_MAX,   E, "flags"},
 {"program_date_time", "add EXT-X-PROGRAM-DATE-TIME", 0, AV_OPT_TYPE_CONST, 
{.i64 = HLS_PROGRAM_DATE_TIME }, 0, UINT_MAX,   E, "flags"},
+{"init_program_date_time", "Time to start program date time at", 
OFFSET(init_program_date_time), AV_OPT_TYPE_DOUBLE, {.dbl = 0 }, 0, DBL_MAX,   
E},
 {"second_level_segment_index", "include segment index in segment filenames 
when use_localtime", 0, AV_OPT_TYPE_CONST, {.i64 = 
HLS_SECOND_LEVEL_SEGMENT_INDEX }, 0, UINT_MAX,   E, "flags"},
 {"second_level_segment_duration", "include segment duration in segment 
filenames when use_localtime", 0, AV_OPT_TYPE_CONST, {.i64 = 
HLS_SECOND_LEVEL_SEGMENT_DURATION }, 0, UINT_MAX,   E, "flags"},
 {"second_level_segment_size", "include segment size in segment filenames 
when use_localtime", 0, AV_OPT_TYPE_CONST, {.i64 = 
HLS_SECOND_LEVEL_SEGMENT_SIZE }, 0, UINT_MAX,   E, "flags"},
-- 
2.39.2 (Apple Git-143)

___
ffmpeg-devel mailing list
ffmpeg-devel@ffmpeg.org
https://ffmpeg.org/mailman/listinfo/ffmpeg-devel

To unsubscribe, visit link above, or email
ffmpeg-devel-requ...@ffmpeg.org with subject "unsubscribe".


Re: [FFmpeg-devel] [PATCH] av1dec: handle dimension changes via get_format

2023-06-21 Thread Dave Airlie
On Thu, 22 Jun 2023 at 07:36, James Almer  wrote:
>
> On 6/21/2023 6:15 PM, Dave Airlie wrote:
> > On Thu, 22 Jun 2023 at 02:36, James Almer  wrote:
> >>
> >> On 6/20/2023 8:36 PM, airl...@gmail.com wrote:
> >>> From: Dave Airlie 
> >>>
> >>> av1-1-b8-03-sizeup.ivf on vulkan causes gpu hangs as none of the
> >>> images get resized when dimensions change, this detects the dim
> >>> change and calls the get_format to reinit the context.
> >>> ---
> >>>libavcodec/av1dec.c | 12 
> >>>1 file changed, 8 insertions(+), 4 deletions(-)
> >>>
> >>> diff --git a/libavcodec/av1dec.c b/libavcodec/av1dec.c
> >>> index e7f98a6c81..1cec328563 100644
> >>> --- a/libavcodec/av1dec.c
> >>> +++ b/libavcodec/av1dec.c
> >>> @@ -721,6 +721,7 @@ static av_cold int av1_decode_free(AVCodecContext 
> >>> *avctx)
> >>>}
> >>>
> >>>static int set_context_with_sequence(AVCodecContext *avctx,
> >>> + int *dim_change,
> >>> const AV1RawSequenceHeader *seq)
> >>>{
> >>>int width = seq->max_frame_width_minus_1 + 1;
> >>> @@ -753,6 +754,8 @@ static int set_context_with_sequence(AVCodecContext 
> >>> *avctx,
> >>>int ret = ff_set_dimensions(avctx, width, height);
> >>>if (ret < 0)
> >>>return ret;
> >>> +if (dim_change)
> >>> +*dim_change = 1;
> >>>}
> >>>avctx->sample_aspect_ratio = (AVRational) { 1, 1 };
> >>>
> >>> @@ -859,7 +862,7 @@ static av_cold int av1_decode_init(AVCodecContext 
> >>> *avctx)
> >>>goto end;
> >>>}
> >>>
> >>> -ret = set_context_with_sequence(avctx, seq);
> >>> +ret = set_context_with_sequence(avctx, NULL, seq);
> >>>if (ret < 0) {
> >>>av_log(avctx, AV_LOG_WARNING, "Failed to set decoder 
> >>> context.\n");
> >>>goto end;
> >>> @@ -1202,7 +1205,7 @@ static int 
> >>> av1_receive_frame_internal(AVCodecContext *avctx, AVFrame *frame)
> >>>CodedBitstreamUnit *unit = &s->current_obu.units[i];
> >>>AV1RawOBU *obu = unit->content;
> >>>const AV1RawOBUHeader *header;
> >>> -
> >>> +int dim_change = 0;
> >>>if (!obu)
> >>>continue;
> >>>
> >>> @@ -1220,7 +1223,8 @@ static int 
> >>> av1_receive_frame_internal(AVCodecContext *avctx, AVFrame *frame)
> >>>
> >>>s->raw_seq = &obu->obu.sequence_header;
> >>>
> >>> -ret = set_context_with_sequence(avctx, s->raw_seq);
> >>> +dim_change = 0;
> >>> +ret = set_context_with_sequence(avctx, &dim_change, 
> >>> s->raw_seq);
> >>>if (ret < 0) {
> >>>av_log(avctx, AV_LOG_ERROR, "Failed to set 
> >>> context.\n");
> >>>s->raw_seq = NULL;
> >>> @@ -1229,7 +1233,7 @@ static int 
> >>> av1_receive_frame_internal(AVCodecContext *avctx, AVFrame *frame)
> >>>
> >>>s->operating_point_idc = 
> >>> s->raw_seq->operating_point_idc[s->operating_point];
> >>>
> >>> -if (s->pix_fmt == AV_PIX_FMT_NONE) {
> >>> +if (s->pix_fmt == AV_PIX_FMT_NONE || dim_change) {
> >>>ret = get_pixel_format(avctx);
> >>
> >> Dimensions can change between frames without a seq header showing up to
> >> change the max_width/height values. get_pixel_format() would need to be
> >> called on frame headers instead.
> >
> > It can but my reading of the spec is that it is illegal to go beyond
> > the max in sequence header.
> >
> > 6.8 Frame header OBU semantics
> > 6.8.4 Frame size semantics
> >
> > "
> > It is a requirement of bitstream conformance that frame_width_minus_1
> > is less than or equal to max_frame_width_minus_1.
> > It is a requirement of bitstream conformance that frame_height_minus_1
> > is less than or equal to max_frame_height_minus_1.
> > "
> >
> > Dave.
>
> So Vulkan always allocates buffers for the max allowed dimensions and
> not for the current frame's dimensions?

It doesn't seem to be in vulkan specific code:

av1dec.c:set_context_with_sequence
int width = seq->max_frame_width_minus_1 + 1;
int height = seq->max_frame_height_minus_1 + 1;

is where it sets the values later used to allocate the frames.

Dave.
___
ffmpeg-devel mailing list
ffmpeg-devel@ffmpeg.org
https://ffmpeg.org/mailman/listinfo/ffmpeg-devel

To unsubscribe, visit link above, or email
ffmpeg-devel-requ...@ffmpeg.org with subject "unsubscribe".


Re: [FFmpeg-devel] [PATCH] av1dec: handle dimension changes via get_format

2023-06-21 Thread Dave Airlie
On Thu, 22 Jun 2023 at 02:36, James Almer  wrote:
>
> On 6/20/2023 8:36 PM, airl...@gmail.com wrote:
> > From: Dave Airlie 
> >
> > av1-1-b8-03-sizeup.ivf on vulkan causes gpu hangs as none of the
> > images get resized when dimensions change, this detects the dim
> > change and calls the get_format to reinit the context.
> > ---
> >   libavcodec/av1dec.c | 12 
> >   1 file changed, 8 insertions(+), 4 deletions(-)
> >
> > diff --git a/libavcodec/av1dec.c b/libavcodec/av1dec.c
> > index e7f98a6c81..1cec328563 100644
> > --- a/libavcodec/av1dec.c
> > +++ b/libavcodec/av1dec.c
> > @@ -721,6 +721,7 @@ static av_cold int av1_decode_free(AVCodecContext 
> > *avctx)
> >   }
> >
> >   static int set_context_with_sequence(AVCodecContext *avctx,
> > + int *dim_change,
> >const AV1RawSequenceHeader *seq)
> >   {
> >   int width = seq->max_frame_width_minus_1 + 1;
> > @@ -753,6 +754,8 @@ static int set_context_with_sequence(AVCodecContext 
> > *avctx,
> >   int ret = ff_set_dimensions(avctx, width, height);
> >   if (ret < 0)
> >   return ret;
> > +if (dim_change)
> > +*dim_change = 1;
> >   }
> >   avctx->sample_aspect_ratio = (AVRational) { 1, 1 };
> >
> > @@ -859,7 +862,7 @@ static av_cold int av1_decode_init(AVCodecContext 
> > *avctx)
> >   goto end;
> >   }
> >
> > -ret = set_context_with_sequence(avctx, seq);
> > +ret = set_context_with_sequence(avctx, NULL, seq);
> >   if (ret < 0) {
> >   av_log(avctx, AV_LOG_WARNING, "Failed to set decoder 
> > context.\n");
> >   goto end;
> > @@ -1202,7 +1205,7 @@ static int av1_receive_frame_internal(AVCodecContext 
> > *avctx, AVFrame *frame)
> >   CodedBitstreamUnit *unit = &s->current_obu.units[i];
> >   AV1RawOBU *obu = unit->content;
> >   const AV1RawOBUHeader *header;
> > -
> > +int dim_change = 0;
> >   if (!obu)
> >   continue;
> >
> > @@ -1220,7 +1223,8 @@ static int av1_receive_frame_internal(AVCodecContext 
> > *avctx, AVFrame *frame)
> >
> >   s->raw_seq = &obu->obu.sequence_header;
> >
> > -ret = set_context_with_sequence(avctx, s->raw_seq);
> > +dim_change = 0;
> > +ret = set_context_with_sequence(avctx, &dim_change, 
> > s->raw_seq);
> >   if (ret < 0) {
> >   av_log(avctx, AV_LOG_ERROR, "Failed to set context.\n");
> >   s->raw_seq = NULL;
> > @@ -1229,7 +1233,7 @@ static int av1_receive_frame_internal(AVCodecContext 
> > *avctx, AVFrame *frame)
> >
> >   s->operating_point_idc = 
> > s->raw_seq->operating_point_idc[s->operating_point];
> >
> > -if (s->pix_fmt == AV_PIX_FMT_NONE) {
> > +if (s->pix_fmt == AV_PIX_FMT_NONE || dim_change) {
> >   ret = get_pixel_format(avctx);
>
> Dimensions can change between frames without a seq header showing up to
> change the max_width/height values. get_pixel_format() would need to be
> called on frame headers instead.

It can but my reading of the spec is that it is illegal to go beyond
the max in sequence header.

6.8 Frame header OBU semantics
6.8.4 Frame size semantics

"
It is a requirement of bitstream conformance that frame_width_minus_1
is less than or equal to max_frame_width_minus_1.
It is a requirement of bitstream conformance that frame_height_minus_1
is less than or equal to max_frame_height_minus_1.
"

Dave.
___
ffmpeg-devel mailing list
ffmpeg-devel@ffmpeg.org
https://ffmpeg.org/mailman/listinfo/ffmpeg-devel

To unsubscribe, visit link above, or email
ffmpeg-devel-requ...@ffmpeg.org with subject "unsubscribe".


Re: [FFmpeg-devel] avformat/mxfenc: SMPTE RDD 48:2018 Amd 1:2022 (FFV1 in MXF) support

2023-01-29 Thread Dave Rice


> On Jan 20, 2023, at 10:17 AM, Tomas Härdin  wrote:
> 
> ons 2023-01-18 klockan 15:15 +0100 skrev Jerome Martinez:
>> On 18/01/2023 14:40, Tomas Härdin wrote:
>>> Creating a new subthread because I just noticed something
>> 
>> I am a bit lost there because the line of code below is not part of
>> this 
>> FFV1 patch.
>> Additionally, none on my patches (FFV1 of MXF
>> stored/sampled/displayed 
>> fix) modifies the discussed behavior (FFmpeg behavior would be same 
>> before and after this patch for MPEG-2 and AVC), so should not block
>> any 
>> of them, and a potential fix for that should have its own patch as it
>> would be a separate issue.
> 
> True, it doesn't need to hold up this patch. But some discussion is
> warranted I think. I might create a separate patchset for this.
> 
>> 
>> Anyway:
>> 
>> 
>>> 
>>>> +//Stored height
>>>>   mxf_write_local_tag(s, 4, 0x3202);
>>>>   avio_wb32(pb, stored_height>>sc->interlaced);
>>>> 
>>> Won't this be incorrect for files whose dimensions are multiples of
>>> 16
>>> but not multiples of 32? Isn't each field stored separately with
>>> dimensions a multiple of 16? So while for 1080p we'll have
>>> 
>>>StoredHeight = 1088
>>>SampledHeight = 1080
>>> 
>>> and 1080i:
>>> 
>>>StoredHeight = 544
>>>SampledHeight = 540
>>> 
>>> Where 544 is a multiple of 16, for say 720p we have
>>> 
>>>StoredHeight = 720
>>>SampledHeight = 720
>>> 
>>> but for a hypothetical 720i we'd get
>>> 
>>>StoredHeight = 360
>>>SampledHeight = 360
>>> 
>>> whereas the correct values should be
>>> 
>>>StoredHeight = 368
>>>SampledHeight = 360
>> 
>> AFAIK, it would depend about if the stream has a picture_structure
>> frame 
>> (16x16 applies to the frame?) of field (16x16 applies to the field?),
>> but I really don't know enough there for having a relevant opinion.
>> 
>> I can just say that I don't change the behavior of FFmpeg in your use
>> case, I found the issues when I tried a random width and height of
>> FFV1 
>> stream then checked with MPEG-2 Video and the sampled width was wrong
>> for sure e.g. sampled width of 1920 for a stream having a width of
>> 1912, 
>> with current FFmpeg code, and for your use case I am sure about
>> nothing 
>> so I don't change the behavior with my patch, IMO if there is an
>> issue 
>> with 720i MPEG-2 Video it should be in a separate topic and patch as
>> it 
>> would modify the "stored_height = (st->codecpar->height+15)/16*16" 
>> current code (in my patch I just move this code), unless we are sure
>> of 
>> what should be changed on this side and apply a fix on the way.
>> Better 
>> to fix 1 issue and let 1 open with no change than fixing no issue 
>> because we wouldn't be sure for 1 of the 2.
> 
> I suspect we are lucky because 720i doesn't really exist in the real
> world, and 576i and 480i are both multiples of 32.
> 
> IMO mxfenc shouldn't lie, but looking at S377m StoredWidth/Height are
> "best effort" and thus shall be encoded. Their values will depend on
> FrameLayout which in turn depends on what you say - how exactly the
> interlacing is done.
> 
> TL;DR: this patchset doesn't need to be held up by this.
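
To spell out the arithmetic being discussed (a sketch, not a patch):
the first two lines are what mxfenc currently does, the comment is the
per-field rounding Tomas is pointing at:

    stored_height = (st->codecpar->height + 15) / 16 * 16; /* 1080 -> 1088, 720 -> 720 */
    avio_wb32(pb, stored_height >> sc->interlaced);        /* 1080i -> 544, hypothetical 720i -> 360 */
    /* rounding per field would give ((720 >> 1) + 15) / 16 * 16 = 368 */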

I'm just nudging on the consideration of merging this patch. I've been testing 
it over the last week with ffv1/mxf content and have found this demuxing 
support very helpful.
Dave Rice
___
ffmpeg-devel mailing list
ffmpeg-devel@ffmpeg.org
https://ffmpeg.org/mailman/listinfo/ffmpeg-devel

To unsubscribe, visit link above, or email
ffmpeg-devel-requ...@ffmpeg.org with subject "unsubscribe".


Re: [FFmpeg-devel] [PATCH 1/3] avformat/mxfdec: SMPTE RDD 48:2018 support

2022-07-13 Thread Dave Rice
Y(uid, mxf_ffv1_extradata)) {
> +if (ffv1_sub_descriptor->extradata)
> +av_log(NULL, AV_LOG_WARNING, "Duplicate ffv1_extradata\n");
> +av_free(ffv1_sub_descriptor->extradata);
> +ffv1_sub_descriptor->extradata_size = 0;
> +ffv1_sub_descriptor->extradata = av_malloc(size);
> +if (!ffv1_sub_descriptor->extradata)
> +return AVERROR(ENOMEM);
> +ffv1_sub_descriptor->extradata_size = size;
> +avio_read(pb, ffv1_sub_descriptor->extradata, size);
> +}
> +
> +return 0;
> +}
> +
> static int mxf_read_indirect_value(void *arg, AVIOContext *pb, int size)
> {
> MXFTaggedValue *tagged_value = arg;
> @@ -1554,6 +1583,7 @@ static const MXFCodecUL 
> mxf_picture_essence_container_uls[] = {
> { { 
> 0x06,0x0e,0x2b,0x34,0x04,0x01,0x01,0x02,0x0d,0x01,0x03,0x01,0x02,0x1c,0x01,0x00
>  }, 14, AV_CODEC_ID_PRORES, NULL, 14 }, /* ProRes */
> { { 
> 0x06,0x0e,0x2b,0x34,0x04,0x01,0x01,0x02,0x0d,0x01,0x03,0x01,0x02,0x04,0x60,0x01
>  }, 14, AV_CODEC_ID_MPEG2VIDEO, NULL, 15 }, /* MPEG-ES */
> { { 
> 0x06,0x0e,0x2b,0x34,0x04,0x01,0x01,0x01,0x0d,0x01,0x03,0x01,0x02,0x01,0x04,0x01
>  }, 14, AV_CODEC_ID_MPEG2VIDEO, NULL, 15, D10D11Wrap }, /* SMPTE D-10 mapping 
> */
> +{ { 
> 0x06,0x0e,0x2b,0x34,0x04,0x01,0x01,0x0d,0x0d,0x01,0x03,0x01,0x02,0x23,0x01,0x00
>  }, 14,   AV_CODEC_ID_FFV1, NULL, 14 },
> { { 
> 0x06,0x0e,0x2b,0x34,0x04,0x01,0x01,0x01,0x0d,0x01,0x03,0x01,0x02,0x02,0x41,0x01
>  }, 14,AV_CODEC_ID_DVVIDEO, NULL, 15 }, /* DV 625 25mbps */
> { { 
> 0x06,0x0e,0x2b,0x34,0x04,0x01,0x01,0x01,0x0d,0x01,0x03,0x01,0x02,0x05,0x00,0x00
>  }, 14,   AV_CODEC_ID_RAWVIDEO, NULL, 15, RawVWrap }, /* uncompressed picture 
> */
> { { 
> 0x06,0x0e,0x2b,0x34,0x04,0x01,0x01,0x0a,0x0e,0x0f,0x03,0x01,0x02,0x20,0x01,0x01
>  }, 15, AV_CODEC_ID_HQ_HQA },
> @@ -2444,6 +2474,21 @@ static MXFMCASubDescriptor 
> *find_mca_link_id(MXFContext *mxf, enum MXFMetadataSe
> return NULL;
> }
> 
> +static void parse_ffv1_sub_descriptor(MXFContext *mxf, MXFTrack 
> *source_track, MXFDescriptor *descriptor, AVStream *st)
> +{
> +for (int i = 0; i < descriptor->sub_descriptors_count; i++) {
> +MXFFFV1SubDescriptor *ffv1_sub_descriptor = 
> mxf_resolve_strong_ref(mxf, &descriptor->sub_descriptors_refs[i], 
> FFV1SubDescriptor);
> +if (ffv1_sub_descriptor == NULL)
> +continue;
> +
> +descriptor->extradata  = ffv1_sub_descriptor->extradata;
> +descriptor->extradata_size = ffv1_sub_descriptor->extradata_size;
> +ffv1_sub_descriptor->extradata = NULL;
> +ffv1_sub_descriptor->extradata_size = 0;
> +break;
> +}
> +}
> +
> static int parse_mca_labels(MXFContext *mxf, MXFTrack *source_track, 
> MXFDescriptor *descriptor, AVStream *st)
> {
> uint64_t routing[FF_SANE_NB_CHANNELS] = {0};
> @@ -2972,6 +3017,8 @@ static int mxf_parse_structural_metadata(MXFContext 
> *mxf)
> st->codecpar->codec_id = AV_CODEC_ID_EIA_608;
> }
> }
> +if (!descriptor->extradata)
> +parse_ffv1_sub_descriptor(mxf, source_track, descriptor, st);
> if (descriptor->extradata) {
> if (!ff_alloc_extradata(st->codecpar, 
> descriptor->extradata_size)) {
> memcpy(st->codecpar->extradata, descriptor->extradata, 
> descriptor->extradata_size);
> @@ -3159,6 +3206,7 @@ static const MXFMetadataReadTableEntry 
> mxf_metadata_read_table[] = {
> { { 
> 0x06,0x0e,0x2b,0x34,0x02,0x53,0x01,0x01,0x0d,0x01,0x01,0x01,0x01,0x01,0x6b,0x00
>  }, mxf_read_mca_sub_descriptor, sizeof(MXFMCASubDescriptor), 
> AudioChannelLabelSubDescriptor },
> { { 
> 0x06,0x0e,0x2b,0x34,0x02,0x53,0x01,0x01,0x0d,0x01,0x01,0x01,0x01,0x01,0x6c,0x00
>  }, mxf_read_mca_sub_descriptor, sizeof(MXFMCASubDescriptor), 
> SoundfieldGroupLabelSubDescriptor },
> { { 
> 0x06,0x0e,0x2b,0x34,0x02,0x53,0x01,0x01,0x0d,0x01,0x01,0x01,0x01,0x01,0x6d,0x00
>  }, mxf_read_mca_sub_descriptor, sizeof(MXFMCASubDescriptor), 
> GroupOfSoundfieldGroupsLabelSubDescriptor },
> +{ { 
> 0x06,0x0e,0x2b,0x34,0x02,0x53,0x01,0x01,0x0d,0x01,0x01,0x01,0x01,0x01,0x81,0x03
>  }, mxf_read_ffv1_sub_descriptor, sizeof(MXFFFV1SubDescriptor), 
> FFV1SubDescriptor },
> { { 
> 0x06,0x0e,0x2b,0x34,0x02,0x53,0x01,0x01,0x0d,0x01,0x01,0x01,0x01,0x01,0x3A,0x00
>  }, mxf_read_track, sizeof(MXFTrack), Track }, /* Static Track */
> { { 
> 0x06,0x0e,0x2b,0x34,0x02,0x53,0x01,0x01,0x0d,0x01,0x01,0x01,0x01,0x01,0x3B,0x00
>  }, mxf_read_track, sizeof(MXFTrack), Track }, /* Generic Track */
> { { 
> 0x06,0x0e,0x2b,0x34,0x02,0x53,0x01,0x01,0x0d,0x01,0x01,0x01,0x01,0x01,0x14,0x00
>  }, mxf_read_timecode_component, sizeof(MXFTimecodeComponent), 
> TimecodeComponent },
> -- 
> 2.17.1

For those interested, this patch supports SMPTE RDD48 which is at 
https://www.digitizationguidelines.gov/guidelines/rdd48-2018_published.pdf 
and is one of SMPTE’s first documents published with a Creative Commons 
license. 
Dave Rice

___
ffmpeg-devel mailing list
ffmpeg-devel@ffmpeg.org
https://ffmpeg.org/mailman/listinfo/ffmpeg-devel

To unsubscribe, visit link above, or email
ffmpeg-devel-requ...@ffmpeg.org with subject "unsubscribe".


Re: [FFmpeg-devel] [PATCH v2] avcodec/v4l2_m2m_dec: export v4l2 buffer dma-buf

2022-03-24 Thread Dave Stevenson
_init(ctx->hwdevice);
> +if (ret < 0)
> +goto  fail;
> +
> +ctx->hwframes = av_hwframe_ctx_alloc(ctx->hwdevice);
> +if (!ctx->hwframes) {
> +ret = AVERROR(ENOMEM);
> +goto fail;
> +}
> +
> +hwframes = (AVHWFramesContext*)ctx->hwframes->data;
> +hwframes->format = AV_PIX_FMT_DRM_PRIME;
> +hwframes->sw_format = ctx->av_pix_fmt;
> +hwframes->width = ctx->width;
> +hwframes->height = ctx->height;
> +ret = av_hwframe_ctx_init(ctx->hwframes);
> +if (ret < 0)
> +goto fail;
> +
> +return 0;
> +fail:
> +ff_v4l2_context_uninit_hw_ctx(ctx);
> +ctx->support_dma_buf = 0;
> +return ret;
> +}
> +
> +void ff_v4l2_context_uninit_hw_ctx(V4L2Context *ctx)
> +{
> +av_buffer_unref(>hwframes);
> +av_buffer_unref(>hwdevice);
> +}
> +
>  void ff_v4l2_context_release(V4L2Context* ctx)
>  {
>  int ret;
> @@ -708,6 +759,7 @@ void ff_v4l2_context_release(V4L2Context* ctx)
>  av_log(logger(ctx), AV_LOG_WARNING, "V4L2 failed to unmap the %s 
> buffers\n", ctx->name);
>
>  av_freep(>buffers);
> +ff_v4l2_context_uninit_hw_ctx(ctx);
>  }
>
>  int ff_v4l2_context_init(V4L2Context* ctx)
> @@ -742,6 +794,7 @@ int ff_v4l2_context_init(V4L2Context* ctx)
>  return AVERROR(ENOMEM);
>  }
>
> +ctx->support_dma_buf = 1;
>  for (i = 0; i < req.count; i++) {
>  ctx->buffers[i].context = ctx;
>  ret = ff_v4l2_buffer_initialize(>buffers[i], i);
> diff --git a/libavcodec/v4l2_context.h b/libavcodec/v4l2_context.h
> index 6f7460c89a9d..723d622e38c3 100644
> --- a/libavcodec/v4l2_context.h
> +++ b/libavcodec/v4l2_context.h
> @@ -93,6 +93,9 @@ typedef struct V4L2Context {
>   */
>  int done;
>
> +int support_dma_buf;
> +AVBufferRef *hwdevice;
> +AVBufferRef *hwframes;
>  } V4L2Context;
>
>  /**
> @@ -184,4 +187,18 @@ int ff_v4l2_context_enqueue_packet(V4L2Context* ctx, 
> const AVPacket* pkt);
>   */
>  int ff_v4l2_context_enqueue_frame(V4L2Context* ctx, const AVFrame* f);
>
> +/**
> + * Initializes the hw context of V4L2Context.
> + *
> + * @param[in] ctx A pointer to a V4L2Context. See V4L2Context description 
> for required variables.
> + * @return 0 in case of success, a negative value representing the error 
> otherwise.
> + */
> +int ff_v4l2_context_init_hw_ctx(V4L2Context *ctx);
> +
> +/**
> + * Releases the hw context of V4L2Context.
> + *
> + * @param[in] ctx A pointer to a V4L2Context.
> + */
> +void ff_v4l2_context_uninit_hw_ctx(V4L2Context *ctx);
>  #endif // AVCODEC_V4L2_CONTEXT_H
> diff --git a/libavcodec/v4l2_fmt.c b/libavcodec/v4l2_fmt.c
> index 6df47e3f5a3c..a64b6d530283 100644
> --- a/libavcodec/v4l2_fmt.c
> +++ b/libavcodec/v4l2_fmt.c
> @@ -29,83 +29,91 @@
>  #define AV_CODEC(x) AV_CODEC_ID_##x
>  #define AV_FMT(x)   AV_PIX_FMT_##x
>
> +#if CONFIG_LIBDRM
> +#include 
> +#define DRM_FMT(x)  DRM_FORMAT_##x
> +#else
> +#define DRM_FMT(x)  0
> +#endif
> +
>  static const struct fmt_conversion {
>  enum AVPixelFormat avfmt;
>  enum AVCodecID avcodec;
>  uint32_t v4l2_fmt;
> +uint32_t drm_fmt;
>  } fmt_map[] = {
> -{ AV_FMT(RGB555LE),AV_CODEC(RAWVIDEO),V4L2_FMT(RGB555) },
> -{ AV_FMT(RGB555BE),AV_CODEC(RAWVIDEO),V4L2_FMT(RGB555X) },
> -{ AV_FMT(RGB565LE),AV_CODEC(RAWVIDEO),V4L2_FMT(RGB565) },
> -{ AV_FMT(RGB565BE),AV_CODEC(RAWVIDEO),V4L2_FMT(RGB565X) },
> -{ AV_FMT(BGR24),   AV_CODEC(RAWVIDEO),V4L2_FMT(BGR24) },
> -{ AV_FMT(RGB24),   AV_CODEC(RAWVIDEO),V4L2_FMT(RGB24) },
> -{ AV_FMT(BGR0),AV_CODEC(RAWVIDEO),V4L2_FMT(BGR32) },
> -{ AV_FMT(0RGB),AV_CODEC(RAWVIDEO),V4L2_FMT(RGB32) },
> -{ AV_FMT(GRAY8),   AV_CODEC(RAWVIDEO),V4L2_FMT(GREY) },
> -{ AV_FMT(YUV420P), AV_CODEC(RAWVIDEO),V4L2_FMT(YUV420) },
> -{ AV_FMT(YUYV422), AV_CODEC(RAWVIDEO),V4L2_FMT(YUYV) },
> -    { AV_FMT(UYVY422), AV_CODEC(RAWVIDEO),V4L2_FMT(UYVY) },
> -{ AV_FMT(YUV422P), AV_CODEC(RAWVIDEO),V4L2_FMT(YUV422P) },
> -{ AV_FMT(YUV411P), AV_CODEC(RAWVIDEO),V4L2_FMT(YUV411P) },
> -{ AV_FMT(YUV410P), AV_CODEC(RAWVIDEO),V4L2_FMT(YUV410) },
> -{ AV_FMT(YUV410P), AV_CODEC(RAWVIDEO),V4L2_FMT(YVU410) },
> -{ AV_FMT(NV12),AV_CODEC(RAWVIDEO),V4L2_FMT(NV12) },
> -{ AV_FMT(NONE),AV_CODEC(MJPEG),   V4L2_FMT(MJPEG) },
> -{ AV_FMT(NONE),AV_CODEC(MJPEG),   V4L2_FMT(JPEG) },
> +{ AV_FMT(RGB555LE),AV_CO

Re: [FFmpeg-devel] [PATCH v2] avcodec: Add dv marker bsf

2022-03-14 Thread Dave Rice



> On Mar 12, 2022, at 1:09 PM, Michael Niedermayer  
> wrote:
> 
> On Sat, Mar 12, 2022 at 10:11:52AM -0500, Dave Rice wrote:
>> 
>> 
>>> On Mar 10, 2022, at 4:41 AM, Tobias Rapp  wrote:
>>> 
>>> On 09/03/2022 19:18, Michael Niedermayer wrote:
>>>> Signed-off-by: Michael Niedermayer 
>>>> ---
>>>> doc/bitstream_filters.texi   |  30 
>>>> libavcodec/Makefile  |   1 +
>>>> libavcodec/bitstream_filters.c   |   1 +
>>>> libavcodec/dv_error_marker_bsf.c | 127 +++
>>>> 4 files changed, 159 insertions(+)
>>>> create mode 100644 libavcodec/dv_error_marker_bsf.c
>>>> diff --git a/doc/bitstream_filters.texi b/doc/bitstream_filters.texi
>>>> index a0092878c8..8c5d84dceb 100644
>>>> --- a/doc/bitstream_filters.texi
>>>> +++ b/doc/bitstream_filters.texi
>>>> @@ -132,6 +132,36 @@ the header stored in extradata to the key packets:
>>>> ffmpeg -i INPUT -map 0 -flags:v +global_header -c:v libx264 -bsf:v 
>>>> dump_extra out.ts
>>>> @end example
>>>> +@section dv_error_marker
>>>> +
>>>> +Blocks in DV which are marked as damaged are replaced by blocks of the 
>>>> specified color.
>>>> +
>>>> +@table @option
>>>> +@item color
>>>> +The color to replace damaged blocks by
>>>> +@item sta
>>>> +A 16 bit mask which specifies which of the 16 possible error status 
>>>> values are
>>>> +to be replaced by colored blocks. 0xFFFE is the default which replaces 
>>>> all non 0
>>>> +error status values.
>>>> +@table @samp
>>>> +@item ok
>>>> +No error, no concealment
>>>> +@item err
>>>> +Error, No concealment
>>>> +@item res
>>>> +Reserved
>>>> +@item notok
>>>> +Error or concealment
>>>> +@item notres
>>>> +Not reserved
>>>> +@item Aa, Ba, Ca, Ab, Bb, Cb, A, B, C, a, b, erri, erru
>>>> +The specific error status code
>>>> +@end table
>>>> +see page 44-46 or section 5.5 of
>>>> +@url{http://web.archive.org/web/20060927044735/http://www.smpte.org/smpte_store/standards/pdf/s314m.pdf}
>>>> +
>>>> +@end table
>>>> +
>>>> @section eac3_core
>>>> [...]
>>> The filter options look nice to me now. Have not actually tested the 
>>> bitstream filter on DV files, though.
>> 
>> I tested this and this works well for me. Here's a few samples that 
>> demonstrate the filter:
>> 
>> ./ffmpeg -i 
>> https://samples.ffmpeg.org/archive/audio/pcm_s16le/dv+dvvideo+pcm_s16le++dropout.dv
>>   -bsf dv_error_marker=sta=b -f rawvideo -c:v copy - | ffplay -
>> ./ffmpeg -i 
>> https://archive.org/download/DvAnalyzerSampleDvVideoErrorConcealment/DV_Analyzer_Sample_Video_Error_Concealment_original.dv
>>  -bsf dv_error_marker=sta=b -f rawvideo -c:v copy - | ffplay -
> 
> I tested a bit more and it failed with dvcprohd, i have fixed it and will in a
> moment post a version that seems to work with both
> please retest

I retested the new version on a variety of DV25 and DV50 content. Looks good to
me.

> PS: i used some artificially damaged files from fate/dv/
> 
> thx
> 
> [...]
> -- 
> Michael GnuPG fingerprint: 9FF2128B147EF6730BADF133611EC787040B0FAB
> 
> Good people do not need laws to tell them to act responsibly, while bad
> people will find a way around the laws. -- Plato
> ___
> ffmpeg-devel mailing list
> ffmpeg-devel@ffmpeg.org <mailto:ffmpeg-devel@ffmpeg.org>
> https://ffmpeg.org/mailman/listinfo/ffmpeg-devel 
> <https://ffmpeg.org/mailman/listinfo/ffmpeg-devel>
> 
> To unsubscribe, visit link above, or email
> ffmpeg-devel-requ...@ffmpeg.org <mailto:ffmpeg-devel-requ...@ffmpeg.org> with 
> subject "unsubscribe".

___
ffmpeg-devel mailing list
ffmpeg-devel@ffmpeg.org
https://ffmpeg.org/mailman/listinfo/ffmpeg-devel

To unsubscribe, visit link above, or email
ffmpeg-devel-requ...@ffmpeg.org with subject "unsubscribe".


Re: [FFmpeg-devel] [PATCH v2] avcodec: Add dv marker bsf

2022-03-12 Thread Dave Rice



> On Mar 10, 2022, at 4:41 AM, Tobias Rapp  wrote:
> 
> On 09/03/2022 19:18, Michael Niedermayer wrote:
>> Signed-off-by: Michael Niedermayer 
>> ---
>>  doc/bitstream_filters.texi   |  30 
>>  libavcodec/Makefile  |   1 +
>>  libavcodec/bitstream_filters.c   |   1 +
>>  libavcodec/dv_error_marker_bsf.c | 127 +++
>>  4 files changed, 159 insertions(+)
>>  create mode 100644 libavcodec/dv_error_marker_bsf.c
>> diff --git a/doc/bitstream_filters.texi b/doc/bitstream_filters.texi
>> index a0092878c8..8c5d84dceb 100644
>> --- a/doc/bitstream_filters.texi
>> +++ b/doc/bitstream_filters.texi
>> @@ -132,6 +132,36 @@ the header stored in extradata to the key packets:
>>  ffmpeg -i INPUT -map 0 -flags:v +global_header -c:v libx264 -bsf:v 
>> dump_extra out.ts
>>  @end example
>>  +@section dv_error_marker
>> +
>> +Blocks in DV which are marked as damaged are replaced by blocks of the 
>> specified color.
>> +
>> +@table @option
>> +@item color
>> +The color to replace damaged blocks by
>> +@item sta
>> +A 16 bit mask which specifies which of the 16 possible error status values 
>> are
>> +to be replaced by colored blocks. 0xFFFE is the default which replaces all 
>> non 0
>> +error status values.
>> +@table @samp
>> +@item ok
>> +No error, no concealment
>> +@item err
>> +Error, No concealment
>> +@item res
>> +Reserved
>> +@item notok
>> +Error or concealment
>> +@item notres
>> +Not reserved
>> +@item Aa, Ba, Ca, Ab, Bb, Cb, A, B, C, a, b, erri, erru
>> +The specific error status code
>> +@end table
>> +see page 44-46 or section 5.5 of
>> +@url{http://web.archive.org/web/20060927044735/http://www.smpte.org/smpte_store/standards/pdf/s314m.pdf}
>> +
>> +@end table
>> +
>>  @section eac3_core
>> [...]
> The filter options look nice to me now. Have not actually tested the 
> bitstream filter on DV files, though.

I tested this and this works well for me. Here's a few samples that demonstrate 
the filter:

./ffmpeg -i 
https://samples.ffmpeg.org/archive/audio/pcm_s16le/dv+dvvideo+pcm_s16le++dropout.dv
  -bsf dv_error_marker=sta=b -f rawvideo -c:v copy - | ffplay -
./ffmpeg -i 
https://archive.org/download/DvAnalyzerSampleDvVideoErrorConcealment/DV_Analyzer_Sample_Video_Error_Concealment_original.dv
 -bsf dv_error_marker=sta=b -f rawvideo -c:v copy - | ffplay -

Dave Rice

___
ffmpeg-devel mailing list
ffmpeg-devel@ffmpeg.org
https://ffmpeg.org/mailman/listinfo/ffmpeg-devel

To unsubscribe, visit link above, or email
ffmpeg-devel-requ...@ffmpeg.org with subject "unsubscribe".


Re: [FFmpeg-devel] [PATCH v2] avformat/webvttdec, enc: correctly process files containing STYLE, REGION blocks

2021-11-24 Thread Dave Evans
Giving this another go. It would be really great to get webvtt with style
and region blocks working correctly in ffmpeg. This patch would solve at
least #9064, probably #8684 and possibly other tickets.

Please let me know if further changes are needed.

Cheers,
Dave



On Tue, Mar 16, 2021 at 9:23 PM Dave Evans  wrote:

> Would it be possible for someone to take a look at this please? Would be
> lovely to get it merged at some point. Please let me know if further
> changes are needed.
>
> Cheers,
> Dave
>
> On Tue, Oct 13, 2020 at 10:25 AM Dave Evans 
> wrote:
>
>> This patch fixes the total failure to parse cues when style and region
>> definition blocks are contained in the input file, and ensures those
>> blocks are written to the output when copying.
>>
>> The sample attached needs to be added to samples at the path shown in
>> the patch in order to validate that the original issue is fixed.
>>
>> Same as v1 except the test has been changed as requested in the original
>> review.
>>
>> Cheers,
>> Dave
>>
>
___
ffmpeg-devel mailing list
ffmpeg-devel@ffmpeg.org
https://ffmpeg.org/mailman/listinfo/ffmpeg-devel

To unsubscribe, visit link above, or email
ffmpeg-devel-requ...@ffmpeg.org with subject "unsubscribe".


[FFmpeg-devel] Mailing list conduct

2021-09-22 Thread Dave Rice
Hello FFmpeg community,

I'm writing on behalf of FFmpeg's code of conduct committee to acknowledge that 
the tone on the mailing list has recently become more discouraging, 
challenging, and argumentative, notably on subjects related to subtitles and 
from multiple people. We should be able to do better.

We'd like to remind everyone of the Code of Conduct of this community, 
http://ffmpeg.org/developer.html#Code-of-conduct, which 
reminds us of our aspirational values of being considerate, cooperative, and not 
putting one another down. From reading the ongoing exchanges on ffmpeg-devel 
and reading the code of conduct, we think there is a clear mismatch here.

This reminder may be needed now and then, but please, let's all make our 
community a better place than this. Let's please work to support and encourage 
one another, offer fair critical reviews, and work collaboratively towards 
rough consensus.

Kind Regards,

The CC
James Almer
Thilo Borgmann
Carl Eugen Hoyos
Jean-Baptiste Kempf
    Dave Rice
___
ffmpeg-devel mailing list
ffmpeg-devel@ffmpeg.org
https://ffmpeg.org/mailman/listinfo/ffmpeg-devel

To unsubscribe, visit link above, or email
ffmpeg-devel-requ...@ffmpeg.org with subject "unsubscribe".


Re: [FFmpeg-devel] [PATCH] avformat/dv: always set audio packet duration

2021-09-06 Thread Dave Rice


> On Sep 6, 2021, at 2:28 PM, Andreas Rheinhardt 
>  wrote:
> 
> Paul B Mahol:
>> If audio packet is present in DV stream it have duration of 1 in DV timebase 
>> units.
>> 
>> Signed-off-by: Paul B Mahol 
>> ---
>> libavformat/dv.c | 4 
>> 1 file changed, 4 insertions(+)
>> 
>> diff --git a/libavformat/dv.c b/libavformat/dv.c
>> index d7909683c3..b2b74162df 100644
>> --- a/libavformat/dv.c
>> +++ b/libavformat/dv.c
>> @@ -48,6 +48,7 @@ struct DVPacket {
>> int  stream_index;
>> int  flags;
>> int64_t  pos;
>> +int64_t  duration;
>> };
>> 
>> struct DVDemuxContext {
>> @@ -276,6 +277,7 @@ static int dv_extract_audio_info(DVDemuxContext *c, 
>> const uint8_t *frame)
>> c->audio_pkt[i].stream_index = c->ast[i]->index;
>> c->audio_pkt[i].flags   |= AV_PKT_FLAG_KEY;
>> c->audio_pkt[i].pts  = AV_NOPTS_VALUE;
>> +c->audio_pkt[i].duration = 0;
>> c->audio_pkt[i].pos  = -1;
>> }
>> c->ast[i]->codecpar->sample_rate= dv_audio_frequency[freq];
>> @@ -374,6 +376,7 @@ int avpriv_dv_get_packet(DVDemuxContext *c, AVPacket 
>> *pkt)
>> pkt->stream_index = c->audio_pkt[i].stream_index;
>> pkt->flags= c->audio_pkt[i].flags;
>> pkt->pts  = c->audio_pkt[i].pts;
>> +pkt->duration = c->audio_pkt[i].duration;
>> pkt->pos  = c->audio_pkt[i].pos;
>> 
>> c->audio_pkt[i].size = 0;
>> @@ -404,6 +407,7 @@ int avpriv_dv_produce_packet(DVDemuxContext *c, AVPacket 
>> *pkt,
>> c->audio_pkt[i].pos  = pos;
>> c->audio_pkt[i].size = size;
>> c->audio_pkt[i].pts  = (c->sys->height == 720) ? (c->frames & ~1) : 
>> c->frames;
>> +c->audio_pkt[i].duration = 1;
>> ppcm[i] = c->audio_buf[i];
>> }
>> if (c->ach)
>> 
> Sure about that? According to the code, the packets contain slightly
> different amounts of samples; see dv_extract_audio_info(), in particular
> lines 242 and 289.
> (It seems to me that one audio packet is supposed to contain the audio
> for one video frame, yet the standard sample rates and standard video
> framerates (in particular NTSC) are incompatible, so a slight variation
> in the number of samples per frame is used.)
> IMO one should set the audio timebase to be the inverse of the
> samplerate and set the duration based upon the number of samples. But I
> have zero experience with dv.

Yes the audio duration is variable in NTSC DV, see page 17 of 
http://web.archive.org/web/20060927044735/http://www.smpte.org/smpte_store/standards/pdf/s314m.pdf.
 Each frame has either 1600 or 1602 samples, so while the video is 1001/30000 
the audio is either 1600/48000 or 1602/48000. So video at ~0.0333667 and 
audio at either ~0.0333333 or 0.033375.

I’ve tested this patch and it does correct some errors. For instance, before 
this patch frame 3 of the output only offers half of the needed duration, so 
the resulting audio track is (1001/30000)/2 shorter than the video. But after 
the patch, the video and audio outputs have the same length.

ffmpeg -i https://archive.org/download/test_a_202108/TESTTHIS.dv 
-filter_complex "[0:a:0]aresample=async=1:min_hard_comp=0.01[aud]" -c:v:0 copy 
-map 0:v:0 -c:a pcm_s16le -map "[aud]" output.mov

Dave Rice

___
ffmpeg-devel mailing list
ffmpeg-devel@ffmpeg.org
https://ffmpeg.org/mailman/listinfo/ffmpeg-devel

To unsubscribe, visit link above, or email
ffmpeg-devel-requ...@ffmpeg.org with subject "unsubscribe".


Re: [FFmpeg-devel] [PATCH] avformat/dv: fix timestamps of audio packets in case of dropped corrupt audio frames

2021-08-22 Thread Dave Rice
Hi Marton,

> On Feb 23, 2021, at 3:07 PM, Dave Rice  wrote:
> 
>> On Feb 23, 2021, at 2:42 PM, Marton Balint  wrote:
>> 
>> On Sat, 20 Feb 2021, Dave Rice wrote:
>> 
>>> Hi,
>>> 
>>>> On Oct 31, 2020, at 5:15 PM, Marton Balint >>> <mailto:c...@passwd.hu>> wrote:
>>>> On Sat, 31 Oct 2020, Dave Rice wrote:
>>>>>> On Oct 31, 2020, at 3:47 PM, Marton Balint >>>>> <mailto:c...@passwd.hu>> wrote:
>>>>>> On Sat, 31 Oct 2020, Dave Rice wrote:
>>>>>>> Hi Marton,
>>>>>>>> On Oct 31, 2020, at 12:56 PM, Marton Balint >>>>>>> <mailto:c...@passwd.hu>> wrote:
>>>>>>>> Fixes out of sync timestamps in ticket #8762.
>>>>>>> Although Michael’s recent patch does address the issue documented in 
>>>>>>> 8762, I haven’t found this patch to fix the issue. I tried with -c:a 
>>>>>>> copy and with -c:a pcm_s16le with some sample files that exhibit this 
>>>>>>> issue but each output was out of sync. I put an output at 
>>>>>>> https://gist.github.com/dericed/659bd843bd38b6f24a60198b5e345795 
>>>>>>> <https://gist.github.com/dericed/659bd843bd38b6f24a60198b5e345795>. 
>>>>>>> That output notes that 3597 packages of video are read and 3586 packets 
>>>>>>> of audio. In the resulting file, at the end of the timeline the audio 
>>>>>>> is 9 frames out of sync and my output video stream is 00:02:00.020 and 
>>>>>>> output audio stream is 00:01:59.653.
>>>>>>> Beyond copying or encoding the audio, are there other options I should 
>>>>>>> use to test this?
>>>>>> Well, it depends on what you want. After this patch you should get a 
>>>>>> file which has audio packets synced to video, but the audio stream is 
>>>>>> sparse, not every video packet has a corresponding audio packet. (It 
>>>>>> looks like our MOV muxer does not support muxing of sparse audio 
>>>>>> therefore does not produce proper timestamps. But MKV does, please try 
>>>>>> that.)
>>>>>> You can also make ffmpeg generate the missing audio based on packet 
>>>>>> timestamps. Swresample has an async=1 option, so something like this 
>>>>>> should get you synced audio with continous audio packets:
>>>>>> ffmpeg -y -i 167052_12.dv -c:v copy \
>>>>>> -af aresample=async=1:min_hard_comp=0.01 -c:a pcm_s16le 167052_12.mov
>>>>> Thank you for this. With the patch and async, the result is synced and 
>>>>> the resulting audio was the same as Michael’s patch.
>>>>> Could you explain why you used min_hard_comp here? IIUC min_hard_comp is 
>>>>> a set a threshold between the strategies of trim/fill or stretch/squeeze 
>>>>> to align the audio to time; however, the async documentation says 
>>>>> "Setting this to 1 will enable filling and trimming, larger values 
>>>>> represent the maximum amount in samples that the data may be stretched or 
>>>>> squeezed” so I thought that async=1 would not permit stretch/squeeze 
>>>>> anyway.
>>>> It is documented poorly, but if you check the source code you will see 
>>>> that async=1 implicitly sets min_comp to 0.001 enabling trimming/dropping. 
>>>> min_hard_comp decides the threshold when silence injection actually 
>>>> happens, and the default for that is 0.1, which is more than a frame, 
>>>> therefore not acceptable if we want to maintain <1 frame accuracy. Or at 
>>>> least that is how I think it should work.
>>> 
>> 
>>> I’ve found that aresample=async=1:min_hard_comp=0.01, as discussed here, 
>>> works well to add audio samples to maintain timestamp accuracy when muxing 
>>> into a format like mov. However, this approach doesn’t work if the 
>>> sparseness of the audio stream is at the end of the stream. Is there a way 
>>> to use min_hard_comp to consider differences between a timestamp and audio 
>>> data when one of the ends of that range is the end of the file?
>> 
>> I am not aware of a smart method to generate missing audio in the end until 
>> the end of video.
>> 
>> As a possible workaround you may query the video length using
>> ffprobe or mediainfo, and then use a second filter, apad to pad audio:
>> 

Re: [FFmpeg-devel] FFMPEG for V4L2 M2M devices ?

2021-07-12 Thread Dave Stevenson
On Mon, 12 Jul 2021 at 14:51, Andrii  wrote:
>>
>> A quick Google implies that NVidia already has a stateful V4L2 M2M
>> driver in their vendor kernel. Other than the strange choice of device
>> node name (/dev/nvhost-nvdec), the details at [3] make it look like a
>> normal V4L2 M2M decoder that has a good chance of working against
>> h264_v4l2m2m.
>
>
> Not only does it have a strange node name, it also uses two nodes. One for 
> decoding, another for converting. Capture plane of the decoder stores frames 
> in V4L2_PIX_FMT_NV12M format.
> Converter able to convert it to a different format[1].

Those appear to be two different hardware blocks.
If you can consume NV12M (YUV420 with interleaved UV plane), then I
see no reason why you have to pass the data through the
"/dev/nvhost-vic" device.

We have a similar thing where /dev/video10 is the decoder (stateful
decode), /dev/video11 is the encoder, and /dev/video12 is the ISP
(Image Sensor Pipeline) wrapped in the V4L2 API.

> Could you point me at documentation of Pi V4L2 spec?

It just implements the relevant APIs that I've already linked to. If
it doesn't follow the API, then we fix it so that it does.

Stateful H264 implementation is
https://github.com/raspberrypi/linux/tree/rpi-5.10.y/drivers/staging/vc04_services/bcm2835-codec
Stateless HEVC is
https://github.com/raspberrypi/linux/tree/rpi-5.10.y/drivers/staging/media/rpivid

  Dave

> [1] https://docs.nvidia.com/jetson/l4t-multimedia/group__V4L2Conv.html
>
> Andrii
>
> On Mon, Jul 12, 2021 at 6:02 AM Dave Stevenson 
>  wrote:
>>
>> On Sat, 10 Jul 2021 at 00:56, Brad Hards  wrote:
>> >
>> > On Saturday, 10 July 2021 8:53:27 AM AEST Andrii wrote:
>> > > I am working on porting a Kodi player to an NVidia Jetson Nano device. 
>> > > I've
>> > > been developing a decoder for quite some time now, and realized that the
>> > > best approach would be to have it inside of ffmpeg, instead of embedding
>> > > the decoder into Kodi as it heavily relies on FFMPEG. Just wondering if
>> > > there is any effort in making FFMPEG suppring M2M V4L devices ?
>> >
>> > https://git.ffmpeg.org/gitweb/ffmpeg.git/blob_plain/HEAD:/libavcodec/v4l2_m2m.c[1]
>> >
>> > I guess that would be the basis for further work as required to meet your 
>> > needs.
>>
>> Do note that there are 2 V4L2 M2M decoder APIs - the stateful API[1] ,
>> and the stateless API [2]. They differ in the amount of bitstream
>> parsing and buffer management that the driver implements vs expecting
>> the client to do.
>>
>> The *_v4l2m2m drivers within FFMPEG support the stateful API (ie the
>> kernel driver has bitstream parsing). For Raspberry Pi we use that to
>> support the (older) H264 implementation, and FFMPEG master does that
>> very well.
>>
>> The Pi HEVC decoder uses the V4L2 stateless API. Stateless HEVC
>> support hasn't been merged to the mainline kernel as yet, so there are
>> downstream patches to support that.
>>
>> A quick Google implies that NVidia already has a stateful V4L2 M2M
>> driver in their vendor kernel. Other than the strange choice of device
>> node name (/dev/nvhost-nvdec), the details at [3] make it look like a
>> normal V4L2 M2M decoder that has a good chance of working against
>> h264_v4l2m2m.
>>
>> [1] 
>> https://www.kernel.org/doc/html/latest/userspace-api/media/v4l/dev-decoder.html
>> [2] 
>> https://www.kernel.org/doc/html/latest/userspace-api/media/v4l/dev-stateless-decoder.html
>> [3] https://docs.nvidia.com/jetson/l4t-multimedia/group__V4L2Dec.html
>>
>>   Dave
>>
>> > Brad
>> >
>> > 
>> > [1] 
>> > https://git.ffmpeg.org/gitweb/ffmpeg.git/blob_plain/HEAD:/libavcodec/v4l2_m2m.c
>> > ___
>> > ffmpeg-devel mailing list
>> > ffmpeg-devel@ffmpeg.org
>> > https://ffmpeg.org/mailman/listinfo/ffmpeg-devel
>> >
>> > To unsubscribe, visit link above, or email
>> > ffmpeg-devel-requ...@ffmpeg.org with subject "unsubscribe".
___
ffmpeg-devel mailing list
ffmpeg-devel@ffmpeg.org
https://ffmpeg.org/mailman/listinfo/ffmpeg-devel

To unsubscribe, visit link above, or email
ffmpeg-devel-requ...@ffmpeg.org with subject "unsubscribe".


Re: [FFmpeg-devel] FFMPEG for V4L2 M2M devices ?

2021-07-12 Thread Dave Stevenson
On Sat, 10 Jul 2021 at 00:56, Brad Hards  wrote:
>
> On Saturday, 10 July 2021 8:53:27 AM AEST Andrii wrote:
> > I am working on porting a Kodi player to an NVidia Jetson Nano device. I've
> > been developing a decoder for quite some time now, and realized that the
> > best approach would be to have it inside of ffmpeg, instead of embedding
> > the decoder into Kodi as it heavily relies on FFMPEG. Just wondering if
> > there is any effort in making FFMPEG suppring M2M V4L devices ?
>
> https://git.ffmpeg.org/gitweb/ffmpeg.git/blob_plain/HEAD:/libavcodec/v4l2_m2m.c[1]
>
> I guess that would be the basis for further work as required to meet your 
> needs.

Do note that there are 2 V4L2 M2M decoder APIs - the stateful API [1],
and the stateless API [2]. They differ in the amount of bitstream
parsing and buffer management that the driver implements vs expecting
the client to do.

The *_v4l2m2m drivers within FFMPEG support the stateful API (ie the
kernel driver has bitstream parsing). For Raspberry Pi we use that to
support the (older) H264 implementation, and FFMPEG master does that
very well.

The Pi HEVC decoder uses the V4L2 stateless API. Stateless HEVC
support hasn't been merged to the mainline kernel as yet, so there are
downstream patches to support that.

A quick Google implies that NVidia already has a stateful V4L2 M2M
driver in their vendor kernel. Other than the strange choice of device
node name (/dev/nvhost-nvdec), the details at [3] make it look like a
normal V4L2 M2M decoder that has a good chance of working against
h264_v4l2m2m.

[1] 
https://www.kernel.org/doc/html/latest/userspace-api/media/v4l/dev-decoder.html
[2] 
https://www.kernel.org/doc/html/latest/userspace-api/media/v4l/dev-stateless-decoder.html
[3] https://docs.nvidia.com/jetson/l4t-multimedia/group__V4L2Dec.html
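
For anyone who wants to try the stateful path from code rather than the ffmpeg
CLI, a minimal sketch using the public libavcodec API (error handling trimmed;
whether the decoder is present depends on how FFmpeg was configured):

#include <libavcodec/avcodec.h>

/* Open the stateful V4L2 M2M H.264 decoder by name and return a ready
 * context, or NULL on failure. Feed it with avcodec_send_packet() /
 * avcodec_receive_frame() like any other lavc decoder. */
static AVCodecContext *open_h264_v4l2m2m(void)
{
    const AVCodec *dec = avcodec_find_decoder_by_name("h264_v4l2m2m");
    AVCodecContext *ctx;

    if (!dec)
        return NULL;                /* decoder not compiled in */
    ctx = avcodec_alloc_context3(dec);
    if (!ctx)
        return NULL;
    if (avcodec_open2(ctx, dec, NULL) < 0) {
        avcodec_free_context(&ctx);
        return NULL;
    }
    return ctx;
}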

  Dave

> Brad
>
> 
> [1] 
> https://git.ffmpeg.org/gitweb/ffmpeg.git/blob_plain/HEAD:/libavcodec/v4l2_m2m.c
> ___
> ffmpeg-devel mailing list
> ffmpeg-devel@ffmpeg.org
> https://ffmpeg.org/mailman/listinfo/ffmpeg-devel
>
> To unsubscribe, visit link above, or email
> ffmpeg-devel-requ...@ffmpeg.org with subject "unsubscribe".
___
ffmpeg-devel mailing list
ffmpeg-devel@ffmpeg.org
https://ffmpeg.org/mailman/listinfo/ffmpeg-devel

To unsubscribe, visit link above, or email
ffmpeg-devel-requ...@ffmpeg.org with subject "unsubscribe".


Re: [FFmpeg-devel] [PATCH v2] avformat/webvttdec, enc: correctly process files containing STYLE, REGION blocks

2021-03-16 Thread Dave Evans
Would it be possible for someone to take a look at this please? Would be
lovely to get it merged at some point. Please let me know if further
changes are needed.

Cheers,
Dave

On Tue, Oct 13, 2020 at 10:25 AM Dave Evans  wrote:

> This patch fixes the total failure to parse cues when style and region
> definition blocks are contained in the input file, and ensures those
> blocks are written to the output when copying.
>
> The sample attached needs to be added to samples at the path shown in
> the patch in order to validate that the original issue is fixed.
>
> Same as v1 except the test has been changed as requested in the original
> review.
>
> Cheers,
> Dave
>
___
ffmpeg-devel mailing list
ffmpeg-devel@ffmpeg.org
https://ffmpeg.org/mailman/listinfo/ffmpeg-devel

To unsubscribe, visit link above, or email
ffmpeg-devel-requ...@ffmpeg.org with subject "unsubscribe".

Re: [FFmpeg-devel] [PATCH] avformat/dv: fix timestamps of audio packets in case of dropped corrupt audio frames

2021-02-23 Thread Dave Rice


> On Feb 23, 2021, at 2:42 PM, Marton Balint  wrote:
> 
> 
> 
> On Sat, 20 Feb 2021, Dave Rice wrote:
> 
>> Hi,
>> 
>>> On Oct 31, 2020, at 5:15 PM, Marton Balint >> <mailto:c...@passwd.hu>> wrote:
>>> On Sat, 31 Oct 2020, Dave Rice wrote:
>>>>> On Oct 31, 2020, at 3:47 PM, Marton Balint >>>> <mailto:c...@passwd.hu>> wrote:
>>>>> On Sat, 31 Oct 2020, Dave Rice wrote:
>>>>>> Hi Marton,
>>>>>>> On Oct 31, 2020, at 12:56 PM, Marton Balint >>>>>> <mailto:c...@passwd.hu>> wrote:
>>>>>>> Fixes out of sync timestamps in ticket #8762.
>>>>>> Although Michael’s recent patch does address the issue documented in 
>>>>>> 8762, I haven’t found this patch to fix the issue. I tried with -c:a 
>>>>>> copy and with -c:a pcm_s16le with some sample files that exhibit this 
>>>>>> issue but each output was out of sync. I put an output at 
>>>>>> https://gist.github.com/dericed/659bd843bd38b6f24a60198b5e345795 
>>>>>> <https://gist.github.com/dericed/659bd843bd38b6f24a60198b5e345795>. That 
>>>>>> output notes that 3597 packages of video are read and 3586 packets of 
>>>>>> audio. In the resulting file, at the end of the timeline the audio is 9 
>>>>>> frames out of sync and my output video stream is 00:02:00.020 and output 
>>>>>> audio stream is 00:01:59.653.
>>>>>> Beyond copying or encoding the audio, are there other options I should 
>>>>>> use to test this?
>>>>> Well, it depends on what you want. After this patch you should get a file 
>>>>> which has audio packets synced to video, but the audio stream is sparse, 
>>>>> not every video packet has a corresponding audio packet. (It looks like 
>>>>> our MOV muxer does not support muxing of sparse audio therefore does not 
>>>>> produce proper timestamps. But MKV does, please try that.)
>>>>> You can also make ffmpeg generate the missing audio based on packet 
>>>>> timestamps. Swresample has an async=1 option, so something like this 
>>>>> should get you synced audio with continous audio packets:
>>>>> ffmpeg -y -i 167052_12.dv -c:v copy \
>>>>> -af aresample=async=1:min_hard_comp=0.01 -c:a pcm_s16le 167052_12.mov
>>>> Thank you for this. With the patch and async, the result is synced and the 
>>>> resulting audio was the same as Michael’s patch.
>>>> Could you explain why you used min_hard_comp here? IIUC min_hard_comp is a 
>>>> set a threshold between the strategies of trim/fill or stretch/squeeze to 
>>>> align the audio to time; however, the async documentation says "Setting 
>>>> this to 1 will enable filling and trimming, larger values represent the 
>>>> maximum amount in samples that the data may be stretched or squeezed” so I 
>>>> thought that async=1 would not permit stretch/squeeze anyway.
>>> It is documented poorly, but if you check the source code you will see that 
>>> async=1 implicitly sets min_comp to 0.001 enabling trimming/dropping. 
>>> min_hard_comp decides the threshold when silence injection actually 
>>> happens, and the default for that is 0.1, which is more than a frame, 
>>> therefore not acceptable if we want to maintain <1 frame accuracy. Or at 
>>> least that is how I think it should work.
>> 
> 
>> I’ve found that aresample=async=1:min_hard_comp=0.01, as discussed here, 
>> works well to add audio samples to maintain timestamp accuracy when muxing 
>> into a format like mov. However, this approach doesn’t work if the 
>> sparseness of the audio stream is at the end of the stream. Is there a way 
>> to use min_hard_comp to consider differences between a timestamp and audio 
>> data when one of the ends of that range is the end of the file?
> 
> I am not aware of a smart method to generate missing audio in the end until 
> the end of video.
> 
> As a possible workaround you may query the video length using
> ffprobe or mediainfo, and then use a second filter, apad to pad audio:
> 
> -af aresample=async=1:min_hard_comp=0.01,apad=whole_dur=
> 
> This might do what you want, but requires an additional step to query the 
> video length…


[…]
Perfect, thanks for sharing this idea.
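Spelled out, that two-step approach would look roughly like this (file names
and the probed field are only illustrative):

dur=$(ffprobe -v error -select_streams v:0 -show_entries stream=duration \
      -of default=noprint_wrappers=1:nokey=1 input.dv)
ffmpeg -i input.dv -c:v copy \
       -af "aresample=async=1:min_hard_comp=0.01,apad=whole_dur=$dur" \
       -c:a pcm_s16le output.mov
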
Dave

___
ffmpeg-devel mailing list
ffmpeg-devel@ffmpeg.org
https://ffmpeg.org/mailman/listinfo/ffmpeg-devel

To unsubscribe, visit link above, or email
ffmpeg-devel-requ...@ffmpeg.org with subject "unsubscribe".

Re: [FFmpeg-devel] [PATCH] avformat/dv: fix timestamps of audio packets in case of dropped corrupt audio frames

2021-02-20 Thread Dave Rice
Hi,

> On Oct 31, 2020, at 5:15 PM, Marton Balint  <mailto:c...@passwd.hu>> wrote:
> On Sat, 31 Oct 2020, Dave Rice wrote:
>>> On Oct 31, 2020, at 3:47 PM, Marton Balint >> <mailto:c...@passwd.hu>> wrote:
>>> On Sat, 31 Oct 2020, Dave Rice wrote:
>>>> Hi Marton,
>>>>> On Oct 31, 2020, at 12:56 PM, Marton Balint >>>> <mailto:c...@passwd.hu>> wrote:
>>>>> Fixes out of sync timestamps in ticket #8762.
>>>> Although Michael’s recent patch does address the issue documented in 8762, 
>>>> I haven’t found this patch to fix the issue. I tried with -c:a copy and 
>>>> with -c:a pcm_s16le with some sample files that exhibit this issue but 
>>>> each output was out of sync. I put an output at 
>>>> https://gist.github.com/dericed/659bd843bd38b6f24a60198b5e345795 
>>>> <https://gist.github.com/dericed/659bd843bd38b6f24a60198b5e345795>. That 
>>>> output notes that 3597 packages of video are read and 3586 packets of 
>>>> audio. In the resulting file, at the end of the timeline the audio is 9 
>>>> frames out of sync and my output video stream is 00:02:00.020 and output 
>>>> audio stream is 00:01:59.653.
>>>> Beyond copying or encoding the audio, are there other options I should use 
>>>> to test this?
>>> Well, it depends on what you want. After this patch you should get a file 
>>> which has audio packets synced to video, but the audio stream is sparse, 
>>> not every video packet has a corresponding audio packet. (It looks like our 
>>> MOV muxer does not support muxing of sparse audio therefore does not 
>>> produce proper timestamps. But MKV does, please try that.)
>>> You can also make ffmpeg generate the missing audio based on packet 
>>> timestamps. Swresample has an async=1 option, so something like this should 
>>> get you synced audio with continous audio packets:
>>> ffmpeg -y -i 167052_12.dv -c:v copy \
>>> -af aresample=async=1:min_hard_comp=0.01 -c:a pcm_s16le 167052_12.mov
>> 
>> Thank you for this. With the patch and async, the result is synced and the 
>> resulting audio was the same as Michael’s patch.
>> 
>> Could you explain why you used min_hard_comp here? IIUC min_hard_comp is a 
>> set a threshold between the strategies of trim/fill or stretch/squeeze to 
>> align the audio to time; however, the async documentation says "Setting this 
>> to 1 will enable filling and trimming, larger values represent the maximum 
>> amount in samples that the data may be stretched or squeezed” so I thought 
>> that async=1 would not permit stretch/squeeze anyway.
> 
> It is documented poorly, but if you check the source code you will see that 
> async=1 implicitly sets min_comp to 0.001 enabling trimming/dropping. 
> min_hard_comp decides the threshold when silence injection actually happens, 
> and the default for that is 0.1, which is more than a frame, therefore not 
> acceptable if we want to maintain <1 frame accuracy. Or at least that is how 
> I think it should work.

I’ve found that aresample=async=1:min_hard_comp=0.01, as discussed here, works 
well to add audio samples to maintain timestamp accuracy when muxing into a 
format like mov. However, this approach doesn’t work if the sparseness of the 
audio stream is at the end of the stream. Is there a way to use min_hard_comp 
to consider differences between a timestamp and audio data when one of the ends 
of that range is the end of the file?
Best Regards,
Dave Rice

___
ffmpeg-devel mailing list
ffmpeg-devel@ffmpeg.org
https://ffmpeg.org/mailman/listinfo/ffmpeg-devel

To unsubscribe, visit link above, or email
ffmpeg-devel-requ...@ffmpeg.org with subject "unsubscribe".

Re: [FFmpeg-devel] [PATCH] avformat/webvttdec: Fix WebVTT decoder truncating files at first STYLE block

2021-01-12 Thread Dave Evans
Hijacking with a related, similar patch from a few months ago:
https://patchwork.ffmpeg.org/project/ffmpeg/list/?series=2538


On Tue, Jan 12, 2021 at 4:56 PM Roderich Schupp 
wrote:

> Bug-ID: 9064
>
> The webvtt decoder truncates the file at the first such block.
> Since these blocks typically occur at the top of the webvtt file, this
> results
> in an empty file (except for the WEBVTT header line).
>
> Reason is that at STYLE block neither parses as a valid cue block nor
> is it skipped like the WEBVTT (i.e. header) or NOTE blocks, hence
> decoding stops.
>
> Solution is to add STYLE to list of skipped blocks. And while we're at it,
> add REGION, too.
> ---
>  libavformat/webvttdec.c | 6 --
>  1 file changed, 4 insertions(+), 2 deletions(-)
>
> diff --git a/libavformat/webvttdec.c b/libavformat/webvttdec.c
> index 8d2fdfe..5a982dd 100644
> --- a/libavformat/webvttdec.c
> +++ b/libavformat/webvttdec.c
> @@ -89,10 +89,12 @@ static int webvtt_read_header(AVFormatContext *s)
>  p = identifier = cue.str;
>  pos = avio_tell(s->pb);
>
> -/* ignore header chunk */
> +/* ignore header, NOTE, STYLE and REGION chunks */
>  if (!strncmp(p, "\xEF\xBB\xBFWEBVTT", 9) ||
>  !strncmp(p, "WEBVTT", 6) ||
> -!strncmp(p, "NOTE", 4))
> +!strncmp(p, "NOTE", 4) ||
> +!strncmp(p, "STYLE", 5) ||
> +!strncmp(p, "REGION", 6))
>  continue;
>
>  /* optional cue identifier (can be a number like in SRT or some
> kind of
> --
> 2.30.0
>
> ___
> ffmpeg-devel mailing list
> ffmpeg-devel@ffmpeg.org
> https://ffmpeg.org/mailman/listinfo/ffmpeg-devel
>
> To unsubscribe, visit link above, or email
> ffmpeg-devel-requ...@ffmpeg.org with subject "unsubscribe".
___
ffmpeg-devel mailing list
ffmpeg-devel@ffmpeg.org
https://ffmpeg.org/mailman/listinfo/ffmpeg-devel

To unsubscribe, visit link above, or email
ffmpeg-devel-requ...@ffmpeg.org with subject "unsubscribe".

Re: [FFmpeg-devel] [PATCH] avformat/dv: fix timestamps of audio packets in case of dropped corrupt audio frames

2020-12-03 Thread Dave Rice

> On Nov 14, 2020, at 7:14 PM, Marton Balint  wrote:
> 
> On Fri, 6 Nov 2020, Michael Niedermayer wrote:
> 
>> On Wed, Nov 04, 2020 at 10:44:56PM +0100, Marton Balint wrote:
>>> 
>>> 
>>> On Wed, 4 Nov 2020, Michael Niedermayer wrote:
>>> 
>>>> we have "millisecond" based formats, rounded timestamps
>>>> we have "exact" cases, maybe the timebase being 1 packet/frame per tick
>>>> we have "high precission" where the timebase is so precisse it doesnt 
>>>> matter
>>>> 
>>>> This here though is a bit an oddball, the size if 1 PCM frame is 1 sample
>>>> The timebase is not a millisecond based one, its not 1 frame either nor is
>>>> it exact nor high precission.
>>>> Its 1 video frame, and whatever amount of audio there is in the container
>>>> 
>>>> which IIUC can differ from 1 video frame even rounded.
>>>> maybe this just doesnt occur and each frame has a count of samples always
>>>> rounded to the closes integer count for the video frame.
>>> 
>>> The difference between the audio timestamp and the video timestamp for
>>> packets from the same DV frame is at most 0.3929636797*frame_duration as the
>>> specification says, as Dave quoted, so I don't see how the error can be
>>> bigger than this.
>>> 
>>> It looks to me you are mixing timestamps coming from a demuxer, and
>>> timestamps you calculate by counting the number of demuxed/decoded audio
>>> samples or video frames. Synchronization is done using the former.
>>> 
>> 
>>>> 
>>>> But if for example some hardware was using internally a 16 sample buffer
>>>> and only put multiplies of 16 samples in frames this would introduce a
>>>> considerable amount of jitter in the timestamps in relation to the actual
>>>> duration. And using async to fix this without introducing more problems
>>>> might require some care.
>>> 
>>> I still don't see why timestamp or duration jitter is a problem
>> 
>>> as long as
>>> the error is below frame_duration/2. You can safely use async with
>>> min_hard_comp set to frame_duration/2.
>> 
>> Thats exactly what i meant. an async like filter which behaves differently
>> or async with a different value there can mess this up.
>> IMHO such mess up is ok when the input is corrupted or invalid. OTOH
>> here it is valid and correct data.
>> 
>> 
>>> 
>>> In general, don't you find it problematic that the dv demuxer can return
>>> different timestamps if you read packets sequentially and if you seek to the
>>> end of a file? It looks like a huge bug
>> 
>> yes, this is not great
>> but even with your patch you still have this effect
>> when seeking to some point in time a player has to output video and
>> audio to the user at an exact time and that will differ even with async
>> from linear playbacks presentation
>> 
>> 
>>> which is not fixable if you insist
>>> on sample counting...
>> 
>> I think you misunderstood me, or maybe i didnt state my opinion well,
>> iam not saying that i consider what dv in git does good. Rather that there
>> is a problem beyond what these patches fix.
>> Some concept of timestamp accuracy independant of the distance of 
>> representable
>> values would be usefull.
>> if you take teh 1/25 or whatever they are based on dv timestamps and convert 
>> that
>> to teh mpeg 90khz based ones thats not making it that accurate.
>> OTOH if you take 1/25 based audio where each packet is 1/25sec worth of 
>> samples
>> that very well might be sample accurate or even beyond.
>> knowing this accuracy is usefull for configuring a async like filter or also 
>> in
>> knowing how to deal with inconsistencies, is that timestamp jtter ? or the 
>> sample
>> rate jittering / some droped samples ?
>> Its important to know as in one instance its the timestamps that need 
>> adjustment
>> while in the other the samples need adjustment
>> ATM its down to the user to figure out on a file by file base how to deal or
>> ignore this. Instead it should be possible for an automated system to
>> compensate such issues ...
> 
> OK, but the automated solution is far from trivial, e.g. it should start with 
> a analysis of the file to check if the sample rate is accurate or not... And 
> if it is not, is the difference constant througout the file? Then there are 
> several methods to fix i

Re: [FFmpeg-devel] [PATCH] avformat/dv: fix timestamps of audio packets in case of dropped corrupt audio frames

2020-11-09 Thread Dave Rice


> On Nov 6, 2020, at 4:03 PM, Michael Niedermayer  
> wrote:
> 
> On Wed, Nov 04, 2020 at 10:44:56PM +0100, Marton Balint wrote:
>> 
>> On Wed, 4 Nov 2020, Michael Niedermayer wrote:
>> 
>>> we have "millisecond" based formats, rounded timestamps
>>> we have "exact" cases, maybe the timebase being 1 packet/frame per tick
>>> we have "high precission" where the timebase is so precisse it doesnt matter
>>> 
>>> This here though is a bit an oddball, the size if 1 PCM frame is 1 sample
>>> The timebase is not a millisecond based one, its not 1 frame either nor is
>>> it exact nor high precission.
>>> Its 1 video frame, and whatever amount of audio there is in the container
>>> 
>>> which IIUC can differ from 1 video frame even rounded.
>>> maybe this just doesnt occur and each frame has a count of samples always
>>> rounded to the closes integer count for the video frame.
>> 
>> The difference between the audio timestamp and the video timestamp for
>> packets from the same DV frame is at most 0.3929636797*frame_duration as the
>> specification says, as Dave quoted, so I don't see how the error can be
>> bigger than this.
>> 
>> It looks to me you are mixing timestamps coming from a demuxer, and
>> timestamps you calculate by counting the number of demuxed/decoded audio
>> samples or video frames. Synchronization is done using the former.
>> 
> 
>>> 
>>> But if for example some hardware was using internally a 16 sample buffer
>>> and only put multiplies of 16 samples in frames this would introduce a
>>> considerable amount of jitter in the timestamps in relation to the actual
>>> duration. And using async to fix this without introducing more problems
>>> might require some care.
>> 
>> I still don't see why timestamp or duration jitter is a problem 
> 
>> as long as
>> the error is below frame_duration/2. You can safely use async with
>> min_hard_comp set to frame_duration/2.
> 
> Thats exactly what i meant. an async like filter which behaves differently
> or async with a different value there can mess this up.
> IMHO such mess up is ok when the input is corrupted or invalid. OTOH
> here it is valid and correct data.
> 
>> In general, don't you find it problematic that the dv demuxer can return
>> different timestamps if you read packets sequentially and if you seek to the
>> end of a file? It looks like a huge bug
> 
> yes, this is not great
> but even with your patch you still have this effect
> when seeking to some point in time a player has to output video and
> audio to the user at an exact time and that will differ even with async
> from linear playbacks presentation

When working around the loss of audio sync, I use -skip_initial_bytes on the 
dv input to jump to the frame after a missing audio pack and read from that 
offset in the bytestream, which keeps audio and video in sync (at least until 
the next missing audio source pack).
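
For example (the byte offset is only illustrative; one 25 Mbps DV frame is
120000 bytes for NTSC and 144000 bytes for PAL, so this skips the first ten
PAL frames):

ffmpeg -skip_initial_bytes 1440000 -i input.dv -c copy resynced.mkv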

>> which is not fixable if you insist
>> on sample counting...
> 
> I think you misunderstood me, or maybe i didnt state my opinion well,
> iam not saying that i consider what dv in git does good. Rather that there
> is a problem beyond what these patches fix. 
> Some concept of timestamp accuracy independant of the distance of 
> representable
> values would be usefull.
> if you take teh 1/25 or whatever they are based on dv timestamps and convert 
> that
> to teh mpeg 90khz based ones thats not making it that accurate.
> OTOH if you take 1/25 based audio where each packet is 1/25sec worth of 
> samples
> that very well might be sample accurate or even beyond.
> knowing this accuracy is usefull for configuring a async like filter or also 
> in
> knowing how to deal with inconsistencies, is that timestamp jtter ? or the 
> sample
> rate jittering / some droped samples ? 
> Its important to know as in one instance its the timestamps that need 
> adjustment 
> while in the other the samples need adjustment
> ATM its down to the user to figure out on a file by file base how to deal or
> ignore this. Instead it should be possible for an automated system to 
> compensate such issues ...

As mentioned elsewhere, some automation (or at least a logged hint) would be 
helpful to add or suggest aresample=async=1 to fill the gaps when using 
containers that don’t support sparse audio. With Marton’s patch, the user has 
the opportunity to use that filter to keep the audio in sync.

[…]

Dave Rice

___
ffmpeg-devel mailing list
ffmpeg-devel@ffmpeg.org
https://ffmpeg.org/mailman/listinfo/ffmpeg-devel

To unsubscribe, visit link above, or email
ffmpeg-devel-requ...@ffmpeg.org with subject "unsubscribe".

Re: [FFmpeg-devel] [PATCH] avformat/dv: fix timestamps of audio packets in case of dropped corrupt audio frames

2020-11-01 Thread Dave Rice


> On Nov 1, 2020, at 3:58 PM, Marton Balint  wrote:
> 
> 
> 
> On Sun, 1 Nov 2020, Michael Niedermayer wrote:
> 
>> On Sat, Oct 31, 2020 at 05:56:24PM +0100, Marton Balint wrote:
>>> Fixes out of sync timestamps in ticket #8762.
>>> 
>>> Signed-off-by: Marton Balint 
>>> ---
>>> libavformat/dv.c   | 16 ++--
>>> tests/ref/seek/lavf-dv | 18 +-
>>> 2 files changed, 11 insertions(+), 23 deletions(-)
>>> 
>>> diff --git a/libavformat/dv.c b/libavformat/dv.c
>>> index 3e0d12c0e3..26a78139f5 100644
>>> --- a/libavformat/dv.c
>>> +++ b/libavformat/dv.c
>>> @@ -49,7 +49,6 @@ struct DVDemuxContext {
>>> uint8_t   audio_buf[4][8192];
>>> int   ach;
>>> int   frames;
>>> -uint64_t  abytes;
>>> };
>>> 
>>> static inline uint16_t dv_audio_12to16(uint16_t sample)
>>> @@ -258,7 +257,7 @@ static int dv_extract_audio_info(DVDemuxContext *c, 
>>> const uint8_t *frame)
>>> c->ast[i] = avformat_new_stream(c->fctx, NULL);
>>> if (!c->ast[i])
>>> break;
>>> -avpriv_set_pts_info(c->ast[i], 64, 1, 3);
>>> +avpriv_set_pts_info(c->ast[i], 64, c->sys->time_base.num, 
>>> c->sys->time_base.den);
>>> c->ast[i]->codecpar->codec_type = AVMEDIA_TYPE_AUDIO;
>>> c->ast[i]->codecpar->codec_id   = AV_CODEC_ID_PCM_S16LE;
>>> 
>>> @@ -387,8 +386,7 @@ int avpriv_dv_produce_packet(DVDemuxContext *c, 
>>> AVPacket *pkt,
>>> for (i = 0; i < c->ach; i++) {
>>> c->audio_pkt[i].pos  = pos;
>>> c->audio_pkt[i].size = size;
>>> -c->audio_pkt[i].pts  = c->abytes * 3 * 8 /
>>> -   c->ast[i]->codecpar->bit_rate;
>>> +c->audio_pkt[i].pts  = (c->sys->height == 720) ? (c->frames & ~1) 
>>> : c->frames;
>>> ppcm[i] = c->audio_buf[i];
>>> }
>>> if (c->ach)
>>> @@ -401,10 +399,7 @@ int avpriv_dv_produce_packet(DVDemuxContext *c, 
>>> AVPacket *pkt,
>>> c->audio_pkt[2].size = c->audio_pkt[3].size = 0;
>>> } else {
>>> c->audio_pkt[0].size = c->audio_pkt[1].size = 0;
>>> -c->abytes   += size;
>>> }
>>> -} else {
>>> -c->abytes += size;
>>> }
>>> 
>>> /* Now it's time to return video packet */
>> 
>> Please correct me if iam wrong but
>> in cases where no audio is missing or damaged, this would also ignore how 
>> much
>> audio is in each packet. So you could have lets say a timestamp difference
>> of excatly 1 second between 2 packets while their is actually not exactly
>> 1 second worth of audio samples between them.
> 
> This is true, by using the frame counter (and the video time base) for audio, 
> we lose some audio packet timestamp precision inherently. However I don't 
> consider this a problem, audio timestamps do not have to be sample accurate, 
> for most formats they are not. Also it is not practical to keep track of how 
> many samples are there in the packets, for example when you do seeking, 
> obviously you can't read all the audio data before the seek point to get a 
> precise sample accurate timestamp.

Good point.

> What matters is that based on what I understand about the DV format (but 
> maybe Dave can confirm or deny this) the divergence between the audio 
> timestamp and the video timestamp in a DV frame must be less than 1/3 frame 
> duration even for unlocked mode:
> 
> http://www.adamwilt.com/DV-FAQ-tech.html#LockedAudio

The divergence could be a little larger than 1/3 frame in unlocked mode. 
IEC 61834-2 defines the allowable range of minimum to maximum samples per frame 
and the maximum allowable divergence of accumulated samples per frame.

Mode   | Min-Max   | Allowance of accumulated difference
NTSC 48000 | 1580-1620 | 20
NTSC 44100 | 1452-1489 | 19
NTSC 32000 | 1053-1080 | 14
PAL  48000 | 1896-1944 | 24
PAL  44100 | 1742-1786 | 22
PAL  32000 | 1264-1296 | 16
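
For reference (just arithmetic, not quoted from the standard), those ranges
straddle the nominal per-frame sample counts:

NTSC 48000: 48000 * 1001/30000 = 1601.6  samples/frame
NTSC 44100: 44100 * 1001/30000 ~= 1471.5 samples/frame
NTSC 32000: 32000 * 1001/30000 ~= 1067.7 samples/frame
PAL  48000: 48000 / 25 = 1920 samples/frame
PAL  44100: 44100 / 25 = 1764 samples/frame
PAL  32000: 32000 / 25 = 1280 samples/frame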

The divergence between the audio timestamp and video timestamp is conditional 
on the mode, so that would be

Mode   | Max divergence as a fraction of frame duration
NTSC 48000 | 0.3742511235
NTSC 44100 | 0.3869807536
NTSC 32000 | 0.3929636797
PAL  48000 | 0.3125
PAL  44100 | 0.3117913832
PAL  32000 | 0.3125

0.3

Re: [FFmpeg-devel] [PATCH] avformat/dv: fix timestamps of audio packets in case of dropped corrupt audio frames

2020-10-31 Thread Dave Rice


> On Oct 31, 2020, at 5:15 PM, Marton Balint  wrote:
> On Sat, 31 Oct 2020, Dave Rice wrote:
>>> On Oct 31, 2020, at 3:47 PM, Marton Balint  wrote:
>>> On Sat, 31 Oct 2020, Dave Rice wrote:
>>>> Hi Marton,
>>>>> On Oct 31, 2020, at 12:56 PM, Marton Balint  wrote:
>>>>> Fixes out of sync timestamps in ticket #8762.
>>>> Although Michael’s recent patch does address the issue documented in 8762, 
>>>> I haven’t found this patch to fix the issue. I tried with -c:a copy and 
>>>> with -c:a pcm_s16le with some sample files that exhibit this issue but 
>>>> each output was out of sync. I put an output at 
>>>> https://gist.github.com/dericed/659bd843bd38b6f24a60198b5e345795. That 
>>>> output notes that 3597 packages of video are read and 3586 packets of 
>>>> audio. In the resulting file, at the end of the timeline the audio is 9 
>>>> frames out of sync and my output video stream is 00:02:00.020 and output 
>>>> audio stream is 00:01:59.653.
>>>> Beyond copying or encoding the audio, are there other options I should use 
>>>> to test this?
>>> Well, it depends on what you want. After this patch you should get a file 
>>> which has audio packets synced to video, but the audio stream is sparse, 
>>> not every video packet has a corresponding audio packet. (It looks like our 
>>> MOV muxer does not support muxing of sparse audio therefore does not 
>>> produce proper timestamps. But MKV does, please try that.)
>>> You can also make ffmpeg generate the missing audio based on packet 
>>> timestamps. Swresample has an async=1 option, so something like this should 
>>> get you synced audio with continous audio packets:
>>> ffmpeg -y -i 167052_12.dv -c:v copy \
>>> -af aresample=async=1:min_hard_comp=0.01 -c:a pcm_s16le 167052_12.mov
>> 
>> Thank you for this. With the patch and async, the result is synced and the 
>> resulting audio was the same as Michael’s patch.
>> 
>> Could you explain why you used min_hard_comp here? IIUC min_hard_comp is a 
>> set a threshold between the strategies of trim/fill or stretch/squeeze to 
>> align the audio to time; however, the async documentation says "Setting this 
>> to 1 will enable filling and trimming, larger values represent the maximum 
>> amount in samples that the data may be stretched or squeezed” so I thought 
>> that async=1 would not permit stretch/squeeze anyway.
> 
> It is documented poorly, but if you check the source code you will see that 
> async=1 implicitly sets min_comp to 0.001 enabling trimming/dropping. 
> min_hard_comp decides the threshold when silence injection actually happens, 
> and the default for that is 0.1, which is more than a frame, therefore not 
> acceptable if we want to maintain <1 frame accuracy. Or at least that is how 
> I think it should work.

Thanks for the explanation.
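To put rough numbers on that 0.1 default:

PAL frame duration:  1/25 = 0.04 s, so 0.1 s of drift is 2.5 frames
NTSC frame duration: 1001/30000 ~= 0.0334 s, so 0.1 s is roughly 3 frames
min_hard_comp=0.01 keeps any silence injection well under one frame
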
I’ve tested this patch with larger sample sets and really appreciate that the 
audio timestamps are now properly ordered. For instance with 
https://archive.org/download/dvr-007/DVR_007.dv, before the patch the 
timestamps for audio around 20 - 30 seconds get severely jumbled which makes it 
difficult to use astats metadata. While the documentation could be better, as 
I’ve seen many forum entries from ffmpeg users struggling to keep dv from tape 
transfers in sync, I don’t think that’s blocking. I’d really like to see this 
merged.

Although this patch resolves the issue, I think Michael’s patch is interesting 
too as it would allow access to the audio dif blocks of dv frames with no audio 
source pack metadata. I’d have to test more to find an instance where the 
'dvaudio_concealment pass’ option of Michael’s patch provides a different 
result than the added silence from async.

Thanks so much.
Dave

> Regards,
> Marton
> 
>>>>> Signed-off-by: Marton Balint 
>>>>> ---
>>>>> libavformat/dv.c   | 16 ++--
>>>>> tests/ref/seek/lavf-dv | 18 +-
>>>>> 2 files changed, 11 insertions(+), 23 deletions(-)
>>>>> diff --git a/libavformat/dv.c b/libavformat/dv.c
>>>>> index 3e0d12c0e3..26a78139f5 100644
>>>>> --- a/libavformat/dv.c
>>>>> +++ b/libavformat/dv.c
>>>>> @@ -49,7 +49,6 @@ struct DVDemuxContext {
>>>>>   uint8_t   audio_buf[4][8192];
>>>>>   int   ach;
>>>>>   int   frames;
>>>>> -uint64_t  abytes;
>>>>> };
>>>>> static inline uint16_t dv_audio_12to16(uint16_t 

Re: [FFmpeg-devel] [PATCH] avformat/dv: fix timestamps of audio packets in case of dropped corrupt audio frames

2020-10-31 Thread Dave Rice

> On Oct 31, 2020, at 3:47 PM, Marton Balint  wrote:
> On Sat, 31 Oct 2020, Dave Rice wrote:
> 
>> Hi Marton,
>> 
>>> On Oct 31, 2020, at 12:56 PM, Marton Balint  wrote:
>>> Fixes out of sync timestamps in ticket #8762.
>> 
>> Although Michael’s recent patch does address the issue documented in 8762, I 
>> haven’t found this patch to fix the issue. I tried with -c:a copy and with 
>> -c:a pcm_s16le with some sample files that exhibit this issue but each 
>> output was out of sync. I put an output at 
>> https://gist.github.com/dericed/659bd843bd38b6f24a60198b5e345795. That 
>> output notes that 3597 packets of video are read and 3586 packets of audio. 
>> In the resulting file, at the end of the timeline the audio is 9 frames out 
>> of sync and my output video stream is 00:02:00.020 and output audio stream 
>> is 00:01:59.653.
>> 
>> Beyond copying or encoding the audio, are there other options I should use 
>> to test this?
> 
> Well, it depends on what you want. After this patch you should get a file 
> which has audio packets synced to video, but the audio stream is sparse, not 
> every video packet has a corresponding audio packet. (It looks like our MOV 
> muxer does not support muxing of sparse audio therefore does not produce 
> proper timestamps. But MKV does, please try that.)
> 
> You can also make ffmpeg generate the missing audio based on packet 
> timestamps. Swresample has an async=1 option, so something like this should 
> get you synced audio with continuous audio packets:
> ffmpeg -y -i 167052_12.dv -c:v copy \
> -af aresample=async=1:min_hard_comp=0.01 -c:a pcm_s16le 167052_12.mov

Thank you for this. With the patch and async, the result is synced and the 
resulting audio was the same as Michael’s patch.

Could you explain why you used min_hard_comp here? IIUC min_hard_comp sets 
a threshold between the strategies of trim/fill or stretch/squeeze to align the 
audio to time; however, the async documentation says "Setting this to 1 will 
enable filling and trimming, larger values represent the maximum amount in 
samples that the data may be stretched or squeezed” so I thought that async=1 
would not permit stretch/squeeze anyway.

> Regards,
> Marton
> 
> 
>> 
>>> Signed-off-by: Marton Balint 
>>> ---
>>> libavformat/dv.c   | 16 ++--
>>> tests/ref/seek/lavf-dv | 18 +-
>>> 2 files changed, 11 insertions(+), 23 deletions(-)
>>> diff --git a/libavformat/dv.c b/libavformat/dv.c
>>> index 3e0d12c0e3..26a78139f5 100644
>>> --- a/libavformat/dv.c
>>> +++ b/libavformat/dv.c
>>> @@ -49,7 +49,6 @@ struct DVDemuxContext {
>>>uint8_t   audio_buf[4][8192];
>>>int   ach;
>>>int   frames;
>>> -uint64_t  abytes;
>>> };
>>> static inline uint16_t dv_audio_12to16(uint16_t sample)
>>> @@ -258,7 +257,7 @@ static int dv_extract_audio_info(DVDemuxContext *c, 
>>> const uint8_t *frame)
>>>c->ast[i] = avformat_new_stream(c->fctx, NULL);
>>>if (!c->ast[i])
>>>break;
>>> -avpriv_set_pts_info(c->ast[i], 64, 1, 30000);
>>> +avpriv_set_pts_info(c->ast[i], 64, c->sys->time_base.num, 
>>> c->sys->time_base.den);
>>>c->ast[i]->codecpar->codec_type = AVMEDIA_TYPE_AUDIO;
>>>c->ast[i]->codecpar->codec_id   = AV_CODEC_ID_PCM_S16LE;
>>> @@ -387,8 +386,7 @@ int avpriv_dv_produce_packet(DVDemuxContext *c, 
>>> AVPacket *pkt,
>>>for (i = 0; i < c->ach; i++) {
>>>c->audio_pkt[i].pos  = pos;
>>>c->audio_pkt[i].size = size;
>>> -c->audio_pkt[i].pts  = c->abytes * 30000 * 8 /
>>> -   c->ast[i]->codecpar->bit_rate;
>>> +c->audio_pkt[i].pts  = (c->sys->height == 720) ? (c->frames & ~1) 
>>> : c->frames;
>>>ppcm[i] = c->audio_buf[i];
>>>}
>>>if (c->ach)
>>> @@ -401,10 +399,7 @@ int avpriv_dv_produce_packet(DVDemuxContext *c, 
>>> AVPacket *pkt,
>>>c->audio_pkt[2].size = c->audio_pkt[3].size = 0;
>>>} else {
>>>c->audio_pkt[0].size = c->audio_pkt[1].size = 0;
>>> -c->abytes   += size;
>>>}
>>> -} else {
>>> -c->abytes += size;
>>>}
>>> 
>>>/* Now it's time to retur

Re: [FFmpeg-devel] [PATCH] avformat/dv: fix timestamps of audio packets in case of dropped corrupt audio frames

2020-10-31 Thread Dave Rice
 st: 0 flags:1 dts: 0.96 pts: 0.96 pos:3456000 
> size:144000
> -ret: 0 st: 1 flags:0  ts:-0.058333
> +ret: 0 st: 1 flags:0  ts:-0.04
> ret: 0 st: 0 flags:1 dts: 0.00 pts: 0.00 pos:  0 
> size:144000
> -ret: 0 st: 1 flags:1  ts: 2.835833
> +ret: 0 st: 1 flags:1  ts: 2.84
> ret: 0 st: 0 flags:1 dts: 0.96 pts: 0.96 pos:3456000 
> size:144000
> ret: 0 st:-1 flags:0  ts: 1.730004
> ret: 0 st: 0 flags:1 dts: 0.96 pts: 0.96 pos:3456000 
> size:144000
> @@ -31,10 +31,10 @@ ret: 0 st: 0 flags:0  ts:-0.48
> ret: 0 st: 0 flags:1 dts: 0.00 pts: 0.00 pos:  0 
> size:144000
> ret: 0 st: 0 flags:1  ts: 2.40
> ret: 0 st: 0 flags:1 dts: 0.96 pts: 0.96 pos:3456000 
> size:144000
> -ret: 0 st: 1 flags:0  ts: 1.306667
> -ret: 0 st: 0 flags:1 dts: 0.96 pts: 0.96 pos:3456000 
> size:144000
> -ret: 0 st: 1 flags:1  ts: 0.200833
> +ret: 0 st: 1 flags:0  ts: 1.32
> ret: 0 st: 0 flags:1 dts: 0.96 pts: 0.96 pos:3456000 
> size:144000
> +ret: 0 st: 1 flags:1  ts: 0.20
> +ret: 0 st: 0 flags:1 dts: 0.20 pts: 0.20 pos: 72 
> size:144000
> ret: 0 st:-1 flags:0  ts:-0.904994
> ret: 0 st: 0 flags:1 dts: 0.00 pts: 0.00 pos:  0 
> size:144000
> ret: 0 st:-1 flags:1  ts: 1.989173
> @@ -43,9 +43,9 @@ ret: 0 st: 0 flags:0  ts: 0.88
> ret: 0 st: 0 flags:1 dts: 0.88 pts: 0.88 pos:3168000 
> size:144000
> ret: 0 st: 0 flags:1  ts:-0.24
> ret: 0 st: 0 flags:1 dts: 0.00 pts: 0.00 pos:  0 
> size:144000
> -ret: 0 st: 1 flags:0  ts: 2.671667
> +ret: 0 st: 1 flags:0  ts: 2.68
> ret: 0 st: 0 flags:1 dts: 0.96 pts: 0.96 pos:3456000 
> size:144000
> -ret: 0 st: 1 flags:1  ts: 1.565833
> +ret: 0 st: 1 flags:1  ts: 1.56
> ret: 0 st: 0 flags:1 dts: 0.96 pts: 0.96 pos:3456000 
> size:144000
> ret: 0 st:-1 flags:0  ts: 0.460008
> ret: 0 st: 0 flags:1 dts: 0.48 pts: 0.48 pos:1728000 
> size:144000
> — 
> 2.26.2

Best Regards,
Dave Rice

___
ffmpeg-devel mailing list
ffmpeg-devel@ffmpeg.org
https://ffmpeg.org/mailman/listinfo/ffmpeg-devel

To unsubscribe, visit link above, or email
ffmpeg-devel-requ...@ffmpeg.org with subject "unsubscribe".

[FFmpeg-devel] [PATCH v2] avformat/webvttdec, enc: correctly process files containing STYLE, REGION blocks

2020-10-13 Thread Dave Evans
This patch fixes the total failure to parse cues when style and region
definition blocks are contained in the input file, and ensures those
blocks are written to the output when copying.

The sample attached needs to be added to samples at the path shown in
the patch in order to validate that the original issue is fixed.

Same as v1 except the test has been changed as requested in the original review.

Cheers,
Dave
WEBVTT

REGION
id:son
width:40%
lines:3
regionanchor:20%,80%
viewportanchor:20%,80%
scroll:up

REGION
id:father
width:40%
lines:3
regionanchor:80%,80%
viewportanchor:80%,80%
scroll:up

STYLE
::cue(i) {
  /* make i tags italic */
  font-style: italic
}

STYLE
::cue(v[voice="Son"]) {
  color: magenta
}

STYLE
::cue(v[voice="Father"]) {
  color: yellow
}

00:10.000 --> 00:25.000 region:son align:left
Can I tell you a joke, Dad?

00:12.500 --> 00:27.500 region:father align:right
Sure, I could do with a laugh.

00:15.000 --> 00:30.000 region:son align:left
Where do sheep go to get their hair cut?

00:17.500 --> 00:32.500 region:father align:right
I don't know, son. Where do sheep go to get their hair cut?

00:20.000 --> 00:35.000 region:son align:left
To the baa-baa shop!

00:22.500 --> 00:37.500 region:father align:right
[facepalms]


0001-avformat-webvttdec-enc-correctly-decode-files-contai.patch
Description: Binary data
___
ffmpeg-devel mailing list
ffmpeg-devel@ffmpeg.org
https://ffmpeg.org/mailman/listinfo/ffmpeg-devel

To unsubscribe, visit link above, or email
ffmpeg-devel-requ...@ffmpeg.org with subject "unsubscribe".

Re: [FFmpeg-devel] [PATCH] avformat/webvttdec, enc: correctly process files containing STYLE, REGION blocks

2020-10-13 Thread Dave Evans
On Mon, Oct 12, 2020 at 11:20 PM Jan Ekström  wrote:
>
> On Tue, Oct 13, 2020 at 12:07 AM Dave Evans  wrote:
> >
> > This patch fixes the total failure to parse cues when style and region
> > definition blocks are contained in the input file, and ensures those blocks
> > are written to the output when copying.
> >
>
> Thank you for taking time to add a FATE test, but unfortunately I am
> not sure if this test tests the functionality you have added.
>
> You have added what appears to be parsing of webvtt styles into
> extradata in the demuxer, and then writing it out in the muxer. And
> the test just outputs this webvtt into ASS, which doesn't keep any of
> this since the decoder doesn't translate the newly inserted things
> into ASS styles :)
>
> So, I think a more fitting test would be to add this to the webvtt
> tests with -c copy and pushing it out as webvtt, I would think?
>
> Jan

Hi Jan,

Thanks for taking the time to review.

In fact, this was the test I intended to add. The test case was
developed before the fix and serves to show the original issue we saw
(total failure to extract any cues under the circumstances of the
test) is fixed and that there should be no regression with that
content, and I believe it serves those purposes. Apologies for not
making this clearer in the original submission.

That said, I agree that it may also be worth adding test coverage for
the encoder change and I will take the time to add that as well.
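
At its simplest, the round trip you describe would be something along these
lines (file names are only placeholders), checking afterwards that the REGION
and STYLE blocks survive in the output:

ffmpeg -i styled.vtt -c:s copy remuxed.vtt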

Cheers,
Dave
___
ffmpeg-devel mailing list
ffmpeg-devel@ffmpeg.org
https://ffmpeg.org/mailman/listinfo/ffmpeg-devel

To unsubscribe, visit link above, or email
ffmpeg-devel-requ...@ffmpeg.org with subject "unsubscribe".

[FFmpeg-devel] [PATCH] avformat/webvttdec, enc: correctly process files containing STYLE, REGION blocks

2020-10-12 Thread Dave Evans
This patch fixes the total failure to parse cues when style and region
definition blocks are contained in the input file, and ensures those blocks
are written to the output when copying.

The test attached needs to be added to samples at the path shown in the
patch in order to validate that the original issue is fixed.

First patch so please go easy :-)

Cheers,
Dave
WEBVTT

REGION
id:son
width:40%
lines:3
regionanchor:20%,80%
viewportanchor:20%,80%
scroll:up

REGION
id:father
width:40%
lines:3
regionanchor:80%,80%
viewportanchor:80%,80%
scroll:up

STYLE
::cue(i) {
  /* make i tags italic */
  font-style: italic
}

STYLE
::cue(v[voice="Son"]) {
  color: magenta
}

STYLE
::cue(v[voice="Father"]) {
  color: yellow
}

00:10.000 --> 00:25.000 region:son align:left
Can I tell you a joke, Dad?

00:12.500 --> 00:27.500 region:father align:right
Sure, I could do with a laugh.

00:15.000 --> 00:30.000 region:son align:left
Where do sheep go to get their hair cut?

00:17.500 --> 00:32.500 region:father align:right
I don't know, son. Where do sheep go to get their hair cut?

00:20.000 --> 00:35.000 region:son align:left
To the baa-baa shop!

00:22.500 --> 00:37.500 region:father align:right
[facepalms]


0001-avformat-webvttdec-enc-correctly-decode-files-contai.patch
Description: Binary data
___
ffmpeg-devel mailing list
ffmpeg-devel@ffmpeg.org
https://ffmpeg.org/mailman/listinfo/ffmpeg-devel

To unsubscribe, visit link above, or email
ffmpeg-devel-requ...@ffmpeg.org with subject "unsubscribe".

Re: [FFmpeg-devel] [PATCH 1/2] avformat/dv: allow returning damaged audio

2020-09-06 Thread Dave Rice

> On Aug 3, 2020, at 5:16 PM, Michael Niedermayer  
> wrote:
> 
> On Mon, Aug 03, 2020 at 10:38:21PM +0200, Marton Balint wrote:
>> 
>> 
>> On Sun, 2 Aug 2020, Dave Rice wrote:
>> 
>>> 
>>> 
>>>> On Aug 1, 2020, at 5:26 PM, Marton Balint  wrote:
>>>> 
>>>> 
>>>> 
>>>> On Sat, 1 Aug 2020, Michael Niedermayer wrote:
>>>> 
>>>>> On Sat, Aug 01, 2020 at 07:28:53PM +0200, Marton Balint wrote:
>>>>>> 
>>>>>> 
>>>>>> On Sat, 1 Aug 2020, Michael Niedermayer wrote:
>>>>>> 
>>>>>>> Fixes: Ticket8762
>>>>>>> Signed-off-by: Michael Niedermayer 
>>>>>>> ---
>>>>>>> libavformat/dv.c | 49 +---
>>>>>>> 1 file changed, 42 insertions(+), 7 deletions(-)
>>>>>> 
>>>>>> If "dv remux loses sync", then the timestamps should be fixed, not
>>>>>> additional packets should be generated based on previously read packet 
>>>>>> data
>>>>>> (which is a fragile approach to begin with, e.g. what if the first frame 
>>>>>> is
>>>>>> the corrupt one?).
>>>>> 
>>>>> Ticket8762 is about stream copy, so if no packets are returned for audio
>>>>> but are for video and just timestamps are updated this would at least on
>>>>> its own probably not work that well.
>>>> 
>>>> If the timestamps are good, a good player should be able to play it
>>>> correctly, even if audio stream is sparse.
>>>> 
>>>> None of the demuxers generate packets because the timestamps are not
>>>> continuous, I just don't think it would be consistent if DV suddenly
>>>> started to do this. E.g. what if the user wants to drop video with
>>>> no audio?
>>> 
>>> In practice, when dv frames with video and no audio are interleaved
>>> within a dv stream that otherwise has both, it is because the playback
>>> videotape player of the dv tape is in pause mode or the tape is damaged.
>>> These frames most commonly are filled with only video dif blocks that note
>>> concealment (so the image is a copy of a prior image) and the audio
>>> source pack metadata is missing, but the payload of the audio dif blocks
>>> is filled with error code so they would decode as silence.
>> 
>> But if the audio source pack metadata is missing, then how can you determine
>> the audio settings?

I tested with QuickTime Player 7 and when frames are read with the audio source 
pack metadata missing, the first audio source pack is used. So these frames 
provide silent output as an earlier audio source pack is used. The disadvantage 
here is that a mid stream change such as 32kHz to 48kHz causes QuickTime Player 
7 to mangle the audio by applying the wrong characteristics.

>> Or the number of samples the errornous frame contains
>> (e.g. 1600 v.s 1602)?
> 
> some testcase would be useful here where this is done clearly wrong currently

I put two additional samples at 
https://archive.org/download/001.dv.audiogap/001.dv.audiogap.dv and 
https://archive.org/download/001.dv.audiogap/DVC00036_001.dv.audiogap.dv. 
Each contains a series of frames in the middle that have all video blocks as 
concealed and all audio blocks are simply error code with no audio source pack.

For each example, both "ffmpeg -i file -c copy out” and “ffmpeg -i file out” 
have a loss of sync in the result and an audio track shorter than the video.

But true, a frame with no audio source pack does not communicate if it should 
be 1600 or 1602 samples.

In the SMPTE specification for DV at 
http://web.archive.org/web/20060927044735/http://www.smpte.org/smpte_store/standards/pdf/s314m.pdf,
 it says on page 18 that for NTSC systems, the five-frame pattern should be: 
1600, 1602, 1602, 1602, 1602. So if a frame has no audio source pack, the 
pattern of prior frames could be used or simply use this pattern upon finding a 
sequence of such frames starting at 1600. Or possibly the relationship between 
the starting time of the audio data and the starting time for the video data 
could be used to guess if 1600 or 1602 maintains the alignment more closely.
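
As a purely illustrative sketch (not code from either patch; the helper name
and the assumption that frames are counted from the start of the pattern are
mine), applying that five-frame pattern to a frame with no audio source pack
could look like:

/* 48 kHz NTSC DV: 1600 + 4 * 1602 = 8008 samples per 5 frames,
 * matching 48000 * 5 * 1001 / 30000. */
static int guess_48k_ntsc_dv_samples(int frame_index)
{
    static const int pattern[5] = { 1600, 1602, 1602, 1602, 1602 };
    return pattern[frame_index % 5];
}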

>> Also maybe setting the CORRUPT packet flag should be done in this case?
> 
> yes was thinking that too, that should be in the next revision

In the reference specification, table 26 shows how the STA value is interpreted 
to note if the frame contains concealed video DIF blocks or not. This doesn’t 
necessarily mean that the frame is corrupt, but that it is the product of data 
concealment caused by a misreading of the DV videotape.

[…]
Dave


___
ffmpeg-devel mailing list
ffmpeg-devel@ffmpeg.org
https://ffmpeg.org/mailman/listinfo/ffmpeg-devel

To unsubscribe, visit link above, or email
ffmpeg-devel-requ...@ffmpeg.org with subject "unsubscribe".

Re: [FFmpeg-devel] [PATCH 1/2] avformat/dv: allow returning damaged audio

2020-08-02 Thread Dave Rice


> On Aug 1, 2020, at 5:26 PM, Marton Balint  wrote:
> 
> 
> 
> On Sat, 1 Aug 2020, Michael Niedermayer wrote:
> 
>> On Sat, Aug 01, 2020 at 07:28:53PM +0200, Marton Balint wrote:
>>> 
>>> 
>>> On Sat, 1 Aug 2020, Michael Niedermayer wrote:
>>> 
 Fixes: Ticket8762
 Signed-off-by: Michael Niedermayer 
 ---
 libavformat/dv.c | 49 +---
 1 file changed, 42 insertions(+), 7 deletions(-)
>>> 
>>> If "dv remux loses sync", then the timestamps should be fixed, not
>>> additional packets should be generated based on previously read packet data
>>> (which is a fragile approach to begin with, e.g. what if the first frame is
>>> the corrupt one?).
>> 
>> Ticket8762 is about stream copy, so if no packets are returned for audio
>> but are for video and just timestamps are updated this would at least on
>> its own probably not work that well.
> 
> If the timestamps are good, a good player should be able to play it 
> correctly, even if audio stream is sparse.
> 
> None of the demuxers generate packets because the timestamps are not 
> continuous, I just don't think it would be consistent if DV suddenly started 
> to do this. E.g. what if the user wants to drop video with no audio?

In practice, when dv frames with video and no audio are interleaved within a dv 
stream that otherwise has both, it is because the playback videotape player of 
the dv tape is in pause mode or the tape is damaged. These frames most commonly 
are filled with only video dif blocks that note concealment (so the image is a 
copy of a prior image) and the audio source pack metadata is missing, but the 
payload of the audio dif blocks is filled with error code so they would decode 
as silence.

I did a test of 114 dv tape transfers:
- 61: no difference in demuxing between pass and drop (using this patch)
- 31: the difference is only in the final frames, so the impact of the demuxer
  option would be hard for the user to see; no loss of sync, just video with no
  audio at the end
- 22: the difference occurs mid-stream; with the drop option the demuxer outputs
  video and audio at different rates when hitting frames with no audio source
  pack, so the output of the demuxer loses sync

>> about the case of a damaged first frame. Do you have a testcase ?
> 
> No, but it can happen, can't it? If the stream starts with no audio for 1 
> second you will have 1 second A-V desync, as far as I see, that is why I 
> believe fixing the timestamps is the proper fix.

Yes this happens (though it is more rare and didn’t occur in the test noted 
above). In that case, ffmpeg shows no audio at all and I’d have to read the 
stream at later frames using -skip_initial_bytes.

> Regards,
> Marton
> ___
> ffmpeg-devel mailing list
> ffmpeg-devel@ffmpeg.org 
> https://ffmpeg.org/mailman/listinfo/ffmpeg-devel 
> 
> 
> To unsubscribe, visit link above, or email
> ffmpeg-devel-requ...@ffmpeg.org  with 
> subject "unsubscribe".

___
ffmpeg-devel mailing list
ffmpeg-devel@ffmpeg.org
https://ffmpeg.org/mailman/listinfo/ffmpeg-devel

To unsubscribe, visit link above, or email
ffmpeg-devel-requ...@ffmpeg.org with subject "unsubscribe".

Re: [FFmpeg-devel] [PATCH] avformat/mov: fix memleak

2020-06-28 Thread Dave Rice
Hi Andreas,

> On Jun 27, 2020, at 5:09 PM, Andreas Rheinhardt 
>  wrote:
> 
> Zhao Zhili:
>> Remove the check on dv_demux since dv_fctx will leak if allocate
>> dv_demux failed.
>> ---
>> libavformat/mov.c | 7 +++
>> 1 file changed, 3 insertions(+), 4 deletions(-)
>> 
>> diff --git a/libavformat/mov.c b/libavformat/mov.c
>> index adc52de947..f179b6efdd 100644
>> --- a/libavformat/mov.c
>> +++ b/libavformat/mov.c
>> @@ -7357,10 +7357,9 @@ static int mov_read_close(AVFormatContext *s)
>> av_freep(&sc->coll);
>> }
>> 
>> -if (mov->dv_demux) {
>> -avformat_free_context(mov->dv_fctx);
>> -mov->dv_fctx = NULL;
>> -}
>> +av_freep(&mov->dv_demux);
>> +avformat_free_context(mov->dv_fctx);
>> +mov->dv_fctx = NULL;
>> 
>> if (mov->meta_keys) {
>> for (i = 1; i < mov->meta_keys_count; i++) {
>> 
> Do you have a sample for this? I am asking because there are more
> memleaks related to dv audio and I have a patch [1] for them, but I
> never found any sample for dv in mov/mp4, so it was never applied.

I’m working with a lot of DV lately. Could you clarify what type of sample 
you are looking for?

> If I am not mistaken, then you are not only fixing a leak of dv_fctx in
> case allocation of dv_demux failed; you are also fixing a leak of
> dv_demux itself in the ordinary case where it could be successfully
> allocated. This should be reflected in the commit message.
> 
> - Andreas
> 
> [1]:
> https://patchwork.ffmpeg.org/project/ffmpeg/patch/20190916155502.17579-3-andreas.rheinha...@gmail.com/
> ___
> ffmpeg-devel mailing list
> ffmpeg-devel@ffmpeg.org
> https://ffmpeg.org/mailman/listinfo/ffmpeg-devel
> 
> To unsubscribe, visit link above, or email
> ffmpeg-devel-requ...@ffmpeg.org with subject "unsubscribe".

___
ffmpeg-devel mailing list
ffmpeg-devel@ffmpeg.org
https://ffmpeg.org/mailman/listinfo/ffmpeg-devel

To unsubscribe, visit link above, or email
ffmpeg-devel-requ...@ffmpeg.org with subject "unsubscribe".

Re: [FFmpeg-devel] [PATCH] OS/2:Support linking against libcx

2020-06-13 Thread Dave Yeo

On 06/13/20 07:03 AM, KO Myung-Hun wrote:

Hi/2.

Dave Yeo wrote:

On 06/11/20 10:26 AM, Michael Niedermayer wrote:

On Wed, Jun 10, 2020 at 09:24:51PM -0700, Dave Yeo wrote:

On 06/10/20 02:09 PM, Michael Niedermayer wrote:

On Tue, Jun 09, 2020 at 11:11:48PM -0700, Dave Yeo wrote:

Hi, could I get this pushed to trunk and the 4.3 branch? Fixes a
build break
in libavformat/ip.c (implicit declaration of function
'getaddrinfo') and
also need the prototype.
Thanks,
Dave

it seems this breaks build on linux


Sorry about that, I'll test on Linux in the future.
Here's a better patch as it doesn't touch configure.
Thanks,
Dave


I can confirm this does not break build anymore, but iam not OS/2
maintainer nor do i have such box so ill leave review / application
to someone better suited for this

thx
[...]


Fair enough, I'll CC KOMH


I have no problems at all with gcc 9.1.0 because FFmpeg already has
replacements for missing functions such as getaddrinfo().

What is your build environment ? Maybe is libcx linked by default ?



The problem only occurs if I link in libcx, LIBS=-lcx.
Note that with libcx, the build system finds its getaddrinfo() and later 
fails due to a missing prototype; it will also fail because part of the addrinfo 
struct uses a different type (char vs int IIRC). Without LIBS=-lcx, the 
build falls back to FFmpeg's builtin getaddrinfo().
One of the main reasons to link to libcx is for the exceptq support. 
Also as it is recommended, others are likely to do the same thing.

Dave
___
ffmpeg-devel mailing list
ffmpeg-devel@ffmpeg.org
https://ffmpeg.org/mailman/listinfo/ffmpeg-devel

To unsubscribe, visit link above, or email
ffmpeg-devel-requ...@ffmpeg.org with subject "unsubscribe".

Re: [FFmpeg-devel] [PATCH] OS/2:Support linking against libcx

2020-06-11 Thread Dave Yeo

On 06/11/20 10:26 AM, Michael Niedermayer wrote:

On Wed, Jun 10, 2020 at 09:24:51PM -0700, Dave Yeo wrote:

On 06/10/20 02:09 PM, Michael Niedermayer wrote:

On Tue, Jun 09, 2020 at 11:11:48PM -0700, Dave Yeo wrote:

Hi, could I get this pushed to trunk and the 4.3 branch? Fixes a build break
in libavformat/ip.c (implicit declaration of function 'getaddrinfo') and
also need the prototype.
Thanks,
Dave

it seems this breaks build on linux


Sorry about that, I'll test on Linux in the future.
Here's a better patch as it doesn't touch configure.
Thanks,
Dave


I can confirm this does not break build anymore, but iam not OS/2
maintainer nor do i have such box so ill leave review / application
to someone better suited for this

thx
[...]


Fair enough, I'll CC KOMH
Dave

___
ffmpeg-devel mailing list
ffmpeg-devel@ffmpeg.org
https://ffmpeg.org/mailman/listinfo/ffmpeg-devel

To unsubscribe, visit link above, or email
ffmpeg-devel-requ...@ffmpeg.org with subject "unsubscribe".

Re: [FFmpeg-devel] [PATCH] OS/2:Support linking against libcx

2020-06-10 Thread Dave Yeo

On 06/10/20 02:09 PM, Michael Niedermayer wrote:

On Tue, Jun 09, 2020 at 11:11:48PM -0700, Dave Yeo wrote:

Hi, could I get this pushed to trunk and the 4.3 branch? Fixes a build break
in libavformat/ip.c (implicit declaration of function 'getaddrinfo') and
also need the prototype.
Thanks,
Dave

it seems this breaks build on linux


Sorry about that, I'll test on Linux in the future.
Here's a better patch as it doesn't touch configure.
Thanks,
Dave
From 033add727ca8c512a3f4a7d24fde88bb8c0455c8 Mon Sep 17 00:00:00 2001
From: Dave Yeo 
Date: Wed, 10 Jun 2020 18:55:44 -0700
Subject: [PATCH] libavformat/os_support.h:OS/2, support linking against libcx

Libcx contains extensions to libc such as getaddrinfo(), mmap() and poll(). While recommended to link against, it is optional

Signed-off-by: Dave Yeo 
---
 libavformat/os_support.h | 4 
 1 file changed, 4 insertions(+)

diff --git a/libavformat/os_support.h b/libavformat/os_support.h
index 5e6b32d2dc..6c60844b7d 100644
--- a/libavformat/os_support.h
+++ b/libavformat/os_support.h
@@ -56,6 +56,10 @@
 #  define fstat(f,s) _fstati64((f), (s))
 #endif /* defined(_WIN32) */
 
+#if defined (__OS2__) && defined (HAVE_GETADDRINFO)
+#include <libcx/net.h>
+#define HAVE_STRUCT_ADDRINFO 1
+#endif
 
 #ifdef __ANDROID__
 #  if HAVE_UNISTD_H
-- 
2.11.0

___
ffmpeg-devel mailing list
ffmpeg-devel@ffmpeg.org
https://ffmpeg.org/mailman/listinfo/ffmpeg-devel

To unsubscribe, visit link above, or email
ffmpeg-devel-requ...@ffmpeg.org with subject "unsubscribe".

[FFmpeg-devel] [PATCH] OS/2:Support linking against libcx

2020-06-10 Thread Dave Yeo
Hi, could I get this pushed to trunk and the 4.3 branch? Fixes a build 
break in libavformat/ip.c (implicit declaration of function 
'getaddrinfo') and also need the prototype.

Thanks,
Dave
From f9fbdaaf6cdb6f886cbdf31c1983e452567cd857 Mon Sep 17 00:00:00 2001
From: Dave Yeo 
Date: Tue, 9 Jun 2020 22:51:53 -0700
Subject: [PATCH] OS/2:Support linking against libcx

Libcx contains extensions to libc such as getaddrinfo(), mmap() and poll(). While recommended to link against, it is optional

Signed-off-by: Dave Yeo 
---
 configure| 1 +
 libavformat/os_support.h | 3 +++
 2 files changed, 4 insertions(+)

diff --git a/configure b/configure
index 8569a60bf8..24ad990b52 100755
--- a/configure
+++ b/configure
@@ -5997,6 +5997,7 @@ if ! disabled network; then
 check_func inet_aton $network_extralibs
 
 check_type netdb.h "struct addrinfo"
+check_type libcx/net.h "struct addrinfo"
 check_type netinet/in.h "struct group_source_req" -D_BSD_SOURCE
 check_type netinet/in.h "struct ip_mreq_source" -D_BSD_SOURCE
 check_type netinet/in.h "struct ipv6_mreq" -D_DARWIN_C_SOURCE
diff --git a/libavformat/os_support.h b/libavformat/os_support.h
index 5e6b32d2dc..1904fc8d5d 100644
--- a/libavformat/os_support.h
+++ b/libavformat/os_support.h
@@ -56,6 +56,9 @@
 #  define fstat(f,s) _fstati64((f), (s))
 #endif /* defined(_WIN32) */
 
+#if defined (__OS2__) && defined (HAVE_GETADDRINFO)
+#include <libcx/net.h>
+#endif
 
 #ifdef __ANDROID__
 #  if HAVE_UNISTD_H
-- 
2.11.0

___
ffmpeg-devel mailing list
ffmpeg-devel@ffmpeg.org
https://ffmpeg.org/mailman/listinfo/ffmpeg-devel

To unsubscribe, visit link above, or email
ffmpeg-devel-requ...@ffmpeg.org with subject "unsubscribe".

[FFmpeg-devel] [PATCH] compat/os2threads:define INCL_DOSERRORS

2020-02-12 Thread Dave Yeo


From 6182a7f6b83905fb2315b416ae714a329ec2d0df Mon Sep 17 00:00:00 2001
From: Dave Yeo 
Date: Wed, 12 Feb 2020 20:13:00 -0800
Subject: [PATCH] compat/os2threads:define INCL_DOSERRORS

This is needed to pull in the define for ERROR_TIMEOUT

Signed-off-by: Dave Yeo 
---
 compat/os2threads.h | 1 +
 1 file changed, 1 insertion(+)

diff --git a/compat/os2threads.h b/compat/os2threads.h
index eec6f40ae7..a061eaa63d 100644
--- a/compat/os2threads.h
+++ b/compat/os2threads.h
@@ -27,6 +27,7 @@
 #define COMPAT_OS2THREADS_H
 
 #define INCL_DOS
+#define INCL_DOSERRORS
 #include <os2.h>
 
 #undef __STRICT_ANSI__  /* for _beginthread() */
-- 
2.11.0

___
ffmpeg-devel mailing list
ffmpeg-devel@ffmpeg.org
https://ffmpeg.org/mailman/listinfo/ffmpeg-devel

To unsubscribe, visit link above, or email
ffmpeg-devel-requ...@ffmpeg.org with subject "unsubscribe".

Re: [FFmpeg-devel] [PATCH v1] avfilter: add colorstats, colorrgbstats, coloryuvstats video filter

2019-12-30 Thread Dave Rice

> On Dec 27, 2019, at 10:49 AM, Paul B Mahol  wrote:
> 
> That is because signalstats is doing more stuff.

signalstats includes options to disable some of the calculations; possibly this 
could be extended to enable or disable the ones you want. It would be 
interesting to merge these ideas rather than have two filters with such a 
substantial overlap.
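
For comparison, the per-frame numbers signalstats already exposes can be read
out along these lines (input.mov is only a placeholder):

ffprobe -f lavfi -i "movie=input.mov,signalstats" \
    -show_entries frame_tags=lavfi.signalstats.YMIN,lavfi.signalstats.YAVG,lavfi.signalstats.YMAX \
    -of csv=p=0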

> On 12/27/19, Limin Wang  wrote:
>> On Fri, Dec 27, 2019 at 03:20:19PM +0100, Paul B Mahol wrote:
>>> On 12/27/19, Limin Wang  wrote:
 On Fri, Dec 27, 2019 at 12:35:25PM +0100, Paul B Mahol wrote:
> You are duplicating some functionality of signalstats filter.
> 
 Yes, I have other function need to use the mean and stdev which is
 support in showinfo filter(only 8bit and don't support packed format,
 no multi-thread), and signalstats don't support rgb format and don't
 have stdev, also it have too many other function and difficult to change
 it, so I think it's more simple to create a new filter to do it.
 
>>> 
>>> No, unacceptable. use signalstats filter.
>> 
>> The performance is one major reason also, below is the profiling result for
>> performance:
>> 
>> ./ffmpeg -nostats -f lavfi -i testsrc2=4k:d=2 -vf
>> bench=start,signalstats,bench=stop -f null -
>> [bench @ 0x3fb9080] t:0.161589 avg:0.165756 max:0.169923 min:0.161589
>> [bench @ 0x3fb9080] t:0.160334 avg:0.163948 max:0.169923 min:0.160334
>> [bench @ 0x3fb9080] t:0.160345 avg:0.163047 max:0.169923 min:0.160334
>> [bench @ 0x3fb9080] t:0.160924 avg:0.162623 max:0.169923 min:0.160334
>> [bench @ 0x3fb9080] t:0.160318 avg:0.162238 max:0.169923 min:0.160318
>> 
>> ./ffmpeg -nostats -f lavfi -i testsrc2=4k:d=2 -vf
>> bench=start,colorstats,bench=stop -f null -
>> [bench @ 0x26f6100] t:0.012596 avg:0.012612 max:0.012628 min:0.012596
>> [bench @ 0x26f6100] t:0.012542 avg:0.012588 max:0.012628 min:0.012542
>> [bench @ 0x26f6100] t:0.012529 avg:0.012573 max:0.012628 min:0.012529
>> [bench @ 0x26f6100] t:0.012532 avg:0.012565 max:0.012628 min:0.012529
>> [bench @ 0x26f6100] t:0.012527 avg:0.012559 max:0.012628 min:0.012527
>> [bench @ 0x26f6100] t:0.012525 avg:0.012554 max:0.012628 min:0.012525
>> [bench @ 0x26f6100] t:0.012522 avg:0.012550 max:0.012628 min:0.012522
>> [bench @ 0x26f6100] t:0.012552 avg:0.012550 max:0.012628 min:0.012522
>> 
>> 
>>> 
 
> On 12/27/19, lance.lmw...@gmail.com  wrote:
>> From: Limin Wang 
>> 
>> Signed-off-by: Limin Wang 
>> ---
>> doc/filters.texi|  74 ++
>> libavfilter/Makefile|   1 +
>> libavfilter/allfilters.c|   3 +
>> libavfilter/vf_colorstats.c | 461
>> 
>> 4 files changed, 539 insertions(+)
>> create mode 100644 libavfilter/vf_colorstats.c
>> 
>> diff --git a/doc/filters.texi b/doc/filters.texi
>> index 8c5d3a5760..81968b2c17 100644
>> --- a/doc/filters.texi
>> +++ b/doc/filters.texi
>> @@ -7695,6 +7695,80 @@ For example to convert the input to
>> SMPTE-240M,
>> use
>> the command:
>> colorspace=smpte240m
>> @end example
>> 
>> +@section colorstats, colorrgbstats, coloryuvstats
>> +The filter provides statistical video measurements such as mean,
>> minimum,
>> maximum and
>> +standard deviation for each frame. The user can check for
>> unexpected/accidental errors
>> +very quickly with them.
>> +
>> +@var{colorrgbstats} report the color stats for RGB input video,
>> @var{coloryuvstats}
>> +to an YUV input video.
>> +
>> +These filters accept the following parameters:
>> +@table @option
>> +@item planes
>> +Set which planes to filter. Default is only the first plane.
>> +@end table
>> +
>> +By default the filter will report these metadata values if the
>> planes
>> +are processed:
>> +
>> +@table @option
>> +@item min.y, min.u, min.v, min.r, min.g, min.b, min.a
>> +Display the minimal Y/U/V/R/G/B/A plane value contained within the
>> input
>> frame.
>> +Expressed in range of [0, 1<> +
>> +@item pmin.y, pmin.u, pmin.v, pmin.r, pmin.g, pmin.b, min.a
>> +Display the minimal Y/U/V/R/G/B/A plane percentage of maximum
>> contained
>> within
>> +the input frame. Expressed in range of [0, 1]
>> +
>> +@item max.y, max.u, max.v, max.r, max.g, max.b, max.a
>> +Display the maximum Y/U/V/R/G/B/A plane value contained within the
>> input
>> frame.
>> +Expressed in range of [0, 1<> +
>> +@item pmax.y, pmax.u, pmax.v, pmax.r, pmax.g, pmax.b, pmax.a
>> +Display the maximum Y/U/V/R/G/B/A plane percentage of maximum
>> contained
>> within
>> +the input frame. Expressed in range of [0, 1]
>> +
>> +@item mean.y, mean.u, mean.v, mean.r, mean.g, mean.b, mean.a
>> +Display the Y/U/V/R/G/B/A plane mean value contained within the
>> input
>> frame.
>> +Expressed in range of [0, 1<> +

Re: [FFmpeg-devel] [PATCH] avformat/movenc: always write a colr atom

2019-07-16 Thread Dave Rice
Hi James,

> On Jul 16, 2019, at 9:47 AM, James Almer  wrote:
> 
> -{ "write_colr", "Write colr atom (Experimental, may be renamed or 
> changed, do not use from scripts)", 0, AV_OPT_TYPE_CONST, {.i64 = 
> FF_MOV_FLAG_WRITE_COLR}, INT_MIN, INT_MAX, AV_OPT_FLAG_ENCODING_PARAM, 
> "movflags" },

The comment said specifically not to use this option in scripts; however, there 
were ffmpeg users (I admit myself as one) who wanted both to use scripts and 
write valid QuickTime files with colr data. I’d suggest permitting the option 
to remain but to change the default behavior from disabled to enabled. If I 
understand correctly, after this patch, encoding with `-movflags write_colr` 
would cause ffmpeg to fail with an error from the mov muxer.
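
For illustration, the kind of existing script invocation I have in mind is
along these lines (file names are placeholders):

ffmpeg -i input.mov -c copy -movflags write_colr output.mov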

Dave

___
ffmpeg-devel mailing list
ffmpeg-devel@ffmpeg.org
https://ffmpeg.org/mailman/listinfo/ffmpeg-devel

To unsubscribe, visit link above, or email
ffmpeg-devel-requ...@ffmpeg.org with subject "unsubscribe".

Re: [FFmpeg-devel] [PATCHv3 1/2] ffplay: options to specify window position

2018-10-04 Thread Dave Rice
From caa816d70e69f85d49556ff341addab24ebcd942 Mon Sep 17 00:00:00 2001
From: Dave Rice 
Date: Mon, 1 Oct 2018 17:07:44 -0400
Subject: [PATCH 1/2] ffplay: options to specify window position

---
 doc/ffplay.texi  | 4 
 fftools/ffplay.c | 6 +-
 2 files changed, 9 insertions(+), 1 deletion(-)

diff --git a/doc/ffplay.texi b/doc/ffplay.texi
index dcb86ce13c..99e1d7468a 100644
--- a/doc/ffplay.texi
+++ b/doc/ffplay.texi
@@ -74,6 +74,10 @@ as 100.
 Force format.
 @item -window_title @var{title}
 Set window title (default is the input filename).
+@item -left @var{title}
+Set the x position for the left of the window (default is a centered window).
+@item -top @var{title}
+Set the y position for the top of the window (default is a centered window).
 @item -loop @var{number}
 Loops movie playback  times. 0 means forever.
 @item -showmode @var{mode}
diff --git a/fftools/ffplay.c b/fftools/ffplay.c
index e375a32ec2..ab1f9faccf 100644
--- a/fftools/ffplay.c
+++ b/fftools/ffplay.c
@@ -314,6 +314,8 @@ static int default_width  = 640;
 static int default_height = 480;
 static int screen_width  = 0;
 static int screen_height = 0;
+static int screen_left = SDL_WINDOWPOS_CENTERED;
+static int screen_top = SDL_WINDOWPOS_CENTERED;
 static int audio_disable;
 static int video_disable;
 static int subtitle_disable;
@@ -1346,7 +1348,7 @@ static int video_open(VideoState *is)
 SDL_SetWindowTitle(window, window_title);
 
 SDL_SetWindowSize(window, w, h);
-SDL_SetWindowPosition(window, SDL_WINDOWPOS_CENTERED, 
SDL_WINDOWPOS_CENTERED);
+SDL_SetWindowPosition(window, screen_left, screen_top);
 if (is_full_screen)
 SDL_SetWindowFullscreen(window, SDL_WINDOW_FULLSCREEN_DESKTOP);
 SDL_ShowWindow(window);
@@ -3602,6 +3604,8 @@ static const OptionDef options[] = {
 { "framedrop", OPT_BOOL | OPT_EXPERT, {  }, "drop frames when 
cpu is too slow", "" },
 { "infbuf", OPT_BOOL | OPT_EXPERT, { _buffer }, "don't limit the 
input buffer size (useful with realtime streams)", "" },
 { "window_title", OPT_STRING | HAS_ARG, { _title }, "set window 
title", "window title" },
+{ "left", OPT_INT | HAS_ARG | OPT_EXPERT, { _left }, "set the x 
position for the left of the window", "x pos" },
+{ "top", OPT_INT | HAS_ARG | OPT_EXPERT, { _top }, "set the y 
position for the top of the window", "y pos" },
 #if CONFIG_AVFILTER
 { "vf", OPT_EXPERT | HAS_ARG, { .func_arg = opt_add_vfilter }, "set video 
filters", "filter_graph" },
 { "af", OPT_STRING | HAS_ARG, {  }, "set audio filters", 
"filter_graph" },
-- 
2.19.0

___
ffmpeg-devel mailing list
ffmpeg-devel@ffmpeg.org
http://ffmpeg.org/mailman/listinfo/ffmpeg-devel


Re: [FFmpeg-devel] [PATCH 2/2] avdevice/sdl2 : add option to set window position

2018-10-03 Thread Dave Rice

> On Oct 2, 2018, at 1:32 AM, Gyan  wrote:
> 
> On Tue, Oct 2, 2018 at 2:47 AM Dave Rice  wrote:
> 
>> Allows arrangement of multiple windows such as:
>> ffmpeg -re -f lavfi -i mandelbrot -f sdl -window_x 1 -window_y 1
>> mandelbrot -vf waveform,format=yuv420p -f sdl -window_x 641 -window_y 1
>> waveform -vf vectorscope,format=yuv420p -f sdl -window_x 1 -window_y 481
>> vectorscop
>> 
>> From 00438983c96b5db227b9975a2c160fc6aac5219d Mon Sep 17 00:00:00 2001
>> From: Dave Rice 
>> Date: Mon, 1 Oct 2018 17:08:35 -0400
>> Subject: [PATCH 2/2] avdevice/sdl2 : add option to set window position
>> 
>> +if (!sdl->window_x)
>> +sdl->window_x = SDL_WINDOWPOS_CENTERED;
>> +if (!sdl->window_y)
>> +sdl->window_y = SDL_WINDOWPOS_CENTERED;
>> +SDL_SetWindowPosition(sdl->window, sdl->window_x, sdl->window_y);
>> 
> 
> What happens if the user value implies fully or partially out-of-canvas
> rendering?

I attempted to add an error message but am uncertain how to access the width 
and height of the canvas used. Any advice?
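
One way this could perhaps be probed, sketched here only as an assumption (the
helper name and the choice of display 0 are mine; SDL_GetDisplayBounds reports
the display rectangle to compare against):

#include <SDL.h>
#include "libavutil/log.h"

static void warn_if_offscreen(void *log_ctx, int x, int y)
{
    SDL_Rect bounds;
    /* warn when the requested window origin lies outside display 0 */
    if (SDL_GetDisplayBounds(0, &bounds) == 0 &&
        (x < bounds.x || y < bounds.y ||
         x >= bounds.x + bounds.w || y >= bounds.y + bounds.h))
        av_log(log_ctx, AV_LOG_WARNING,
               "window position (%d,%d) lies outside display 0 (%dx%d at %d,%d)\n",
               x, y, bounds.w, bounds.h, bounds.x, bounds.y);
}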

> For the former case, we may want to print a warning and reposition the
> window.
> If a partial window is drawable, then negative values can be valid and the
> lower range bound should be INT_MIN
> Also, the user can't position a window at top-left (0,0), so the default
> should probably be INT_MAX.
> 
> Gyan
> ___
> ffmpeg-devel mailing list
> ffmpeg-devel@ffmpeg.org
> http://ffmpeg.org/mailman/listinfo/ffmpeg-devel

___
ffmpeg-devel mailing list
ffmpeg-devel@ffmpeg.org
http://ffmpeg.org/mailman/listinfo/ffmpeg-devel


[FFmpeg-devel] [PATCHv2 1/2] ffplay: options to specify window position

2018-10-03 Thread Dave Rice
Thanks Marton for comments. Here is a revision to the first patch.

From 3fe6a9e5279a280af9a06843621737ddc44529cc Mon Sep 17 00:00:00 2001
From: Dave Rice 
Date: Mon, 1 Oct 2018 17:07:44 -0400
Subject: [PATCHv2 1/2] ffplay: options to specify window position

---
 doc/ffplay.texi  | 4 
 fftools/ffplay.c | 6 +-
 2 files changed, 9 insertions(+), 1 deletion(-)

diff --git a/doc/ffplay.texi b/doc/ffplay.texi
index dcb86ce13c..a3da2cd570 100644
--- a/doc/ffplay.texi
+++ b/doc/ffplay.texi
@@ -74,6 +74,10 @@ as 100.
 Force format.
 @item -window_title @var{title}
 Set window title (default is the input filename).
+@item -screen_left @var{title}
+Set the x position for the left of the window (default is a centered window).
+@item -screen_top @var{title}
+Set the y position for the top of the window (default is a centered window).
 @item -loop @var{number}
 Loops movie playback  times. 0 means forever.
 @item -showmode @var{mode}
diff --git a/fftools/ffplay.c b/fftools/ffplay.c
index e375a32ec2..6cc59b4d33 100644
--- a/fftools/ffplay.c
+++ b/fftools/ffplay.c
@@ -314,6 +314,8 @@ static int default_width  = 640;
 static int default_height = 480;
 static int screen_width  = 0;
 static int screen_height = 0;
+static int left = SDL_WINDOWPOS_CENTERED;
+static int top = SDL_WINDOWPOS_CENTERED;
 static int audio_disable;
 static int video_disable;
 static int subtitle_disable;
@@ -1346,7 +1348,7 @@ static int video_open(VideoState *is)
 SDL_SetWindowTitle(window, window_title);
 
 SDL_SetWindowSize(window, w, h);
-SDL_SetWindowPosition(window, SDL_WINDOWPOS_CENTERED, 
SDL_WINDOWPOS_CENTERED);
+SDL_SetWindowPosition(window, left, top);
 if (is_full_screen)
 SDL_SetWindowFullscreen(window, SDL_WINDOW_FULLSCREEN_DESKTOP);
 SDL_ShowWindow(window);
@@ -3602,6 +3604,8 @@ static const OptionDef options[] = {
 { "framedrop", OPT_BOOL | OPT_EXPERT, {  }, "drop frames when 
cpu is too slow", "" },
 { "infbuf", OPT_BOOL | OPT_EXPERT, { _buffer }, "don't limit the 
input buffer size (useful with realtime streams)", "" },
 { "window_title", OPT_STRING | HAS_ARG, { _title }, "set window 
title", "window title" },
+{ "left", OPT_INT | HAS_ARG | OPT_EXPERT, {  }, "set the x position 
for the left of the window", "x pos" },
+{ "top", OPT_INT | HAS_ARG | OPT_EXPERT, {  }, "set the y position for 
the top of the window", "y pos" },
 #if CONFIG_AVFILTER
 { "vf", OPT_EXPERT | HAS_ARG, { .func_arg = opt_add_vfilter }, "set video 
filters", "filter_graph" },
 { "af", OPT_STRING | HAS_ARG, {  }, "set audio filters", 
"filter_graph" },
-- 
2.19.0

___
ffmpeg-devel mailing list
ffmpeg-devel@ffmpeg.org
http://ffmpeg.org/mailman/listinfo/ffmpeg-devel


[FFmpeg-devel] [PATCH 2/2] avdevice/sdl2 : add option to set window position

2018-10-01 Thread Dave Rice
Allows arrangement of multiple windows such as:
ffmpeg -re -f lavfi -i mandelbrot -f sdl -window_x 1 -window_y 1 mandelbrot -vf 
waveform,format=yuv420p -f sdl -window_x 641 -window_y 1 waveform -vf 
vectorscope,format=yuv420p -f sdl -window_x 1 -window_y 481 vectorscop

From 00438983c96b5db227b9975a2c160fc6aac5219d Mon Sep 17 00:00:00 2001
From: Dave Rice 
Date: Mon, 1 Oct 2018 17:08:35 -0400
Subject: [PATCH 2/2] avdevice/sdl2 : add option to set window position

---
 libavdevice/sdl2.c | 9 +
 1 file changed, 9 insertions(+)

diff --git a/libavdevice/sdl2.c b/libavdevice/sdl2.c
index da5143078e..69c541da23 100644
--- a/libavdevice/sdl2.c
+++ b/libavdevice/sdl2.c
@@ -40,6 +40,7 @@ typedef struct {
 SDL_Renderer *renderer;
 char *window_title;
 int window_width, window_height;  /**< size of the window */
+int window_x, window_y;   /**< position of the window */
 int window_fullscreen;
 int window_borderless;
 int enable_quit_action;
@@ -217,6 +218,12 @@ static int sdl2_write_header(AVFormatContext *s)
 
 SDL_SetWindowTitle(sdl->window, sdl->window_title);
 
+if (!sdl->window_x)
+sdl->window_x = SDL_WINDOWPOS_CENTERED;
+if (!sdl->window_y)
+sdl->window_y = SDL_WINDOWPOS_CENTERED;
+SDL_SetWindowPosition(sdl->window, sdl->window_x, sdl->window_y);
+
 sdl->texture = SDL_CreateTexture(sdl->renderer, sdl->texture_fmt, 
SDL_TEXTUREACCESS_STREAMING,
  codecpar->width, codecpar->height);
 
@@ -337,6 +344,8 @@ static int sdl2_write_packet(AVFormatContext *s, AVPacket 
*pkt)
 static const AVOption options[] = {
 { "window_title",  "set SDL window title",   OFFSET(window_title), 
AV_OPT_TYPE_STRING, { .str = NULL }, 0, 0, AV_OPT_FLAG_ENCODING_PARAM },
 { "window_size",   "set SDL window forced size", OFFSET(window_width), 
AV_OPT_TYPE_IMAGE_SIZE, { .str = NULL }, 0, 0, AV_OPT_FLAG_ENCODING_PARAM },
+{ "window_x",  "set SDL window x position",  OFFSET(window_x), 
AV_OPT_TYPE_INT,{ .i64 = 0 },0, INT_MAX, AV_OPT_FLAG_ENCODING_PARAM 
},
+{ "window_y",  "set SDL window y position",  OFFSET(window_y), 
AV_OPT_TYPE_INT,{ .i64 = 0 },0, INT_MAX, AV_OPT_FLAG_ENCODING_PARAM 
},
 { "window_fullscreen", "set SDL window fullscreen",  
OFFSET(window_fullscreen), AV_OPT_TYPE_BOOL,  { .i64 = 0 },0, 1, 
AV_OPT_FLAG_ENCODING_PARAM },
 { "window_borderless", "set SDL window border off",  
OFFSET(window_borderless), AV_OPT_TYPE_BOOL,  { .i64 = 0 },0, 1, 
AV_OPT_FLAG_ENCODING_PARAM },
 { "window_enable_quit", "set if quit action is available", 
OFFSET(enable_quit_action), AV_OPT_TYPE_INT, {.i64=1},   0, 1, 
AV_OPT_FLAG_ENCODING_PARAM },
-- 
2.19.0

___
ffmpeg-devel mailing list
ffmpeg-devel@ffmpeg.org
http://ffmpeg.org/mailman/listinfo/ffmpeg-devel


[FFmpeg-devel] [PATCH 1/2] ffplay: options to specify window position

2018-10-01 Thread Dave Rice
From 14d6833b564bd672613d50ecc4c3ede1768eee37 Mon Sep 17 00:00:00 2001
From: Dave Rice 
Date: Mon, 1 Oct 2018 17:07:44 -0400
Subject: [PATCH 1/2] ffplay: options to specify window position

---
 fftools/ffplay.c | 10 +-
 1 file changed, 9 insertions(+), 1 deletion(-)

diff --git a/fftools/ffplay.c b/fftools/ffplay.c
index e375a32ec2..e1ec2e9df2 100644
--- a/fftools/ffplay.c
+++ b/fftools/ffplay.c
@@ -314,6 +314,8 @@ static int default_width  = 640;
 static int default_height = 480;
 static int screen_width  = 0;
 static int screen_height = 0;
+static int window_x;
+static int window_y;
 static int audio_disable;
 static int video_disable;
 static int subtitle_disable;
@@ -1346,7 +1348,11 @@ static int video_open(VideoState *is)
 SDL_SetWindowTitle(window, window_title);
 
 SDL_SetWindowSize(window, w, h);
-SDL_SetWindowPosition(window, SDL_WINDOWPOS_CENTERED, 
SDL_WINDOWPOS_CENTERED);
+if (!window_x)
+window_x = SDL_WINDOWPOS_CENTERED;
+if (!window_y)
+window_y = SDL_WINDOWPOS_CENTERED;
+SDL_SetWindowPosition(window, window_x, window_y);
 if (is_full_screen)
 SDL_SetWindowFullscreen(window, SDL_WINDOW_FULLSCREEN_DESKTOP);
 SDL_ShowWindow(window);
@@ -3602,6 +3608,8 @@ static const OptionDef options[] = {
 { "framedrop", OPT_BOOL | OPT_EXPERT, {  }, "drop frames when 
cpu is too slow", "" },
 { "infbuf", OPT_BOOL | OPT_EXPERT, { _buffer }, "don't limit the 
input buffer size (useful with realtime streams)", "" },
 { "window_title", OPT_STRING | HAS_ARG, { _title }, "set window 
title", "window title" },
+{ "window_x", OPT_INT | HAS_ARG | OPT_EXPERT, { _x }, "set the x 
position of the window", "x pos" },
+{ "window_y", OPT_INT | HAS_ARG | OPT_EXPERT, { _y }, "set the y 
position of the window", "y pos" },
 #if CONFIG_AVFILTER
 { "vf", OPT_EXPERT | HAS_ARG, { .func_arg = opt_add_vfilter }, "set video 
filters", "filter_graph" },
 { "af", OPT_STRING | HAS_ARG, {  }, "set audio filters", 
"filter_graph" },
-- 
2.19.0


___
ffmpeg-devel mailing list
ffmpeg-devel@ffmpeg.org
http://ffmpeg.org/mailman/listinfo/ffmpeg-devel


Re: [FFmpeg-devel] [PATCH] avformat/matroskaenc: add reserve free space option

2018-09-26 Thread Dave Rice

> On Sep 12, 2018, at 11:56 AM, Sigríður Regína Sigurþórsdóttir 
>  wrote:
> 
> On Thu, Sep 6, 2018 at 3:31 PM James Almer  <mailto:jamr...@gmail.com>> wrote:
>> 
>> On 9/6/2018 4:18 PM, James Darnley wrote:
>>> On 2018-09-06 19:39, Sigríður Regína Sigurþórsdóttir wrote:
>>>> +if (s->metadata_header_padding) {
>>>> +if (s->metadata_header_padding == 1)
>>>> +s->metadata_header_padding++;
>>>> +put_ebml_void(pb, s->metadata_header_padding);
>>>> +}
>>> 
>>> Unfortunately I was forced to make the default -1 so you want to check
>>> that the value is greater than 0 rather than just true.
>>> 
>>> Furthermore I think you will still want to add to Changelog making a
>>> note that the matroska muxer will now listen to metadata_header_padding.
>> 
>> No, this kind of change doesn't justify a Changelog entry as mentioned
>> before.
>> 
>>> That may also want a micro version bump so that library users can check.
>> 
>> Micro version bump is ok.
> 
> 
> Thank you.
> 
> Here is an updated patch with a bump and a change to make sure the value is > 
> 0.
> 
> 
> 
> From 08e140fa0b23274a4db18ce0b201e45fe7c1ac97 Mon Sep 17 00:00:00 2001
> From: Sigga Regina mailto:siggareg...@gmail.com>>
> Date: Wed, 12 Sep 2018 11:47:47 -0400
> Subject: [PATCH] avformat/matroskaenc: add reserve free space option
> 
> ---
> libavformat/matroskaenc.c | 5 +
> libavformat/version.h | 2 +-
> 2 files changed, 6 insertions(+), 1 deletion(-)
> 
> diff --git a/libavformat/matroskaenc.c b/libavformat/matroskaenc.c
> index 09a62e1..3f5febf 100644
> --- a/libavformat/matroskaenc.c
> +++ b/libavformat/matroskaenc.c
> @@ -2005,6 +2005,11 @@ static int mkv_write_header(AVFormatContext *s)
> ret = AVERROR(ENOMEM);
> goto fail;
> }
> +if (s->metadata_header_padding > 0) {
> +  if (s->metadata_header_padding == 1)
> +s->metadata_header_padding++;
> +  put_ebml_void(pb, s->metadata_header_padding);
> +}
> if ((pb->seekable & AVIO_SEEKABLE_NORMAL) && mkv->reserve_cues_space) {
> mkv->cues_pos = avio_tell(pb);
> if (mkv->reserve_cues_space == 1)
> diff --git a/libavformat/version.h b/libavformat/version.h
> index 4d21583..d7a1a35 100644
> --- a/libavformat/version.h
> +++ b/libavformat/version.h
> @@ -33,7 +33,7 @@
> // Also please add any ticket numbers that you believe might be affected here
> #define LIBAVFORMAT_VERSION_MAJOR  58
> #define LIBAVFORMAT_VERSION_MINOR  18
> -#define LIBAVFORMAT_VERSION_MICRO 100
> +#define LIBAVFORMAT_VERSION_MICRO 101
> 
> #define LIBAVFORMAT_VERSION_INT AV_VERSION_INT(LIBAVFORMAT_VERSION_MAJOR, \
>LIBAVFORMAT_VERSION_MINOR, \
> -- 
> 2.10.1 (Apple Git-78)
> <0001-avformat-matroskaenc-add-reserve-free-space-option 
> (1).patch>___

ping on this, as reserving such space in Matroska headers for later edits to 
the Tracks element would be helpful.
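
For anyone wanting to try it, the generic option this patch hooks into would be
exercised roughly like the following, with the padding size picked arbitrarily:

ffmpeg -i input.mkv -c copy -metadata_header_padding 1024 padded.mkv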
Dave Rice


___
ffmpeg-devel mailing list
ffmpeg-devel@ffmpeg.org
http://ffmpeg.org/mailman/listinfo/ffmpeg-devel


Re: [FFmpeg-devel] [PATCH 3/3] configure: speedup x2-x8

2018-08-26 Thread Dave Yeo

On 08/25/18 11:11 AM, avih wrote:

After the previous speedups, configure spent 20-60% of its runtime
at check_deps(). It's particularly slow with bash. After some local
optimizations - mainly avoid pushvar/popvar and abort early in one
notable case (empty deps), it's now x4-x25 faster.


Works great on OS/2, from 700 seconds to 144 seconds
Dave
___
ffmpeg-devel mailing list
ffmpeg-devel@ffmpeg.org
http://ffmpeg.org/mailman/listinfo/ffmpeg-devel


Re: [FFmpeg-devel] Register for VDD 2018 conference

2018-08-22 Thread Dave Rice

>> On Aug 22, 2018, at 14:12, Michael Niedermayer  
>> wrote:
>> 
>> On Wed, Aug 22, 2018 at 03:14:53PM +0200, Jean-Baptiste Kempf wrote:
>> Hello fellow devs,
>> 
>> VideoLAN is happy to invite you to the usual conference of the end of the 
>> summer:
>> VDD2018 is happening in Paris, for the 10 years of the original conf.
>> 
>> As usual, this is a very technical conference focused on open source 
>> multimedia development.
>> We will talk about AV1, FFv1, FFv2, x264/x265, VLC, FFmpeg and other related 
>> technologies.
> 
> what is FFv2 ?

FFV2 is referenced in this patch http://akuvian.org/src/x264/ffv2.94.diff.

Also FFV2 is referenced as a derivative experimental work from Daala. 
https://twitter.com/atomnuker/status/924846477083578368?s=21

Also FFV2 is referenced by Reto Kromer as a forked alternative to the work on 
FFV1 version 4 by the IETF cellar working group. 
https://twitter.com/retokromer/status/884030201050648576?s=21

Dave Rice
___
ffmpeg-devel mailing list
ffmpeg-devel@ffmpeg.org
http://ffmpeg.org/mailman/listinfo/ffmpeg-devel


Re: [FFmpeg-devel] [PATCH 2/4] libavcodec: v4l2m2m: output AVDRMFrameDescriptor

2018-08-13 Thread Dave Stevenson
On 12 August 2018 at 22:25, Jorge Ramirez-Ortiz  wrote:
> On 08/06/2018 10:12 PM, Mark Thompson wrote:
>>
>> On 06/08/18 16:44, Jorge Ramirez-Ortiz wrote:
>>>
>>> On 08/04/2018 11:43 PM, Mark Thompson wrote:
>
> diff --git a/libavcodec/v4l2_context.c b/libavcodec/v4l2_context.c
> index efcb0426e4..9457fadb1e 100644
> --- a/libavcodec/v4l2_context.c
> +++ b/libavcodec/v4l2_context.c
> @@ -393,22 +393,54 @@ static int v4l2_release_buffers(V4L2Context* ctx)
>struct v4l2_requestbuffers req = {
>.memory = V4L2_MEMORY_MMAP,
>.type = ctx->type,
> -.count = 0, /* 0 -> unmaps buffers from the driver */
> +.count = 0, /* 0 -> unmap all buffers from the driver */
>};
> -int i, j;
> +int ret, i, j;
>  for (i = 0; i < ctx->num_buffers; i++) {
>V4L2Buffer *buffer = &ctx->buffers[i];
>  for (j = 0; j < buffer->num_planes; j++) {
>struct V4L2Plane_info *p = &buffer->plane_info[j];
> +
> +if (V4L2_TYPE_IS_OUTPUT(ctx->type)) {
> +/* output buffers are not EXPORTED */
> +goto unmap;
> +}
> +
> +if (ctx_to_m2mctx(ctx)->output_drm) {
> +/* use the DRM frame to close */
> +if (buffer->drm_frame.objects[j].fd >= 0) {
> +if (close(buffer->drm_frame.objects[j].fd) < 0) {
> +av_log(logger(ctx), AV_LOG_ERROR, "%s close
> drm fd "
> +"[buffer=%2d, plane=%d, fd=%2d] - %s \n",
> +ctx->name, i, j,
> buffer->drm_frame.objects[j].fd,
> +av_err2str(AVERROR(errno)));
> +}
> +}
> +}
> +unmap:
>if (p->mm_addr && p->length)
>if (munmap(p->mm_addr, p->length) < 0)
> -av_log(logger(ctx), AV_LOG_ERROR, "%s unmap plane
> (%s))\n", ctx->name, av_err2str(AVERROR(errno)));
> +av_log(logger(ctx), AV_LOG_ERROR, "%s unmap plane
> (%s))\n",
> +ctx->name, av_err2str(AVERROR(errno)));
>}

 (This whole function feels like it might fit better in v4l2_buffers.c?)

 To check my understanding here of what you've got currently (please
 correct me if I make a mistake here):
 * When making a new set of buffers (on start or format change),
 VIDIOC_EXPBUF is called once for each V4L2 buffer to make a DRM PRIME fd 
 for
 it.
 * Whenever you want to return a buffer, return the fd instead if using
 DRM PRIME output.
 * When returning a set of buffers (on close or format change), wait for
 all references to disappear and then close all of the fds before releasing
 the V4L2 buffers.
>>>
>>> We kept it as a context operation since release_buffers is not a per
>>> buffer operation (it just an operation that applies on all buffers, like all
>>> context operations IIRC ).
>>> The problem is that even if we close the file descriptors when all
>>> references have gone, the client might still have GEM objects associated so
>>> we would fail at releasing the buffers.
>>
>> Ok.
>>
>>> This was noticed during testing (fixed in the test code with this commit)
>>> [1]
>>> [1]
>>> https://github.com/BayLibre/ffmpeg-drm/commit/714288ab9d86397dd8230068fd9a7d3d4d76b802
>>>
>>> And a reminder was added to ffmpeg below
>>>
>}
> -return ioctl(ctx_to_m2mctx(ctx)->fd, VIDIOC_REQBUFS, &req);
> +ret = ioctl(ctx_to_m2mctx(ctx)->fd, VIDIOC_REQBUFS, &req);
> +if (ret < 0) {
> +av_log(logger(ctx), AV_LOG_ERROR, "release all %s buffers
> (%s)\n",
> +ctx->name, av_err2str(AVERROR(errno)));
> +
> +if (ctx_to_m2mctx(ctx)->output_drm)
> +av_log(logger(ctx), AV_LOG_ERROR,
> +"Make sure the DRM client releases all FB/GEM
> objects before closing the codec (ie):\n"
> +"for all buffers: \n"
> +"  1. drmModeRmFB(..)\n"
> +"  2. drmIoctl(.., DRM_IOCTL_GEM_CLOSE,... )\n");

 Is it possible to hit this case?  Waiting for all references to
 disappear seems like it should cover it.  (And if the user has freed an
 object they are still using then that's certainly undefined behaviour, so 
 if
 that's the only case here it would probably be better to abort() so that
 it's caught immediately.)

>>> yes (as per the above explanation)
>>
>> Does errno indicate that we've hit this case specifically rather than e.g.
>> some sort of hardware problem (decoder device physically disconnected or
>> whatever)?
>
>
> it just returns the standard "Device or resource busy" 
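
For reference, the client-side teardown that the warning in the patch refers to
would look roughly like this per buffer; a sketch only, with placeholder names
for whatever the client stored when it imported the PRIME fd:

#include <stdint.h>
#include <xf86drm.h>
#include <xf86drmMode.h>

/* Sketch: release one imported buffer's framebuffer and GEM handle
 * before the decoder is closed. */
static void release_imported_buffer(int drm_fd, uint32_t fb_id, uint32_t gem_handle)
{
    struct drm_gem_close gem_close = { .handle = gem_handle };
    drmModeRmFB(drm_fd, fb_id);                        /* 1. drop the framebuffer */
    drmIoctl(drm_fd, DRM_IOCTL_GEM_CLOSE, &gem_close); /* 2. close the GEM object */
}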

Re: [FFmpeg-devel] [PATCH] Limited timecode support for lavd/decklink

2018-06-12 Thread Dave Rice

> On Jun 7, 2018, at 5:01 PM, Marton Balint  wrote:
> 
> On Thu, 7 Jun 2018, Dave Rice wrote:
> 
> [...]
> 
>> 
>> Before I only tested with vitc but now have a serial cable connected as well 
>> and found a source tape that has distinct values for LTC and VITC timecodes. 
>> The LTC values are from 1:00:00 to 2:00:00 and the VITC values are from 
>> 07:00:00 - 08:00:00.
>> 
>> With the deckcontrol utility at https://github.com/bavc/deckcontrol 
>> <https://github.com/bavc/deckcontrol>, I can use the command gettimecode to 
>> grab the LTC value:
>> 
>> deckcontrol gettimecode
>> Issued command 'gettimecode'
>> TC=07:37:56:21
>> Command sucessfully issued
>> Error sending command (No error)
>> 
>> With these patches, I can only grab the vitc values:
>> 
>> for i in rp188vitc rp188vitc2 rp188ltc rp188any vitc vitc2 serial ; do echo 
>> -n "${i}: " ; ./ffprobe -v quiet -timecode_format "$i" -f decklink 
>> -draw_bars 0 -audio_input embedded -video_input sdi -format_code ntsc 
>> -channels 8 -raw_format yuv422p10 -i "UltraStudio Express" -select_streams v 
>> -show_entries stream_tags=timecode -of default=nw=1:nk=1 ; echo ; done
rp188vitc:
rp188vitc2:
rp188ltc:
rp188any:
vitc: 01:41:44;06
>> vitc2: 01:41:44;21
>> serial:
>> 
>> Also it may be interesting in cases like this to support accepting multiple 
>> timecode inputs at once, such as "-timecode_format vitc+rp188ltc” though it 
>> would need to be contextualized more in metadata.
>> 
>> With a serial cable connected, I can access LTC via the deckcontrol utility 
>> but not with this patch.
> 
> Well, the way I understand it, deckcontrol is using a totally different 
> timecode source: the RS422 deck control interface. In contrast, the
> timecode capture in the patch is using the SDI (video) source.
> 
> If the deck does not put the LTC timecode into SDI line 10, then the driver 
> won't be able to capture it if you specify 'rp188ltc'. I am not sure however 
> why 'serial' does not work, but from a quick look at the SDK maybe that only 
> works if you use the deck control capture functions…

I see at 
https://forum.blackmagicdesign.com/viewtopic.php?f=12=71730=400097=bmdTimecodeSerial#p400097
that capturing bmdTimecodeSerial is an issue there too, so this is likely an 
issue with the sdk rather than with your patch. Otherwise I’ve been testing 
this more and find it really useful. Hope to see this merged.
Dave


Re: [FFmpeg-devel] [PATCH] Limited timecode support for lavd/decklink

2018-06-07 Thread Dave Rice

> On Jun 7, 2018, at 12:12 PM, Dave Rice  wrote:
> 
> 
>> On Jun 6, 2018, at 5:32 PM, Marton Balint  wrote:
>> 
>> On Wed, 6 Jun 2018, Dave Rice wrote:
>> 
>>>> On Jun 6, 2018, at 4:50 PM, Marton Balint  wrote:
>>>> On Mon, 4 Jun 2018, Dave Rice wrote:
>>>>>>> In my testing the timecode value set here has correctly been associated 
>>>>>>> with the first video frame (maintaining the timecode-to-first-frame 
>>>>>>> relationship as found on the source video stream). Although only having 
>>>>>>> first timecode value known is limiting, I think this is still quite 
>>>>>>> useful. This function also mirrors how BlackMagic Media Express and 
>>>>>>> Adobe Premiere handle capturing video+timecode where only the first 
>>>>>>> value is documented and all subsequent values are presumed.
>>>>>> Could you give me an example? (e.g. ffmpeg command line?)
>>>>> ./ffmpeg -timecode_format vitc2 -f decklink -draw_bars 0 -audio_input 
>>>>> embedded -video_input sdi -format_code ntsc -channels 8 -raw_format 
>>>>> yuv422p10 -i "UltraStudio 3D" -c:v v210 -c:a aac output.mov
>>>>> This worked for me to embed a QuickTime timecode track based upon the 
>>>>> timecode value of the first frame. If the input contained non-sequential 
>>>>> timecode values then the timecode track would not be accurate from that 
>>>>> point onward, but creating a timecode track based only upon the initial 
>>>>> value is what BlackMagic Media Express and Adobe Premiere are doing 
>>>>> anyhow.
>>>> Hmm, either the decklink drivers became better in hiding the first few 
>>>> NoSignal frames, or maybe that issue only affected to old models? (e.g. 
>>>> DeckLink SDI or DeckLink Duo 1). I did some test with a Mini Recorder, and 
>>>> even the first frame was useful, in this case the timecode was indeed 
>>>> correct.
>>>>>>>> I'd rather see a new AVPacketSideData type which will contain the 
>>>>>>>> timecode as a string, so you can set it frame-by-frame.
>>>>>>> Using side data for timecode would be preferable, but the possibility 
>>>>>>> that a patch for that may someday arrive shouldn’t completely block 
>>>>>>> this more limited patch.
>>>>>> I would like to make sure the code works reliably even for the limited 
>>>>>> use case and no race conditions are affecting the way it works.
>>>>> Feel welcome to suggest any testing. I’ll have access for testing again 
>>>>> tomorrow.
>>>> I reworked the patch a bit (see attached), and added per-frame timecode 
>>>> support into the PKT_STRINGS_METADATA packet side data, this way the 
>>>> drawtext filter can also be used to blend the timecode into the frames, 
>>>> which seems like a useful feature.
>>> 
>>> 
>>> That sounds helpful.
>>> 
>>> libavdevice/decklink_dec.cpp:734:21: error: unknown type name 'DECKLINK_STR'
>>>  DECKLINK_STR decklink_tc;
>> 
>> The patch I sent only replaces the second patch, the first one:
>> 
>> http://ffmpeg.org/pipermail/ffmpeg-devel/attachments/20180526/185eb219/attachment.obj
> 
> Thanks for the update. I continued testing and found this very useful, 
> particularly with the side data.
> 
> Before I only tested with vitc but now have a serial cable connected as well 
> and found a source tape that has distinct values for LTC and VITC timecodes. 
> The LTC values are from 1:00:00 to 2:00:00 and the VITC values are from 
> 07:00:00 - 08:00:00.

Realized a mix up here, in the samples below VITC values are in 1:00:00 to 
2:00:00 and the LTC values are from 07:00:00 - 08:00:00

> With the deckcontrol utility at https://github.com/bavc/deckcontrol, I can use the command gettimecode to 
> grab the LTC value:
> 
> deckcontrol gettimecode
> Issued command 'gettimecode'
> TC=07:37:56:21
> Command sucessfully issued
> Error sending command (No error)
> 
> With these patches, I can only grab the vitc values:
> 
> for i in rp188vitc rp188vitc2 rp188ltc rp188any vitc vitc2 serial ; do echo 
> -n "${i}: " ; ./ffprobe -v quiet -timecode_format "$i" -f decklink -draw_bars 
> 0 -audio_input embedded -video_in

Re: [FFmpeg-devel] [PATCH] Limited timecode support for lavd/decklink

2018-06-07 Thread Dave Rice

> On Jun 6, 2018, at 5:32 PM, Marton Balint  wrote:
> 
> On Wed, 6 Jun 2018, Dave Rice wrote:
> 
>>> On Jun 6, 2018, at 4:50 PM, Marton Balint  wrote:
>>> On Mon, 4 Jun 2018, Dave Rice wrote:
>>>>>> In my testing the timecode value set here has correctly been associated 
>>>>>> with the first video frame (maintaining the timecode-to-first-frame 
>>>>>> relationship as found on the source video stream). Although only having 
>>>>>> first timecode value known is limiting, I think this is still quite 
>>>>>> useful. This function also mirrors how BlackMagic Media Express and 
>>>>>> Adobe Premiere handle capturing video+timecode where only the first 
>>>>>> value is documented and all subsequent values are presumed.
>>>>> Could you give me an example? (e.g. ffmpeg command line?)
>>>> ./ffmpeg -timecode_format vitc2 -f decklink -draw_bars 0 -audio_input 
>>>> embedded -video_input sdi -format_code ntsc -channels 8 -raw_format 
>>>> yuv422p10 -i "UltraStudio 3D" -c:v v210 -c:a aac output.mov
>>>> This worked for me to embed a QuickTime timecode track based upon the 
>>>> timecode value of the first frame. If the input contained non-sequential 
>>>> timecode values then the timecode track would not be accurate from that 
>>>> point onward, but creating a timecode track based only upon the initial 
>>>> value is what BlackMagic Media Express and Adobe Premiere are doing anyhow.
>>> Hmm, either the decklink drivers became better in hiding the first few 
>>> NoSignal frames, or maybe that issue only affected to old models? (e.g. 
>>> DeckLink SDI or DeckLink Duo 1). I did some test with a Mini Recorder, and 
>>> even the first frame was useful, in this case the timecode was indeed 
>>> correct.
>>>>>>> I'd rather see a new AVPacketSideData type which will contain the 
>>>>>>> timecode as a string, so you can set it frame-by-frame.
>>>>>> Using side data for timecode would be preferable, but the possibility 
>>>>>> that a patch for that may someday arrive shouldn’t completely block this 
>>>>>> more limited patch.
>>>>> I would like to make sure the code works reliably even for the limited 
>>>>> use case and no race conditions are affecting the way it works.
>>>> Feel welcome to suggest any testing. I’ll have access for testing again 
>>>> tomorrow.
>>> I reworked the patch a bit (see attached), and added per-frame timecode 
>>> support into the PKT_STRINGS_METADATA packet side data, this way the 
>>> drawtext filter can also be used to blend the timecode into the frames, 
>>> which seems like a useful feature.
>> 
>> 
>> That sounds helpful.
>> 
>> libavdevice/decklink_dec.cpp:734:21: error: unknown type name 'DECKLINK_STR'
>>   DECKLINK_STR decklink_tc;
> 
> The patch I sent only replaces the second patch, the first one:
> 
> http://ffmpeg.org/pipermail/ffmpeg-devel/attachments/20180526/185eb219/attachment.obj

Thanks for the update. I continued testing and found this very useful, 
particularly with the side data.

Before I only tested with vitc but now have a serial cable connected as well 
and found a source tape that has distinct values for LTC and VITC timecodes. 
The LTC values are from 1:00:00 to 2:00:00 and the VITC values are from 
07:00:00 - 08:00:00.

With the deckcontrol utility at https://github.com/bavc/deckcontrol, I can use the command gettimecode to 
grab the LTC value:

deckcontrol gettimecode
Issued command 'gettimecode'
TC=07:37:56:21
Command sucessfully issued
Error sending command (No error)

With these patches, I can only grab the vitc values:

for i in rp188vitc rp188vitc2 rp188ltc rp188any vitc vitc2 serial ; do echo -n 
"${i}: " ; ./ffprobe -v quiet -timecode_format "$i" -f decklink -draw_bars 0 
-audio_input embedded -video_input sdi -format_code ntsc -channels 8 
-raw_format yuv422p10 -i "UltraStudio Express" -select_streams v -show_entries 
stream_tags=timecode -of default=nw=1:nk=1 ; echo ; done
rp188vitc: 
rp188vitc2: 
rp188ltc: 
rp188any: 
vitc: 01:41:44;06
vitc2: 01:41:44;21
serial: 

Also it may be interesting in cases like this to support accepting multiple 
timecode inputs at once, such as "-timecode_format vitc+rp188ltc" though it 
would need to be contextualized more in metadata.

With a serial cable connected, I can access LTC via the deckcontrol utility but not 
with this patch.

Re: [FFmpeg-devel] [PATCH] Limited timecode support for lavd/decklink

2018-06-06 Thread Dave Rice

> On Jun 6, 2018, at 4:50 PM, Marton Balint  wrote:
> 
> On Mon, 4 Jun 2018, Dave Rice wrote:
> 
>> 
>>>> In my testing the timecode value set here has correctly been associated 
>>>> with the first video frame (maintaining the timecode-to-first-frame 
>>>> relationship as found on the source video stream). Although only having 
>>>> first timecode value known is limiting, I think this is still quite 
>>>> useful. This function also mirrors how BlackMagic Media Express and Adobe 
>>>> Premiere handle capturing video+timecode where only the first value is 
>>>> documented and all subsequent values are presumed.
>>> Could you give me an example? (e.g. ffmpeg command line?)
>> 
>> ./ffmpeg -timecode_format vitc2 -f decklink -draw_bars 0 -audio_input 
>> embedded -video_input sdi -format_code ntsc -channels 8 -raw_format 
>> yuv422p10 -i "UltraStudio 3D" -c:v v210 -c:a aac output.mov
>> 
>> This worked for me to embed a QuickTime timecode track based upon the 
>> timecode value of the first frame. If the input contained non-sequential 
>> timecode values then the timecode track would not be accurate from that 
>> point onward, but creating a timecode track based only upon the initial 
>> value is what BlackMagic Media Express and Adobe Premiere are doing anyhow.
>> 
> 
> Hmm, either the decklink drivers became better in hiding the first few 
> NoSignal frames, or maybe that issue only affected to old models? (e.g. 
> DeckLink SDI or DeckLink Duo 1). I did some test with a Mini Recorder, and 
> even the first frame was useful, in this case the timecode was indeed correct.
> 
>>>>> I'd rather see a new AVPacketSideData type which will contain the 
>>>>> timecode as a string, so you can set it frame-by-frame.
>>>> Using side data for timecode would be preferable, but the possibility that 
>>>> a patch for that may someday arrive shouldn’t completely block this more 
>>>> limited patch.
>>> I would like to make sure the code works reliably even for the limited use 
>>> case and no race conditions are affectig the way it works.
>> 
>> Feel welcome to suggest any testing. I’ll have access for testing again 
>> tomorrow.
> 
> I reworked the patch a bit (see attached), and added per-frame timecode 
> support into the PKT_STRINGS_METADATA packet side data, this way the drawtext 
> filter can also be used to blend the timecode into the frames, which seems 
> like a useful feature.


That sounds helpful.

libavdevice/decklink_dec.cpp:734:21: error: unknown type name 'DECKLINK_STR'
DECKLINK_STR decklink_tc;

Dave


Re: [FFmpeg-devel] [PATCH 1/2] avdevice/decklink_dec: use a custom memory allocator

2018-06-05 Thread Dave Rice

> On Jun 5, 2018, at 1:17 PM, Marton Balint  wrote:
> 
> On Tue, 5 Jun 2018, Dave Rice wrote:
> 
>>> On Jun 4, 2018, at 4:21 PM, Marton Balint  wrote:
>>> 
>>> The default memory allocator is limited in the max number of frames 
>>> available,
>>> and therefore caused frame drops if the frames were not freed fast enough.
>> 
>> I’ve been testing this patchset today. Yesterday I was occasionally getting 
>> “Decklink input buffer overrun!” errors with this command:
>> 
>> /usr/local/opt/ffmpegdecklink/bin/ffmpeg-dl -v info -nostdin -nostats -t 
>> 1980 -f decklink -draw_bars 0 -audio_input embedded -video_input sdi 
>> -format_code ntsc -channels 8 -raw_format yuv422p10 -i "UltraStudio Express" 
>> -metadata:s:v:0 encoder="FFV1 version 3" -color_primaries smpte170m 
>> -color_trc bt709 -colorspace smpte170m -color_range mpeg -metadata 
>> creation_time=now -f_strict unofficial -c:v ffv1 -level 3 -g 1 -slices 16 
>> -slicecrc 1 -c:a pcm_s24le -filter_complex 
>> "[0:v:0]setfield=bff,setsar=40/27,setdar=4/3; [0:a:0]pan=stereo| c0=c0 | 
>> c1=c1[stereo1];[0:a:0]pan=stereo| c0=c2 | c1=c3[stereo2]" -map "[stereo1]" 
>> -map "[stereo2]" -f matroska output.mkv -an -f framemd5 output.framemd5
>> 
>> With the patchset applied, I haven’t had that buffer overrun error re-occur.
> 
> That is very strange, it should work the opposite way. Without the patch, the 
> decklink driver is dropping frames (silently), so you should never get a 
> Decklink input buffer overrun error message, but silent frame drops instead 
> if you don't release (transcode) the frames fast enough.
> 
> With the patch, you won't get silent frame drops, but you might fill the 
> internal queue and therefore get Decklink input buffer overruns. On the other 
> hand, if you get Decklink input buffer overruns, that typically means that 
> your computer is too slow to handle transcoding in real time…

Trying to detect unreported dropped frames is why I added the framemd5 output 
as a second output. After the command runs, I would use this command

grep -v "^#" output.framemd5 | awk -F',' '$2!=p+1{printf p+1"-"$2-1" "}{p=$2}'

to report the ranges where the pts was not incrementing by 1. I had presumed that 
a gap in the pts within the framemd5 corresponded to the buffer overrun error in 
the terminal output. I’ve tested a few hours of recording with your patch applied 
and haven’t gotten any pts discontinuity in the framemd5s yet.

Dave


Re: [FFmpeg-devel] [PATCH 1/2] avdevice/decklink_dec: use a custom memory allocator

2018-06-05 Thread Dave Rice

> On Jun 4, 2018, at 4:21 PM, Marton Balint  wrote:
> 
> The default memory allocator is limited in the max number of frames available,
> and therefore caused frame drops if the frames were not freed fast enough.

I’ve been testing this patchset today. Yesterday I was occasionally getting 
“Decklink input buffer overrun!” errors with this command:

/usr/local/opt/ffmpegdecklink/bin/ffmpeg-dl -v info -nostdin -nostats -t 1980 
-f decklink -draw_bars 0 -audio_input embedded -video_input sdi -format_code 
ntsc -channels 8 -raw_format yuv422p10 -i "UltraStudio Express" -metadata:s:v:0 
encoder="FFV1 version 3" -color_primaries smpte170m -color_trc bt709 
-colorspace smpte170m -color_range mpeg -metadata creation_time=now -f_strict 
unofficial -c:v ffv1 -level 3 -g 1 -slices 16 -slicecrc 1 -c:a pcm_s24le 
-filter_complex "[0:v:0]setfield=bff,setsar=40/27,setdar=4/3; 
[0:a:0]pan=stereo| c0=c0 | c1=c1[stereo1];[0:a:0]pan=stereo| c0=c2 | 
c1=c3[stereo2]" -map "[stereo1]" -map "[stereo2]" -f matroska output.mkv -an -f 
framemd5 output.framemd5

With the patchset applied, I haven’t had that buffer overrun error re-occur. 

> Signed-off-by: Marton Balint 
> ---
> libavdevice/decklink_dec.cpp | 50 
> 1 file changed, 50 insertions(+)
> 
> diff --git a/libavdevice/decklink_dec.cpp b/libavdevice/decklink_dec.cpp
> index 510637676c..897fca1003 100644
> --- a/libavdevice/decklink_dec.cpp
> +++ b/libavdevice/decklink_dec.cpp
> @@ -21,6 +21,9 @@
>  * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 
> USA
>  */
> 
> +#include <atomic>
> +using std::atomic;
> +
> /* Include internal.h first to avoid conflict between winsock.h (used by
>  * DeckLink headers) and winsock2.h (used by libavformat) in MSVC++ builds */
> extern "C" {
> @@ -98,6 +101,44 @@ static VANCLineNumber vanc_line_numbers[] = {
> {bmdModeUnknown, 0, -1, -1, -1}
> };
> 
> +class decklink_allocator : public IDeckLinkMemoryAllocator
> +{
> +public:
> +decklink_allocator(): _refs(1) { }
> +virtual ~decklink_allocator() { }
> +
> +// IDeckLinkMemoryAllocator methods
> +virtual HRESULT STDMETHODCALLTYPE AllocateBuffer(unsigned int 
> bufferSize, void* *allocatedBuffer)
> +{
> +void *buf = av_malloc(bufferSize + AV_INPUT_BUFFER_PADDING_SIZE);
> +if (!buf)
> +return E_OUTOFMEMORY;
> +*allocatedBuffer = buf;
> +return S_OK;
> +}
> +virtual HRESULT STDMETHODCALLTYPE ReleaseBuffer(void* buffer)
> +{
> +av_free(buffer);
> +return S_OK;
> +}
> +virtual HRESULT STDMETHODCALLTYPE Commit() { return S_OK; }
> +virtual HRESULT STDMETHODCALLTYPE Decommit() { return S_OK; }
> +
> +// IUnknown methods
> +virtual HRESULT STDMETHODCALLTYPE QueryInterface(REFIID iid, LPVOID 
> *ppv) { return E_NOINTERFACE; }
> +virtual ULONG   STDMETHODCALLTYPE AddRef(void) { return ++_refs; }
> +virtual ULONG   STDMETHODCALLTYPE Release(void)
> +{
> +int ret = --_refs;
> +if (!ret)
> +delete this;
> +return ret;
> +}
> +
> +private:
> +std::atomic<int>  _refs;
> +};
> +
> extern "C" {
> static void decklink_object_free(void *opaque, uint8_t *data)
> {
> @@ -924,6 +965,7 @@ av_cold int ff_decklink_read_header(AVFormatContext 
> *avctx)
> {
> struct decklink_cctx *cctx = (struct decklink_cctx *)avctx->priv_data;
> struct decklink_ctx *ctx;
> +class decklink_allocator *allocator;
> AVStream *st;
> HRESULT result;
> char fname[1024];
> @@ -1017,6 +1059,14 @@ av_cold int ff_decklink_read_header(AVFormatContext 
> *avctx)
> ctx->input_callback = new decklink_input_callback(avctx);
> ctx->dli->SetCallback(ctx->input_callback);
> 
> +allocator = new decklink_allocator();
> +ret = (ctx->dli->SetVideoInputFrameMemoryAllocator(allocator) == S_OK ? 
> 0 : AVERROR_EXTERNAL);
> +allocator->Release();
> +if (ret < 0) {
> +av_log(avctx, AV_LOG_ERROR, "Cannot set custom memory allocator\n");
> +goto error;
> +}
> +
> if (mode_num == 0 && !cctx->format_code) {
> if (decklink_autodetect(cctx) < 0) {
> av_log(avctx, AV_LOG_ERROR, "Cannot Autodetect input stream or No 
> signal\n");
> -- 
> 2.16.3
> 


Re: [FFmpeg-devel] [PATCH] Limited timecode support for lavd/decklink

2018-06-04 Thread Dave Rice


> On Jun 4, 2018, at 12:24 PM, Marton Balint  wrote:
> 
> On Fri, 1 Jun 2018, Dave Rice wrote:
> 
>>> On May 31, 2018, at 5:29 PM, Marton Balint  wrote:
>>> On Thu, 31 May 2018, Jonathan Morley wrote:
>>>> Thank you for the clarification, Dave. It might be that the Blackmagic 
>>>> approach to collecting timecode doesn’t work for that one source because 
>>>> it is in the horizontal space (HANC) instead of the vertical (VANC). I am 
>>>> not sure. Sadly I don’t think my solution is all encompassing, but it does 
>>>> seem to help in enough cases I would like to get it integrated with master.
>>>> I am still a bit thrown about the initial “Unable to set timecode” error, 
>>>> but believe it to be initialization related. I will wait to hear back from 
>>>> Marton on my overall approach and see what I can do to clean that up.
>>> av_dict_set returns < 0 on error, so the condition seems wrong.
>>>> As for the other error message my plan is to demote that to a debug.
>>> That is a good idea.
>> 
>> +1
>> 
>>> On the other hand, I believe the usefulness of this in its current form 
>>> is still very limited, because typically the first few frames are NoSignal 
>>> frames, how is the end user supposed to know which frame is the one with the 
>>> first valid timecode?
>> 
>> In my testing the timecode value set here has correctly been associated with 
>> the first video frame (maintaining the timecode-to-first-frame relationship 
>> as found on the source video stream). Although only having first timecode 
>> value known is limiting, I think this is still quite useful. This function 
>> also mirrors how BlackMagic Media Express and Adobe Premiere handle 
>> capturing video+timecode where only the first value is documented and all 
>> subsequent values are presumed.
> 
> Could you give me an example? (e.g. ffmpeg command line?)

./ffmpeg -timecode_format vitc2 -f decklink -draw_bars 0 -audio_input embedded 
-video_input sdi -format_code ntsc -channels 8 -raw_format yuv422p10 -i 
"UltraStudio 3D" -c:v v210 -c:a aac output.mov

This worked for me to embed a QuickTime timecode track based upon the timecode 
value of the first frame. If the input contained non-sequential timecode values 
then the timecode track would not be accurate from that point onward, but 
creating a timecode track based only upon the initial value is what BlackMagic 
Media Express and Adobe Premiere are doing anyhow.
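For reference, a minimal sketch of what that amounts to on the library side:
attaching the first parsed timecode string to the output video stream's metadata,
which is what the mov muxer uses to write the timecode (tmcd) track. The function
and variable names are illustrative, not taken from the patch:

    #include <libavformat/avformat.h>
    #include <libavutil/dict.h>

    /* st is the output video stream; tc_str is e.g. "03:48:55;26",
     * where ';' before the frame count marks drop-frame and ':' non-drop-frame. */
    static int set_first_frame_timecode(AVStream *st, const char *tc_str)
    {
        int ret = av_dict_set(&st->metadata, "timecode", tc_str, 0);
        /* av_dict_set() returns >= 0 on success and a negative AVERROR on
         * failure, so only ret < 0 should be treated as an error. */
        return ret < 0 ? ret : 0;
    }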

>>> I'd rather see a new AVPacketSideData type which will contain the timecode 
>>> as a string, so you can set it frame-by-frame.
>> 
>> Using side data for timecode would be preferable, but the possibility that a 
>> patch for that may someday arrive shouldn’t completely block this more 
>> limited patch.
> 
> I would like to make sure the code works reliably even for the limited use 
> case and no race conditions are affecting the way it works.

Feel welcome to suggest any testing. I’ll have access for testing again 
tomorrow.
Dave


Re: [FFmpeg-devel] [PATCH] Limited timecode support for lavd/decklink

2018-06-01 Thread Dave Rice

> On May 31, 2018, at 5:29 PM, Marton Balint  wrote:
> 
> On Thu, 31 May 2018, Jonathan Morley wrote:
> 
>> Thank you for the clarification, Dave. It might be that the Blackmagic 
>> approach to collecting timecode doesn’t work for that one source because it 
>> is in the horizontal space (HANC) instead of the vertical (VANC). I am not 
>> sure. Sadly I don’t think my solution is all encompassing, but it does seem 
>> to help in enough cases I would like to get it integrated with master.
>> 
>> I am still a bit thrown about the initial “Unable to set timecode” error, 
>> but believe it to be initialization related. I will wait to hear back from 
>> Marton on my overall approach and see what I can do to clean that up.
> 
> av_dict_set returns < 0 on error, so the condition seems wrong.
> 
>> 
>> As for the other error message my plan is to demote that to a debug.
> 
> That is a good idea.

+1

> On the other hand, I believe the usefulness of this in its current form is 
> still very limited, because typically the first few frames are NoSignal 
> frames, how is the end user supposed to know which frame is the one with the 
> first valid timecode?

In my testing the timecode value set here has correctly been associated with 
the first video frame (maintaining the timecode-to-first-frame relationship as 
found on the source video stream). Although only having first timecode value 
known is limiting, I think this is still quite useful. This function also 
mirrors how BlackMagic Media Express and Adobe Premiere handle capturing 
video+timecode where only the first value is documented and all subsequent 
values are presumed.

> I'd rather see a new AVPacketSideData type which will contain the timecode as 
> a string, so you can set it frame-by-frame.

Using side data for timecode would be preferable, but the possibility that a 
patch for that may someday arrive shouldn’t completely block this more limited 
patch.

[…]

Dave


Re: [FFmpeg-devel] [PATCH] Limited timecode support for lavd/decklink

2018-05-31 Thread Dave Rice
Hi Jonathan,

> On May 31, 2018, at 3:56 PM, Jonathan Morley  wrote:
> 
> Well that is helpful information if not a bit disappointing. Perhaps if I use 
> the SDK calls to get the individual timecode components _and_ check the drop 
> frame flag I can reassemble what the GetString() method on the 
> IDeckLinkTimecode class is supposed to provide.

I tried with another NDF tape and it worked appropriately. So I do have a 
capture with NDF and another capture with DF timecode as intended. I am still 
uncertain what’s wrong with the first NDF that I reported before but I’m 
supposing it’s an issue with that tape rather than your work.

Here’s the output of the capture of the new NDF tape; note that I’m still 
getting the "Unable to set timecode" warning even though the timecode is set.

./ffmpeg -timecode_format vitc2 -f decklink -draw_bars 0 -audio_input embedded 
-video_input sdi -format_code ntsc -channels 8 -raw_format yuv422p10 -i 
"UltraStudio 3D" -c:v v210 -c:a aac maybendf.mov
ffmpeg version N-91200-g1616b1be5a Copyright (c) 2000-2018 the FFmpeg developers
  built with Apple LLVM version 9.0.0 (clang-900.0.38)
  configuration: --enable-nonfree --enable-decklink 
--extra-cflags=-I/usr/local/include --extra-ldflags=-L/usr/local/include
  libavutil  56. 18.102 / 56. 18.102
  libavcodec 58. 19.104 / 58. 19.104
  libavformat58. 17.100 / 58. 17.100
  libavdevice58.  4.100 / 58.  4.100
  libavfilter 7. 24.100 /  7. 24.100
  libswscale  5.  2.100 /  5.  2.100
  libswresample   3.  2.100 /  3.  2.100
[decklink @ 0x7fdd75800a00] Found Decklink mode 720 x 486 with rate 29.97(i)
[decklink @ 0x7fdd75800a00] Unable to set timecode
Guessed Channel Layout for Input Stream #0.0 : 7.1
Input #0, decklink, from 'UltraStudio 3D':
  Duration: N/A, start: 0.00, bitrate: 229869 kb/s
Stream #0:0: Audio: pcm_s16le, 48000 Hz, 7.1, s16, 6144 kb/s
Stream #0:1: Video: v210 (V210 / 0x30313256), yuv422p10le(bottom first), 
720x486, 223725 kb/s, 29.97 fps, 29.97 tbr, 1000k tbn, 1000k tbc
Metadata:
  timecode: 03:48:55:26
Stream mapping:
  Stream #0:1 -> #0:0 (v210 (native) -> v210 (native))
  Stream #0:0 -> #0:1 (pcm_s16le (native) -> aac (native))
Press [q] to stop, [?] for help
Output #0, mov, to 'maybendf.mov':
  Metadata:
encoder : Lavf58.17.100
Stream #0:0: Video: v210 (v210 / 0x30313276), yuv422p10le(bottom coded 
first (swapped)), 720x486, q=2-31, 223725 kb/s, 0.03 fps, 30k tbn, 29.97 tbc
Metadata:
  timecode: 03:48:55:26
  encoder : Lavc58.19.104 v210
Stream #0:1: Audio: aac (LC) (mp4a / 0x6134706D), 48000 Hz, 7.1, fltp, 469 
kb/s
Metadata:
  encoder : Lavc58.19.104 aac
frame=   48 fps= 34 q=-0.0 Lsize=   43745kB time=00:00:01.56 
bitrate=228505.8kbits/s speed=1.12x
video:43740kB audio:2kB subtitle:0kB other streams:0kB global headers:0kB 
muxing overhead: 0.007146%
[aac @ 0x7fdd77801e00] Qavg: 65536.000

> I will add that to the next patch, but it might be a minute before I get to 
> it. Thank you again for taking a look.
> 
> I guess we got really lucky in our use case.
> 
> Thanks,
> Jon
> 
>> On May 31, 2018, at 12:39 PM, Dave Rice  wrote:
>> 
>> Hi Jonathan,
>> 
>>> On May 31, 2018, at 11:41 AM, Jonathan Morley <jmor...@pixsystem.com> wrote:
>>> 
>>> Thank you very much, Dave. I am really curious about the df vs ndf since 
>>> the Blackmagic SDK call I am making doesn’t have any arguments for 
>>> specifying that kind of distinction. It simply returns what it finds in the 
>>> SDI stream.
>> 
>> I now have a tape known to be NDF and a tape known to be DF. The messages I 
>> sent before were from DF tapes. When I tried an NDF tape, I get the "Unable 
>> to find timecode" warning repeatedly and no timecode on the output file.
>> 
>> ./ffmpeg -timecode_format vitc2 -f decklink -draw_bars 0 -audio_input 
>> embedded -video_input sdi -format_code ntsc -channels 8 -raw_format 
>> yuv422p10 -i "UltraStudio 3D" -c:v v210 -c:a aac devlin5.mov
>> ffmpeg version N-91200-g1616b1be5a Copyright (c) 2000-2018 the FFmpeg 
>> developers
>> built with Apple LLVM version 9.0.0 (clang-900.0.38)
>> configuration: --enable-nonfree --enable-decklink 
>> --extra-cflags=-I/usr/local/include --extra-ldflags=-L/usr/local/include
>> libavutil  56. 18.102 / 56. 18.102
>> libavcodec 58. 19.104 / 58. 19.104
>> libavformat58. 17.100 / 58. 17.100
>> libavdevice58.  4.100 / 58.  4.100
>> libavfilter 7. 24.100 /  7. 24.100
>> libswscale  5.  2.100 /  5.  2.100
>> libswresample   3.  2.100 /  3.  2.100
>> [decklink @ 0x7f9f6680] Found Decklink mode 720 x 486 with rate 29.97(i)
>> [decklink @ 0x7f9f668000

Re: [FFmpeg-devel] [PATCH] Limited timecode support for lavd/decklink

2018-05-31 Thread Dave Rice
Hi Jonathan,

> On May 31, 2018, at 11:41 AM, Jonathan Morley  wrote:
> 
> Thank you very much, Dave. I am really curious about the df vs ndf since the 
> Blackmagic SDK call I am making doesn’t have any arguments for specifying 
> that kind of distinction. It simply returns what it finds in the SDI stream.

I now have a tape known to be NDF and a tape known to be DF. The messages I 
sent before were from DF tapes. When I tried an NDF tape, I get the "Unable to 
find timecode" warning repeatedly and no timecode on the output file.

./ffmpeg -timecode_format vitc2 -f decklink -draw_bars 0 -audio_input embedded 
-video_input sdi -format_code ntsc -channels 8 -raw_format yuv422p10 -i 
"UltraStudio 3D" -c:v v210 -c:a aac devlin5.mov
ffmpeg version N-91200-g1616b1be5a Copyright (c) 2000-2018 the FFmpeg developers
  built with Apple LLVM version 9.0.0 (clang-900.0.38)
  configuration: --enable-nonfree --enable-decklink 
--extra-cflags=-I/usr/local/include --extra-ldflags=-L/usr/local/include
  libavutil  56. 18.102 / 56. 18.102
  libavcodec 58. 19.104 / 58. 19.104
  libavformat58. 17.100 / 58. 17.100
  libavdevice58.  4.100 / 58.  4.100
  libavfilter 7. 24.100 /  7. 24.100
  libswscale  5.  2.100 /  5.  2.100
  libswresample   3.  2.100 /  3.  2.100
[decklink @ 0x7f9f6680] Found Decklink mode 720 x 486 with rate 29.97(i)
[decklink @ 0x7f9f6680] Unable to find timecode.
Last message repeated 5 times
Guessed Channel Layout for Input Stream #0.0 : 7.1
Input #0, decklink, from 'UltraStudio 3D':
  Duration: N/A, start: 0.00, bitrate: 229869 kb/s
Stream #0:0: Audio: pcm_s16le, 48000 Hz, 7.1, s16, 6144 kb/s
Stream #0:1: Video: v210 (V210 / 0x30313256), yuv422p10le(bottom first), 
720x486, 223725 kb/s, 29.97 fps, 29.97 tbr, 1000k tbn, 1000k tbc
Stream mapping:
  Stream #0:1 -> #0:0 (v210 (native) -> v210 (native))
  Stream #0:0 -> #0:1 (pcm_s16le (native) -> aac (native))
Press [q] to stop, [?] for help
Output #0, mov, to 'devlin5.mov':
  Metadata:
encoder : Lavf58.17.100
Stream #0:0: Video: v210 (v210 / 0x30313276), yuv422p10le(bottom coded 
first (swapped)), 720x486, q=2-31, 223725 kb/s, 29.97 fps, 30k tbn, 29.97 tbc
Metadata:
  encoder : Lavc58.19.104 v210
Stream #0:1: Audio: aac (LC) (mp4a / 0x6134706D), 48000 Hz, 7.1, fltp, 469 
kb/s
Metadata:
  encoder : Lavc58.19.104 aac
[decklink @ 0x7f9f6680] Unable to find timecode.
Last message repeated 15 times
[decklink @ 0x7f9f6680] Unable to find timecode.0:00.70 
bitrate=200517.8kbits/s speed= 1.4x
Last message repeated 14 times
[decklink @ 0x7f9f6680] Unable to find timecode.0:01.20 
bitrate=211246.0kbits/s speed= 1.2x
Last message repeated 14 times
[decklink @ 0x7f9f6680] Unable to find timecode.0:01.70 
bitrate=214431.3kbits/s speed=1.13x
Last message repeated 14 times
[decklink @ 0x7f9f6680] Unable to find timecode.0:02.20 
bitrate=217121.0kbits/s speed= 1.1x
Last message repeated 14 times
[decklink @ 0x7f9f6680] Unable to find timecode.0:02.70 
bitrate=221142.3kbits/s speed=1.07x
Last message repeated 14 times
[decklink @ 0x7f9f6680] Unable to find timecode.0:03.20 
bitrate=221288.2kbits/s speed=1.06x

So when inputting DF this appears to work, while Media Express gets it wrong by 
writing the timecode with an NDF flag.
However, when inputting NDF no timecode is communicated.

I ran this to check if somehow I was calling the wrong timecode format: 

for i in rp188vitc rp188vitc2 rp188ltc rp188any vitc vitc2 serial ; do echo -n 
"${i}: " ; ./ffprobe -v quiet -timecode_format "$i" -f decklink -draw_bars 0 
-audio_input embedded -video_input sdi -format_code ntsc -channels 8 
-raw_format yuv422p10 -i "UltraStudio 3D" -select_streams v -show_entries 
stream_tags=timecode -of default=nw=1:nk=1 ; echo ; done

rp188vitc: 
rp188vitc2: 
rp188ltc: 
rp188any: 
vitc: 
vitc2: 
serial: 

I may try to find another NDF tape to make sure this isn’t a fluke.

> And when I skimmed the movenc timecode handling it doesn’t seem to make any 
> assumptions or changes either.

I’ve replicated it with Matroska which simply moves the timecode into metadata 
as a string.

> Please keep me posted. Meanwhile I will look into what could be causing all 
> the error chatter.

Thanks!

> Thanks,
> Jon
> 
>> On May 31, 2018, at 7:59 AM, Dave Rice  wrote:
>> 
>> 
>>> On May 31, 2018, at 5:49 AM, Jonathan Morley  wrote:
>>> 
>>> Please take a look at my latest patches.
>>> 
>>> NOTE: I no longer have the hardware to test this work!
>> 
>> I tested these patches with an Ultrastudio 3D.
>> 
>> I find that in some cases it provides the “Unable to set timecode” warning 
>> although it does provide the timecode value. Such as:
>> 
>> ./ffmpeg -timec

Re: [FFmpeg-devel] [PATCH] Limited timecode support for lavd/decklink

2018-05-31 Thread Dave Rice
e=218719.7kbits/s speed=1.11x
Last message repeated 14 times
[decklink @ 0x7fe313803000] Unable to find timecode.0:02.16 
bitrate=219494.2kbits/s speed=1.08x
Last message repeated 14 times
[decklink @ 0x7fe313803000] Unable to find timecode.0:02.66 
bitrate=220763.9kbits/s speed=1.07x
Last message repeated 14 times
[decklink @ 0x7fe313803000] Unable to find timecode.0:03.16 
bitrate=220971.1kbits/s speed=1.06x
Last message repeated 14 times
[decklink @ 0x7fe313803000] Unable to find timecode.0:03.67 
bitrate=221693.2kbits/s speed=1.05x
Last message repeated 14 times
[decklink @ 0x7fe313803000] Unable to find timecode.0:04.17 
bitrate=223247.5kbits/s speed=1.04x  

Until I hit the play button on my source.

I tested the same videotape sources with Media Express and ffmpeg with this 
patch and the initial timecode values are aligned to the same frames 
accurately, but with 4 random videotapes I’ve tried, all the Media Express 
captures are Non-Drop Frame and all the ffmpeg captures are Drop Frame, so one 
has to be wrong. I’ll try to find a source that can provide a known df and ndf 
signal to determine which is correct.

Thanks for the update,
Dave Rice
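One way to sanity-check which flag is right is to parse the captured value with
libavutil's timecode helper, which treats ';' before the frame count as the
drop-frame marker and ':' as non-drop-frame. A small sketch with made-up example
values:

    #include <stdio.h>
    #include <libavutil/timecode.h>

    static void report_drop_flag(const char *tc_str)
    {
        AVTimecode tc;
        /* 30000/1001, i.e. the 29.97 fps NTSC rate used in these captures */
        if (av_timecode_init_from_string(&tc, (AVRational){30000, 1001}, tc_str, NULL) < 0) {
            printf("%s: not a valid timecode\n", tc_str);
            return;
        }
        printf("%s: %s\n", tc_str,
               (tc.flags & AV_TIMECODE_FLAG_DROPFRAME) ? "drop-frame" : "non-drop-frame");
    }

    /* report_drop_flag("01:00:00;00")  ->  drop-frame     */
    /* report_drop_flag("01:00:00:00")  ->  non-drop-frame */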



Re: [FFmpeg-devel] [PATCH] Limited timecode support for lavd/decklink

2018-05-25 Thread Dave Rice


> On May 25, 2018, at 17:15, Jonathan Morley <jmor...@pixsystem.com> wrote:
> 
> Thank you for trying it out, Dave.
> 
> 1) Can you please tell me more about how you tested?

J30 digibeta deck playing a Betacam SP with SDI to an Ultrastudio 3D with 
ffmpeg with your patch and configured with decklink. 

> 2) Did you run a capture command with ffmpeg after patching?

Yes. And I should have shared that command in my email. From memory it was 
something like: ffmpeg -f decklink $input-format-options-for-an-sd-ntsc-input 
-i “Ultrastudio 3D” -c:v v210 -c:a aac output.mov

> 3) What was the target output you we using?

QuickTime since the -metadata timecode=# tag works well there.

I can test more on Tuesday. Thanks for this work!

> Thanks,
> Jon


Re: [FFmpeg-devel] [PATCH] Limited timecode support for lavd/decklink

2018-05-25 Thread Dave Rice
ion with AJA hardware. (I am currently adding 
> support for TC to the v4l2 ffmpeg reader).

I tested moving this function up a few lines into the else branch as Marton 
suggested and it worked well for me. I’m running this on a Mac. The timecode 
values when recording with this match what I get via Blackmagic Media Express, 
but the drop frame flag does not match. I’m testing with a vitc timecode source 
which is non-drop frame but the value shows in ffmpeg as drop frame, 
07:09:56;19.

>>> +
>>>  pkt.pts = get_pkt_pts(videoFrame, audioFrame, wallclock,
>>> abs_wallclock, ctx->video_pts_source, ctx->video_st->time_base,
>>> _video_pts, cctx->copyts);
>>>  pkt.dts = pkt.pts;
>>> 
>>> @@ -939,6 +956,8 @@ av_cold int ff_decklink_read_header(AVFormatContext
>>> *avctx)
>>>  ctx->teletext_lines = cctx->teletext_lines;
>>>  ctx->preroll  = cctx->preroll;
>>>  ctx->duplex_mode  = cctx->duplex_mode;
>>> +if (cctx->tc_format > 0 && (unsigned int)cctx->tc_format <
>>> FF_ARRAY_ELEMS(decklink_timecode_format_map))
>>> +ctx->tc_format = decklink_timecode_format_map[cctx->tc_format];
>>>  if (cctx->video_input > 0 && (unsigned int)cctx->video_input <
>>> FF_ARRAY_ELEMS(decklink_video_connection_map))
>>>  ctx->video_input = decklink_video_connection_map[cctx->video_input];
>>>  if (cctx->audio_input > 0 && (unsigned int)cctx->audio_input <
>>> FF_ARRAY_ELEMS(decklink_audio_connection_map))
>>> diff --git a/libavdevice/decklink_dec_c.c b/libavdevice/decklink_dec_c.c
>>> index 47018dc681..6ab3819375 100644
>>> --- a/libavdevice/decklink_dec_c.c
>>> +++ b/libavdevice/decklink_dec_c.c
>>> @@ -48,6 +48,15 @@ static const AVOption options[] = {
>>>  { "unset",     NULL,  0,
>>> AV_OPT_TYPE_CONST, { .i64 = 0}, 0, 0,DEC, "duplex_mode"},
>>>  { "half",  NULL,  0,
>>> AV_OPT_TYPE_CONST, { .i64 = 1}, 0, 0,DEC, "duplex_mode"},
>>>  { "full",  NULL,  0,
>>> AV_OPT_TYPE_CONST, { .i64 = 2}, 0, 0,DEC, "duplex_mode"},
>>> +{ "timecode_format", "timecode format",   OFFSET(tc_format),
>>> AV_OPT_TYPE_INT,   { .i64 = 0}, 0, 7,DEC, "tc_format"},
>>> +{ "none",  NULL,  0,
>>> AV_OPT_TYPE_CONST, { .i64 = 0}, 0, 0,DEC, "tc_format"},
>>> +{ "rp188vitc", NULL,  0,
>>> AV_OPT_TYPE_CONST, { .i64 = 1}, 0, 0,DEC, "tc_format"},
>>> +{ "rp188vitc2",NULL,  0,
>>> AV_OPT_TYPE_CONST, { .i64 = 2}, 0, 0,DEC, "tc_format"},
>>> +{ "rp188ltc",  NULL,  0,
>>> AV_OPT_TYPE_CONST, { .i64 = 3}, 0, 0,DEC, "tc_format"},
>>> +{ "rp188any",  NULL,  0,
>>> AV_OPT_TYPE_CONST, { .i64 = 4}, 0, 0,DEC, "tc_format"},
>>> +{ "vitc",  NULL,  0,
>>> AV_OPT_TYPE_CONST, { .i64 = 5}, 0, 0,DEC, "tc_format"},
>>> +{ "vitc2", NULL,  0,
>>> AV_OPT_TYPE_CONST, { .i64 = 6}, 0, 0,DEC, "tc_format"},
>>> +{ "serial",NULL,  0,
>>> AV_OPT_TYPE_CONST, { .i64 = 7}, 0, 0,DEC, "tc_format"},
>>>  { "video_input",  "video input",  OFFSET(video_input),
>>> AV_OPT_TYPE_INT,   { .i64 = 0}, 0, 6,DEC, "video_input"},
>>>  { "unset", NULL,  0,
>>> AV_OPT_TYPE_CONST, { .i64 = 0}, 0, 0,DEC, "video_input"},
>>>  { "sdi",   NULL,  0,
>>> AV_OPT_TYPE_CONST, { .i64 = 1}, 0, 0,DEC, "video_input"},
>>> --
>> 
>> Documentation update is missing.
> 
> D’oh! Sloppy.

Thanks for this work!
Dave
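For completeness, a rough sketch of selecting the option through the libavformat
API rather than the ffmpeg command line, assuming the "timecode_format" option
name from the patch quoted above; the device name and the chosen value are only
examples:

    #include <libavdevice/avdevice.h>
    #include <libavformat/avformat.h>
    #include <libavutil/dict.h>

    static int open_decklink_with_timecode(AVFormatContext **fmt_ctx)
    {
        AVDictionary *opts = NULL;
        int ret;

        avdevice_register_all();

        /* Equivalent to "-timecode_format vitc2 -f decklink -i 'UltraStudio 3D'". */
        av_dict_set(&opts, "timecode_format", "vitc2", 0);
        ret = avformat_open_input(fmt_ctx, "UltraStudio 3D",
                                  av_find_input_format("decklink"), &opts);
        av_dict_free(&opts);
        return ret;
    }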



Re: [FFmpeg-devel] [RFC][ALT PATCHES] Code of Conduct Enforcement

2018-05-17 Thread Dave Rice

> On May 17, 2018, at 10:22 AM, Clément Bœsch <u...@pkh.me> wrote:
> 
> On Mon, May 14, 2018 at 05:50:25PM +0100, Derek Buitenhuis wrote:
> [...]
>>1. Implement a formal CoC enforcement system. This has been mostly 
>> copypasted
>>   from VideoLAN's, and is meant as more of a blueprint. This will no 
>> doubt
>>   be controversial.
> 
> So as mentioned already in the thread, the main issue is having a
> police/justice entity. I would say it needs to be separate from the
> development team (to maintain a power separation). Since such profile
> doesn't seem to be exactly common in the open source world, maybe we could
> externalize it. Does such a service exist with reasonable prices? Could we
> use our funds for this? I understand this may sound far-fetched, but who
> knows.

CoC enforcement as a paid service sounds alarming. Though it might make sense 
to consider people separate from the development team for the role. There are 
likely many people who would like to contribute to the FFmpeg project, though 
not as developers, who could consider such a role.

> If such solution is not viable, we could fallback on the voting committee
> to elect/design a subgroup of itself (an odd number like 3 persons maybe?)
> to hold this moderation task for a period of 3 or 6 months, maybe 1 year.
> Then these members are automatically maintainer of the CoC for this period
> of time, and decide what to do with it.
> 
> Just random thoughts, no hard opinion on it to be honest.

I like this suggestion for a small committee to be tasked and trusted with such 
actions. I consider that it might be easier to find rough consensus in scenario 
a than in scenario b.

a) the larger ffmpeg community finds consensus to appoint a CoC committee and 
as needed the CoC committee finds consensus (as a small group) on how to 
respond to concerns from the community and to implement the CoC.

b) the larger ffmpeg community finds consensus on how to implement the CoC 
directly each time there’s a concern from the community.

Dave Rice



Re: [FFmpeg-devel] [PATCH] libavformat: add mbedTLS based TLS

2018-05-14 Thread Dave Gregory
> Have you considered BearSSL?
Thanks Nicolas; it sounds great but we disregarded it because the web
site declares it to be beta-quality. We will certainly keep an eye on
it.


Re: [FFmpeg-devel] [PATCH] libavformat: add mbedTLS based TLS

2018-05-14 Thread Dave Gregory
Hi all,

I am an engineer working at Manything (https://manything.com). We
develop software that allows users to view their security cameras over
the Internet. Thomas has been working with us to develop this mbedTLS
integration.

Thanks for your feedback on his patch; we have run some tests that I
hope will answer some of your questions. We've not been able to
produce all the detail you've requested, but hopefully what we have is
a compelling case. With some advice, I hope we could produce any other
evidence you need to support acceptance of the patch. This is our
first contribution back to FFmpeg, so we welcome guidance on how to do
the right things in the right way.

First, a bit of context on our motivation. We write software that runs on
IP cameras to connect them to our cloud service. We started off by
using GnuTLS to secure our connections but there's very limited disk
space on the cameras and the relatively large binary footprint caused
some problems. Looking around for an alternative library we discovered
mbedTLS, a project with a good open-source heritage, suitable license
terms and a major commercial backer (ARM Holdings). We were easily
able to make use of its integration with cURL but it wasn't yet
supported by the FFmpeg project, so we found Thomas via the project's
consulting page and embarked on the integration. mbedTLS is an
excellent fit for our needs; I hope this is an indicator of its potential value
to other FFmpeg users.

Below you will find our method and results. We look forward to your
feedback.

Many thanks,
Dave

--

FFmpeg TLS libraries: disk and memory footprint comparison report

Goal: Compare the memory usage and disk footprint of the new mbedTLS
integration against the three existing platform-agnostic TLS libraries
supported by FFmpeg.

Approach summary (full detail in appendix A):
 - Build FFmpeg against four TLS libraries, measure output library sizes.
- schannel (Windows) and securetransport (macOS) are excluded from
the analysis because it would be very difficult to compare results
fairly across different operating systems (and impossible on a Linux
test rig).
 - Stream well-known free video file (Big Buck Bunny) to remote rtsps
server in realtime,
collect memory statistics with valgrind/massif.
 - Interrogate massif outputs to assess relative memory consumption of
each library.

Note: The project's actual devices are mostly ARM-based but we have
found valgrind hard to run on ARM. The stats below have therefore been
created on an x86_64 box to provide memory measurements easily and
repeatably.


Results summary (full detail in appendix B):
 - GnuTLS 3.4.17
   - Peak ssl/tls/crypto-related memory allocation: 4.36MB
   - TLS library file size on disk, including dependencies: 5.5MB

 - LibreSSL 2.7.2 (libtls)
   - Peak ssl/tls/crypto-related memory allocation: 115KB
   - TLS library file size on disk, including dependencies: 3.27MB

 - mbedTLS 2.8.0
   - Peak ssl/tls/crypto-related memory allocation: 40KB
   - TLS library file size on disk, including dependencies: 828KB

 - OpenSSL 1.0.2n
   - Peak ssl/tls/crypto-related memory allocation: 151KB
   - TLS library file size on disk, including dependencies: 2.7MB

All supporting data can be downloaded from
https://assetcdn.manything.com/downloads/data/ffmpeg_tls_massif_data.tar.xz.

Conclusion: For the task of outbound interleaved RTSPS streaming,
mbedTLS uses about 35% of the memory of LibreSSL/libtls (its nearest
competitor) with only 25% of the disk footprint.


---
APPENDIX A: Test detail

Test system:
 - Linux vagrant-ubuntu-trusty-64 3.13.0-87-generic #133-Ubuntu (VM)
 - Running in virtualbox on MacBook Pro (Retina, 13-inch, Early 2015)
 - Ubuntu EGLIBC 2.19-0ubuntu6.11


Test process:
 - Compile dependencies, with configuration flags listed, and install
into custom prefix (referred to as ${DEPS_OUTDIR})
- Unless otherwise specified, steps are:
 configure --prefix=${DEPS_OUTDIR} (any extra flags specified above)
 make
 make install
 - Measure size of relevant libs
  e.g. ls -lh ${DEPS_OUTDIR}/lib/ | grep
'ssl\|crypto\|tls\|nettle\|hogweed\|gmp\|gpg\|gcrypt'
 - Compile ffmpeg against dependencies, with additional configuration
flags listed above.
  configure --prefix=${DEPS_OUTDIR} --disable-stripping
--disable-debug --enable-shared --disable-everything --enable-version3
--enable-parser=h264,hevc,aac
--enable-decoder=h264,hevc,aac,pcm_alaw,pcm_mulaw
--enable-protocol=tcp,udp,file,tls,tls_gnutls --enable-demuxer=rtsp
--enable-muxer=rtsp,mp4,mpegts,pcm_alaw,pcm_mulaw --disable-ffplay
--disable-ffprobe --disable-doc --disable-avdevice
 - Stream known free video file to remote rtsps server in realtime,
wrapped in valgrind/massif
  valgrind --tool=massif --threshold=0.1 ${DEPS_OUTDIR}/bin/ffmpeg
-re -i BigBuckBunny_320x180.mp4 -c:v copy -c:a copy -f rtsp
-rtsp_transport tcp rtsps://USER:PASS@REMOTEHOST:443

Re: [FFmpeg-devel] [PATCH 2/3] libavcodec: v4l2m2m: output AVDRMFrameDescriptor

2018-05-09 Thread Dave Stevenson
On 9 May 2018 at 00:33, Mark Thompson <s...@jkqxz.net> wrote:
> On 08/05/18 19:24, Lukas Rusak wrote:
>> +
>> +layer->format = avbuf->context->av_pix_fmt == AV_PIX_FMT_NV12 ?
>> +DRM_FORMAT_NV12 : DRM_FORMAT_NV21;
>> +layer->nb_planes = 2;
>> +
>> +layer->planes[1].object_index = 0;
>> +layer->planes[1].offset = avbuf->plane_info[0].bytesperline *
>> +avbuf->context->format.fmt.pix_mp.height;
>
> Is that always true?  I would expect that some driver might want more 
> vertical alignment (especially with tiled formats) and would provide this 
> number somewhere else.

The V4L2 spec defines their NV12/21 as:
"These are two-plane versions of the YUV 4:2:0 format. The three
components are separated into two sub-images or planes. The Y plane is
first. The Y plane has one byte per pixel. For V4L2_PIX_FMT_NV12, a
combined CbCr plane immediately follows the Y plane in memory. " [1]

Please be aware that there is now the V4L2 multi-planar API which
returns the planes in independent buffers (and an independent dma_buf
fd for each plane). That is not the case being handled here.
If being a real pedant, then it wants to be "layer->planes[1].offset =
avbuf->plane_info[0].bytesperline *
avbuf->context->format.fmt.pix.height;" as this should be using the
single planar API union member. width and height align between the two
structures anyway.

If you want the buffer to be a multiple of macroblocks or have other
padding requirements, then the width and height are defined to be that
size. Use the selection API [2] to get the cropping window.
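A rough sketch of that selection query, assuming the single-planar buffer type for
brevity (a multi-planar queue would use the corresponding _MPLANE type):

    #include <string.h>
    #include <sys/ioctl.h>
    #include <linux/videodev2.h>

    /* Returns the visible rectangle inside the (possibly padded) capture buffer. */
    static int get_visible_rect(int video_fd, struct v4l2_rect *r)
    {
        struct v4l2_selection sel;

        memset(&sel, 0, sizeof(sel));
        sel.type   = V4L2_BUF_TYPE_VIDEO_CAPTURE;
        sel.target = V4L2_SEL_TGT_COMPOSE;  /* area the decoder writes the picture into */

        if (ioctl(video_fd, VIDIOC_G_SELECTION, &sel) < 0)
            return -1;

        *r = sel.r;  /* left, top, width, height of the usable picture */
        return 0;
    }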

>> +layer->planes[1].pitch = avbuf->plane_info[0].bytesperline;
>> +break;
>> +
>> +case AV_PIX_FMT_YUV420P:
>> +
>> +if (avbuf->num_planes > 1)
>> +break;
>> +
>> +layer->format = DRM_FORMAT_YUV420;
>> +layer->nb_planes = 3;
>> +
>> +layer->planes[1].object_index = 0;
>> +layer->planes[1].offset = avbuf->plane_info[0].bytesperline *
>> +avbuf->context->format.fmt.pix_mp.height;
>> +layer->planes[1].pitch = avbuf->plane_info[0].bytesperline >> 1;
>> +
>> +layer->planes[2].object_index = 0;
>> +layer->planes[2].offset = layer->planes[1].offset +
>> +((avbuf->plane_info[0].bytesperline *
>> +  avbuf->context->format.fmt.pix_mp.height) >> 2);
>> +layer->planes[2].pitch = avbuf->plane_info[0].bytesperline >> 1;
>
> Similarly here, and the pitch feels dubious as well.  Is 
> plane_info[n].bytesperline set for n > 0?

Likewise the definition of YU12/YV12 is that the planes will follow
immediately [3]

Pass on plane_info[n] - I only know V4L2 in any depth.

>> +break;
>> +
>> +default:
>
> Probably want a more explicit failure in other cases.  Is there any 10-bit 
> handling here yet (P010, I guess)?

Based on a quick search I don't believe V4L2 has added support for any
10 bit formats as yet, therefore it would be strange to get here with
a 10bit format selected.

  Dave

[1] https://linuxtv.org/downloads/v4l-dvb-apis-new/uapi/v4l/pixfmt-nv12.html
[2] 
https://linuxtv.org/downloads/v4l-dvb-apis-new/uapi/v4l/vidioc-g-selection.html#vidioc-g-selection
[3] https://linuxtv.org/downloads/v4l-dvb-apis-new/uapi/v4l/pixfmt-yuv420.html
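To make the offset arithmetic under discussion concrete, a small sketch of filling
one AVDRMLayerDescriptor for the contiguous single-buffer layouts the V4L2 spec
describes. bytesperline and padded_height stand in for whatever the driver
reported; a real implementation would take them from the negotiated format:

    #include <stddef.h>
    #include <drm_fourcc.h>
    #include <libavutil/hwcontext_drm.h>

    static void fill_nv12_layer(AVDRMLayerDescriptor *layer,
                                ptrdiff_t bytesperline, int padded_height)
    {
        layer->format    = DRM_FORMAT_NV12;
        layer->nb_planes = 2;

        layer->planes[0].object_index = 0;
        layer->planes[0].offset       = 0;
        layer->planes[0].pitch        = bytesperline;

        /* The combined CbCr plane immediately follows the Y plane, same pitch. */
        layer->planes[1].object_index = 0;
        layer->planes[1].offset       = bytesperline * padded_height;
        layer->planes[1].pitch        = bytesperline;
    }

    static void fill_yuv420_layer(AVDRMLayerDescriptor *layer,
                                  ptrdiff_t bytesperline, int padded_height)
    {
        ptrdiff_t luma_size = bytesperline * padded_height;

        layer->format    = DRM_FORMAT_YUV420;
        layer->nb_planes = 3;

        layer->planes[0].object_index = 0;
        layer->planes[0].offset       = 0;
        layer->planes[0].pitch        = bytesperline;

        /* Cb and Cr follow immediately, each with half the pitch and a quarter
         * of the luma plane's size. */
        layer->planes[1].object_index = 0;
        layer->planes[1].offset       = luma_size;
        layer->planes[1].pitch        = bytesperline / 2;

        layer->planes[2].object_index = 0;
        layer->planes[2].offset       = luma_size + luma_size / 4;
        layer->planes[2].pitch        = bytesperline / 2;
    }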


Re: [FFmpeg-devel] [PATCH 2/3] libavcodec: v4l2m2m: output AVDRMFrameDescriptor

2018-05-09 Thread Dave Stevenson
m it is suitably released.

Videobuf2 (the V4L2 buffer allocation/handling layer) will have created a
dmabuf fd for each buffer/plane via the VIDIOC_EXPBUF ioctl. It is up
to the client to close those.

There is an odd-ball in videobuf2. On the REQBUFS(0) call to release
all the buffers it checks the number of users of the buffers, and
fails the release call if anyone is using it. It's even documented
that way [1].
This had been raised with the V4L2 maintainers as strange and
annoying, but not resolved [2]. It probably needs picking up again and
getting merged into mainline, but that will then have a hard
requirement of needing a latest kernel.

The error message here is really for information that things haven't
cleaned up in the manner that you might expect. Either you just ignore
it and let V4L2 clean up eventually when the main V4L2 device fd gets
closed, or you need applications to behave in a particular manner.
Ignoring it has potential knock-on issues as calls like S_FMT to
request a different format/resolution will typically fail if buffers
are allocated.

>> This is what I mean:
>> https://github.com/BayLibre/ffmpeg-drm/blob/master/main.c#L391
>>
>> I think this would mean that the libavcodec should open the drm device 
>> instead of the client application doing it and perform the actions above in 
>> unref.
>> would that be acceptable?
>
> Is there necessarily an explicit DRM device associated with a V4L2 decoder?  
> (And if so, how do you find it?)  For comparison, there isn't one for 
> Rockchip - there we create a dummy device to host the frames context.

No, videobuf2 will have made the allocation and created the dmabuf
object you get the fd for. That could be imported into DRM (via
DRM_IOCTL_PRIME_FD_TO_HANDLE), EGL (via EGL_LINUX_DMA_BUF_EXT), a V4L2
encoder, or any other subsystem that supports importing dmabufs.
There is no reliance at all on DRM or any particular DRM device, only dmabuf.
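As a concrete illustration of the "only dmabuf" point, importing one exported
buffer into a DRM device is a single libdrm call; drm_fd and dmabuf_fd are
whatever the client happens to have opened or received:

    #include <stdint.h>
    #include <xf86drm.h>

    /* Turns a dmabuf fd into a GEM handle on an already-open DRM device.
     * The handle is what framebuffer creation consumes, and it must later
     * be released again with DRM_IOCTL_GEM_CLOSE. */
    static int import_dmabuf(int drm_fd, int dmabuf_fd, uint32_t *gem_handle)
    {
        return drmPrimeFDToHandle(drm_fd, dmabuf_fd, gem_handle);
    }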

  Dave

[1] https://linuxtv.org/downloads/v4l-dvb-apis-new/uapi/v4l/vidioc-reqbufs.html
"Applications can call ioctl VIDIOC_REQBUFS again to change the number
of buffers, however this cannot succeed when any buffers are still
mapped. A count value of zero frees all buffers, after aborting or
finishing any DMA in progress, an implicit VIDIOC_STREAMOFF."
[2] https://patchwork.kernel.org/patch/7404111/
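A minimal sketch of a teardown order that satisfies the documented REQBUFS(0)
behaviour, assuming the client still holds the fds it got from VIDIOC_EXPBUF (the
variable names are illustrative):

    #include <string.h>
    #include <unistd.h>
    #include <sys/ioctl.h>
    #include <linux/videodev2.h>

    static int release_capture_buffers(int video_fd, int *dmabuf_fds, int nb_bufs)
    {
        int type = V4L2_BUF_TYPE_VIDEO_CAPTURE_MPLANE;
        struct v4l2_requestbuffers req;
        int i;

        /* 1. stop the queue */
        ioctl(video_fd, VIDIOC_STREAMOFF, &type);

        /* 2. close every exported dmabuf fd (and make sure no importer such as
         *    DRM, EGL or an encoder still holds a reference, otherwise step 3
         *    fails with EBUSY as described above) */
        for (i = 0; i < nb_bufs; i++)
            close(dmabuf_fds[i]);

        /* 3. ask videobuf2 to free the buffers themselves */
        memset(&req, 0, sizeof(req));
        req.count  = 0;
        req.type   = V4L2_BUF_TYPE_VIDEO_CAPTURE_MPLANE;
        req.memory = V4L2_MEMORY_MMAP;
        return ioctl(video_fd, VIDIOC_REQBUFS, &req);
    }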


Re: [FFmpeg-devel] [RFC] Exporting virtual timelines as stream side data

2018-03-28 Thread Dave Rice

> On Mar 27, 2018, at 5:16 PM, wm4 <nfx...@googlemail.com> wrote:
> 
> On Tue, 27 Mar 2018 16:45:23 -0400
> Dave Rice <d...@dericed.com> wrote:
> 
>>> On Mar 27, 2018, at 4:33 PM, wm4 <nfx...@googlemail.com> wrote:
>>> 
>>> On Tue, 27 Mar 2018 16:11:11 -0400
>>> Dave Rice <d...@dericed.com> wrote:
>>> 
>>>>> On Mar 27, 2018, at 4:01 PM, Derek Buitenhuis 
>>>>> <derek.buitenh...@gmail.com> wrote:
>>>>> 
>>>>> On 3/27/2018 8:52 PM, Rostislav Pehlivanov wrote:
>>>>>> I think we should drop the internal crap if the tools and the API support
>>>>>> it. Would also solve a lot of issues like ffmpeg.c not trimming the start
>>>>>> frame (so people complain all the time about longer files).
>>>>> 
>>>>> I personally agree, but I thought I'd be diplomatic about it, since it 
>>>>> would
>>>>> technically be losing a 'feature', since it would no longer Just Work(ish)
>>>>> and require user applications to apply timelines themselves - and I 
>>>>> figured
>>>>> some would argue that point.
>>>> 
>>>> +1 I’m willing to contribute what information or samples would be needed 
>>>> to help with Matroska support with virtual timelines. IMO, this would be a 
>>>> valuable feature to have in ffmpeg.
>>>> Dave Rice  
>>> 
>>> Some explanations how this interacts with editions would be good.  
>> 
>> I put an example with two editions at 
>> https://archive.org/download/chapters_test/chapters_test.mkv which mimics a 
>> digitized video 
> 
> Also this file lacks a chapter end time in the second edition. How is
> that valid?

You’re right. I moved this discussion to cellar, but when the Edition is 
Ordered then ChapterTimeEnd is required.
Dave Rice



Re: [FFmpeg-devel] [RFC] Exporting virtual timelines as stream side data

2018-03-27 Thread Dave Rice

> On Mar 27, 2018, at 4:33 PM, wm4 <nfx...@googlemail.com> wrote:
> 
> On Tue, 27 Mar 2018 16:11:11 -0400
> Dave Rice <d...@dericed.com> wrote:
> 
>>> On Mar 27, 2018, at 4:01 PM, Derek Buitenhuis <derek.buitenh...@gmail.com> 
>>> wrote:
>>> 
>>> On 3/27/2018 8:52 PM, Rostislav Pehlivanov wrote:  
>>>> I think we should drop the internal crap if the tools and the API support
>>>> it. Would also solve a lot of issues like ffmpeg.c not trimming the start
>>>> frame (so people complain all the time about longer files).  
>>> 
>>> I personally agree, but I thought I'd be diplomatic about it, since it would
>>> technically be losing a 'feature', since it would no longer Just Work(ish)
>>> and require user applications to apply timelines themselves - and I figured
>>> some would argue that point.  
>> 
>> +1 I’m willing to contribute what information or samples would be needed to 
>> help with Matroska support with virtual timelines. IMO, this would be a 
>> valuable feature to have in ffmpeg.
>> Dave Rice
> 
> Some explanations how this interacts with editions would be good.

I put an example with two editions at 
https://archive.org/download/chapters_test/chapters_test.mkv which mimics a 
digitized video tape. One edition presents the stored video encoding in its 
entirety, and the other, default edition plays selected portions of it. When 
the file is played in VLC, the default edition is used to show a shortened 
presentation, but the other editions may be selected via the Program menu. In 
the proposal there should be an option to mimic the current behavior and ignore 
the editions (the equivalent of -ignore_editlist in mov), to use the default 
edition, or to select a specific edition.
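
To make the comparison concrete, here is a rough sketch of the kind of control 
I have in mind; the mov option exists today, while the Matroska option names 
below are purely hypothetical placeholders rather than a proposed interface:

# existing mov behavior: ignore the edit list and expose the raw samples
ffmpeg -ignore_editlist 1 -i input.mov -c copy flat.mkv

# hypothetical Matroska analogues (names are illustrative only):
#   ffmpeg -ignore_editions 1 -i chapters_test.mkv -c copy full.mkv
#   ffmpeg -edition 1 -i chapters_test.mkv -c copy default_edition.mkv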

> Or documenting ordered chapters and segment linking in the mkv "spec"
> at all.

This document, http://mod16.org/hurfdurf/?p=8, is unofficially a good resource 
on that.

More officially, the Cellar working group has this section on Linked Segments: 
https://tools.ietf.org/html/draft-lhomme-cellar-matroska-04#section-23, and 
this one on Ordered Editions: 
https://tools.ietf.org/html/draft-lhomme-cellar-matroska-04#section-10.1.2.3. 
This documentation is a work in progress, so comments/reviews/suggestions are 
welcome (issues can be filed at 
https://github.com/Matroska-Org/matroska-specification or via 
https://www.ietf.org/mailman/listinfo/cellar).

Dave Rice

___
ffmpeg-devel mailing list
ffmpeg-devel@ffmpeg.org
http://ffmpeg.org/mailman/listinfo/ffmpeg-devel


Re: [FFmpeg-devel] [RFC] Exporting virtual timelines as stream side data

2018-03-27 Thread Dave Rice

> On Mar 27, 2018, at 4:01 PM, Derek Buitenhuis <derek.buitenh...@gmail.com> 
> wrote:
> 
> On 3/27/2018 8:52 PM, Rostislav Pehlivanov wrote:
>> I think we should drop the internal crap if the tools and the API support
>> it. Would also solve a lot of issues like ffmpeg.c not trimming the start
>> frame (so people complain all the time about longer files).
> 
> I personally agree, but I thought I'd be diplomatic about it, since it would
> technically be losing a 'feature', since it would no longer Just Work(ish)
> and require user applications to apply timelines themselves - and I figured
> some would argue that point.

+1 I’m willing to contribute what information or samples would be needed to 
help with Matroska support with virtual timelines. IMO, this would be a 
valuable feature to have in ffmpeg.
Dave Rice
___
ffmpeg-devel mailing list
ffmpeg-devel@ffmpeg.org
http://ffmpeg.org/mailman/listinfo/ffmpeg-devel


Re: [FFmpeg-devel] [Cellar] [PATCH] avcodec/ffv1: Support for RGBA64 and GBRAP16

2018-02-05 Thread Dave Rice

> On Feb 5, 2018, at 4:20 PM, Jerome Martinez <jer...@mediaarea.net> wrote:
> 
> On 03/02/2018 14:48, Michael Niedermayer wrote:
>> 
>> I hope this will not reduce interrest in working on a improved
>> 9-16bit mode in v4.
> 
> I don't like to put politics in technical stuff, but here this is politics, 
> and I think that blocking an easy improvement of FFV1 v3 encoding/decoding in 
> FFmpeg (actually just using the current FFV1 v3, and also v1 actually, 
> specification without having artificial limitations about bit depths for a 
> specific color space, i.e. 16-bit support was already there for YUV before I 
> work on FFV1) just for forcing people to do a completely different work (e.g. 
> working on FFV1 v4) is not fair and IMO is not in the idea behind open source.
> IMO if interest for 9-16bit (side note: I don't see why 8-bit should not be 
> in the range, some ideas I have for v4 are useful also for 8-bit) mode in v4 
> is reduced, that would only mean that v3 is already so great and/or just that 
> people have no better idea right now, it is not bad, and both works (using v3 
> at the maximum of its possibilities and working on v4) are separate works. 
> FYI criticism I saw about FFV1 is not compression ratio but speed (playing a 
> 4K stream is just impossible on a "normal" machine, you need a very expensive 
> CPU for that) so my plan is to focus on speed improvement in priority. It 
> could be v4 stuff (incompatible changes), or v3 (encoder/decoder 
> optimization), depending of what I find.

IMHO, improved higher bit depth support in version 4 is very interesting. We 
already have a few noted exceptions where version 4 is intended to fix and 
optimize things from version 3 (signed 16 bit handling, RGB plane order for 
9-15 bits), so optimizing high bit depth handling also seems appropriate for 
consideration there.

>> [...]
>> 
>>> Last but not least, since almost two years now it's impossible
>>> to work on the development of FFV1 v4. It's always the wrong
>>> time and/or the wrong place every time I am doing something for
>>> this cause. Why? This is extremely frustrating.
>> I want to understand this too. IMO v4 development should be more
>> important than or at least equally to the v3 draft polishing and neither
>> should block the other.
> 
> Also politics, but I don't understand who is blocking v4 from your point of 
> view. "impossible to work on v4"? Where? As far as I see 1 person (me) said 
> his priority is having FFV1 v3 becoming a standard (so IETF related work) and 
> widely used (so any bit depth for any supported color space in v3 because it 
> is an easy demonstration that FFV1 is versatile without having to wait for v4 
> R, especially because v3 bitstream design is already pretty good and 
> versatile) because this is what I need, and my personal case is not blocking 
> anyone else for sending patches for either FFV1 specifications or FFmpeg code 
> about v4. Eager to see patch proposals for v4 (and v3 if it does not break 
> support of files in the wild), whoever feeling blocked with his patches 
> should send patch proposals to either FFmpeg (code) or CELLAR mailing list 
> (bitstream format).
> 
> That said, I plan to add more pix_fmt support for FFV1 v3 (which is already a 
> good format!) support in FFmpeg and some optimization ideas beside my work 
> for IETF spec, with the hope we could speak about technical issues (e.g. more 
> optimization hints or info about wrong code or warning about that it breaks 
> the bitstream specs as currently written), leaving aside politics.

In the cellar charter and timeline, the effort to produce an informational 
specification for FFV1 video codec versions 0, 1 and 3 precedes the effort to 
produce a specification for FFV1 video codec version 4. Initially I anticipated 
that the version 4 specification would be a fork from the completed version 
0,1,3 draft; however, I think the current approach (one doc that ‘makes’ both 
outputs) works well to allow version 4 work to proceed without blocking any 
remaining version 0,1,3 work. Still I suggest that we resolve 
https://github.com/FFmpeg/FFV1/pull/87 and 
https://github.com/FFmpeg/FFV1/pull/71 soon, as IMHO the version 0,1,3 
informational specification is very close to a last call and submission to the 
IESG (chairs are welcome to offer other opinions ;) ). So while I encourage 
resolution to these pull requests, it seems we have a good system to manage 
concurrent work on both FFV1 goals of the charter.

Kind Regards,
Dave Rice


___
ffmpeg-devel mailing list
ffmpeg-devel@ffmpeg.org
http://ffmpeg.org/mailman/listinfo/ffmpeg-devel


Re: [FFmpeg-devel] [PATCH] ffmpeg.c: use sigterm_handler with sigpipe

2018-01-18 Thread Dave Rice
Thread bump.

> On Jan 11, 2018, at 5:51 PM, Nicolas George <geo...@nsup.org> wrote:
> 
> Moritz Barsnick (2018-01-11):
>> This patch doesn't change the handling of SIGTERM
> 
> You should have read SIGPIPE, obviously.
> 
>> Is SIGPIPE an interactive signal?
> 
> Of course not.
> 
>>  Anything on the other side of output
>> file(name) "-" or "pipe:N" may terminate for some reason.
> 
> Yes, that is exactly what SIGPIPE is for.
> 
>> This patch does NOT try to ignore anything. ffmpeg won't keep running
>> due to ignoring of SIGPIPE, it will terminate more cleanly due to
>> handling it. The former is not desired. (But yes, shall handing to
>> enforce ignoring it would allow that.)
> 
> It will terminate less cleanly than if you do the right thing with
> SIGPIPE.

This patch has been working for me, and ffmpeg terminates cleanly on SIGPIPE 
with a valid output (moov atom written for mov, cues/seekhead written for mkv). 
I am not sure I fully understand the disadvantage of this patch.
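
For reference, a minimal way to reproduce the scenario (the pipeline below is 
only illustrative; any downstream reader that exits early will do):

# one file output plus one pipe output; when the reader exits, ffmpeg receives
# SIGPIPE on the next write and, with this patch, runs its normal termination
# path so capture.mkv still gets its cues/seekhead written
ffmpeg -f lavfi -i testsrc=d=60 -c:v ffv1 capture.mkv \
       -c:v rawvideo -f nut - | head -c 1000000 > /dev/null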
Dave Rice

___
ffmpeg-devel mailing list
ffmpeg-devel@ffmpeg.org
http://ffmpeg.org/mailman/listinfo/ffmpeg-devel


[FFmpeg-devel] [PATCH] ffmpeg.c: use sigterm_handler with sigpipe

2018-01-11 Thread Dave Rice
From 0faa2954010feb8428542fc33aa81e898a280c88 Mon Sep 17 00:00:00 2001
From: Dave Rice <d...@dericed.com>
Date: Thu, 11 Jan 2018 15:52:36 -0500
Subject: [PATCH] ffmpeg.c: use sigterm_handler with sigpipe

Based on a suggestion by Moritz Barsnick at 
http://ffmpeg.org/pipermail/ffmpeg-user/2018-January/038549.html 
<http://ffmpeg.org/pipermail/ffmpeg-user/2018-January/038549.html>

---
 fftools/ffmpeg.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/fftools/ffmpeg.c b/fftools/ffmpeg.c
index 0c16e75ab0..dfcc865dcf 100644
--- a/fftools/ffmpeg.c
+++ b/fftools/ffmpeg.c
@@ -406,6 +406,7 @@ void term_init(void)
 
 signal(SIGINT , sigterm_handler); /* Interrupt (ANSI).*/
 signal(SIGTERM, sigterm_handler); /* Termination (ANSI).  */
+signal(SIGPIPE, sigterm_handler); /* Termination (pipe closed).  */
 #ifdef SIGXCPU
 signal(SIGXCPU, sigterm_handler);
 #endif
-- 
2.15.1
___
ffmpeg-devel mailing list
ffmpeg-devel@ffmpeg.org
http://ffmpeg.org/mailman/listinfo/ffmpeg-devel


Re: [FFmpeg-devel] [PATCH] avfilter/avf_showwaves: add draw grid support

2017-11-21 Thread Dave Rice

> On Nov 21, 2017, at 7:36 AM, Paul B Mahol <one...@gmail.com> wrote:
> 
> Signed-off-by: Paul B Mahol <one...@gmail.com>
> ---
> doc/filters.texi|  6 ++
> libavfilter/avf_showwaves.c | 28 
> 2 files changed, 34 insertions(+)
> 
> diff --git a/doc/filters.texi b/doc/filters.texi
> index 62f633c6f8..9c98f1684b 100644
> --- a/doc/filters.texi
> +++ b/doc/filters.texi
> @@ -19178,6 +19178,12 @@ Cubic root.
> @end table
> 
> Default is linear.
> +
> +@item grid
> +Draw grid, default is disabled.
> +
> +@item grid_color
> +Set grid color.
> @end table
> 
> @subsection Examples
> diff --git a/libavfilter/avf_showwaves.c b/libavfilter/avf_showwaves.c
> index 0866967984..74d4886cd4 100644
> --- a/libavfilter/avf_showwaves.c
> +++ b/libavfilter/avf_showwaves.c
> @@ -69,6 +69,8 @@ typedef struct ShowWavesContext {
> int mode;   ///< ShowWavesMode
> int scale;  ///< ShowWavesScale
> int split_channels;
> +int grid;
> +uint8_t grid_rgba[4];
> uint8_t *fg;
> 
> int (*get_h)(int16_t sample, int height);
> @@ -104,6 +106,8 @@ static const AVOption showwaves_options[] = {
> { "log", "logarithmic",0, AV_OPT_TYPE_CONST, {.i64=SCALE_LOG}, 
> .flags=FLAGS, .unit="scale"},
> { "sqrt", "square root",   0, AV_OPT_TYPE_CONST, {.i64=SCALE_SQRT}, 
> .flags=FLAGS, .unit="scale"},
> { "cbrt", "cubic root",0, AV_OPT_TYPE_CONST, {.i64=SCALE_CBRT}, 
> .flags=FLAGS, .unit="scale"},
> +{ "grid", "draw grid", OFFSET(grid), AV_OPT_TYPE_BOOL, {.i64=0}, 0, 1, 
> FLAGS },
> +{ "grid_color", "set grid color", OFFSET(grid_rgba), AV_OPT_TYPE_COLOR, 
> {.str="0x00"}, 0, 0, FLAGS },
> { NULL }
> };
> 
> @@ -562,6 +566,30 @@ static int alloc_out_frame(ShowWavesContext *showwaves, 
> const int16_t *p,
>   outlink->time_base);
> for (j = 0; j < outlink->h; j++)
> memset(out->data[0] + j*out->linesize[0], 0, outlink->w * 
> showwaves->pixstep);
> +
> +if (showwaves->grid) {
> +const int pixstep = showwaves->pixstep;
> +int ystep = showwaves->split_channels ? outlink->h / 
> inlink->channels / 4 : outlink->h / 4;
> +int channels = showwaves->split_channels ? inlink->channels : 1;
> +int x, s, c, yskip = 0;
> +
> +for (c = 0; c < channels; c++) {
> +for (j = 0; j < 4; j++) {
> +for (x = 0; x < outlink->w; x+=3) {
> +for (s = 0; s < pixstep; s++) {
> +out->data[0][(yskip + j * ystep) * 
> out->linesize[0] + x * pixstep + s] = showwaves->grid_rgba[s];
> +}
> +}
> +}
> +for (x = 0; x < outlink->w; x+=3) {
> +for (s = 0; s < pixstep; s++) {
> +out->data[0][(yskip + j * ystep - 1) * 
> out->linesize[0] + x * pixstep + s] = showwaves->grid_rgba[s];
> +}
> +}
> +
> +yskip += j * ystep;
> +}
> +}
> }
> return 0;
> }
> -- 
> 2.11.0

Seems interesting, but do the gridlines convey any meaning? Perhaps add a flags 
option too, similar to the waveform filter, that could label the lines in dB or 
as a float. It may also be worth adjusting the placement of the gridlines 
depending on the scale (log vs lin).
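
For anyone who wants to try it, something along these lines should exercise the 
new options (an untested sketch against this version of the patch; option names 
as defined above):

ffplay -f lavfi -i "sine=f=440:d=10,showwaves=s=640x240:mode=line:grid=1:grid_color=gray@0.5"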
Dave Rice

___
ffmpeg-devel mailing list
ffmpeg-devel@ffmpeg.org
http://ffmpeg.org/mailman/listinfo/ffmpeg-devel


Re: [FFmpeg-devel] [PATCH 3/4] avformat/movenc: force colr atom for uncompressed yuv in mov

2017-11-20 Thread Dave Rice

> On Nov 20, 2017, at 11:22 AM, Derek Buitenhuis <derek.buitenh...@gmail.com> 
> wrote:
> 
> On 11/20/2017 3:19 PM, Dave Rice wrote:
>> TN2162 requires a colr atom for uncompressed yuv (including v210, v308, 
>> v408, etc) in mov, so I'd prefer to write it in this case. Note that the 
>> colr atom provides an option for unspecified for each of the color values, 
>> so there's a method to write a colr atom which basically says ¯\_(ツ)_/¯.
> 
> [...]
> 
>> I disagree. I'd prefer to follow the specification and write a colr atom (in 
>> the case of uncompressed yuv in mov) that say the colr is unspecified rather 
>> than to write no colr atom at all and create an invalid file. See 
>> https://developer.apple.com/library/content/technotes/tn2162/_index.html#//apple_ref/doc/uid/DTS40013070-CH1-TNTAG9.
>> 
>> I do agree that guesswork should be avoided or provided under an option that 
>> users could opt into. I suppose my preference order from support to oppose 
>> for writing uncompressed yuv in mov would be:
>> 
>> 1: write a colr atom in all cases (if unknown, use unspecified values in 
>> colr atom)
>> 2: write a colr atom if color data the known (no colr atom if unknown)
>> 3: write a colr atom in all cases (if unknown, just make stuff up, #yolo)
> 
> My opinion falls somewhere in between, to something like this (in 
> pseudo-code):
> 
>if (colors_known) {
>write_colr(vals);
>} else if (uncompressed_yuv) {
>if (guesswork_user_option) {
>write_colr(guessed_hacky_crap);
>} else {
>write_colr(unspecified);
>}
>} else if (guesswork_user_option) {
>write_colr(guessed_hacky_crap);
>} else {
>// Don't write a colr atom because it adds no value, to write 
> unspecified in it
>// and no spec requires it for compressed streams.
>}

What do you propose as the default for guessed_hacky_crap? Also, are there 
supporters for the need of a guessed_hacky_crap option? Is there precedent in 
ffmpeg for enabling/disabling guesswork via a user option?
Dave
___
ffmpeg-devel mailing list
ffmpeg-devel@ffmpeg.org
http://ffmpeg.org/mailman/listinfo/ffmpeg-devel


Re: [FFmpeg-devel] [PATCH 3/4] avformat/movenc: force colr atom for uncompressed yuv in mov

2017-11-20 Thread Dave Rice

> On Nov 20, 2017, at 9:55 AM, Derek Buitenhuis <derek.buitenh...@gmail.com> 
> wrote:
> 
> On 11/20/2017 2:50 PM, Dave Rice wrote:
>> I am hesitant to see write_colr option removed since the guesswork used here 
>> https://github.com/FFmpeg/FFmpeg/blob/8f4702a93f87f9f76563e80f1ae2141a40029d9d/libavformat/movenc.c#L1747-L1775
>>  would then be applied to all movs of standard HD and SD size and cause many 
>> ffmpeg outputs to appear differently after the change.
> 
> 100% agree with this. We *should not* write guessed color information based
> off of resolution by default.
> 
> Further, if no color information is present, no colr atom should be written
> at all, unless it is forced by the user (aka the write_colr option right
> now).

TN2162 requires a colr atom for uncompressed yuv (including v210, v308, v408, 
etc) in mov, so I'd prefer to write it in this case. Note that the colr atom 
provides an option for unspecified for each of the color values, so there's a 
method to write a colr atom which basically says ¯\_(ツ)_/¯.
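
To illustrate that last point, a minimal sketch (not part of this patch set, 
and the function name is only illustrative) of what an all-unspecified colr 
could look like, using the same avio helpers movenc already uses:

static int mov_write_colr_unspecified_tag(AVIOContext *pb)
{
    avio_wb32(pb, 18);        /* atom size */
    ffio_wfourcc(pb, "colr");
    ffio_wfourcc(pb, "nclc"); /* color parameter type used in MOV */
    avio_wb16(pb, 2);         /* primaries index: 2 = unknown */
    avio_wb16(pb, 2);         /* transfer function index: 2 = unknown */
    avio_wb16(pb, 2);         /* matrix index: 2 = unknown */
    return 18;
}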

>> Could the guesswork here be removed? Or could write_colr be enabled with the 
>> option removed while the guesswork itself has its own option?
> 
> If the quesswork is removed or put under an option, I still maintain no
> colr atom should be written if no color information is known. Doing
> so is just asking for trouble, in my opinion.

I disagree. I'd prefer to follow the specification and write a colr atom (in 
the case of uncompressed yuv in mov) that says the colr is unspecified rather 
than write no colr atom at all and create an invalid file. See 
https://developer.apple.com/library/content/technotes/tn2162/_index.html#//apple_ref/doc/uid/DTS40013070-CH1-TNTAG9.

I do agree that guesswork should be avoided or provided under an option that 
users could opt into. I suppose my preference order from support to oppose for 
writing uncompressed yuv in mov would be:

1: write a colr atom in all cases (if unknown, use unspecified values in colr 
atom)
2: write a colr atom if the color data is known (no colr atom if unknown)
3: write a colr atom in all cases (if unknown, just make stuff up, #yolo)

Dave Rice

___
ffmpeg-devel mailing list
ffmpeg-devel@ffmpeg.org
http://ffmpeg.org/mailman/listinfo/ffmpeg-devel


Re: [FFmpeg-devel] [PATCH 3/4] avformat/movenc: force colr atom for uncompressed yuv in mov

2017-11-20 Thread Dave Rice

> On Nov 20, 2017, at 9:01 AM, Carl Eugen Hoyos <ceffm...@gmail.com> wrote:
> 
> 2017-11-20 2:24 GMT+01:00 James Almer <jamr...@gmail.com 
> <mailto:jamr...@gmail.com>>:
>> On 11/18/2017 11:19 PM, Dave Rice wrote:
>>> From 41da5e48f8788b85dd7a382030bb2866c506cc03 Mon Sep 17 00:00:00 2001
>>> From: Dave Rice <d...@dericed.com>
>>> Date: Sat, 18 Nov 2017 20:31:27 -0500
>>> Subject: [PATCH 3/4] avformat/movenc: force colr atom for uncompressed yuv 
>>> in
>>> mov
>>> MIME-Version: 1.0
>>> Content-Type: text/plain; charset=UTF-8
>>> Content-Transfer-Encoding: 8bit
>>> 
>>> As required by Apple’s TN2162.
>>> ---
>>> libavformat/movenc.c | 2 +-
>>> 1 file changed, 1 insertion(+), 1 deletion(-)
>>> 
>>> diff --git a/libavformat/movenc.c b/libavformat/movenc.c
>>> index aaa1dedfd7..86960b19c1 100644
>>> --- a/libavformat/movenc.c
>>> +++ b/libavformat/movenc.c
>>> @@ -1978,7 +1978,7 @@ static int mov_write_video_tag(AVIOContext *pb, 
>>> MOVMuxContext *mov, MOVTrack *tr
>>> else
>>> av_log(mov->fc, AV_LOG_WARNING, "Not writing 'gama' atom. 
>>> Format is not MOV.\n");
>>> }
>>> -if (mov->flags & FF_MOV_FLAG_WRITE_COLR) {
>>> +if (mov->flags & FF_MOV_FLAG_WRITE_COLR || uncompressed_ycbcr) {
>>> if (track->mode == MODE_MOV || track->mode == MODE_MP4)
>>> mov_write_colr_tag(pb, track);
>>> else
>>> 
>> 
>> The write_colr option says "Write colr atom (Experimental, may be
>> renamed or changed, do not use from scripts)". Does that still apply? Is
>> the feature/spec still experimental?
>> 
>> If not, then the option and flag could be removed as well as part of
>> this patch.
> 
> I believe it should be removed in a follow-up patch.

I am hesitant to see the write_colr option removed, since the guesswork used here 
https://github.com/FFmpeg/FFmpeg/blob/8f4702a93f87f9f76563e80f1ae2141a40029d9d/libavformat/movenc.c#L1747-L1775
 would then be applied to all movs of standard HD and SD sizes and cause many 
ffmpeg outputs to appear different after the change.

Could the guesswork here be removed? Or could the colr writing be enabled by 
default (with the option removed) while the guesswork itself gets its own option?
Dave Rice
___
ffmpeg-devel mailing list
ffmpeg-devel@ffmpeg.org
http://ffmpeg.org/mailman/listinfo/ffmpeg-devel


Re: [FFmpeg-devel] [PATCH] avfilter/vf_tile: add queue option

2017-11-19 Thread Dave Rice


> On Nov 19, 2017, at 2:19 PM, Nicolas George <geo...@nsup.org> wrote:
> 
> Paul B Mahol (2017-11-19):
>> So what you propose?
> 
> I do not know. It is your patch, and I am not even sure I understood
> your explanation correctly.

IMHO, ‘queue' is a good name for what it does. Perhaps an additional example 
would help demonstrate the option better. For example `ffplay -f lavfi -i 
testsrc=r=1 -vf tile=1x8:overlap=7:queue=1` will display 8 adjoining frames of 
the testsrc. Without `queue=1` in this example, the output will be stalled for 
8 seconds before displaying, whereas with `queue=1` there will be an output 
frame for each input frame (the first frame of the output simply showing the 
first frame of the input plus seven empty spaces).

Suggestion:

@item
Produce a filmstrip animation from frame @code{n-7} to @code{n}:
@example
ffmpeg -i file.avi -vf 'tile=1x8:overlap=7:queue=1' filmstrip.mkv
@end example

Dave Rice
___
ffmpeg-devel mailing list
ffmpeg-devel@ffmpeg.org
http://ffmpeg.org/mailman/listinfo/ffmpeg-devel


[FFmpeg-devel] [PATCH 4/4] fate: add v210 mov test

2017-11-18 Thread Dave Rice
From fe9e3aa13ecf3b4cb81f279696bcc21602662e7a Mon Sep 17 00:00:00 2001
From: Dave Rice <d...@dericed.com>
Date: Sat, 18 Nov 2017 20:32:33 -0500
Subject: [PATCH 4/4] fate: add v210 mov test

---
 tests/fate/vcodec.mak | 4 
 tests/ref/vsynth/vsynth1-mov-v210 | 4 
 2 files changed, 8 insertions(+)
 create mode 100644 tests/ref/vsynth/vsynth1-mov-v210

diff --git a/tests/fate/vcodec.mak b/tests/fate/vcodec.mak
index bbcf25d72a..0206312a53 100644
--- a/tests/fate/vcodec.mak
+++ b/tests/fate/vcodec.mak
@@ -360,6 +360,10 @@ fate-vsynth%-mov-bpp16:  CODEC   = rawvideo
 fate-vsynth%-mov-bpp16:  ENCOPTS = -pix_fmt rgb565le
 fate-vsynth%-mov-bpp16:  FMT  = mov
 
+FATE_VCODEC-$(call ENCDEC, V210, MOV) += mov-v210
+fate-vsynth%-mov-v210:  CODEC = v210
+fate-vsynth%-mov-v210:  FMT   = mov
+
 FATE_VCODEC-$(call ENCDEC, ROQ, ROQ)+= roqvideo
 fate-vsynth%-roqvideo:   CODEC   = roqvideo
 fate-vsynth%-roqvideo:   ENCOPTS = -frames 5
diff --git a/tests/ref/vsynth/vsynth1-mov-v210 
b/tests/ref/vsynth/vsynth1-mov-v210
new file mode 100644
index 00..035f8df6ff
--- /dev/null
+++ b/tests/ref/vsynth/vsynth1-mov-v210
@@ -0,0 +1,4 @@
+a96ffa7a9ccb8242cb462dd698b3e222 *tests/data/fate/vsynth1-mov-v210.mov
+14746427 tests/data/fate/vsynth1-mov-v210.mov
+2ba7f4ca302f3c4147860b9dfb12b6e4 *tests/data/fate/vsynth1-mov-v210.out.rawvideo
+stddev:1.84 PSNR: 42.81 MAXDIFF:   29 bytes:  7603200/  7603200
-- 
2.15.0

___
ffmpeg-devel mailing list
ffmpeg-devel@ffmpeg.org
http://ffmpeg.org/mailman/listinfo/ffmpeg-devel


[FFmpeg-devel] [PATCH 1/4] avformat/movenc: correct ImageDescription for uncompressed ycbcr

2017-11-18 Thread Dave Rice
This patch set updates movenc to write uncompressed ycbcr in mov following 
requirements in Apple’s TN2162, 
https://developer.apple.com/library/content/technotes/tn2162/_index.html. 
Thanks to Carl and Michael for comments on an earlier version of this patchset.


From 26d9ca470f104d8448000b13c2cc97b8fc5c15ba Mon Sep 17 00:00:00 2001
From: Dave Rice <d...@dericed.com>
Date: Thu, 16 Nov 2017 11:53:32 -0500
Subject: [PATCH 1/4] avformat/movenc: correct ImageDescription for
 uncompressed ycbcr

Per
https://developer.apple.com/library/content/technotes/tn2162/_index.html
.
---
 libavformat/movenc.c | 20 +---
 1 file changed, 17 insertions(+), 3 deletions(-)

diff --git a/libavformat/movenc.c b/libavformat/movenc.c
index cc3fc19d9b..ce51c4b3d2 100644
--- a/libavformat/movenc.c
+++ b/libavformat/movenc.c
@@ -1832,6 +1832,13 @@ static int mov_write_video_tag(AVIOContext *pb, 
MOVMuxContext *mov, MOVTrack *tr
 char compressor_name[32] = { 0 };
 int avid = 0;
 
+int uncompressed_ycbcr = ((track->par->codec_id == AV_CODEC_ID_RAWVIDEO && 
track->par->format == AV_PIX_FMT_UYVY422)
+   || (track->par->codec_id == AV_CODEC_ID_RAWVIDEO && 
track->par->format == AV_PIX_FMT_YUYV422)
+   ||  track->par->codec_id == AV_CODEC_ID_V308
+   ||  track->par->codec_id == AV_CODEC_ID_V408
+   ||  track->par->codec_id == AV_CODEC_ID_V410
+   ||  track->par->codec_id == AV_CODEC_ID_V210);
+
 avio_wb32(pb, 0); /* size */
 if (mov->encryption_scheme != MOV_ENC_NONE) {
 ffio_wfourcc(pb, "encv");
@@ -1842,11 +1849,15 @@ static int mov_write_video_tag(AVIOContext *pb, 
MOVMuxContext *mov, MOVTrack *tr
 avio_wb16(pb, 0); /* Reserved */
 avio_wb16(pb, 1); /* Data-reference index */
 
-avio_wb16(pb, 0); /* Codec stream version */
+if (uncompressed_ycbcr) {
+avio_wb16(pb, 2); /* Codec stream version */
+} else {
+avio_wb16(pb, 0); /* Codec stream version */
+}
 avio_wb16(pb, 0); /* Codec stream revision (=0) */
 if (track->mode == MODE_MOV) {
 ffio_wfourcc(pb, "FFMP"); /* Vendor */
-if (track->par->codec_id == AV_CODEC_ID_RAWVIDEO) {
+if (track->par->codec_id == AV_CODEC_ID_RAWVIDEO || 
uncompressed_ycbcr) {
 avio_wb32(pb, 0); /* Temporal Quality */
 avio_wb32(pb, 0x400); /* Spatial Quality = lossless*/
 } else {
@@ -1870,7 +1881,10 @@ static int mov_write_video_tag(AVIOContext *pb, 
MOVMuxContext *mov, MOVTrack *tr
 avio_w8(pb, strlen(compressor_name));
 avio_write(pb, compressor_name, 31);
 
-if (track->mode == MODE_MOV && track->par->bits_per_coded_sample)
+if (track->mode == MODE_MOV &&
+   (track->par->codec_id == AV_CODEC_ID_V410 || track->par->codec_id == 
AV_CODEC_ID_V210))
+avio_wb16(pb, 0x18);
+else if (track->mode == MODE_MOV && track->par->bits_per_coded_sample)
 avio_wb16(pb, track->par->bits_per_coded_sample |
   (track->par->format == AV_PIX_FMT_GRAY8 ? 0x20 : 0));
 else
-- 
2.15.0

___
ffmpeg-devel mailing list
ffmpeg-devel@ffmpeg.org
http://ffmpeg.org/mailman/listinfo/ffmpeg-devel


[FFmpeg-devel] [PATCH 3/4] avformat/movenc: force colr atom for uncompressed yuv in mov

2017-11-18 Thread Dave Rice
From 41da5e48f8788b85dd7a382030bb2866c506cc03 Mon Sep 17 00:00:00 2001
From: Dave Rice <d...@dericed.com>
Date: Sat, 18 Nov 2017 20:31:27 -0500
Subject: [PATCH 3/4] avformat/movenc: force colr atom for uncompressed yuv in
 mov
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

As required by Apple’s TN2162.
---
 libavformat/movenc.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/libavformat/movenc.c b/libavformat/movenc.c
index aaa1dedfd7..86960b19c1 100644
--- a/libavformat/movenc.c
+++ b/libavformat/movenc.c
@@ -1978,7 +1978,7 @@ static int mov_write_video_tag(AVIOContext *pb, 
MOVMuxContext *mov, MOVTrack *tr
 else
 av_log(mov->fc, AV_LOG_WARNING, "Not writing 'gama' atom. Format 
is not MOV.\n");
 }
-if (mov->flags & FF_MOV_FLAG_WRITE_COLR) {
+if (mov->flags & FF_MOV_FLAG_WRITE_COLR || uncompressed_ycbcr) {
 if (track->mode == MODE_MOV || track->mode == MODE_MP4)
 mov_write_colr_tag(pb, track);
 else
-- 
2.15.0

___
ffmpeg-devel mailing list
ffmpeg-devel@ffmpeg.org
http://ffmpeg.org/mailman/listinfo/ffmpeg-devel


[FFmpeg-devel] [PATCH 2/4] avformat/movenc: write clap atom for uncompressed yuv in mov

2017-11-18 Thread Dave Rice
From e9079167c199791e652fe9cd3550333b819a8a9a Mon Sep 17 00:00:00 2001
From: Dave Rice <d...@dericed.com>
Date: Thu, 16 Nov 2017 11:29:06 -0500
Subject: [PATCH 2/4] avformat/movenc: write clap atom for uncompressed yuv in
 mov

fixes 6145
---
 libavformat/movenc.c | 19 +++
 1 file changed, 19 insertions(+)

diff --git a/libavformat/movenc.c b/libavformat/movenc.c
index ce51c4b3d2..aaa1dedfd7 100644
--- a/libavformat/movenc.c
+++ b/libavformat/movenc.c
@@ -1686,6 +1686,21 @@ static int mov_write_sv3d_tag(AVFormatContext *s, 
AVIOContext *pb, AVSphericalMa
 return update_size(pb, sv3d_pos);
 }
 
+static int mov_write_clap_tag(AVIOContext *pb, MOVTrack *track)
+{
+avio_wb32(pb, 40);
+ffio_wfourcc(pb, "clap");
+avio_wb32(pb, track->par->width); /* apertureWidth_N */
+avio_wb32(pb, 1); /* apertureWidth_D (= 1) */
+avio_wb32(pb, track->height); /* apertureHeight_N */
+avio_wb32(pb, 1); /* apertureHeight_D (= 1) */
+avio_wb32(pb, 0); /* horizOff_N (= 0) */
+avio_wb32(pb, 1); /* horizOff_D (= 1) */
+avio_wb32(pb, 0); /* vertOff_N (= 0) */
+avio_wb32(pb, 1); /* vertOff_D (= 1) */
+return 40;
+}
+
 static int mov_write_pasp_tag(AVIOContext *pb, MOVTrack *track)
 {
 AVRational sar;
@@ -1984,6 +1999,10 @@ static int mov_write_video_tag(AVIOContext *pb, 
MOVMuxContext *mov, MOVTrack *tr
 mov_write_pasp_tag(pb, track);
 }
 
+if (uncompressed_ycbcr){
+mov_write_clap_tag(pb, track);
+}
+
 if (mov->encryption_scheme != MOV_ENC_NONE) {
 ff_mov_cenc_write_sinf_tag(track, pb, mov->encryption_kid);
 }
-- 
2.15.0

___
ffmpeg-devel mailing list
ffmpeg-devel@ffmpeg.org
http://ffmpeg.org/mailman/listinfo/ffmpeg-devel


Re: [FFmpeg-devel] [PATCH] avformat/movenc: correct ImageDescription depth for v210 v410

2017-11-16 Thread Dave Rice

> On Nov 16, 2017, at 6:08 PM, Carl Eugen Hoyos <ceffm...@gmail.com> wrote:
> 
> 2017-11-16 17:54 GMT+01:00 Dave Rice <d...@dericed.com>:
> 
>> +if (track->mode == MODE_MOV && track->par->codec_id == AV_CODEC_ID_V410)
>> +avio_wb16(pb, 0x18);
>> +else if (track->mode == MODE_MOV && track->par->codec_id == 
>> AV_CODEC_ID_V210)
>> +avio_wb16(pb, 0x18);
> 
> It appears you can merge the two cases.

The patch is updated with merged cases below.

> Or maybe patch bits_per_coded_sample in the encoder…


With Apple’s TN2162 there doesn’t appear to be a reliable relationship between 
bits_per_coded_sample and what the ImageDescription depth value should be 
for the uncompressed yuv formats. TN2162 simply lists what the depth value 
should be, and this patch corrects the few instances where ffmpeg’s behavior 
doesn’t match what TN2162 defines.


From cfa5b2cd959154f2896a9557d9ca2ed2d2d3834e Mon Sep 17 00:00:00 2001
From: Dave Rice <d...@dericed.com>
Date: Thu, 16 Nov 2017 11:53:32 -0500
Subject: [PATCH 2/2] avformat/movenc: correct ImageDescription depth for v210
 v410

Per
https://developer.apple.com/library/content/technotes/tn2162/_index.html
.
---
 libavformat/movenc.c | 5 -
 1 file changed, 4 insertions(+), 1 deletion(-)

diff --git a/libavformat/movenc.c b/libavformat/movenc.c
index 98fcc7a44b..d9d3c2bf1e 100644
--- a/libavformat/movenc.c
+++ b/libavformat/movenc.c
@@ -1896,7 +1896,10 @@ static int mov_write_video_tag(AVIOContext *pb, 
MOVMuxContext *mov, MOVTrack *tr
 avio_w8(pb, strlen(compressor_name));
 avio_write(pb, compressor_name, 31);
 
-if (track->mode == MODE_MOV && track->par->bits_per_coded_sample)
+if (track->mode == MODE_MOV &&
+   (track->par->codec_id == AV_CODEC_ID_V410 || track->par->codec_id == 
AV_CODEC_ID_V210))
+avio_wb16(pb, 0x18);
+else if (track->mode == MODE_MOV && track->par->bits_per_coded_sample)
 avio_wb16(pb, track->par->bits_per_coded_sample |
   (track->par->format == AV_PIX_FMT_GRAY8 ? 0x20 : 0));
 else
-- 
2.15.0

___
ffmpeg-devel mailing list
ffmpeg-devel@ffmpeg.org
http://ffmpeg.org/mailman/listinfo/ffmpeg-devel


Re: [FFmpeg-devel] [PATCH] movenc: write clap tag

2017-11-16 Thread Dave Rice

> On Nov 16, 2017, at 11:30 AM, Dave Rice <d...@dericed.com> wrote:
> 
>> On Jul 9, 2017, at 7:26 PM, Dave Rice <d...@dericed.com> wrote:
>> 
>>> On Jul 7, 2017, at 7:06 PM, Derek Buitenhuis <derek.buitenh...@gmail.com> 
>>> wrote:
>>> 
>>> On 7/7/2017 10:13 PM, James Almer wrote:
>>>> Isn't this necessary only for files with raw video? As is, this box
>>>> would be written on any mov file with a video stream.
>>> 
>>> This was addressed a previous email:
>>> 
>>>  http://lists.ffmpeg.org/pipermail/ffmpeg-devel/2017-July/213350.html
>>> 
>>> I guess the spec is up for interpretation.
>> 
>> The quicktime spec says "This is a mandatory extension for all uncompressed 
>> Y´CbCr data formats”. It doesn’t clarify if the clap atom is recommended or 
>> not in mov files that are not “uncompressed Y´CbCr”, though it would make 
>> sense if the video container needs to store cropping data. I think 
>> constraining the change for only  “uncompressed Y´CbCr” would be more 
>> cautious though. I’ll revise my patch to include the condition and resubmit.
>> 
>> If the patch only impacts “uncompressed Y´CbCr” would any fate updates be 
>> needed?
>> Dave Rice
> 
> Here’s an update to only write the clap atom for the specific uncompressed 
> encodings listed in TN2162.
> 
> From 37457c1ee135f39452b91b047af4adf1ec43464b Mon Sep 17 00:00:00 2001
> From: Dave Rice <d...@dericed.com>
> Date: Thu, 16 Nov 2017 11:29:06 -0500
> Subject: [PATCH] avformat/movenc: write clap atom for uncompressed yuv in mov

Sorry, this patch should supersede the prior email's patch. I realized that 
Apple requires new uncompressed ycbcr files to use version 2 in the Image 
Description, so I reused the uncompressed_ycbcr variable to add that in as well.

From 3ea99e7d22f67b8a556152acbcbc8bf2eeec8a39 Mon Sep 17 00:00:00 2001
From: Dave Rice <d...@dericed.com>
Date: Thu, 16 Nov 2017 11:29:06 -0500
Subject: [PATCH 1/2] avformat/movenc: write clap atom for uncompressed yuv in
 mov

fixes 6145
---
 libavformat/movenc.c | 32 +++-
 1 file changed, 31 insertions(+), 1 deletion(-)

diff --git a/libavformat/movenc.c b/libavformat/movenc.c
index cc3fc19d9b..98fcc7a44b 100644
--- a/libavformat/movenc.c
+++ b/libavformat/movenc.c
@@ -1686,6 +1686,21 @@ static int mov_write_sv3d_tag(AVFormatContext *s, 
AVIOContext *pb, AVSphericalMa
 return update_size(pb, sv3d_pos);
 }
 
+static int mov_write_clap_tag(AVIOContext *pb, MOVTrack *track)
+{
+avio_wb32(pb, 40);
+ffio_wfourcc(pb, "clap");
+avio_wb32(pb, track->par->width); /* apertureWidth_N */
+avio_wb32(pb, 1); /* apertureWidth_D (= 1) */
+avio_wb32(pb, track->height); /* apertureHeight_N */
+avio_wb32(pb, 1); /* apertureHeight_D (= 1) */
+avio_wb32(pb, 0); /* horizOff_N (= 0) */
+avio_wb32(pb, 1); /* horizOff_D (= 1) */
+avio_wb32(pb, 0); /* vertOff_N (= 0) */
+avio_wb32(pb, 1); /* vertOff_D (= 1) */
+return 40;
+}
+
 static int mov_write_pasp_tag(AVIOContext *pb, MOVTrack *track)
 {
 AVRational sar;
@@ -1832,6 +1847,13 @@ static int mov_write_video_tag(AVIOContext *pb, 
MOVMuxContext *mov, MOVTrack *tr
 char compressor_name[32] = { 0 };
 int avid = 0;
 
+int uncompressed_ycbcr = ((track->par->codec_id == AV_CODEC_ID_RAWVIDEO && 
track->par->format == AV_PIX_FMT_UYVY422)
+   || (track->par->codec_id == AV_CODEC_ID_RAWVIDEO && 
track->par->format == AV_PIX_FMT_YUYV422)
+   ||  track->par->codec_id == AV_CODEC_ID_V308
+   ||  track->par->codec_id == AV_CODEC_ID_V408
+   ||  track->par->codec_id == AV_CODEC_ID_V410
+   ||  track->par->codec_id == AV_CODEC_ID_V210);
+
 avio_wb32(pb, 0); /* size */
 if (mov->encryption_scheme != MOV_ENC_NONE) {
 ffio_wfourcc(pb, "encv");
@@ -1842,7 +1864,11 @@ static int mov_write_video_tag(AVIOContext *pb, 
MOVMuxContext *mov, MOVTrack *tr
 avio_wb16(pb, 0); /* Reserved */
 avio_wb16(pb, 1); /* Data-reference index */
 
-avio_wb16(pb, 0); /* Codec stream version */
+if (uncompressed_ycbcr) {
+avio_wb16(pb, 2); /* Codec stream version */
+} else {
+avio_wb16(pb, 0); /* Codec stream version */
+}
 avio_wb16(pb, 0); /* Codec stream revision (=0) */
 if (track->mode == MODE_MOV) {
 ffio_wfourcc(pb, "FFMP"); /* Vendor */
@@ -1969,6 +1995,10 @@ static int mov_write_video_tag(AVIOContext *pb, 
MOVMuxContext *mov, MOVTrack *tr
 if (track->par->sample_aspect_ratio.den && 
track->par->sample_aspect_ratio.num) {
 mov_w

[FFmpeg-devel] [PATCH] avformat/movenc: correct ImageDescription depth for v210 v410

2017-11-16 Thread Dave Rice
This corrects a few values in the Image Description for v210 and v410 in mov. 
Apple defines what the depth value for these uncompressed formats should be in 
https://developer.apple.com/library/content/technotes/tn2162/_index.html.


From 47def189b270ac93245e002580463b254daf8484 Mon Sep 17 00:00:00 2001
From: Dave Rice <d...@dericed.com>
Date: Thu, 16 Nov 2017 11:53:32 -0500
Subject: [PATCH] avformat/movenc: correct ImageDescription depth for v210 v410

Per
https://developer.apple.com/library/content/technotes/tn2162/_index.html
.
---
 libavformat/movenc.c | 6 +-
 1 file changed, 5 insertions(+), 1 deletion(-)

diff --git a/libavformat/movenc.c b/libavformat/movenc.c
index 18232e8ba3..f7b08e2885 100644
--- a/libavformat/movenc.c
+++ b/libavformat/movenc.c
@@ -1885,7 +1885,11 @@ static int mov_write_video_tag(AVIOContext *pb, 
MOVMuxContext *mov, MOVTrack *tr
 avio_w8(pb, strlen(compressor_name));
 avio_write(pb, compressor_name, 31);
 
-if (track->mode == MODE_MOV && track->par->bits_per_coded_sample)
+if (track->mode == MODE_MOV && track->par->codec_id == AV_CODEC_ID_V410)
+avio_wb16(pb, 0x18);
+else if (track->mode == MODE_MOV && track->par->codec_id == 
AV_CODEC_ID_V210)
+avio_wb16(pb, 0x18);
+else if (track->mode == MODE_MOV && track->par->bits_per_coded_sample)
 avio_wb16(pb, track->par->bits_per_coded_sample |
   (track->par->format == AV_PIX_FMT_GRAY8 ? 0x20 : 0));
 else
-- 
2.15.0

___
ffmpeg-devel mailing list
ffmpeg-devel@ffmpeg.org
http://ffmpeg.org/mailman/listinfo/ffmpeg-devel


Re: [FFmpeg-devel] [PATCH] movenc: write clap tag

2017-11-16 Thread Dave Rice

> On Jul 9, 2017, at 7:26 PM, Dave Rice <d...@dericed.com> wrote:
> 
> 
>> On Jul 7, 2017, at 7:06 PM, Derek Buitenhuis <derek.buitenh...@gmail.com> 
>> wrote:
>> 
>> On 7/7/2017 10:13 PM, James Almer wrote:
>>> Isn't this necessary only for files with raw video? As is, this box
>>> would be written on any mov file with a video stream.
>> 
>> This was addressed a previous email:
>> 
>>   http://lists.ffmpeg.org/pipermail/ffmpeg-devel/2017-July/213350.html
>> 
>> I guess the spec is up for interpretation.
> 
> The quicktime spec says "This is a mandatory extension for all uncompressed 
> Y´CbCr data formats”. It doesn’t clarify if the clap atom is recommended or 
> not in mov files that are not “uncompressed Y´CbCr”, though it would make 
> sense if the video container needs to store cropping data. I think 
> constraining the change for only  “uncompressed Y´CbCr” would be more 
> cautious though. I’ll revise my patch to include the condition and resubmit.
> 
> If the patch only impacts “uncompressed Y´CbCr” would any fate updates be 
> needed?
> Dave Rice

Here’s an update to only write the clap atom for the specific uncompressed 
encodings listed in TN2162.

From 37457c1ee135f39452b91b047af4adf1ec43464b Mon Sep 17 00:00:00 2001
From: Dave Rice <d...@dericed.com>
Date: Thu, 16 Nov 2017 11:29:06 -0500
Subject: [PATCH] avformat/movenc: write clap atom for uncompressed yuv in mov

fixes 6145
---
 libavformat/movenc.c | 25 +
 1 file changed, 25 insertions(+)

diff --git a/libavformat/movenc.c b/libavformat/movenc.c
index cc3fc19d9b..18232e8ba3 100644
--- a/libavformat/movenc.c
+++ b/libavformat/movenc.c
@@ -1686,6 +1686,21 @@ static int mov_write_sv3d_tag(AVFormatContext *s, 
AVIOContext *pb, AVSphericalMa
 return update_size(pb, sv3d_pos);
 }
 
+static int mov_write_clap_tag(AVIOContext *pb, MOVTrack *track)
+{
+avio_wb32(pb, 40);
+ffio_wfourcc(pb, "clap");
+avio_wb32(pb, track->par->width); /* apertureWidth_N */
+avio_wb32(pb, 1); /* apertureWidth_D (= 1) */
+avio_wb32(pb, track->height); /* apertureHeight_N */
+avio_wb32(pb, 1); /* apertureHeight_D (= 1) */
+avio_wb32(pb, 0); /* horizOff_N (= 0) */
+avio_wb32(pb, 1); /* horizOff_D (= 1) */
+avio_wb32(pb, 0); /* vertOff_N (= 0) */
+avio_wb32(pb, 1); /* vertOff_D (= 1) */
+return 40;
+}
+
 static int mov_write_pasp_tag(AVIOContext *pb, MOVTrack *track)
 {
 AVRational sar;
@@ -1970,6 +1985,16 @@ static int mov_write_video_tag(AVIOContext *pb, 
MOVMuxContext *mov, MOVTrack *tr
 mov_write_pasp_tag(pb, track);
 }
 
+int uncompressed_ycbcr = ((track->par->codec_id == AV_CODEC_ID_RAWVIDEO && 
track->par->format == AV_PIX_FMT_UYVY422)
+   || (track->par->codec_id == AV_CODEC_ID_RAWVIDEO && 
track->par->format == AV_PIX_FMT_YUYV422)
+   ||  track->par->codec_id == AV_CODEC_ID_V308
+   ||  track->par->codec_id == AV_CODEC_ID_V408
+   ||  track->par->codec_id == AV_CODEC_ID_V410
+   ||  track->par->codec_id == AV_CODEC_ID_V210);  

+if (uncompressed_ycbcr){
+mov_write_clap_tag(pb, track);
+}
+
 if (mov->encryption_scheme != MOV_ENC_NONE) {
 ff_mov_cenc_write_sinf_tag(track, pb, mov->encryption_kid);
 }
-- 
2.15.0


___
ffmpeg-devel mailing list
ffmpeg-devel@ffmpeg.org
http://ffmpeg.org/mailman/listinfo/ffmpeg-devel


Re: [FFmpeg-devel] [PATCH RFC] libavdevice/decklink: Add support for EIA-708 output over SDI

2017-10-18 Thread Dave Rice

> On Oct 6, 2017, at 5:31 PM, Devin Heitmueller <dheitmuel...@ltnglobal.com> 
> wrote:
> 
>> Sorry, what I meant was:
>> Nothing inside FFmpeg except the decklink device could use
>> VANC?
> 
> Ah, I understand now.
> 
> Yes, the decklink device is currently the only SDI device which is supported 
> by libavdevice.  I’ve got a whole pile of patches coming which add support 
> for a variety of protocols for both capture and output (e.g. EIA-708, 
> SCTE-104, AFD, SMPTE 2038, etc).  But today the decklink module is the only 
> device supported.
> 
> Would love to see more SDI devices supported and potentially interested in 
> adding such support myself if we can find good candidates.  The DVEO/linsys 
> cards are largely obsolete and the AJA boards are significantly more 
> expensive than any of BlackMagic’s cards.  If anyone has good experiences 
> with other vendors I would like to hear about it (and there may be an 
> opportunity to extend libavdevice to support another SDI vendor).

The President of AJA has publicly stated an intent to add an open license to 
their SDK, https://twitter.com/ajaprez/status/910100436224499713. I’m glad to 
hear that handling other VANC data is in the works; I’m particularly interested 
in VITC and EIA-608 on inputs.

Dave Rice
___
ffmpeg-devel mailing list
ffmpeg-devel@ffmpeg.org
http://ffmpeg.org/mailman/listinfo/ffmpeg-devel


Re: [FFmpeg-devel] [PATCH] libavdevice/decklink: 32 bit audio support

2017-10-18 Thread Dave Rice

> On Oct 18, 2017, at 3:07 PM, Marton Balint <c...@passwd.hu> wrote:
> 
> On Mon, 16 Oct 2017, Dave Rice wrote:
> 
>> Hi,
>> 
>> I tested this with my Ultrastudio Express and confirmed that I'm getting 
>> higher bit depth recordings with the abitscope filter. This patch adds an 
>> option to get 32 bit audio as an input with the decklink device (beforehand 
>> only 16 bit audio was supported). This resolves 
>> http://trac.ffmpeg.org/ticket/6708 and is partly based upon Georg 
>> Lippitisch's earlier draft at 
>> https://ffmpeg.org/pipermail/ffmpeg-devel/2015-January/167649.html.
>> 
>> 
>> From fbf2bd40471c8fa35374bb1a51c51a3f4f36b992 Mon Sep 17 00:00:00 2001
>> From: Dave Rice <d...@dericed.com>
>> Date: Thu, 12 Oct 2017 13:40:59 -0400
>> Subject: [PATCH] libavdevice/decklink: 32 bit audio support
>> 
>> Partially based upon Georg Lippitsch's patch at 
>> https://ffmpeg.org/pipermail/ffmpeg-devel/2015-January/167649.html.
>> ---
>> libavdevice/decklink_common.h   |  1 +
>> libavdevice/decklink_common_c.h |  1 +
>> libavdevice/decklink_dec.cpp| 17 ++---
>> libavdevice/decklink_dec_c.c|  1 +
>> 4 files changed, 17 insertions(+), 3 deletions(-)
> 
> Missing docs/indevs.texi update and libavdevice micro bump.

Updated.

From 1e5ff78fec9b13eccac9a96acc358bbfd6a7015d Mon Sep 17 00:00:00 2001
From: Dave Rice <d...@dericed.com>
Date: Wed, 18 Oct 2017 15:21:46 -0400
Subject: [PATCH] libavdevice/decklink: 32 bit audio support

---
 doc/indevs.texi |  4 
 libavdevice/decklink_common.h   |  1 +
 libavdevice/decklink_common_c.h |  1 +
 libavdevice/decklink_dec.cpp| 17 ++---
 libavdevice/decklink_dec_c.c|  1 +
 libavdevice/version.h   |  2 +-
 6 files changed, 22 insertions(+), 4 deletions(-)

diff --git a/doc/indevs.texi b/doc/indevs.texi
index 55a4084bb2..d308bbf7de 100644
--- a/doc/indevs.texi
+++ b/doc/indevs.texi
@@ -311,6 +311,10 @@ Sets maximum input buffer size in bytes. If the buffering 
reaches this value,
 incoming frames will be dropped.
 Defaults to @samp{1073741824}.
 
+@item audio_depth
+Sets the audio sample bit depth. Must be @samp{16} or @samp{32}.
+Defaults to @samp{16}.
+
 @end table
 
 @subsection Examples
diff --git a/libavdevice/decklink_common.h b/libavdevice/decklink_common.h
index 6b2525fb53..b6acb01bb9 100644
--- a/libavdevice/decklink_common.h
+++ b/libavdevice/decklink_common.h
@@ -97,6 +97,7 @@ struct decklink_ctx {
 int frames_buffer_available_spots;
 
 int channels;
+int audio_depth;
 };
 
 typedef enum { DIRECTION_IN, DIRECTION_OUT} decklink_direction_t;
diff --git a/libavdevice/decklink_common_c.h b/libavdevice/decklink_common_c.h
index 5616ab32f9..368ac259e4 100644
--- a/libavdevice/decklink_common_c.h
+++ b/libavdevice/decklink_common_c.h
@@ -42,6 +42,7 @@ struct decklink_cctx {
 double preroll;
 int v210;
 int audio_channels;
+int audio_depth;
 int duplex_mode;
 DecklinkPtsSource audio_pts_source;
 DecklinkPtsSource video_pts_source;
diff --git a/libavdevice/decklink_dec.cpp b/libavdevice/decklink_dec.cpp
index d9ac01ac91..7e97d5f064 100644
--- a/libavdevice/decklink_dec.cpp
+++ b/libavdevice/decklink_dec.cpp
@@ -771,7 +771,7 @@ HRESULT decklink_input_callback::VideoInputFrameArrived(
 av_init_packet();
 
 //hack among hacks
-pkt.size = audioFrame->GetSampleFrameCount() * 
ctx->audio_st->codecpar->channels * (16 / 8);
+pkt.size = audioFrame->GetSampleFrameCount() * 
ctx->audio_st->codecpar->channels * (ctx->audio_depth / 8);
 audioFrame->GetBytes();
 audioFrame->GetPacketTime(_pts, ctx->audio_st->time_base.den);
 pkt.pts = get_pkt_pts(videoFrame, audioFrame, wallclock, 
ctx->audio_pts_source, ctx->audio_st->time_base, _audio_pts);
@@ -854,6 +854,7 @@ av_cold int ff_decklink_read_header(AVFormatContext *avctx)
 ctx->audio_pts_source = cctx->audio_pts_source;
 ctx->video_pts_source = cctx->video_pts_source;
 ctx->draw_bars = cctx->draw_bars;
+ctx->audio_depth = cctx->audio_depth;
 cctx->ctx = ctx;
 
 /* Check audio channel option for valid values: 2, 8 or 16 */
@@ -866,6 +867,16 @@ av_cold int ff_decklink_read_header(AVFormatContext *avctx)
 av_log(avctx, AV_LOG_ERROR, "Value of channels option must be one 
of 2, 8 or 16\n");
 return AVERROR(EINVAL);
 }
+
+/* Check audio bit depth option for valid values: 16 or 32 */
+switch (cctx->audio_depth) {
+case 16:
+case 32:
+break;
+default:
+av_log(avctx, AV_LOG_ERROR, "Value for audio bit depth option must 
be either 16 or 32\n");
+return AVERROR(EINVAL);
+}
 
 /* Lis

Re: [FFmpeg-devel] [PATCH] libavdevice/decklink: 32 bit audio support

2017-10-18 Thread Dave Rice

> On Oct 17, 2017, at 3:30 PM, Douglas Marsh <ffm...@dx9s.net> wrote:
> 
> On 2017-10-17 09:10, Dave Rice wrote:
> 
>>>> -audio_depth   .D.. audio bitdepth (from 0 to 1)
>>>> (default 16bits)
>>>>16bits   .D..
> 
>>> Hmm, first patch might be enough.
>> Sounds good to me. Unless anyone prefers "-audio_depth thirtytwo" :-D
>> Dave Rice
> 
> Yeah that works.. so if they have any other depths, can go 0, 1 or 2 (2=some 
> new bit depth yet to be created)
> 
> And for clarification: yes 32-bits (PCM_S32LE) -- I was just pointing out the 
> ADC/DAC's are 24-bit (8-bits padded).

I was suggesting `-audio_depth thirtytwo` in jest. IMHO, assigning enumerated 
index numbers to purely numerical values will be confusing. For instance, if in 
the future 12 bit is added and then 24 bit, we'll have 
-audio_depth   .D.. audio bitdepth (from 0 to 3) (default 
16bits)
   16bits [0]
   32bits [1]
   12bits [2]
   24bits [3]

The alternative patch in the "decklink 24/32 bit question" thread changes the 
default behavior of the decklink input, which I think should be avoided. I agree 
with Moritz that the first patch of this thread 
(https://patchwork.ffmpeg.org/patch/5588/) is the best option. Also, the method 
used in the patch to validate a limited, non-consecutive range of values is 
already in use elsewhere in the device for the channel count at 
https://github.com/FFmpeg/FFmpeg/blob/278588cd0bf788df0194f74e62745f68559616f9/libavdevice/decklink_dec.cpp#L859-L868.

Best Regards,

Dave Rice
___
ffmpeg-devel mailing list
ffmpeg-devel@ffmpeg.org
http://ffmpeg.org/mailman/listinfo/ffmpeg-devel


Re: [FFmpeg-devel] decklink 24/32 bit question

2017-10-17 Thread Dave Rice

> On Oct 17, 2017, at 2:59 PM, Moritz Barsnick <barsn...@gmx.net> wrote:
> 
> Hi Doug,
> 
> On Tue, Oct 03, 2017 at 20:39:49 -0700, Douglas Marsh wrote:
>> After digging around in places, made the following changes:
> [...]
>> It doesn't work (the audio capture is close but wrong), but believe it 
>> is a step in the correct direction. Anybody have a clue? Saw several 
>> names in cpp,c,h files including: Ramiro Polla, Luca Barbato, Deti 
>> Fliegl, Rafaël Carré and Akamai Technologies Inc.
> 
> Did you check out Dave Rice's recent patch (on this list)? It touches
> code in a few more places, and adds an option to select 16 vs. 32 bits.
> Please test, if you can.
> 
> Is your subject indicating that 24 bits depth could also be supported?
> If so, Dave perhaps should expand his patch to cover that.

The DeckLink SDK only defines two BMDAudioSampleType values: 
bmdAudioSampleType16bitInteger and bmdAudioSampleType32bitInteger. I don't 
think there's an easy way to support a 24 bit input here. Generally in this 
case I've used bmdAudioSampleType32bitInteger and then encoded it as pcm_s24le.
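
For what it's worth, a sketch of the capture command I have in mind once the 
audio_depth option from the other thread is applied (device name and codec 
choices are illustrative; video mode selection via -list_formats is omitted 
for brevity):

ffmpeg -f decklink -audio_depth 32 -i 'UltraStudio Express' \
       -c:v ffv1 -c:a pcm_s24le capture.mkv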
Dave Rice

___
ffmpeg-devel mailing list
ffmpeg-devel@ffmpeg.org
http://ffmpeg.org/mailman/listinfo/ffmpeg-devel

