Re: [FFmpeg-devel] [PATCH] libavdevice/decklink: extend available actions on signal loss

2023-11-02 Thread Devin Heitmueller
Hi Michael,

I haven't tried your patch, but a quick review suggests that while
you've declared the option as deprecated, it no longer works.
Presumably somewhere in there should be a line of code that says
something like "if (ctx->draw_bars == 0) then ctx->signal_loss_action
= SIGNAL_LOSS_NONE".

Even though the option is deprecated, it should still continue to work
until it is completely removed.
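
Concretely, that backwards-compatible mapping could look roughly like the
following (a sketch only; the exact field and enum names are whatever the
patch defines):

    /* Keep the deprecated draw_bars option working by mapping it onto
     * the new signal_loss_action option until it is actually removed. */
    if (ctx->draw_bars == 0)
        ctx->signal_loss_action = SIGNAL_LOSS_NONE;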

Devin

-- 
Devin Heitmueller, Senior Software Engineer
LTN Global Communications
o: +1 (301) 363-1001
w: https://ltnglobal.com  e: devin.heitmuel...@ltnglobal.com
___
ffmpeg-devel mailing list
ffmpeg-devel@ffmpeg.org
https://ffmpeg.org/mailman/listinfo/ffmpeg-devel

To unsubscribe, visit link above, or email
ffmpeg-devel-requ...@ffmpeg.org with subject "unsubscribe".


Re: [FFmpeg-devel] [ANNOUNCE] upcoming vote: TC/CC elections

2023-11-28 Thread Devin Heitmueller
On Tue, Nov 28, 2023 at 8:23 AM Anton Khirnov  wrote:
>
> For the record, I've edited the vote description to make it more clear.
> It now looks like this:
>
>  Five people from the list below will become the members of the Technical
>  Committee (TC). Assign weights to each person according to how much you
>  want them to be in the committee (higher weight = higher preference).
>
>  The system will assume you want to maximise the sum of weights of
>  selected candidates. E.g. if X is given a weight of 10 and Y and Z have
>  weights 8 and 6 respectively, then the voting algorithm will assume you
>  prefer a committee with both Y and Z over one with X, because 14 > 10.
>  However, giving Y and Z weight of 4 and 2 instead would have expressed
>  that X is preferred to a combination of Y and Z, because 6 < 10.

Yeah, I suspect much of the confusion came from the (unintentionally)
misleading text in the original email which said, "Rank them in the
order in which you prefer them for that role.".  I almost ranked them
1-6 before reading the full voting page.

My vote (no pun intended) would be, given the vote was only put out a
few hours ago, to tell everyone the first vote is being thrown away
(without revealing the voting results), and give everyone the
opportunity to vote again now that it's been made clear how the
weighting works.

Devin

-- 
Devin Heitmueller, Senior Software Engineer
LTN Global Communications
o: +1 (301) 363-1001
w: https://ltnglobal.com  e: devin.heitmuel...@ltnglobal.com
___
ffmpeg-devel mailing list
ffmpeg-devel@ffmpeg.org
https://ffmpeg.org/mailman/listinfo/ffmpeg-devel

To unsubscribe, visit link above, or email
ffmpeg-devel-requ...@ffmpeg.org with subject "unsubscribe".


Re: [FFmpeg-devel] [PATCH] libavdevice/decklink: extend available actions on signal loss

2023-11-28 Thread Devin Heitmueller
On Tue, Nov 28, 2023 at 4:22 AM Michael Riedl
 wrote:
>
> Ping

My apologies, I saw your remarks that this didn't change backward
compatibility but failed to reply.  I have no further issues with this
patch.

Thanks,

Devin

-- 
Devin Heitmueller, Senior Software Engineer
LTN Global Communications
o: +1 (301) 363-1001
w: https://ltnglobal.com  e: devin.heitmuel...@ltnglobal.com
___
ffmpeg-devel mailing list
ffmpeg-devel@ffmpeg.org
https://ffmpeg.org/mailman/listinfo/ffmpeg-devel

To unsubscribe, visit link above, or email
ffmpeg-devel-requ...@ffmpeg.org with subject "unsubscribe".


Re: [FFmpeg-devel] There may be a bug for .mp4 reader.

2019-04-01 Thread Devin Heitmueller
Hello Yufei,

> On Apr 1, 2019, at 1:21 PM, Yufei He  wrote:
> 
> Hi
> 
> There may be a bug for .mp4 reader.
> 
> On decoding some NTSC mp4 files with my h.264 codec, from avctx I received,
> 
> avctx->framerate.den is 100
> 
> avctx->framerate.num is 2997
> 
> avctx->pkt_timebase.num is 1
> avctx->pkt_timebase.den is 2997
> 
> Duration of every packet I received from ff_decode_get_packet(avctx, &pkt) is
> 100, which means it's an encoded frame. But actually, it's an encoded field,
> so its duration should be 50.

Libavcodec doesn’t have any support for delivering individual fields.  Decoders 
are expected to reassemble fields into frames before providing them back to 
callers.

Presumably this is some sort of PAFF encoded stream?  Does it decode properly 
when using the software h.264 decoder?

Devin

---
Devin Heitmueller - LTN Global Communications
dheitmuel...@ltnglobal.com





___
ffmpeg-devel mailing list
ffmpeg-devel@ffmpeg.org
https://ffmpeg.org/mailman/listinfo/ffmpeg-devel

To unsubscribe, visit link above, or email
ffmpeg-devel-requ...@ffmpeg.org with subject "unsubscribe".

Re: [FFmpeg-devel] Using copyts with mpegts output fails after 26.5 hours

2019-05-03 Thread Devin Heitmueller
Hello Panagiotis,

> On May 3, 2019, at 4:50 AM, Panagiotis Malakoudis  wrote:
> 
> As I mentioned in ticket http://trac.ffmpeg.org/ticket/7876 this is
> not my code, I just adapted it for current git. It is also very old
> code (back from 2012), probably it fitted OK back then. This code
> fixes this specific issue reported in the ticket. If it breaks other
> things, then of course it can't be commited. Still there is a need for
> a fix for the reported issue in the ticket, as it is you can't
> transcode with -copyts above 26,5 hours, it fails after that.
> 

For what it’s worth, this is a pretty difficult problem, and not one exclusive 
to -copyts.  I had to put a bunch of code in the demux to support the TS 
wraparound for use with the decklink output: it uses a 90 kHz clock but stores 
the value in 64 bits, so the output doesn’t expect it to jump back to zero 
after hitting the 33-bit limit.

Also, at least in my case, the existing wraparound code that was there works 
exactly once - even in cases where it worked as expected it would properly 
handle the wraparound after 26.5 hours, but then fail after 53 hours.

The approach I took was to track the PTS values as they approached 2^33, and 
then have a separate wraparound count such that the values coming out of the 
demux keep incrementing past the 33-bit limit.  This works well for the TS 
output case as well, since the output will simply truncate the lower 33-bits 
and it will continue working as it was before.  But, as Michael suggested, it 
almost certainly breaks seeking.  While my use cases don’t rely on seeking, 
since I’m doing 24x7 decoding of realtime TS streams, that limitation prevents 
my patches from being accepted upstream as well.
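
For anyone curious, the general shape of that unwrapping is roughly the
following (a sketch of the idea, not the actual patch):

    #include <stdint.h>

    #define PTS_MODULUS    (1LL << 33)  /* MPEG-TS PTS/DTS wrap at 2^33 */
    #define WRAP_THRESHOLD (1LL << 32)  /* a backwards jump larger than
                                           half the range counts as a wrap */

    typedef struct {
        int64_t last_pts;  /* last raw 33-bit value seen, -1 if none yet */
        int64_t wraps;     /* how many times we have crossed 2^33        */
    } PtsUnwrap;

    static int64_t unwrap_pts(PtsUnwrap *u, int64_t raw_pts)
    {
        if (u->last_pts >= 0 && u->last_pts - raw_pts > WRAP_THRESHOLD)
            u->wraps++;                /* value jumped back toward zero */
        u->last_pts = raw_pts;
        return raw_pts + u->wraps * PTS_MODULUS;
    }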

I’ve got no easy answers to this one - having TS streams longer than 26.5 hours 
and expecting seeking to work properly seem like they are not compatible 
concepts.

Devin

---
Devin Heitmueller - LTN Global Communications
dheitmuel...@ltnglobal.com
___
ffmpeg-devel mailing list
ffmpeg-devel@ffmpeg.org
https://ffmpeg.org/mailman/listinfo/ffmpeg-devel

To unsubscribe, visit link above, or email
ffmpeg-devel-requ...@ffmpeg.org with subject "unsubscribe".

Re: [FFmpeg-devel] [PATCH] avdevice/decklink: fix checking video mode in SDK version 11

2019-05-05 Thread Devin Heitmueller
Hello Marton,

> On May 5, 2019, at 2:28 PM, Marton Balint  wrote:
> 
> 
> On Wed, 1 May 2019, Marton Balint wrote:
> 
>> Apparently in the new SDK one cannot query if VANC output is supported, so we
>> will fall back to non-VANC output if enabling the video output with VANC 
>> fails.
>> 
>> Fixes ticket #7867.
> 
> Applied.

I know it’s a bit late for a review given I’m only seeing this after it’s been 
applied.  However, has it been confirmed that the new logic works with decklink 
SDKs older than version 11?  If not, then the old logic probably needs to stay 
and the new logic probably needs to be #ifdef’d based on the SDK version.  If 
we’re talking about increasing the minimum SDK version for ffmpeg to build 
against, that’s also an option (although given version 11 is very new I 
wouldn’t particularly be in favor of that).
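
For example, something along these lines (a sketch; the version macro comes
from the SDK's DeckLinkAPIVersion.h header, and the exact cutoff value would
need to be double-checked):

    #include "DeckLinkAPIVersion.h"

    #if BLACKMAGIC_DECKLINK_API_VERSION >= 0x0b000000  /* assumed: SDK 11.0+ */
        /* new query/fallback path for SDK 11, where VANC support can no
         * longer be queried up front */
    #else
        /* keep the pre-11 DoesSupportVideoMode() check unchanged */
    #endif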

Also, have you ascertained that the change in question works with more than one 
model of card?  What card did you encounter this issue with, and what other 
cards did you test with?  I’m just concerned about the possibility that you 
committed a change to address an issue with some particular card, and we don’t 
know what the effects are on other cards.

Regards,

Devin

---
Devin Heitmueller - LTN Global Communications
dheitmuel...@ltnglobal.com

___
ffmpeg-devel mailing list
ffmpeg-devel@ffmpeg.org
https://ffmpeg.org/mailman/listinfo/ffmpeg-devel

To unsubscribe, visit link above, or email
ffmpeg-devel-requ...@ffmpeg.org with subject "unsubscribe".

Re: [FFmpeg-devel] [PATCH] mpeg12enc: Use all Closed Captions side data

2019-05-19 Thread Devin Heitmueller
Hello Paul,

On Fri, May 17, 2019 at 4:44 AM Paul B Mahol  wrote:
>
> On 5/17/19, Mathieu Duponchelle  wrote:
> > There isn't one, as I said the added indentation is because of the new loop!
>
> To get this committed to tree you need to comply to review requests.

I think Mathieu's point is that the code indentation change was not
cosmetic - it's because the code in question is now inside a for loop,
and thus it needed to be indented another level.

Are you suggesting he should make a patch which results in the
indentation being wrong, and then submit a second patch which fixes
the incorrect indentation introduced by the first patch?

I'm confident that Mathieu is trying to comply with review requests so
he can get his code merged.  In this particular case, Carl raised his
concern about the indentation, Mathieu responded by suggesting that
given there was a functional change the re-indent was correct, and
then there was silence (i.e. neither agreement nor disagreement).

I'm also asking because I have work which I'm hoping to get upstreamed
as well at some point, and I'm sure I've got similar things in my
patches.  And having spent 20+ years reviewing patches on lots of
other open source projects, I wouldn't have thought twice about
accepting the patch as-is (given the change in indentation is a result
of a functional change and is not cosmetic).  In my experience, this
particular patch isn't an example of what maintainers mean when they
say "don't mix cosmetic whitespace changes with functional changes".

Regards,

Devin

-- 
Devin J. Heitmueller - Kernel Labs
http://www.kernellabs.com
___
ffmpeg-devel mailing list
ffmpeg-devel@ffmpeg.org
https://ffmpeg.org/mailman/listinfo/ffmpeg-devel

To unsubscribe, visit link above, or email
ffmpeg-devel-requ...@ffmpeg.org with subject "unsubscribe".

Re: [FFmpeg-devel] [PATCH v2] libavformat/mpegtsenc: Add minimal support for ATSC PSIP tables

2019-05-19 Thread Devin Heitmueller
Hello Phillip,

On Thu, May 16, 2019 at 9:32 AM Phillip Burr  wrote:
>
> Minimal support for ATSC PSIP tables.  Does not support STT or
> EIT tables and so is not compliant with terrestrial ATSC.
> ATSC tables are not created by default, and will only be transmitted
> if either "atsc_name" or "atsc_channel" metadata is supplied.

In general I am supportive of this effort (producing ATSC compliant
streams).  Thanks for your work to improve the TS mux support in this
area.

While I haven't looked that closely at your implementation, one key
thing missing is updated documentation that describes the new
parameters, and in particular something that notes that streams
generated are currently not fully compliant with the spec (as you
noted, the lack of STT).  While it's good to have that in the patch
description, that's something that really needs to be known to end
users who might try to use the functionality.

Regards,

Devin

-- 
Devin J. Heitmueller - Kernel Labs
http://www.kernellabs.com
___
ffmpeg-devel mailing list
ffmpeg-devel@ffmpeg.org
https://ffmpeg.org/mailman/listinfo/ffmpeg-devel

To unsubscribe, visit link above, or email
ffmpeg-devel-requ...@ffmpeg.org with subject "unsubscribe".

Re: [FFmpeg-devel] [PATCH v2] libavformat/mpegtsenc: Add minimal support for ATSC PSIP tables

2019-05-20 Thread Devin Heitmueller
On Mon, May 20, 2019 at 8:36 AM Phil Burr  wrote:
>
> Thank you for the feedback.  I will look into adding documentation for the
> atsc metadata.  ATSC requires in addition to the tables I've added, the STT
> and EIT0-EIT3 tables.  I'm thinking of adding support for STT and at least
> producing empty EIT tables so that the stream would be minimally compliant.

I would have no problem with them being excluded in a first version of
the patch, as long as it's documented that they are not expected to be
present in the resulting streams.

Your patch is already a huge improvement over the current state of
affairs, and I don't want to see your patch not get merged because it
lacks two features that I think the vast majority of people won't care
about.

Devin

-- 
Devin J. Heitmueller - Kernel Labs
http://www.kernellabs.com
___
ffmpeg-devel mailing list
ffmpeg-devel@ffmpeg.org
https://ffmpeg.org/mailman/listinfo/ffmpeg-devel

To unsubscribe, visit link above, or email
ffmpeg-devel-requ...@ffmpeg.org with subject "unsubscribe".

Re: [FFmpeg-devel] [PATCH v4] avfilter/avf_aphasemeter: Add out-of-phase and mono detection

2019-07-05 Thread Devin Heitmueller
On Wed, Jul 3, 2019 at 9:34 AM Romane Lafon  wrote:
>
> I've added documentation for the extension of aphasemeter filter.
> Also, I'm not sure that "phasing" is the right word to describe the
> detection.

In some commercial analyzers I've also seen audio phase presented
using the term "lissajous" (after the name of the actual curve), but
personally I prefer to refer to it as audio phase.

Devin

-- 
Devin J. Heitmueller - Kernel Labs
http://www.kernellabs.com
___
ffmpeg-devel mailing list
ffmpeg-devel@ffmpeg.org
https://ffmpeg.org/mailman/listinfo/ffmpeg-devel

To unsubscribe, visit link above, or email
ffmpeg-devel-requ...@ffmpeg.org with subject "unsubscribe".

Re: [FFmpeg-devel] [PATCH]Support for Frame Doubling/ Tripling in FFMPEG's HEVC Decoder by parsing the picture_struct SEI value (Support for http://ffmpeg.org/pipermail/ffmpeg-devel/2019-June/245521.h

2019-08-01 Thread Devin Heitmueller
> On Aug 1, 2019, at 8:46 AM, Praveen Kumar  wrote:
> 
> Hi,
> 
> This patch has the implementation for frame duplication (doubling/tripling) 
> in FFmpeg's HEVC decoder based on the picture_struct SEI value (7 for 
> doubling and 8 for tripling) set while encoding.
> This addresses the requirement mentioned in the thread 
> http://ffmpeg.org/pipermail/ffmpeg-devel/2019-June/245521.html
> 
> Thanks & Regards,
> Praveen
> ___
> ffmpeg-devel mailing list
> ffmpeg-devel@ffmpeg.org
> https://ffmpeg.org/mailman/listinfo/ffmpeg-devel
> 
> To unsubscribe, visit link above, or email
> ffmpeg-devel-requ...@ffmpeg.org with subject "unsubscribe".

Hi Praveen,

Thanks for your contribution.   If somebody wants to get pic_timing modes 1/2 
working as well in ffmpeg's HEVC decoder, I can probably get that work 
sponsored.  I wrote a libavfilter module to recombine the fields and work 
around the issue, but it would obviously be much better if the decoder did it 
properly to begin with (and there’s no way my filter approach would be accepted 
upstream).

If we could get this done, it would result in ffmpeg properly decoding 
interlaced streams generated by x265, as well as fixing the issue with VLC 
playing the streams at half vertical resolution and twice the frame rate (since 
VLC relies on libavcodec for HEVC decoding).

There is at least one other commercial HEVC encoder I’ve seen which puts out 
modes 11/12 and hence that would be useful as well.  I suspect supporting that 
would be a relatively small incremental step from making modes 1/2 work.

Reach out to me privately if anybody is interested in doing such a project.

Devin

---
Devin Heitmueller - LTN Global Communications
dheitmuel...@ltnglobal.com
___
ffmpeg-devel mailing list
ffmpeg-devel@ffmpeg.org
https://ffmpeg.org/mailman/listinfo/ffmpeg-devel

To unsubscribe, visit link above, or email
ffmpeg-devel-requ...@ffmpeg.org with subject "unsubscribe".

Re: [FFmpeg-devel] [CALL] New FFmpeg developers meeting

2019-08-17 Thread Devin Heitmueller
On Sat, Aug 17, 2019 at 4:18 PM James Almer  wrote:
> Start by drafting a list of subjects to discuss, and proposing a date
> (at least two weeks from now) or asking for suggestions. Then wait to
> see how many developers agree with it and confirm they will be there.

Indeed, having some form of goal(s) would be useful.  Are there
particular things/topics you want to work on where having everyone in
a room would be helpful?  Is this just an opportunity for like-minded
developers to socialize over beer?

Also, would be good to have a rough idea as to what part of the world
you want to meet in.  I assume you're thinking "somewhere in Europe"
given your locale.

Regarding when to meet, I would suggest considering what other
conferences are going on, and whether it would conflict with what
you're planning.  Alternatively, consider having it right before/after
one of those conferences in the same region, and you might find people
who are more likely to attend because they are already in the area
(e.g. perhaps with airfare already paid for by their employer because
of the other event).

Devin
-- 
Devin J. Heitmueller - Kernel Labs
http://www.kernellabs.com
___
ffmpeg-devel mailing list
ffmpeg-devel@ffmpeg.org
https://ffmpeg.org/mailman/listinfo/ffmpeg-devel

To unsubscribe, visit link above, or email
ffmpeg-devel-requ...@ffmpeg.org with subject "unsubscribe".

Re: [FFmpeg-devel] [PATCH] avdevice/decklink: adjust for timecode lag

2019-08-20 Thread Devin Heitmueller
> A couple of follow-up Qs:
>
> Is auto-detection available for all Decklink devices?

No, but AFAIK it is for all devices which support SDI.  Generally it's
the older analog capture devices which don't support it.

> For those for which it is available, are there any edge cases in which
> it sets inaccurate mode?

I don't trust the existing detection code enough to use it in
production.  It often fails to detect and thus ffmpeg will exit at
startup.  Also, there are cases where it will misdetect 1080i59 as
1080p30 depending on the card.  It's been on my TODO list for a while
to make that code more robust (I believe I know what most of the
issues are), but it hasn't been critical for any of my use cases.

Devin

-- 
Devin J. Heitmueller - Kernel Labs
http://www.kernellabs.com
___
ffmpeg-devel mailing list
ffmpeg-devel@ffmpeg.org
https://ffmpeg.org/mailman/listinfo/ffmpeg-devel

To unsubscribe, visit link above, or email
ffmpeg-devel-requ...@ffmpeg.org with subject "unsubscribe".

[FFmpeg-devel] [PATCH] Support 64-bit integers for av_get_frame_filename2()

2018-08-24 Thread Devin Heitmueller
Create a new av_get_frame_filename3() API which is just like the
previous version but accepts a 64-bit integer for the "number"
argument.  This is useful in cases where you want to put the
original PTS into the filename (which can be larger than 32-bits).

Tested with:

./ffmpeg -copyts -vsync 0 -i foo.ts -frame_pts 1 -enc_time_base -1 foo_%d.png

Signed-off-by: Devin Heitmueller 
---
 libavformat/avformat.h | 2 ++
 libavformat/img2enc.c  | 2 +-
 libavformat/utils.c| 9 +++--
 3 files changed, 10 insertions(+), 3 deletions(-)

diff --git a/libavformat/avformat.h b/libavformat/avformat.h
index fdaffa5bf4..c358a9a71e 100644
--- a/libavformat/avformat.h
+++ b/libavformat/avformat.h
@@ -2896,6 +2896,8 @@ void av_dump_format(AVFormatContext *ic,
  * @param flags AV_FRAME_FILENAME_FLAGS_*
  * @return 0 if OK, -1 on format error
  */
+int av_get_frame_filename3(char *buf, int buf_size,
+  const char *path, int64_t number, int flags);
 int av_get_frame_filename2(char *buf, int buf_size,
   const char *path, int number, int flags);
 
diff --git a/libavformat/img2enc.c b/libavformat/img2enc.c
index a09cc8ec50..414eb827e2 100644
--- a/libavformat/img2enc.c
+++ b/libavformat/img2enc.c
@@ -101,7 +101,7 @@ static int write_packet(AVFormatContext *s, AVPacket *pkt)
 return AVERROR(EINVAL);
 }
 } else if (img->frame_pts) {
-if (av_get_frame_filename2(filename, sizeof(filename), img->path, pkt->pts, AV_FRAME_FILENAME_FLAGS_MULTIPLE) < 0) {
+if (av_get_frame_filename3(filename, sizeof(filename), img->path, pkt->pts, AV_FRAME_FILENAME_FLAGS_MULTIPLE) < 0) {
 av_log(s, AV_LOG_ERROR, "Cannot write filename by pts of the frames.");
 return AVERROR(EINVAL);
 }
diff --git a/libavformat/utils.c b/libavformat/utils.c
index b0b5e164a6..d9d4d38a44 100644
--- a/libavformat/utils.c
+++ b/libavformat/utils.c
@@ -4666,7 +4666,7 @@ uint64_t ff_get_formatted_ntp_time(uint64_t ntp_time_us)
 return ntp_ts;
 }
 
-int av_get_frame_filename2(char *buf, int buf_size, const char *path, int number, int flags)
+int av_get_frame_filename3(char *buf, int buf_size, const char *path, int64_t number, int flags)
 {
 const char *p;
 char *q, buf1[20], c;
@@ -4696,7 +4696,7 @@ int av_get_frame_filename2(char *buf, int buf_size, const char *path, int number
 percentd_found = 1;
 if (number < 0)
 nd += 1;
-snprintf(buf1, sizeof(buf1), "%0*d", nd, number);
+snprintf(buf1, sizeof(buf1), "%0*" PRId64, nd, number);
 len = strlen(buf1);
 if ((q - buf + len) > buf_size - 1)
 goto fail;
@@ -4721,6 +4721,11 @@ fail:
 return -1;
 }
 
+int av_get_frame_filename2(char *buf, int buf_size, const char *path, int number, int flags)
+{
+return av_get_frame_filename3(buf, buf_size, path, number, flags);
+}
+
 int av_get_frame_filename(char *buf, int buf_size, const char *path, int number)
 {
 return av_get_frame_filename2(buf, buf_size, path, number, 0);
-- 
2.13.2

___
ffmpeg-devel mailing list
ffmpeg-devel@ffmpeg.org
http://ffmpeg.org/mailman/listinfo/ffmpeg-devel


Re: [FFmpeg-devel] [PATCH 2/2] decklink: Add support for output of Active Format Description (AFD)

2018-08-27 Thread Devin Heitmueller
Hello Vittorio,

Thanks for the feedback.

> 
> I think you should add an entry in ff_decode_frame_props() so that pkt side 
> data can propagate to frame side data 
> -- 
> Vittorio

I’ve got a whole patch series related to capture of AFD from decklink and 
getting it through the pipeline (to be encoded by libx264), which includes a 
patch which makes the entry you have suggested.

I’ve got about 40 ffmpeg related patches I am trying to get merged upstream 
(many dealing with side data on both the capture and output side), so for the 
moment I’m focused on getting the first few output related patches merged.  In 
some cases it’s been more energy carving up the patch series in different ways 
to get it merged than the actual engineering required to do the work in the 
first place.

Thanks,

Devin
___
ffmpeg-devel mailing list
ffmpeg-devel@ffmpeg.org
http://ffmpeg.org/mailman/listinfo/ffmpeg-devel


Re: [FFmpeg-devel] [PATCH 2/2] decklink: Add support for output of Active Format Description (AFD)

2018-08-27 Thread Devin Heitmueller

> On Aug 26, 2018, at 11:34 AM, Marton Balint  wrote:
> 
> 

Hello Marton,

Ok, I’ll take another pass and send an updated patch.

Devin

---
Devin Heitmueller - LTN Global Communications
dheitmuel...@ltnglobal.com

___
ffmpeg-devel mailing list
ffmpeg-devel@ffmpeg.org
http://ffmpeg.org/mailman/listinfo/ffmpeg-devel


[FFmpeg-devel] [PATCH 0/5] Support for Decklink output of EIA-708 and AFD

2018-09-07 Thread Devin Heitmueller
The following patches add support for output of 708 and AFD over
the Decklink SDI interface.  This series is a subset of a series
submitted in early January, with the hope of getting the less
controversial parts merged upstream.

Note compared to the previous series this includes a bit of
refactoring to separate out the generation of AFD and CC lines from
the actual output.  This makes the exception handling more
straightforward as well as ensuring that one VANC type being
invalid for some reason doesn't cause all the other types to
not be output (in the previous series, the function would bail
out if there was a problem with any VANC data, even if all the
other VANC data was perfectly valid).

Devin Heitmueller (5):
  v210enc: Pass through A53 CC data
  libavdevice/decklink: Add support for EIA-708 output over SDI
  Allow AFD data to be embedded in AVPacket
  v210enc: Pass through Active Format Description (AFD) data
  decklink: Add support for output of Active Format Description (AFD)

 configure   |   4 +
 libavcodec/avcodec.h|   6 ++
 libavcodec/v210enc.c|  17 +++
 libavcodec/version.h|   2 +-
 libavdevice/decklink_common.cpp |  16 ++-
 libavdevice/decklink_common.h   |  10 ++
 libavdevice/decklink_enc.cpp| 229 ++--
 7 files changed, 272 insertions(+), 12 deletions(-)

-- 
2.13.2

___
ffmpeg-devel mailing list
ffmpeg-devel@ffmpeg.org
http://ffmpeg.org/mailman/listinfo/ffmpeg-devel


[FFmpeg-devel] [PATCH 1/5] v210enc: Pass through A53 CC data

2018-09-07 Thread Devin Heitmueller
When encoding to V210, make sure the CC side data makes it through
in the resulting AVPacket.  This is needed so the decklink output
module can put out captions when in 10-bit mode.

Signed-off-by: Devin Heitmueller 
---
 libavcodec/v210enc.c | 9 +
 1 file changed, 9 insertions(+)

diff --git a/libavcodec/v210enc.c b/libavcodec/v210enc.c
index a6afbbfc41..b9dcf9a672 100644
--- a/libavcodec/v210enc.c
+++ b/libavcodec/v210enc.c
@@ -123,6 +123,7 @@ static int encode_frame(AVCodecContext *avctx, AVPacket *pkt,
 int aligned_width = ((avctx->width + 47) / 48) * 48;
 int stride = aligned_width * 8 / 3;
 int line_padding = stride - ((avctx->width * 8 + 11) / 12) * 4;
+AVFrameSideData *side_data;
 int h, w, ret;
 uint8_t *dst;
 
@@ -233,6 +234,14 @@ static int encode_frame(AVCodecContext *avctx, AVPacket *pkt,
 }
 }
 
+side_data = av_frame_get_side_data(pic, AV_FRAME_DATA_A53_CC);
+if (side_data && side_data->size) {
+uint8_t *buf = av_packet_new_side_data(pkt, AV_PKT_DATA_A53_CC, side_data->size);
+if (!buf)
+return AVERROR(ENOMEM);
+memcpy(buf, side_data->data, side_data->size);
+}
+
 pkt->flags |= AV_PKT_FLAG_KEY;
 *got_packet = 1;
 return 0;
-- 
2.13.2

___
ffmpeg-devel mailing list
ffmpeg-devel@ffmpeg.org
http://ffmpeg.org/mailman/listinfo/ffmpeg-devel


[FFmpeg-devel] [PATCH 3/5] Allow AFD data to be embedded in AVPacket

2018-09-07 Thread Devin Heitmueller
Create a new AVPacket side data type for Active Format Description,
which mirrors the side data type found in AVFrame.  The primary
use case for this is ensuring AFD gets preserved in the V210
encoder, so that the decklink libavdevice can output AFD.

Signed-off-by: Devin Heitmueller 
---
 libavcodec/avcodec.h | 6 ++
 libavcodec/version.h | 2 +-
 2 files changed, 7 insertions(+), 1 deletion(-)

diff --git a/libavcodec/avcodec.h b/libavcodec/avcodec.h
index b6688b7af3..7f714fc1d8 100644
--- a/libavcodec/avcodec.h
+++ b/libavcodec/avcodec.h
@@ -1365,6 +1365,12 @@ enum AVPacketSideDataType {
 AV_PKT_DATA_ENCRYPTION_INFO,
 
 /**
+ * Active Format Description data consisting of a single byte as specified
+ * in ETSI TS 101 154 using AVActiveFormatDescription enum.
+ */
+AV_PKT_DATA_AFD,
+
+/**
  * The number of side data types.
  * This is not part of the public API/ABI in the sense that it may
  * change when new side data types are added.
diff --git a/libavcodec/version.h b/libavcodec/version.h
index ce3349019c..60cc99d753 100644
--- a/libavcodec/version.h
+++ b/libavcodec/version.h
@@ -29,7 +29,7 @@
 
 #define LIBAVCODEC_VERSION_MAJOR  58
 #define LIBAVCODEC_VERSION_MINOR  27
-#define LIBAVCODEC_VERSION_MICRO 101
+#define LIBAVCODEC_VERSION_MICRO 102
 
 #define LIBAVCODEC_VERSION_INT  AV_VERSION_INT(LIBAVCODEC_VERSION_MAJOR, \
LIBAVCODEC_VERSION_MINOR, \
-- 
2.13.2

___
ffmpeg-devel mailing list
ffmpeg-devel@ffmpeg.org
http://ffmpeg.org/mailman/listinfo/ffmpeg-devel


[FFmpeg-devel] [PATCH 2/5] libavdevice/decklink: Add support for EIA-708 output over SDI

2018-09-07 Thread Devin Heitmueller
Hook in libklvanc and use it for output of EIA-708 captions over
SDI.  The bulk of this patch is just general support for ancillary
data for the Decklink SDI module - the real work for construction
of the EIA-708 CDP and VANC line construction is done by libklvanc.

Libklvanc can be found at: https://github.com/stoth68000/libklvanc

Updated to reflect feedback from Marton Balint ,
Carl Eugen Hoyos , Aaron Levinson
, and Moritz Barsnick 

Signed-off-by: Devin Heitmueller 
---
 configure   |   4 +
 libavdevice/decklink_common.cpp |  16 +++-
 libavdevice/decklink_common.h   |  10 +++
 libavdevice/decklink_enc.cpp| 179 ++--
 4 files changed, 198 insertions(+), 11 deletions(-)

diff --git a/configure b/configure
index 0d6ee0abfc..21d5c619cc 100755
--- a/configure
+++ b/configure
@@ -239,6 +239,7 @@ External library support:
   --enable-libiec61883 enable iec61883 via libiec61883 [no]
   --enable-libilbc enable iLBC de/encoding via libilbc [no]
   --enable-libjack enable JACK audio sound server [no]
+  --enable-libklvanc   enable Kernel Labs VANC processing [no]
   --enable-libkvazaar  enable HEVC encoding via libkvazaar [no]
   --enable-liblensfun  enable lensfun lens correction [no]
   --enable-libmodplug  enable ModPlug via libmodplug [no]
@@ -1720,6 +1721,7 @@ EXTERNAL_LIBRARY_LIST="
 libiec61883
 libilbc
 libjack
+libklvanc
 libkvazaar
 libmodplug
 libmp3lame
@@ -3233,6 +3235,7 @@ decklink_deps_any="libdl LoadLibrary"
 decklink_indev_deps="decklink threads"
 decklink_indev_extralibs="-lstdc++"
 decklink_outdev_deps="decklink threads"
+decklink_outdev_suggest="libklvanc"
 decklink_outdev_extralibs="-lstdc++"
 libndi_newtek_indev_deps="libndi_newtek"
 libndi_newtek_indev_extralibs="-lndi"
@@ -6068,6 +6071,7 @@ enabled libgsm&& { for gsm_hdr in "gsm.h" "gsm/gsm.h"; do
check_lib libgsm "${gsm_hdr}" gsm_create -lgsm && break;
done || die "ERROR: libgsm not found"; }
 enabled libilbc   && require libilbc ilbc.h WebRtcIlbcfix_InitDecode -lilbc $pthreads_extralibs
+enabled libklvanc && require libklvanc libklvanc/vanc.h klvanc_context_create -lklvanc
 enabled libkvazaar&& require_pkg_config libkvazaar "kvazaar >= 0.8.1" kvazaar.h kvz_api_get
 enabled liblensfun&& require_pkg_config liblensfun lensfun lensfun.h lf_db_new
 # While it may appear that require is being used as a pkg-config
diff --git a/libavdevice/decklink_common.cpp b/libavdevice/decklink_common.cpp
index aab9d85b94..503417bb35 100644
--- a/libavdevice/decklink_common.cpp
+++ b/libavdevice/decklink_common.cpp
@@ -239,10 +239,18 @@ int ff_decklink_set_format(AVFormatContext *avctx,
&support, NULL) != S_OK)
 return -1;
 } else {
-if (ctx->dlo->DoesSupportVideoMode(ctx->bmd_mode, bmdFormat8BitYUV,
-   bmdVideoOutputFlagDefault,
-   &support, NULL) != S_OK)
-return -1;
+if (!ctx->supports_vanc || ctx->dlo->DoesSupportVideoMode(ctx->bmd_mode, ctx->raw_format,
+  bmdVideoOutputVANC,
+  &support, NULL) != S_OK) {
+/* Try without VANC enabled */
+if (ctx->dlo->DoesSupportVideoMode(ctx->bmd_mode, ctx->raw_format,
+   bmdVideoOutputFlagDefault,
+   &support, NULL) != S_OK) {
+return -1;
+}
+ctx->supports_vanc = 0;
+}
+
 }
 if (support == bmdDisplayModeSupported)
 return 0;
diff --git a/libavdevice/decklink_common.h b/libavdevice/decklink_common.h
index 96b001c2d8..128144f50d 100644
--- a/libavdevice/decklink_common.h
+++ b/libavdevice/decklink_common.h
@@ -27,6 +27,9 @@
 
 #include "libavutil/thread.h"
 #include "decklink_common_c.h"
+#if CONFIG_LIBKLVANC
+#include "libklvanc/vanc.h"
+#endif
 
 #ifdef _WIN32
 #define DECKLINK_BOOL BOOL
@@ -97,6 +100,7 @@ struct decklink_ctx {
 int bmd_width;
 int bmd_height;
 int bmd_field_dominance;
+int supports_vanc;
 
 /* Capture buffer queue */
 AVPacketQueue queue;
@@ -114,6 +118,7 @@ struct decklink_ctx {
 AVStream *audio_st;
 AVStream *video_st;
 AVStream *teletext_st;
+uint16_t cdp_sequence_num;
 
 /* Options */
 int list_devices;
@@ -124,6 +129,7 @@ struct decklink_ctx {
 DecklinkPtsSource a

[FFmpeg-devel] [PATCH 5/5] decklink: Add support for output of Active Format Description (AFD)

2018-09-07 Thread Devin Heitmueller
Implement support for including AFD in decklink output when putting
out 10-bit VANC data.

Updated to reflect feedback from Marton Balint ,
Carl Eugen Hoyos  and Aaron Levinson
.

Signed-off-by: Devin Heitmueller 
---
 libavdevice/decklink_enc.cpp | 54 ++--
 1 file changed, 52 insertions(+), 2 deletions(-)

diff --git a/libavdevice/decklink_enc.cpp b/libavdevice/decklink_enc.cpp
index 6f1941b29f..48a4f0e950 100644
--- a/libavdevice/decklink_enc.cpp
+++ b/libavdevice/decklink_enc.cpp
@@ -362,8 +362,57 @@ static void construct_cc(AVFormatContext *avctx, struct decklink_ctx *ctx,
 }
 }
 
+static void construct_afd(AVFormatContext *avctx, struct decklink_ctx *ctx,
+  AVPacket *pkt, struct klvanc_line_set_s *vanc_lines,
+  AVStream *st)
+{
+struct klvanc_packet_afd_s *afd;
+uint16_t *afd_words;
+uint16_t len;
+int size, ret;
+
+const uint8_t *data = av_packet_get_side_data(pkt, AV_PKT_DATA_AFD, &size);
+if (!data || size == 0)
+return;
+
+ret = klvanc_create_AFD(&afd);
+if (ret) {
+return;
+}
+
+ret = klvanc_set_AFD_val(afd, data[0]);
+if (ret) {
+av_log(avctx, AV_LOG_ERROR, "Invalid AFD value specified: %d\n",
+   data[0]);
+klvanc_destroy_AFD(afd);
+return;
+}
+
+/* FIXME: Should really rely on the coded_width but seems like that
+   is not accessible to libavdevice outputs */
+if (av_cmp_q((AVRational) {st->codecpar->width, st->codecpar->height}, (AVRational) {4, 3}) == 1)
+afd->aspectRatio = ASPECT_16x9;
+else
+afd->aspectRatio = ASPECT_4x3;
+
+ret = klvanc_convert_AFD_to_words(afd, &afd_words, &len);
+klvanc_destroy_AFD(afd);
+if (ret) {
+av_log(avctx, AV_LOG_ERROR, "Failed converting AFD packet to words\n");
+return;
+}
+
+ret = klvanc_line_insert(ctx->vanc_ctx, vanc_lines, afd_words, len, 12, 0);
+free(afd_words);
+if (ret) {
+av_log(avctx, AV_LOG_ERROR, "VANC line insertion failed\n");
+return;
+}
+}
+
 static int decklink_construct_vanc(AVFormatContext *avctx, struct decklink_ctx *ctx,
-   AVPacket *pkt, decklink_frame *frame)
+   AVPacket *pkt, decklink_frame *frame,
+   AVStream *st)
 {
 struct klvanc_line_set_s vanc_lines = { 0 };
 int ret = 0, i;
@@ -372,6 +421,7 @@ static int decklink_construct_vanc(AVFormatContext *avctx, struct decklink_ctx *
 return 0;
 
 construct_cc(avctx, ctx, pkt, &vanc_lines);
+construct_afd(avctx, ctx, pkt, &vanc_lines, st);
 
 IDeckLinkVideoFrameAncillary *vanc;
 int result = ctx->dlo->CreateAncillaryData(bmdFormat10BitYUV, &vanc);
@@ -461,7 +511,7 @@ static int decklink_write_video_packet(AVFormatContext *avctx, AVPacket *pkt)
 frame = new decklink_frame(ctx, avpacket, st->codecpar->codec_id, ctx->bmd_height, ctx->bmd_width);
 
 #if CONFIG_LIBKLVANC
-if (decklink_construct_vanc(avctx, ctx, pkt, frame))
+if (decklink_construct_vanc(avctx, ctx, pkt, frame, st))
 av_log(avctx, AV_LOG_ERROR, "Failed to construct VANC\n");
 #endif
 }
-- 
2.13.2

___
ffmpeg-devel mailing list
ffmpeg-devel@ffmpeg.org
http://ffmpeg.org/mailman/listinfo/ffmpeg-devel


[FFmpeg-devel] [PATCH 4/5] v210enc: Pass through Active Format Description (AFD) data

2018-09-07 Thread Devin Heitmueller
When encoding to V210, make sure the AFD side data makes it through
in the resulting AVPacket.  This is needed so the decklink output
module can put out AFD when in 10-bit mode.

Signed-off-by: Devin Heitmueller 
---
 libavcodec/v210enc.c | 8 
 1 file changed, 8 insertions(+)

diff --git a/libavcodec/v210enc.c b/libavcodec/v210enc.c
index b9dcf9a672..b024806d0b 100644
--- a/libavcodec/v210enc.c
+++ b/libavcodec/v210enc.c
@@ -242,6 +242,14 @@ static int encode_frame(AVCodecContext *avctx, AVPacket *pkt,
 memcpy(buf, side_data->data, side_data->size);
 }
 
+side_data = av_frame_get_side_data(pic, AV_FRAME_DATA_AFD);
+if (side_data && side_data->size) {
+uint8_t *buf = av_packet_new_side_data(pkt, AV_PKT_DATA_AFD, side_data->size);
+if (!buf)
+return AVERROR(ENOMEM);
+memcpy(buf, side_data->data, side_data->size);
+}
+
 pkt->flags |= AV_PKT_FLAG_KEY;
 *got_packet = 1;
 return 0;
-- 
2.13.2

___
ffmpeg-devel mailing list
ffmpeg-devel@ffmpeg.org
http://ffmpeg.org/mailman/listinfo/ffmpeg-devel


Re: [FFmpeg-devel] [PATCH 0/5] Support for Decklink output of EIA-708 and AFD

2018-09-09 Thread Devin Heitmueller
On Sun, Sep 9, 2018 at 4:59 PM, Marton Balint  wrote:

> Thanks, I applied patches 1-4.
>
>>  decklink: Add support for output of Active Format Description (AFD)
>
>
> Regarding this one, I noticed you always set the AFD in line 12. Are you
> sure that it is OK to use line 12 for all resolutions?

12 should be fine at all resolutions, as it just needs to be at least
after the first line for switching (see ST 2016-3-2009 Sec 5).  I
already have a subsequent patch which makes the line configurable (as
well as for 708 and SCTE-104), but I am trying to avoid overloading
you with patches (which tends to result in *nothing* getting merged).

> Also, I think for
> interlaced formats you should set AFD for both fields, otherwise some
> equipment might scale/crop the two fields of a picture differently...

I've never seen a piece of equipment do such an incorrect scale/crop,
but I guess it's possible.  Part of the issue is that there are a few
different conditions in which it can vary between the two fields and
the way the underlying side-data is managed needs to be overhauled in
order to properly handle that case (e.g. the SEI can be on a field
basis in H.264, and we don't presently handle providing both values as
side data for the frame).

I think this patch handles the 99% use case (especially as PAFF
becomes less and less common).  Putting the same value on both lines
for interlaced formats is probably not a bad idea, although I suspect
in practice you're unlikely to run into equipment that has a problem
with it only appearing once.

Devin

-- 
Devin J. Heitmueller - Kernel Labs
http://www.kernellabs.com
___
ffmpeg-devel mailing list
ffmpeg-devel@ffmpeg.org
http://ffmpeg.org/mailman/listinfo/ffmpeg-devel


Re: [FFmpeg-devel] [PATCH 0/5] Support for Decklink output of EIA-708 and AFD

2018-09-09 Thread Devin Heitmueller
Hi James,

> Shouldn't this new packet side data be handled in libavcodec/decode.c
> ff_decode_frame_props() as well?

I have a patch which does this as part of a patch series which adds
AFD parsing to decklink capture, ensures it gets preserved in
ff_decode_frame_props, and encodes the resulting AFD in the libx264
encoder.  I didn't want to submit just the patch for
ff_decode_frame_props() since nothing is presently able to exercise
the functionality other than a patch series I haven't yet submitted
for upstream.

So to answer your question, yes, we definitely need that as soon as
there is something which can actually act on it.

Devin

-- 
Devin J. Heitmueller - Kernel Labs
http://www.kernellabs.com
___
ffmpeg-devel mailing list
ffmpeg-devel@ffmpeg.org
http://ffmpeg.org/mailman/listinfo/ffmpeg-devel


Re: [FFmpeg-devel] [PATCH 0/5] Support for Decklink output of EIA-708 and AFD

2018-09-11 Thread Devin Heitmueller
On Mon, Sep 10, 2018 at 8:00 PM, Marton Balint  wrote:
> Yes, just put the same value to both line 12 (or whichever line the user
> selects) and its pair line in the other field.

Ok, will take care of this today.

Thanks for reviewing.

Devin

-- 
Devin J. Heitmueller - Kernel Labs
http://www.kernellabs.com
___
ffmpeg-devel mailing list
ffmpeg-devel@ffmpeg.org
http://ffmpeg.org/mailman/listinfo/ffmpeg-devel


Re: [FFmpeg-devel] [PATCH] avdevice/decklink: Add option to align Capture start time

2018-09-24 Thread Devin Heitmueller
Hello Karthick,


> On Sep 24, 2018, at 7:49 AM, Karthick J  wrote:
> 
> From: Karthick Jeyapal 
> 
> This option is useful for maintaining input synchronization across N
> different hardware devices deployed for 'N-way' redundancy.
> The system time of different hardware devices should be synchronized
> with protocols such as NTP or PTP, before using this option.

I can certainly see the usefulness of such a feature, but is the decklink 
module really the right place for this?  This feels like something that should 
be done through a filter (either as a multimedia filter or a BSF).

Does anyone else have an suggestions as to a better place to do this?

Devin

---
Devin Heitmueller - LTN Global Communications
dheitmuel...@ltnglobal.com

___
ffmpeg-devel mailing list
ffmpeg-devel@ffmpeg.org
http://ffmpeg.org/mailman/listinfo/ffmpeg-devel


Re: [FFmpeg-devel] [PATCH] avdevice/decklink: Add option to align Capture start time

2018-09-27 Thread Devin Heitmueller
Hi Karthick,


>> 
>> Another approch might be to store the wallclock frame time as some kind of 
>> metadata (as it is done for "timecode") and then add the possiblity to 
>> f_select to drop based on this. However the evaluation engine has no concept 
>> of complex objects (like frame, or frame metadata) so this probably needs 
>> additional work.
> This involves a lot of extra work for a feature that can be implemented very 
> easily on the capture plugin. And still other capture plugins will have to 
> add the relevant metadata/sidedata for this feature to work for them. If you 
> still think that decklink plugin is not the right place to add this feature, 
> then I respect that decision. I will live with the f_select solution with 
> extra restrictions on timestamping options (
> Thanks again for your valuable suggestions.
> 

After further discussion of the alternatives, putting it directly in the 
decklink module does seem like the least invasive option.  While it would be 
nice if this functionality could be shared with other realtime sources, the 
framework doesn’t really seem to lend itself to that and it seems like a good 
bit more trouble than it’s worth.

I withdraw any objections I had to the original patch.

Devin

---
Devin Heitmueller - LTN Global Communications
dheitmuel...@ltnglobal.com


___
ffmpeg-devel mailing list
ffmpeg-devel@ffmpeg.org
http://ffmpeg.org/mailman/listinfo/ffmpeg-devel


Re: [FFmpeg-devel] [PATCH] avcodec: libdav1d AV1 decoder wrapper

2018-09-29 Thread Devin Heitmueller
On Sat, Sep 29, 2018 at 6:04 AM Rostislav Pehlivanov
 wrote:
> I'd much rather go with the original intent which was to merge the decoder
> into lavc.

Ronald can correct me if I'm wrong, but I suspect a key goal behind
the decoder was to have a standalone library which could be shared
across a variety of projects (both open and closed source).  Merging
it in directly will create a maintenance headache as the two source
trees diverge.  It also makes unit testing more difficult, since
Ronald/VideoLAN can write unit tests for the library (which will
presumably be consumed by a number of projects/products) and be
confident that those same unit tests apply to the version that is used
by ffmpeg.

I don't think having libx264/libx265 out of tree has been a
nightmare for anyone.  I don't think this case would be any different.

Devin

-- 
Devin J. Heitmueller - Kernel Labs
http://www.kernellabs.com
___
ffmpeg-devel mailing list
ffmpeg-devel@ffmpeg.org
http://ffmpeg.org/mailman/listinfo/ffmpeg-devel


Re: [FFmpeg-devel] [PATCH 2/3] lavf/timecode: document SMPTE struct

2018-10-09 Thread Devin Heitmueller

> On Oct 9, 2018, at 4:07 PM, Carl Eugen Hoyos  wrote:
> 
> 2018-10-09 15:32 GMT+02:00, jos...@ob-encoder.com :
>> From: Devin Heitmueller 
>> 
>> There are a number of different binary representations that
>> SMPTE timecodes can use.  Make clear that the specific representation
>> that ffmpeg refers to corresponds to the DV video spec, which is
>> SMPTE S314M:2005 for standard definition video and ST 370-2013 for
>> high definition video.
> 
> If this is correct - I have no idea - why is only one standard
> mentioned in the actual change?
> 
>> +/* See SMPTE ST 314M-2005 Sec 4.4.2.2.1 "Time code pack (TC)" */
> 

Yeah, I noticed this after I did the original commit and planned to fix it 
before I submitted the patch upstream.  Was reminded of it myself when I saw 
Josh submitted the patch on my behalf.

The information describing the structure is identical between the two specs, 
and thus referring to both doesn’t give you any additional information.  
However it’s possible that someone has access to one spec but not the other 
(since they are not freely available), and thus referring to both specs 
probably makes sense.

Josh, feel free to update the patch to refer to both specifications, or drop 
the patch from your series and I’ll include an updated patch in my next patch 
series.  Whatever works best for you.

Devin

---
Devin Heitmueller - LTN Global Communications
dheitmuel...@ltnglobal.com

___
ffmpeg-devel mailing list
ffmpeg-devel@ffmpeg.org
http://ffmpeg.org/mailman/listinfo/ffmpeg-devel


Re: [FFmpeg-devel] [PATCH 1/3] lavc/h264: create AVFrame side data from H.264 timecodes

2018-10-09 Thread Devin Heitmueller

> On Oct 9, 2018, at 4:02 PM, Carl Eugen Hoyos  wrote:
> 
> 2018-10-09 15:32 GMT+02:00, jos...@ob-encoder.com :
>> From: Devin Heitmueller 
>> 
>> Create SMPTE ST 12-1 timecodes based on H.264 SEI picture timing
>> info.
>> 
>> For framerates > 30 FPS, the field flag is used in conjunction with
>> pairs of frames which contain the same frame timestamp in S12M.
>> Ensure the field is properly set per the spec.
> 
> I understand why you split the patches like this and I am slightly in
> favour of doing it this way but I believe most developers disagree
> and would prefer one patch "based on a patch by Devin" so give them
> a little time to comment.
> 

If Josh had written both patches I would agree with the sentiment that the 
patches should be compacted to a single patch.  However since they come from 
two different authors I would argue they should be separate.

That’s just my opinion though, and I may appear biased since I was the author 
of the first patch.

Devin

---
Devin Heitmueller - LTN Global Communications
dheitmuel...@ltnglobal.com

___
ffmpeg-devel mailing list
ffmpeg-devel@ffmpeg.org
http://ffmpeg.org/mailman/listinfo/ffmpeg-devel


[FFmpeg-devel] [PATCH] decklink: Fix compile breakage on OSX

2018-10-19 Thread Devin Heitmueller
Make the function static, or else Clang complains with:

error: no previous prototype for function 'decklink_get_attr_string' [-Werror,-Wmissing-prototypes]

Signed-off-by: Devin Heitmueller 
---
 libavdevice/decklink_common.cpp | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/libavdevice/decklink_common.cpp b/libavdevice/decklink_common.cpp
index b88d6c6219..130e70b2ca 100644
--- a/libavdevice/decklink_common.cpp
+++ b/libavdevice/decklink_common.cpp
@@ -77,7 +77,7 @@ static IDeckLinkIterator *decklink_create_iterator(AVFormatContext *avctx)
 return iter;
 }
 
-int decklink_get_attr_string(IDeckLink *dl, BMDDeckLinkAttributeID cfg_id, const char **s)
+static int decklink_get_attr_string(IDeckLink *dl, BMDDeckLinkAttributeID cfg_id, const char **s)
 {
 DECKLINK_STR tmp;
 HRESULT hr;
-- 
2.13.2

___
ffmpeg-devel mailing list
ffmpeg-devel@ffmpeg.org
http://ffmpeg.org/mailman/listinfo/ffmpeg-devel


Re: [FFmpeg-devel] [PATCH]lavc/mpeg12dec: Read Closed Captions from second field

2018-11-29 Thread Devin Heitmueller
On Thu, Nov 29, 2018 at 12:55 PM Michael Niedermayer
 wrote:

> > +if (s1->a53_caption) {
> > +AVFrameSideData *sd;
> > +av_frame_remove_side_data(s->current_picture_ptr->f, 
> > AV_FRAME_DATA_A53_CC);
> > +sd = av_frame_new_side_data(
> > +s->current_picture_ptr->f, AV_FRAME_DATA_A53_CC,
> > +s1->a53_caption_size);
> > +if (sd)
> > +memcpy(sd->data, s1->a53_caption, s1->a53_caption_size);
> > +av_freep(&s1->a53_caption);
> > +}
>
> This is probably ok if only one field has data Attached to it, but if both
> have then both should be exported. Also the user should have some way to
> find out which of 2 fields data came from

Yeah, this will cause the captions from the first field to get lost.
It probably makes sense to look at the H.264 decoder, where this is
done properly (i.e. creating a side data that contains the captions
from both fields).
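
The idea, roughly, is to accumulate the caption bytes from both fields and
export a single side data entry per frame.  A sketch (the helper and field
names are illustrative, not the actual mpeg12 context fields):

    #include <stdint.h>
    #include <string.h>
    #include "libavutil/error.h"
    #include "libavutil/mem.h"

    /* Append one field's caption bytes to a growing per-frame buffer. */
    static int append_a53(uint8_t **buf, int *size,
                          const uint8_t *field_data, int field_size)
    {
        uint8_t *tmp = av_realloc(*buf, *size + field_size);
        if (!tmp)
            return AVERROR(ENOMEM);
        memcpy(tmp + *size, field_data, field_size);
        *buf   = tmp;
        *size += field_size;
        return 0;
    }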

Devin

-- 
Devin J. Heitmueller - Kernel Labs
http://www.kernellabs.com
___
ffmpeg-devel mailing list
ffmpeg-devel@ffmpeg.org
http://ffmpeg.org/mailman/listinfo/ffmpeg-devel


Re: [FFmpeg-devel] [PATCH] pthread_frame: attempt to get frame to reduce latency

2020-03-11 Thread Devin Heitmueller
On Wed, Mar 11, 2020 at 10:28 AM Derek Buitenhuis
 wrote:
> > E.g. in FF_THREAD_FRAME 4320x2160 30fps video streaming, 4 threads, the 
> > frame caching is 99ms (33ms x 3frames)
> > If the  cpu-decoding-execution-time is 80ms ~ 120ms (dependent on video 
> > frame content).
>
> Also aside: It is not useful to measure frame delay in time. It's also not, 
> IMO, maningful to use
> in-flight (wallclock) time.

Regardless of the actual proposed patch, I think the author's use of
wallclock time to describe the problem is very reasonable.  I do a
large amount of work where I'm measuring "glass-to-glass" latency,
where I am interested in the total pipeline (encode/network/decode),
and I definitely went through the experience of trying to figure out
why ffmpeg was introducing nearly 500ms of extra latency during
decoding.  It turned out that it was because my particular platform
had 8-cores and thus 16 decoding threads and thus 15x33ms of delay,
regardless of the stream complexity.

You may not personally care about latency, but there are lots of
people operating in a world where actual latency matters (e.g. news
interviews) and they want to be able to use ffmpeg for decoding in
such environments.  The "problem" is not how many threads are used.
The problem is "there's way too much latency" and proposed solutions
include reducing the thread count or changing the heuristic for
dequeuing frames from the thread pool.

Devin

-- 
Devin J. Heitmueller - Kernel Labs
http://www.kernellabs.com
___
ffmpeg-devel mailing list
ffmpeg-devel@ffmpeg.org
https://ffmpeg.org/mailman/listinfo/ffmpeg-devel

To unsubscribe, visit link above, or email
ffmpeg-devel-requ...@ffmpeg.org with subject "unsubscribe".

[FFmpeg-devel] Test message

2020-06-08 Thread Devin Heitmueller
Please ignore

-- 
Devin Heitmueller, Senior Software Engineer
LTN Global Communications
o: +1 (301) 363-1001
w: https://ltnglobal.com  e: devin.heitmuel...@ltnglobal.com
___
ffmpeg-devel mailing list
ffmpeg-devel@ffmpeg.org
https://ffmpeg.org/mailman/listinfo/ffmpeg-devel

To unsubscribe, visit link above, or email
ffmpeg-devel-requ...@ffmpeg.org with subject "unsubscribe".

Re: [FFmpeg-devel] [PATCH] lavdevice: Add VideoToolbox output device.

2020-06-08 Thread Devin Heitmueller
On Sun, Jun 7, 2020 at 6:31 PM Thilo Borgmann  wrote:
>
>
> Not sure if you don't mix it with Video screw me...
> For Audio, I cannot find another device handling more than one format in one 
> device.
> I'd appreciate a better way to do it than having N-devices...

While not yet upstream, my version of the decklink output module
provides a single avdevice that supports multiple formats, including
AC-3 for passthrough.

https://github.com/LTNGlobal-opensource/FFmpeg-ltn/blob/lted1/libavdevice/decklink_enc.cpp

Devin

-- 
Devin Heitmueller, Senior Software Engineer
LTN Global Communications
o: +1 (301) 363-1001
w: https://ltnglobal.com  e: devin.heitmuel...@ltnglobal.com
___
ffmpeg-devel mailing list
ffmpeg-devel@ffmpeg.org
https://ffmpeg.org/mailman/listinfo/ffmpeg-devel

To unsubscribe, visit link above, or email
ffmpeg-devel-requ...@ffmpeg.org with subject "unsubscribe".

Re: [FFmpeg-devel] [PATCH 4/5] fftools/ffmpeg: flush and recreate encoder instance if resolution changes

2020-06-09 Thread Devin Heitmueller
On Tue, Jun 9, 2020 at 4:53 AM Linjie Fu  wrote:
>
> Signed-off-by: Linjie Fu 
> ---
> Should be squashed with:
> https://patchwork.ffmpeg.org/project/ffmpeg/list/?series=1434
>
>  fftools/ffmpeg.c | 11 +++
>  1 file changed, 11 insertions(+)
>
> diff --git a/fftools/ffmpeg.c b/fftools/ffmpeg.c
> index 5859781..8cdd532 100644
> --- a/fftools/ffmpeg.c
> +++ b/fftools/ffmpeg.c
> @@ -130,6 +130,7 @@ static void do_video_stats(OutputStream *ost, int 
> frame_size);
>  static BenchmarkTimeStamps get_benchmark_time_stamps(void);
>  static int64_t getmaxrss(void);
>  static int ifilter_has_all_input_formats(FilterGraph *fg);
> +static void flush_encoders(void);
>
>  static int run_as_daemon  = 0;
>  static int nb_frames_dup = 0;
> @@ -1058,11 +1059,21 @@ static void do_video_out(OutputFile *of,
>
>  if (next_picture && (enc->width != next_picture->width ||
>   enc->height != next_picture->height)) {
> +flush_encoders();
> +avcodec_flush_buffers(enc);
>  if (!(enc->codec->capabilities & AV_CODEC_CAP_VARIABLE_DIMENSIONS)) {
>  av_log(NULL, AV_LOG_ERROR, "Variable dimension encoding "
>  "is not supported by %s.\n", enc->codec->name);
>  goto error;
>  }
> +
> +enc->width  = next_picture->width;
> +enc->height = next_picture->height;

Perhaps from a workflow standpoint it makes more sense to move the
code which changes the encoder parameters to after where you close the
existing encoder (i.e. between the close and init calls).  I can't
think of a specific case where this might break a downstream encoder,
but it seems like a good practice to only have the parameters applied
to the new encoder instance.
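
In other words, roughly this ordering (a sketch reusing the names from the
patch above):

    if (enc->codec->close(enc) < 0)
        goto error;
    enc->width  = next_picture->width;   /* apply the new dimensions ...  */
    enc->height = next_picture->height;  /* ... only to the new instance  */
    if (enc->codec->init(enc) < 0)
        goto error;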

> +
> +if (enc->codec->close(enc) < 0)
> +goto error;
> +if (enc->codec->init(enc) < 0)
> +goto error;
>  }
>
>  if (ost->source_index >= 0)

In general do we really think this is a safe thing to do?  Does
something also need to be propagated to the output as well?  I know
that this would break use cases like the decklink output where the
frame resolution suddenly changes in the middle of the stream without
calling the output's write_header() routine.

Devin

-- 
Devin Heitmueller, Senior Software Engineer
LTN Global Communications
o: +1 (301) 363-1001
w: https://ltnglobal.com  e: devin.heitmuel...@ltnglobal.com
___
ffmpeg-devel mailing list
ffmpeg-devel@ffmpeg.org
https://ffmpeg.org/mailman/listinfo/ffmpeg-devel

To unsubscribe, visit link above, or email
ffmpeg-devel-requ...@ffmpeg.org with subject "unsubscribe".

Re: [FFmpeg-devel] [PATCH] avdevice/decklink_dec: extracting and outputing klv from vanc

2020-06-17 Thread Devin Heitmueller
_LOG_DEBUG, "KLV with MID: %d and PSC: 
> > %d\n", mid, psc);
> > +
> > +uint32_t data_len = size - 3;
> > +total_size += data_len;
> > +
> > +KLVPacket packet{ psc };
> > +packet.data.resize(data_len);
> > +memcpy(packet.data.data(), data + 3, data_len);
> > +
> > +klv_packets[mid].push_back(std::move(packet));
> > +}
> > +}
> > +}
> > +
> > +packet.destroy();
> > +}
> > +
> > +if (total_size > 0) {
> > +std::vector klv;
> > +klv.reserve(total_size);
> > +
> > +for (size_t i = 0; i < klv_packets.size(); ++i) {
> > +auto& list = klv_packets[i];
> > +
> > +if (list.empty())
> > +continue;
> > +
> > +av_log(avctx, AV_LOG_DEBUG, "Joining MID: %d\n", i);
> > +
> > +std::sort(list.begin(), list.end(), [](const KLVPacket& lhs, 
> > const KLVPacket& rhs) {
> > +return lhs.sequence_counter < rhs.sequence_counter;
> > +});
>
> Is the sorting really needed? Aren't they supposed to be sorted in the VANC?

I had the same question.  Other standards such as SMPTE 2010 require
the packets to be in order (since they simply denote
start/middle/end).  Milos, have you seen any cases where reordering is
actually required prior to reassembly?

>
> > +
> > +for (auto& packet : list)
> > +klv.insert(klv.end(), packet.data.begin(), 
> > packet.data.end());
> > +}

Should you be tracking the PSC values to check for missing packets
(e.g. checksum errors)?  If you get PSC values 1, 2, 4, then you would
concatenate those three packets together without noticing that packet
3 was never provided.  The result would be announcing a partial packet
up the stack.
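
Something like the following continuity check before concatenation would
catch that case (a sketch in C for illustration; the patch itself is C++):

    #include <stddef.h>
    #include <stdint.h>

    /* Verify the sorted packet sequence counters (PSC) form an unbroken
     * run before reassembling the KLV payload for one MID. */
    static int psc_run_is_complete(const uint32_t *psc, size_t count)
    {
        for (size_t i = 1; i < count; i++)
            if (psc[i] != psc[i - 1] + 1)
                return 0;  /* e.g. 1, 2, 4 -> fragment 3 is missing */
        return 1;
    }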

> > +
> > +AVPacket klv_packet;
> > +av_init_packet(&klv_packet);
> > +klv_packet.pts = pts;
> > +klv_packet.dts = pts;
> > +klv_packet.flags |= AV_PKT_FLAG_KEY;
> > +klv_packet.stream_index = ctx->klv_st->index;
> > +klv_packet.data = klv.data();
> > +klv_packet.size = klv.size();
> > +
> > +if (avpacket_queue_put(&ctx->queue, &klv_packet) < 0) {
> > +av_log(avctx, AV_LOG_INFO, "dropping KLV VANC packet\n");
>
> Maybe avoid the error message and increase ctx->dropped like other similar 
> code
> does it.
>
> > +}
> > +}
> > +}
> > +
> >  class decklink_input_callback : public IDeckLinkInputCallback
> >  {
> >  public:
> > @@ -816,6 +955,10 @@ HRESULT 
> > decklink_input_callback::VideoInputFrameArrived(
> >  //fprintf(stderr,"Video Frame size %d ts %d\n", pkt.size, pkt.pts);
> >
> >  if (!no_video) {
> > +if (ctx->output_klv) {
> > +handle_klv(avctx, ctx, videoFrame, pkt.pts);
> > +}
> > +
>
> Mixed code and declaration.
>
> >  IDeckLinkVideoFrameAncillary *vanc;
> >  AVPacket txt_pkt;
> >  uint8_t txt_buf0[3531]; // 35 * 46 bytes decoded teletext 
> > lines + 1 byte data_identifier + 1920 bytes OP47 decode buffer
> > @@ -973,7 +1116,6 @@ static int decklink_autodetect(struct decklink_cctx 
> > *cctx) {
> >  } else {
> >  return -1;
> >  }
> > -
> >  }
> >
> >  extern "C" {
> > @@ -1012,6 +1154,7 @@ av_cold int ff_decklink_read_header(AVFormatContext 
> > *avctx)
> >  return AVERROR(ENOMEM);
> >  ctx->list_devices = cctx->list_devices;
> >  ctx->list_formats = cctx->list_formats;
> > +ctx->output_klv = cctx->output_klv;
> >  ctx->teletext_lines = cctx->teletext_lines;
> >  ctx->preroll  = cctx->preroll;
> >  ctx->duplex_mode  = cctx->duplex_mode;
> > @@ -1202,6 +1345,21 @@ av_cold int ff_decklink_read_header(AVFormatContext 
> > *avctx)
> >
> >  ctx->video_st=st;
> >
> > +if (ctx->output_klv) {
> > +st = avformat_new_stream(avctx, NULL);
> > +if (!st) {
> > +av_log(avctx, AV_LOG_ERROR, "Cannot add KLV stream\n");
>
> Lose the error message for ENOMEM.
>
> > +ret = AVERROR(ENOMEM);
> > + 

Re: [FFmpeg-devel] [PATCH v1] avcodec/v210enc: add yuv420p/yuv420p10 input pixel format support

2019-09-20 Thread Devin Heitmueller
Hello Michael,


> On Sep 20, 2019, at 12:10 PM, Michael Niedermayer  
> wrote:
> 
> On Fri, Sep 20, 2019 at 11:55:17PM +0800, lance.lmw...@gmail.com wrote:
>> From: Limin Wang 
>> 
>> Signed-off-by: Limin Wang 
>> ---
>> libavcodec/v210_template.c | 20 
>> libavcodec/v210enc.c   |  8 +---
>> 2 files changed, 25 insertions(+), 3 deletions(-)
> 
> Adding a nearest neighbor scaler or in fact any scaler
> into an encoder is not ok
> 
> This belongs in swscale and it is already there.


Just to be clear, there is no scaling going on here.  The patch just expands 
4:2:0 to 4:2:2 while properly supporting interlaced chroma.  It avoids having 
to auto insert the swscale filter in the case where there is no scaling 
required (e.g. H.264 4:2:0 video being output to decklink in its original 
resolution).

Regards,

Devin

Re: [FFmpeg-devel] [PATCH v1] avcodec/v210enc: add yuv420p/yuv420p10 input pixel format support

2019-09-21 Thread Devin Heitmueller

> On Sep 21, 2019, at 4:44 PM, Michael Niedermayer  
> wrote:
> 
>> The patch just expands 4:2:0 to 4:2:2 while properly supporting interlaced 
>> chroma.  
> 
> 4:2:0 and 4:2:2 have a chroma plane with different resolution.
> converting between planes of different resolution is what i called scaling.
> 
> 
>> It avoids having to auto insert the swscale filter in the case where there 
>> is no scaling required (e.g. H.264 4:2:0 video being output to decklink in 
>> its original resolution).
> 
> yes, doing an operation in the encoder avoids a filter being inserted which
> does that operation.
> Thats true for every encoder and every filter.

The key thing here is that the encoder is already touching every pixel, so 
avoiding the need for the filter allows the conversion to happen at essentially 
zero cost (as we repack the pixels into the requisite v210 layout).

> Also replacing interpolation by a nearest neighbor implementation
> is quite expectedly faster.

Yes, and we can certainly argue about whether interpolating chroma when going 
from 4:2:0 to 4:2:2 actually has any visible benefit.  I can however say 
the cost of having swscaler in the pipeline is considerable.  In fact I didn’t 
appreciate it myself until I was trying to deliver 1080p60 in realtime to four 
decklink outputs and couldn’t keep up on my target platform.  And because 
filters generally aren’t threaded, I got hit with one of those cases where I 
had to break out the profiler and ask “why on Earth is the main ffmpeg thread 
so busy?"


> one problem is
> the user can setup the scale filter with high quality in mind or with 
> low quality and speed in mind.
> But after this patch she always gets low quality because the low quality
> convertion code is hardcoded in the encoder which pretends to support 420.
> The outside code has no chance to know it shouldnt feed 420 if high quality
> is wanted.

The user can still insert a scaler explicitly or use the pix_fmt argument so 
the format filter gets put into the pipeline.

> 
> Also why should this be in one encoder and not be available to other
> encoders supporting 4:2:2 input ?
> A solution should work for all of them

I would assume this would really only be helpful in encoders which only support 
4:2:2 and not 4:2:0, since typical encoders that accept 4:2:0 would preserve 
that in their resulting encoding (i.e. they wouldn’t blindly upscale 4:2:0 to 
4:2:2 for no good reason).

I did actually consider doing a separate filter which just does packed/planar 
conversion and 4:2:0 to 4:2:2 (as opposed to swscaler).  In this case though 
the additional modularity in such a filter was outweighed by my goal to 
minimize the number of times I’m copying the frame data.  Combining it with the 
v210 encoding meant only a single pass over the data.
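
For reference, the pixel work being folded in amounts to roughly the following
(a sketch of the idea, not the literal v210enc code; called once per chroma
plane, with width in chroma samples):

    #include <stdint.h>
    #include <string.h>

    static void expand_chroma_420_to_422(const uint8_t *src, int src_stride,
                                         uint8_t *dst, int dst_stride,
                                         int width, int height, int interlaced)
    {
        for (int y = 0; y < height; y++) {
            /* nearest neighbor: each 4:2:0 chroma line feeds two output lines;
             * for interlaced content the duplication stays within one field */
            int src_y = interlaced ? (y & 1) + (y >> 2) * 2 : y >> 1;
            memcpy(dst + y * dst_stride, src + src_y * src_stride, width);
        }
    }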

> 
> Iam not sure what is the best solution but simply hardcoding this in
> one encoder feels rather wrong

The scale filter performs three basic roles:
1.  Scaling
2.  Packed to planar conversion (or vice versa)
3.  Colorspace conversion

I suppose someone could potentially redesign swscale to include the option to 
not take the slow path for cases where scaling isn’t actually required (i.e. 
cases where only 2 and 3 are needed).

Just so we’re all on the same page - this wasn’t a case of random or premature 
optimization.  I have a specific use case where I’m decoding four instances of 
1080p60 video and the platform can’t keep up without this change.  It’s the 
result of actually profiling the entire pipeline as opposed to some unit test 
with a benchmark.  In fact I don’t particularly agree with Limin's numbers 
where he used the benchmark option, since that fails to take into account 
caching behavior or memory bandwidth on a platform which is constrained (a 
problem which is exacerbated when running multiple instances).  In a perfect 
world we would have very small operations which each perform some discrete 
function, and we can combine all of those in a pipeline.  In the real world 
though, significant benefits can be gained by combining certain operations to 
avoid copying the same pixels over and over again.

Devin

Re: [FFmpeg-devel] yadif frame doubling - incorrect closed captioning

2019-01-14 Thread Devin Heitmueller
On Mon, Jan 14, 2019 at 3:31 AM Michael Niedermayer
 wrote:
> Thus a new function should be added which does all this, and that then
> be used

For what it's worth, the fix is actually incorrect both here and in
vf_fps.  When doubling the framerate, the correct behavior is to split
the content across both frames and cut the cc_count in half (it's
actually more tricky than that because certain entries have to be
moved to the front of the array).

That said, using a centralized function is a step in the right
direction.  It should probably be in libavutil, given it will need to
be available to both filters and formats (and potentially codecs as
well if we wanted them to proactively fix cases where they receive an
invalid cc_count).  That said, in order to do the transformation
properly it would need to receive the target framerate as well as be
able to maintain some state (since reducing the framerate requires
content to be cached in order to combine one or more entries).

In short, a centralized function would be good, but we probably need
to think through what the API looks like so we don't have to introduce
a new API in libavutil and then deprecate it once we want to make the
splitting/combining logic work according to the spec.
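
Just to make the state/target-framerate requirement concrete, I'm picturing
something shaped roughly like this (names entirely invented here, nothing like
it exists today):

    #include <stdint.h>
    #include "libavutil/rational.h"

    typedef struct AVCaptionFifo AVCaptionFifo;   /* holds queued cc triplets */

    AVCaptionFifo *av_caption_fifo_alloc(AVRational dst_frame_rate);
    /* queue the A53 side data attached to an incoming frame */
    int  av_caption_fifo_push(AVCaptionFifo *f, const uint8_t *cc_data, int cc_count);
    /* fill one output frame's worth of triplets, splitting or merging as needed */
    int  av_caption_fifo_pop(AVCaptionFifo *f, uint8_t *cc_data, int max_cc_count);
    void av_caption_fifo_free(AVCaptionFifo **f);

That way filters like yadif and fps would only ever talk to the helper instead
of doing the cc_count math themselves.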

Devin

-- 
Devin J. Heitmueller - Kernel Labs
http://www.kernellabs.com


Re: [FFmpeg-devel] yadif frame doubling - incorrect closed captioning

2019-01-14 Thread Devin Heitmueller
> > In short, a centralized function would be good, but we probably need
> > to think through what the API looks like so we don't have to introduce
> > a new API in libavutil and then deprecate it once we want to make the
> > splitting/combining logic work according to the spec.
>
> I fear that no matter how hard we try we will likely eventually run
> into cases that it cannot handle

Oh, I completely agree.  Some captions are just going to be too
screwed up to render.  However I think there are cases we can recover
from, and in particular I would like to make sure that streams created
with versions of ffmpeg before this patch continue to play properly
(i.e. where the stream only has caption data on every other frame and
the cc_count is 2x what it is supposed to be).

> also maybe 2 functions would keep this simpler
> one to deal with temporal transformations (frame drop, duplicate, interpolate,
> combine)
> one to deal with spatial transformations, crop, pad, scale, rotate

There shouldn't be any need for a function for spatial
transformations.  The expected cc_count is unrelated to the resolution
of the video.  It's tied exclusively to the framerate and whether you
are doing frame or field-level encoding when the video is interlaced.
The underlying goal was to ensure that the bitrate of the caption data
is a constant 9600bps and thus they wanted it to be spread evenly
across the frames/fields.  When people encounter problems going from
1080i to 720p for example, it's because that also involves framerate
conversion, not because the resolution of the individual video frames
is being changed.
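
To put numbers on that: 9600 bps works out to 600 two-byte cc_data pairs per
second, so the per-frame count is just 600 divided by the frame rate, e.g. 20
at 29.97 fps and 10 at 59.94 fps.  As a sketch, using the usual libavutil
helper:

    #include "libavutil/mathematics.h"
    #include "libavutil/rational.h"

    /* 9600 bit/s / (2 bytes * 8 bits) = 600 cc_data pairs per second */
    static int expected_cc_count(AVRational frame_rate)
    {
        return (int)av_rescale(600, frame_rate.den, frame_rate.num);
    }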

Devin

-- 
Devin J. Heitmueller - Kernel Labs
http://www.kernellabs.com


Re: [FFmpeg-devel] yadif frame doubling - incorrect closed captioning

2019-01-14 Thread Devin Heitmueller
On Mon, Jan 14, 2019 at 12:45 PM Michael Niedermayer
 wrote:
> well, there are other types of side data too. for example motion
> vectors would need to be updated on crop depending on which rectangle
> is croped out

Ah, yes, of course.  I thought we were just discussing captions.

So are you proposing a single general utility function which you would
pass AVFrames, and it would perform changes to any side data that is
present (e.g. captions, MVs, etc)?  I had assumed we would introduce a
function specific to captions, which can be reused without having an
underlying AVFrame.

Devin

-- 
Devin J. Heitmueller - Kernel Labs
http://www.kernellabs.com


Re: [FFmpeg-devel] Decklink Output Problem

2019-01-30 Thread Devin Heitmueller
Hello Deron,

> 
> Either way, why would the default behavior ever be to drop even a single 
> frame unless specifically aborted by the user? Seems sloppy, and when half 
> second fades at the end are dropped it makes the video ending abrupt. The 
> current driver would be useful only in a scenario where ffmpeg was used to 
> generate a final HD-SDI video stream. The tool is much more capable than that.

I think part of this issue is a side-effect of the way the 
libavformat/libavdevice API is designed.  The API has a calls for 
write_header(), write_packet(), and write_trailer().  There is the ability to 
explicitly flush out any pending buffers, but the API doesn’t have an explicit 
way to close down the output and throw away any pending video frames.

There are numerous cases where you would want to exit immediately without 
putting out pending video buffers (such as a realtime decoder where you don’t 
want to push out another 1 second of video).

The ffmpeg command line tool basically treats SIGTERM and SIGINT in the same 
manner - they empty out their mux queue and call write_trailer(), which today 
in the case of the decklink module results in it terminating immediately 
without flushing any pending buffers.

One approach that could be taken would be to change the ffmpeg program to 
explicitly call a flush before calling write_trailer() when receiving SIGTERM 
or at the end of input.  This would result in all video frames being put out 
and then write_trailer() would continue to stop output immediately on receipt.  
In the SIGINT case though, it would call write_trailer() without flushing 
pending buffers, which I suspect more closely matches a user’s expectations (“I 
said interrupt the process immediately”).  This is a more invasive change 
though since changing the ffmpeg program impacts all other inputs and outputs.  
Also, flush commands can be received at other points, and blocking while output 
gets flushed could cause stalling in the pipeline (which is really bad when 
reading from a realtime input source).
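
In rough pseudo-code the shutdown path would become something like this (names
invented, not what ffmpeg.c does today):

    /* Drain the muxer on normal EOF/SIGTERM, but skip the flush on SIGINT so
     * pending frames really are dropped immediately. */
    if (!got_sigint)
        av_write_frame(oc, NULL);   /* NULL = flush request, honored by muxers
                                       that set AVFMT_ALLOW_FLUSH */
    av_write_trailer(oc);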

The whole pre-roll logic in the decklink output needs more work in general.  
There’s also a bug where we track the last_pts, but we don’t distinguish 
between PTS values coming from video versus audio (which are clocked at 
different rates), and thus the Stop call we sometimes hang.  I’ve cleaned up a 
good bit of those sorts of issues in a private tree but haven’t submitted the 
patches upstream yet.

In general I agree with the sentiment that the use case you’re describing 
should be handled - it’s largely just a question of how we do that within the 
constraints of a relatively inflexible API without breaking anything else.

Devin

---
Devin Heitmueller - LTN Global Communications
dheitmuel...@ltnglobal.com



Re: [FFmpeg-devel] [PATCH] configure: request explicitly enabled components

2019-02-05 Thread Devin Heitmueller
On Tue, Feb 5, 2019 at 6:29 AM Carl Eugen Hoyos  wrote:
> How would this be better than printing a warning if the feature
> could not be enabled as it is already done in some situations?

In most systems I've worked with, if I say "enable something" and it
cannot be enabled I want the ./configure to exit out immediately with
a non-zero error code.  Especially when doing scripted builds, the
last thing I want to do is have the script run to completion and then
have to pipe stderr to a file and grep the output for warnings that
something didn't get included.

That's just my opinion though.  :-)

Devin

-- 
Devin J. Heitmueller - Kernel Labs
http://www.kernellabs.com


Re: [FFmpeg-devel] [PATCH] mpeg12enc: Use Closed Captions if available

2019-02-07 Thread Devin Heitmueller

> On Feb 7, 2019, at 1:11 PM, Mathieu Duponchelle  
> wrote:
> +if (side_data->size <= 96) {

Isn’t this calculation incorrect?  The max cc_count possible is 31 (0x1F), 
hence the max size should be 93.

> +int i = 0;
> +
> +put_header (s, USER_START_CODE);
> +
> +put_bits(&s->pb, 8, 'G');   // 
> user_identifier
> +put_bits(&s->pb, 8, 'A');
> +put_bits(&s->pb, 8, '9');
> +put_bits(&s->pb, 8, '4');
> +put_bits(&s->pb, 8, 3); // 
> user_data_type_code
> +put_bits(&s->pb, 8,
> +((side_data->size / 3) & 0x1f) | 0x40); // flags, 
> cc_count
> +put_bits(&s->pb, 8, 0xff);  // em_data


---
Devin Heitmueller - LTN Global Communications
dheitmuel...@ltnglobal.com




Re: [FFmpeg-devel] [PATCH] mpeg12enc: Use Closed Captions if available

2019-02-07 Thread Devin Heitmueller
> On Feb 7, 2019, at 1:22 PM, Mathieu Duponchelle  
> wrote:
> 
> 
> 
> On 2/7/19 7:21 PM, Devin Heitmueller wrote:
>> Isn’t this calculation incorrect?  The max cc_count possible is 31 (0x1F), 
>> hence the max size should be 93.
>> 
> 
> True that, updating

Not to nitpick, but it might also be worthwhile to create some #define such as 
MAX_CC_COUNT and have the comparison be "MAX_CC_COUNT * 3”.  That makes clear 
where the magic value “91” came from, and the compiler will optimize out the 
multiply anyway since it’s a constant.

Devin

---
Devin Heitmueller - LTN Global Communications
dheitmuel...@ltnglobal.com




Re: [FFmpeg-devel] [PATCH v4] mpeg12enc: Use Closed Captions if available

2019-02-08 Thread Devin Heitmueller
> On Feb 8, 2019, at 9:47 AM, Mathieu Duponchelle  
> wrote:
> 
> On 2/8/19 11:58 AM, Michael Niedermayer wrote:
>> what if size is not a multiple of 3 ?
> 
> Good point, more bytes will be written than advertised. Do you reckon
> the input should be straight up refused? The other solution is to warn then
> iterate over side_data->size rounded down to the closest 3 multiple.

Yeah, so there are all sorts of ways the content could be screwed up and I 
don’t think we want to get into the business of having every encoder module try 
to validate it.  That said, a quick length check is reasonable to avoid a 
possible buffer overflow, so I would just write a log message and throw the 
entire array on the floor.
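
In other words, something along these lines (sketch only, reusing the patch's
variable names):

    #define MAX_CC_COUNT 31   /* largest value a 5-bit cc_count field can hold */

    if (side_data->size % 3 || side_data->size > MAX_CC_COUNT * 3) {
        av_log(s->avctx, AV_LOG_WARNING,
               "Malformed A53 side data (%d bytes), not writing it\n",
               (int)side_data->size);
    } else {
        /* emit the GA94 user_data as before */
    }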

Devin

---
Devin Heitmueller - LTN Global Communications
dheitmuel...@ltnglobal.com



Re: [FFmpeg-devel] [PATCH v7] mpeg12enc: Use Closed Captions if available

2019-02-16 Thread Devin Heitmueller
> It would be better to test against a decoder from a unrelated codebase
> Otherwise its a bit like testing your new language skills by talking with
> yourself.

It should be pretty easy to just play the resulting TS in VLC and
confirm the captions are present and play correctly.

Devin

-- 
Devin J. Heitmueller - Kernel Labs
http://www.kernellabs.com


Re: [FFmpeg-devel] AVPixelFormat for YUYV 10 bit.

2019-03-25 Thread Devin Heitmueller
Hello Yufei,


> On Mar 25, 2019, at 9:53 AM, Yufei He  wrote:
> 
> Hi
> 
> I think there may be one format missing in AVPixelFormat.
> 
> AV_PIX_FMT_YUYV210
> 
> 
> 
> 
> The start of each line in the V210 video buffer format must be
> aligned to a multiple of 48 pixels (128 bytes). This means that if a line is 
> not a
> multiple of 48, each line of video data must be padded out to the nearest 48 
> pixel
> boundary. For example, a 1280 × 720 video buffer is 1280 pixels wide, which 
> is not a
> multiple of 48. Each line of the video buffer must be padded to 1296 pixels 
> (3456
> bytes) in order to make each line a multiple of 48.
> 
> It's popular when video is in 10bit.
> 

So V210 video is supported, but not as a pixel format.  It’s treated as a 
packet format.  Hence in order to use v210 video, you have to pass the video 
frame through the v210enc or v210 decoder module, at which point you end up 
with video in AV_PIX_FMT_YUV422P10 format.

We can argue about whether that was the right approach, but that’s what is done 
in ffmpeg today, and we use it regularly with the Blackmagic cards (i.e. you 
can look at libavdevice/decklink_enc.cpp, decklink_dec.cpp for usage).
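
As a rough sketch (error handling omitted; the buffer and dimensions are
whatever your V210 source hands you), the decode path looks like this:

    #include "libavcodec/avcodec.h"

    static AVFrame *v210_to_yuv422p10(const uint8_t *buf, int size, int w, int h)
    {
        const AVCodec *codec = avcodec_find_decoder(AV_CODEC_ID_V210);
        AVCodecContext *c    = avcodec_alloc_context3(codec);
        AVPacket *pkt        = av_packet_alloc();
        AVFrame *frame       = av_frame_alloc();

        c->width  = w;
        c->height = h;
        avcodec_open2(c, codec, NULL);

        pkt->data = (uint8_t *)buf;   /* the padded-to-48-pixel buffer */
        pkt->size = size;
        avcodec_send_packet(c, pkt);
        avcodec_receive_frame(c, frame);  /* frame->format == AV_PIX_FMT_YUV422P10 */

        av_packet_free(&pkt);
        avcodec_free_context(&c);
        return frame;
    }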

Devin

---
Devin Heitmueller - LTN Global Communications
dheitmuel...@ltnglobal.com


Re: [FFmpeg-devel] [PATCH] Add A53 Closed Captions to MPEG header if they are available.

2017-06-09 Thread Devin Heitmueller
Hello Marton,

On Tue, Jun 6, 2017 at 5:45 PM, Marton Balint  wrote:

> As far as I remember multiple side data of the same type is not something we
> wanted to support. Why do you need it? Can't a single AV_FRAME_DATA_A53_CC
> side data packet contain many CC entries?

Could you please expand on where this was discussed?  Is there any
design documentation for side data infrastructure that's been
introduced into ffmpeg?  Is there some list of other known design
constraints/limitations?

While I agree it would be great to simply say that you should never
have multiple side data items of the same type, I'm not sure how
realistic that is.  It would be helpful if I could better understand
the rationale in that thinking.

I'm starting a rather large project which will likely stretch the
design for side data in order to support a variety of other ancillary
data protocols (e.g. SCTE-104, SMPTE 2038, etc), and it would be great
to better understand where the constraints are (so I can either plan
to address those issues, or if too significant then choose a different
multimedia framework to work with before making a significant
investment building out a bunch of features in ffmpeg).

Thanks,

Devin

-- 
Devin J. Heitmueller - Kernel Labs
http://www.kernellabs.com


[FFmpeg-devel] [PATCHv2 0/2] Add support for EIA-708/AFD on decklink output

2017-11-16 Thread Devin Heitmueller
This patch series hooks in the libklvanc library to provide support
for output of EIA-708 and AFD packets over SDI.

Patch 1 reflects feedback from Marton Balint 

Devin Heitmueller (2):
  libavdevice/decklink: Add support for EIA-708 output over SDI
  decklink: Add support for output of Active Format Description (AFD)

 configure   |   4 +
 libavcodec/avcodec.h|   6 ++
 libavcodec/v210enc.c|  23 +
 libavdevice/decklink_common.cpp |  17 +++-
 libavdevice/decklink_common.h   |  10 +++
 libavdevice/decklink_enc.cpp| 187 ++--
 6 files changed, 237 insertions(+), 10 deletions(-)

-- 
1.8.3.1



[FFmpeg-devel] [PATCH 1/2] libavdevice/decklink: Add support for EIA-708 output over SDI

2017-11-16 Thread Devin Heitmueller
Hook in libklvanc and use it for output of EIA-708 captions over
SDI.  The bulk of this patch is just general support for ancillary
data for the Decklink SDI module - the real work of constructing
the EIA-708 CDP and the VANC lines is done by libklvanc.

Libklvanc can be found at: https://github.com/stoth68000/libklvanc

Updated to reflect feedback from Marton Balint 

Signed-off-by: Devin Heitmueller 
---
 configure   |   4 ++
 libavcodec/v210enc.c|  12 
 libavdevice/decklink_common.cpp |  17 +++--
 libavdevice/decklink_common.h   |  10 +++
 libavdevice/decklink_enc.cpp| 150 ++--
 5 files changed, 183 insertions(+), 10 deletions(-)

diff --git a/configure b/configure
index 934ac3a..d5e3dcc 100755
--- a/configure
+++ b/configure
@@ -238,6 +238,7 @@ External library support:
   --enable-libiec61883 enable iec61883 via libiec61883 [no]
   --enable-libilbc enable iLBC de/encoding via libilbc [no]
   --enable-libjack enable JACK audio sound server [no]
+  --enable-libklvanc   enable Kernel Labs VANC processing [no]
   --enable-libkvazaar  enable HEVC encoding via libkvazaar [no]
   --enable-libmodplug  enable ModPlug via libmodplug [no]
   --enable-libmp3lame  enable MP3 encoding via libmp3lame [no]
@@ -1602,6 +1603,7 @@ EXTERNAL_LIBRARY_LIST="
 libiec61883
 libilbc
 libjack
+libklvanc
 libkvazaar
 libmodplug
 libmp3lame
@@ -3076,6 +3078,7 @@ decklink_deps_any="libdl LoadLibrary"
 decklink_indev_deps="decklink threads"
 decklink_indev_extralibs="-lstdc++"
 decklink_outdev_deps="decklink threads"
+decklink_outdev_suggest="libklvanc"
 decklink_outdev_extralibs="-lstdc++"
 libndi_newtek_indev_deps="libndi_newtek"
 libndi_newtek_indev_extralibs="-lndi"
@@ -5847,6 +5850,7 @@ enabled libgsm&& { for gsm_hdr in "gsm.h" 
"gsm/gsm.h"; do
check_lib libgsm "${gsm_hdr}" gsm_create 
-lgsm && break;
done || die "ERROR: libgsm not found"; }
 enabled libilbc   && require libilbc ilbc.h WebRtcIlbcfix_InitDecode 
-lilbc $pthreads_extralibs
+enabled libklvanc && require libklvanc libklvanc/vanc.h 
klvanc_context_create -lklvanc
 enabled libkvazaar&& require_pkg_config libkvazaar "kvazaar >= 0.8.1" 
kvazaar.h kvz_api_get
 # While it may appear that require is being used as a pkg-config
 # fallback for libmfx, it is actually being used to detect a different
diff --git a/libavcodec/v210enc.c b/libavcodec/v210enc.c
index a6afbbf..a825c03 100644
--- a/libavcodec/v210enc.c
+++ b/libavcodec/v210enc.c
@@ -123,6 +123,7 @@ static int encode_frame(AVCodecContext *avctx, AVPacket 
*pkt,
 int aligned_width = ((avctx->width + 47) / 48) * 48;
 int stride = aligned_width * 8 / 3;
 int line_padding = stride - ((avctx->width * 8 + 11) / 12) * 4;
+AVFrameSideData *side_data;
 int h, w, ret;
 uint8_t *dst;
 
@@ -233,6 +234,17 @@ static int encode_frame(AVCodecContext *avctx, AVPacket 
*pkt,
 }
 }
 
+side_data = av_frame_get_side_data(pic, AV_FRAME_DATA_A53_CC);
+if (side_data && side_data->size) {
+uint8_t* buf = av_packet_new_side_data(pkt, AV_PKT_DATA_A53_CC, 
side_data->size);
+if (buf) {
+memcpy(buf, side_data->data, side_data->size);
+} else {
+av_log(avctx, AV_LOG_ERROR, "Unable to allocate side data\n");
+return AVERROR(ENOMEM);
+}
+}
+
 pkt->flags |= AV_PKT_FLAG_KEY;
 *got_packet = 1;
 return 0;
diff --git a/libavdevice/decklink_common.cpp b/libavdevice/decklink_common.cpp
index 2bd63ac..c425f4a 100644
--- a/libavdevice/decklink_common.cpp
+++ b/libavdevice/decklink_common.cpp
@@ -247,10 +247,19 @@ int ff_decklink_set_format(AVFormatContext *avctx,
&support, NULL) != S_OK)
 return -1;
 } else {
-if (ctx->dlo->DoesSupportVideoMode(ctx->bmd_mode, bmdFormat8BitYUV,
-   bmdVideoOutputFlagDefault,
-   &support, NULL) != S_OK)
-return -1;
+ctx->supports_vanc = 1;
+if (ctx->dlo->DoesSupportVideoMode(ctx->bmd_mode, (BMDPixelFormat) 
ctx->raw_format,
+   bmdVideoOutputVANC,
+   &support, NULL) != S_OK) {
+/* Try again, but without VANC enabled */
+if (ctx->dlo->DoesSupportVideoMode(ctx->bmd_mode, (BMDPixelFormat) 
ctx->raw_format,
+   bmdVideoOutputFlagDefault,
+ 

[FFmpeg-devel] [PATCH 2/2] decklink: Add support for output of Active Format Description (AFD)

2017-11-16 Thread Devin Heitmueller
Implement support for including AFD in decklink output.  This
includes making sure the AFD data is preserved when going from
an AVFrame to a V210 packet (needed for 10-bit support).

Signed-off-by: Devin Heitmueller 
---
 libavcodec/avcodec.h |  6 ++
 libavcodec/v210enc.c | 11 +++
 libavdevice/decklink_enc.cpp | 41 +++--
 3 files changed, 56 insertions(+), 2 deletions(-)

diff --git a/libavcodec/avcodec.h b/libavcodec/avcodec.h
index 442b558..6981f07 100644
--- a/libavcodec/avcodec.h
+++ b/libavcodec/avcodec.h
@@ -1327,6 +1327,12 @@ enum AVPacketSideDataType {
 AV_PKT_DATA_A53_CC,
 
 /**
+ * Active Format Description data consisting of a single byte as specified
+ * in ETSI TS 101 154 using AVActiveFormatDescription enum.
+ */
+AV_PKT_DATA_AFD,
+
+/**
  * The number of side data elements (in fact a bit more than it).
  * This is not part of the public API/ABI in the sense that it may
  * change when new side data types are added.
diff --git a/libavcodec/v210enc.c b/libavcodec/v210enc.c
index a825c03..c28f7b1 100644
--- a/libavcodec/v210enc.c
+++ b/libavcodec/v210enc.c
@@ -245,6 +245,17 @@ static int encode_frame(AVCodecContext *avctx, AVPacket 
*pkt,
 }
 }
 
+side_data = av_frame_get_side_data(pic, AV_FRAME_DATA_AFD);
+if (side_data && side_data->size) {
+uint8_t* buf = av_packet_new_side_data(pkt, AV_PKT_DATA_AFD, 
side_data->size);
+if (buf) {
+memcpy(buf, side_data->data, side_data->size);
+} else {
+av_log(avctx, AV_LOG_ERROR, "Unable to allocate afd side data\n");
+return AVERROR(ENOMEM);
+}
+}
+
 pkt->flags |= AV_PKT_FLAG_KEY;
 *got_packet = 1;
 return 0;
diff --git a/libavdevice/decklink_enc.cpp b/libavdevice/decklink_enc.cpp
index 0dcbe79..070bfad 100644
--- a/libavdevice/decklink_enc.cpp
+++ b/libavdevice/decklink_enc.cpp
@@ -280,7 +280,8 @@ av_cold int ff_decklink_write_trailer(AVFormatContext 
*avctx)
 
 #if CONFIG_LIBKLVANC
 static int decklink_construct_vanc(AVFormatContext *avctx, struct decklink_ctx 
*ctx,
-   AVPacket *pkt, decklink_frame *frame)
+   AVPacket *pkt, decklink_frame *frame,
+   AVStream *st)
 {
 struct klvanc_line_set_s vanc_lines = { 0 };
 int ret, size;
@@ -334,6 +335,42 @@ static int decklink_construct_vanc(AVFormatContext *avctx, 
struct decklink_ctx *
 }
 }
 
+data = av_packet_get_side_data(pkt, AV_PKT_DATA_AFD, &size);
+if (data) {
+struct klvanc_packet_afd_s *pkt;
+uint16_t *afd;
+uint16_t len;
+
+ret = klvanc_create_AFD(&pkt);
+if (ret != 0)
+return AVERROR(ENOMEM);
+
+ret = klvanc_set_AFD_val(pkt, data[0]);
+if (ret != 0) {
+av_log(avctx, AV_LOG_ERROR, "Invalid AFD value specified: %d\n",
+   data[0]);
+klvanc_destroy_AFD(pkt);
+return AVERROR(EINVAL);
+}
+
+/* FIXME: Should really rely on the coded_width but seems like that
+   is not accessible to libavdevice outputs */
+if ((st->codecpar->width == 1280 && st->codecpar->height == 720) ||
+(st->codecpar->width == 1920 && st->codecpar->height == 1080))
+pkt->aspectRatio = ASPECT_16x9;
+else
+pkt->aspectRatio = ASPECT_4x3;
+
+klvanc_convert_AFD_to_words(pkt, &afd, &len);
+klvanc_destroy_AFD(pkt);
+
+ret = klvanc_line_insert(ctx->vanc_ctx, &vanc_lines, afd, len, 12, 0);
+if (ret != 0) {
+av_log(avctx, AV_LOG_ERROR, "VANC line insertion failed\n");
+return AVERROR(ENOMEM);
+}
+}
+
 IDeckLinkVideoFrameAncillary *vanc;
 int result = ctx->dlo->CreateAncillaryData(bmdFormat10BitYUV, &vanc);
 if (result != S_OK) {
@@ -429,7 +466,7 @@ static int decklink_write_video_packet(AVFormatContext 
*avctx, AVPacket *pkt)
 frame = new decklink_frame(ctx, avpacket, st->codecpar->codec_id, 
ctx->bmd_height, ctx->bmd_width);
 
 #if CONFIG_LIBKLVANC
-ret = decklink_construct_vanc(avctx, ctx, pkt, frame);
+ret = decklink_construct_vanc(avctx, ctx, pkt, frame, st);
 if (ret != 0) {
 av_log(avctx, AV_LOG_ERROR, "Failed to construct VANC\n");
 }
-- 
1.8.3.1



[FFmpeg-devel] [RFC PATCH 0/6] Decklink capture VANC improvements and AFD in libx264

2017-11-16 Thread Devin Heitmueller
The following patch series extends the decklink capture module to use
libklvanc for EIA-708, AFD, and SCTE-104 VANC messages.  It also
introduces support for generating multiple streams of audio, one for
each SDI pair.

The ffmpeg glue for libx264 has been improved to support encoding
of Active Format Description if present as side-data.  This was
needed in order to capture AFD on the decklink interface and have it
end up in the final TS (assuming an encoding use case).

Devin Heitmueller (6):
  decklink: Fix case where return value wasn't being set before checked
for errors
  decklink: Introduce support for capture of multiple audio streams
  Preserve AFD side data when going from AVPacket to AVFrame
  Support encoding of Active Format Description (AFD) in libx264
  Add support for using libklvanc from within decklink capture module
  decklink: Add support for SCTE-104 to decklink capture

 libavcodec/avcodec.h|   1 +
 libavcodec/codec_desc.c |   6 +
 libavcodec/decode.c |   1 +
 libavcodec/internal.h   |   3 +
 libavcodec/libx264.c|  38 -
 libavcodec/utils.c  |  36 +
 libavdevice/decklink_common.cpp |   9 ++
 libavdevice/decklink_common.h   |  14 +-
 libavdevice/decklink_common_c.h |   7 +
 libavdevice/decklink_dec.cpp| 331 
 libavdevice/decklink_dec_c.c|   4 +
 libavdevice/decklink_enc.cpp|   2 +-
 12 files changed, 417 insertions(+), 35 deletions(-)

-- 
1.8.3.1



[FFmpeg-devel] [PATCH 1/6] decklink: Fix case where return value wasn't being set before checked for errors

2017-11-16 Thread Devin Heitmueller
I missed an assignment which caused the error case to never be properly
checked.

Signed-off-by: Devin Heitmueller 
---
 libavdevice/decklink_enc.cpp | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/libavdevice/decklink_enc.cpp b/libavdevice/decklink_enc.cpp
index 070bfad..1fb986e 100644
--- a/libavdevice/decklink_enc.cpp
+++ b/libavdevice/decklink_enc.cpp
@@ -300,7 +300,7 @@ static int decklink_construct_vanc(AVFormatContext *avctx, 
struct decklink_ctx *
 if (ret != 0)
 return AVERROR(ENOMEM);
 
-klvanc_set_framerate_EIA_708B(pkt, ctx->bmd_tb_num, ctx->bmd_tb_den);
+ret = klvanc_set_framerate_EIA_708B(pkt, ctx->bmd_tb_num, 
ctx->bmd_tb_den);
 if (ret != 0) {
 av_log(avctx, AV_LOG_ERROR, "Invalid framerate specified: 
%lld/%lld\n",
ctx->bmd_tb_num, ctx->bmd_tb_den);
-- 
1.8.3.1



[FFmpeg-devel] [PATCH 3/6] Preserve AFD side data when going from AVPacket to AVFrame

2017-11-16 Thread Devin Heitmueller
This is needed to ensure that AFD data continues to work when
capturing V210 video with the Decklink libavdevice input.

Signed-off-by: Devin Heitmueller 
---
 libavcodec/decode.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/libavcodec/decode.c b/libavcodec/decode.c
index a7f1e23..e981651 100644
--- a/libavcodec/decode.c
+++ b/libavcodec/decode.c
@@ -1537,6 +1537,7 @@ int ff_init_buffer_info(AVCodecContext *avctx, AVFrame 
*frame)
 { AV_PKT_DATA_MASTERING_DISPLAY_METADATA, 
AV_FRAME_DATA_MASTERING_DISPLAY_METADATA },
 { AV_PKT_DATA_CONTENT_LIGHT_LEVEL,
AV_FRAME_DATA_CONTENT_LIGHT_LEVEL },
 { AV_PKT_DATA_A53_CC, AV_FRAME_DATA_A53_CC },
+{ AV_PKT_DATA_AFD,AV_FRAME_DATA_AFD },
 };
 
 if (pkt) {
-- 
1.8.3.1



[FFmpeg-devel] [PATCH 2/6] decklink: Introduce support for capture of multiple audio streams

2017-11-16 Thread Devin Heitmueller
Add support for capturing all audio pairs available
to the capture hardware.  Each pair is exposed as a different audio
stream, which matches up with the most common use cases for the
broadcast space (i.e. where there is one stereo pair per audio
language).

To support the existing use case where multi-channel audio can be
captured (i.e. 7.1), we introduced a new configuration option, which
defaults to the existing behavior.

Signed-off-by: Devin Heitmueller 
---
 libavdevice/decklink_common.cpp |   9 +++
 libavdevice/decklink_common.h   |   8 ++-
 libavdevice/decklink_common_c.h |   6 ++
 libavdevice/decklink_dec.cpp| 134 +++-
 libavdevice/decklink_dec_c.c|   3 +
 5 files changed, 130 insertions(+), 30 deletions(-)

diff --git a/libavdevice/decklink_common.cpp b/libavdevice/decklink_common.cpp
index c425f4a..050b839 100644
--- a/libavdevice/decklink_common.cpp
+++ b/libavdevice/decklink_common.cpp
@@ -473,5 +473,14 @@ int ff_decklink_init_device(AVFormatContext *avctx, const 
char* name)
 return AVERROR_EXTERNAL;
 }
 
+if (ctx->attr->GetInt(BMDDeckLinkMaximumAudioChannels, 
&ctx->max_audio_channels) != S_OK) {
+av_log(avctx, AV_LOG_WARNING, "Could not determine number of audio 
channels\n");
+ctx->max_audio_channels = 0;
+}
+if (ctx->max_audio_channels > DECKLINK_MAX_AUDIO_CHANNELS) {
+av_log(avctx, AV_LOG_WARNING, "Decklink card reported support for more 
channels than ffmpeg supports\n");
+ctx->max_audio_channels = DECKLINK_MAX_AUDIO_CHANNELS;
+}
+
 return 0;
 }
diff --git a/libavdevice/decklink_common.h b/libavdevice/decklink_common.h
index e951b17..bbe4deb 100644
--- a/libavdevice/decklink_common.h
+++ b/libavdevice/decklink_common.h
@@ -31,6 +31,10 @@
 #include "libklvanc/vanc.h"
 #endif
 
+/* Maximum number of channels possible across variants of Blackmagic cards.
+   Actual number for any particular model of card may be lower */
+#define DECKLINK_MAX_AUDIO_CHANNELS 32
+
 class decklink_output_callback;
 class decklink_input_callback;
 
@@ -65,6 +69,7 @@ struct decklink_ctx {
 int bmd_height;
 int bmd_field_dominance;
 int supports_vanc;
+int64_t max_audio_channels;
 
 /* Capture buffer queue */
 AVPacketQueue queue;
@@ -79,7 +84,8 @@ struct decklink_ctx {
 int64_t last_pts;
 unsigned long frameCount;
 unsigned int dropped;
-AVStream *audio_st;
+AVStream *audio_st[DECKLINK_MAX_AUDIO_CHANNELS];
+int num_audio_streams;
 AVStream *video_st;
 AVStream *teletext_st;
 uint16_t cdp_sequence_num;
diff --git a/libavdevice/decklink_common_c.h b/libavdevice/decklink_common_c.h
index 368ac25..02011ed 100644
--- a/libavdevice/decklink_common_c.h
+++ b/libavdevice/decklink_common_c.h
@@ -30,6 +30,11 @@ typedef enum DecklinkPtsSource {
 PTS_SRC_WALLCLOCK = 4,
 } DecklinkPtsSource;
 
+typedef enum DecklinkAudioMode {
+AUDIO_MODE_DISCRETE = 0,
+AUDIO_MODE_PAIRS = 1,
+} DecklinkAudioMode;
+
 struct decklink_cctx {
 const AVClass *cclass;
 
@@ -42,6 +47,7 @@ struct decklink_cctx {
 double preroll;
 int v210;
 int audio_channels;
+int audio_mode;
 int audio_depth;
 int duplex_mode;
 DecklinkPtsSource audio_pts_source;
diff --git a/libavdevice/decklink_dec.cpp b/libavdevice/decklink_dec.cpp
index e90b428..11b7e60 100644
--- a/libavdevice/decklink_dec.cpp
+++ b/libavdevice/decklink_dec.cpp
@@ -625,9 +625,54 @@ static int64_t get_pkt_pts(IDeckLinkVideoInputFrame 
*videoFrame,
 return pts;
 }
 
+static int setup_audio(AVFormatContext *avctx)
+{
+struct decklink_cctx *cctx = (struct decklink_cctx *)avctx->priv_data;
+struct decklink_ctx *ctx = (struct decklink_ctx *)cctx->ctx;
+AVStream *st;
+int ret = 0;
+
+if (cctx->audio_mode == AUDIO_MODE_DISCRETE) {
+st = avformat_new_stream(avctx, NULL);
+if (!st) {
+av_log(avctx, AV_LOG_ERROR, "Cannot add stream\n");
+ret = AVERROR(ENOMEM);
+goto error;
+}
+st->codecpar->codec_type  = AVMEDIA_TYPE_AUDIO;
+st->codecpar->codec_id= ctx->audio_depth == 32 ? 
AV_CODEC_ID_PCM_S32LE : AV_CODEC_ID_PCM_S16LE;
+st->codecpar->sample_rate = bmdAudioSampleRate48kHz;
+st->codecpar->channels= cctx->audio_channels;
+avpriv_set_pts_info(st, 64, 1, 100);  /* 64 bits pts in us */
+ctx->audio_st[0] = st;
+ctx->num_audio_streams++;
+} else {
+for (int i = 0; i < ctx->max_audio_channels / 2; i++) {
+st = avformat_new_stream(avctx, NULL);
+if (!st) {
+av_log(avctx, AV_LOG_ERROR, "Cannot add stream %d\n", i);
+ret = AVERROR(ENOMEM);
+goto error;
+}
+st->codecpar->codec_type  = AVMEDIA_TYPE_AUDI

[FFmpeg-devel] [PATCH 6/6] decklink: Add support for SCTE-104 to decklink capture

2017-11-16 Thread Devin Heitmueller
Make use of libklvanc to parse SCTE-104 packets and announce them
as a new stream.  Right now we just pass the payload straight
through, but once this is hooked into libklscte35 we'll be able
to generate SCTE-35 messages in the MPEG TS stream.

Note that this feature needs to be explicitly enabled by the user
through the "-enable_scte_104" option, since we cannot autodetect
the presence of SCTE-104 (because unlike with 708/AFD, messages are
not present except when a trigger occurs, thus the stream wouldn't get
created during the read_header phase).

Signed-off-by: Devin Heitmueller 
---
 libavcodec/avcodec.h|  1 +
 libavcodec/codec_desc.c |  6 
 libavdevice/decklink_common.h   |  6 
 libavdevice/decklink_common_c.h |  1 +
 libavdevice/decklink_dec.cpp| 64 -
 libavdevice/decklink_dec_c.c|  1 +
 6 files changed, 78 insertions(+), 1 deletion(-)

diff --git a/libavcodec/avcodec.h b/libavcodec/avcodec.h
index 6981f07..453b6be 100644
--- a/libavcodec/avcodec.h
+++ b/libavcodec/avcodec.h
@@ -667,6 +667,7 @@ enum AVCodecID {
 AV_CODEC_ID_TTF = 0x18000,
 
 AV_CODEC_ID_SCTE_35, ///< Contain timestamp estimated through PCR of 
program stream.
+AV_CODEC_ID_SCTE_104,
 AV_CODEC_ID_BINTEXT= 0x18800,
 AV_CODEC_ID_XBIN,
 AV_CODEC_ID_IDF,
diff --git a/libavcodec/codec_desc.c b/libavcodec/codec_desc.c
index c3688de..e198985 100644
--- a/libavcodec/codec_desc.c
+++ b/libavcodec/codec_desc.c
@@ -3103,6 +3103,12 @@ static const AVCodecDescriptor codec_descriptors[] = {
 .name  = "scte_35",
 .long_name = NULL_IF_CONFIG_SMALL("SCTE 35 Message Queue"),
 },
+{
+.id= AV_CODEC_ID_SCTE_104,
+.type  = AVMEDIA_TYPE_DATA,
+.name  = "scte_104",
+.long_name = NULL_IF_CONFIG_SMALL("SCTE 104 Digital Program 
Insertion"),
+},
 
 /* deprecated codec ids */
 };
diff --git a/libavdevice/decklink_common.h b/libavdevice/decklink_common.h
index bbe4deb..3ecdb19 100644
--- a/libavdevice/decklink_common.h
+++ b/libavdevice/decklink_common.h
@@ -35,6 +35,10 @@
Actual number for any particular model of card may be lower */
 #define DECKLINK_MAX_AUDIO_CHANNELS 32
 
+/* This isn't actually tied to the Blackmagic hardware - it's an arbitrary
+   number used to size the array of streams */
+#define DECKLINK_MAX_DATA_STREAMS 16
+
 class decklink_output_callback;
 class decklink_input_callback;
 
@@ -86,6 +90,8 @@ struct decklink_ctx {
 unsigned int dropped;
 AVStream *audio_st[DECKLINK_MAX_AUDIO_CHANNELS];
 int num_audio_streams;
+AVStream *data_st[DECKLINK_MAX_DATA_STREAMS];
+int num_data_streams;
 AVStream *video_st;
 AVStream *teletext_st;
 uint16_t cdp_sequence_num;
diff --git a/libavdevice/decklink_common_c.h b/libavdevice/decklink_common_c.h
index 02011ed..cb73ec9 100644
--- a/libavdevice/decklink_common_c.h
+++ b/libavdevice/decklink_common_c.h
@@ -58,6 +58,7 @@ struct decklink_cctx {
 char *format_code;
 int raw_format;
 int64_t queue_size;
+int enable_scte_104;
 };
 
 #endif /* AVDEVICE_DECKLINK_COMMON_C_H */
diff --git a/libavdevice/decklink_dec.cpp b/libavdevice/decklink_dec.cpp
index bea9213..47323b1 100644
--- a/libavdevice/decklink_dec.cpp
+++ b/libavdevice/decklink_dec.cpp
@@ -670,6 +670,33 @@ error:
 return ret;
 }
 
+static int setup_data(AVFormatContext *avctx)
+{
+struct decklink_cctx *cctx = (struct decklink_cctx *)avctx->priv_data;
+struct decklink_ctx *ctx = (struct decklink_ctx *)cctx->ctx;
+AVStream *st;
+int ret = 0;
+
+if (cctx->enable_scte_104) {
+st = avformat_new_stream(avctx, NULL);
+if (!st) {
+av_log(avctx, AV_LOG_ERROR, "Cannot add data stream\n");
+ret = AVERROR(ENOMEM);
+goto error;
+}
+st->codecpar->codec_type  = AVMEDIA_TYPE_DATA;
+st->time_base.den = ctx->bmd_tb_den;
+st->time_base.num = ctx->bmd_tb_num;
+st->codecpar->codec_id= AV_CODEC_ID_SCTE_104;
+avpriv_set_pts_info(st, 64, 1, 100);  /* 64 bits pts in us */
+ctx->data_st[ctx->num_data_streams] = st;
+ctx->num_data_streams++;
+}
+
+error:
+return ret;
+}
+
 #if CONFIG_LIBKLVANC
 /* VANC Callbacks */
 struct vanc_cb_ctx {
@@ -735,12 +762,44 @@ static int cb_EIA_708B(void *callback_context, struct 
klvanc_context_s *ctx,
 return 0;
 }
 
+static int cb_SCTE_104(void *callback_context, struct klvanc_context_s *ctx,
+   struct klvanc_packet_scte_104_s *pkt)
+{
+struct vanc_cb_ctx *cb_ctx = (struct vanc_cb_ctx *)callback_context;
+decklink_cctx *decklink_cctx = (struct decklink_cctx 
*)cb_ctx->avctx->priv_data;
+struct decklink_ctx *decklink_ctx = (struct decklink_ctx 
*)decklink_cctx->ctx;
+A

[FFmpeg-devel] [PATCH 5/6] Add support for using libklvanc from within decklink capture module

2017-11-16 Thread Devin Heitmueller
Make use of libklvanc from within the decklink capture module,
initially for EIA-708 and AFD.  Support for other VANC types will
come in subsequent patches.

Signed-off-by: Devin Heitmueller 
---
 libavdevice/decklink_dec.cpp | 135 +++
 1 file changed, 135 insertions(+)

diff --git a/libavdevice/decklink_dec.cpp b/libavdevice/decklink_dec.cpp
index 11b7e60..bea9213 100644
--- a/libavdevice/decklink_dec.cpp
+++ b/libavdevice/decklink_dec.cpp
@@ -3,6 +3,7 @@
  * Copyright (c) 2013-2014 Luca Barbato, Deti Fliegl
  * Copyright (c) 2014 Rafaël Carré
  * Copyright (c) 2017 Akamai Technologies, Inc.
+ * Copyright (c) 2017 LTN Global Communications, Inc.
  *
  * This file is part of FFmpeg.
  *
@@ -669,10 +670,128 @@ error:
 return ret;
 }
 
+#if CONFIG_LIBKLVANC
+/* VANC Callbacks */
+struct vanc_cb_ctx {
+AVFormatContext *avctx;
+AVPacket *pkt;
+};
+static int cb_AFD(void *callback_context, struct klvanc_context_s *ctx,
+  struct klvanc_packet_afd_s *pkt)
+{
+struct vanc_cb_ctx *cb_ctx = (struct vanc_cb_ctx *)callback_context;
+uint8_t *afd;
+
+afd = (uint8_t *)av_malloc(1);
+if (afd == NULL)
+return AVERROR(ENOMEM);
+
+afd[0] = pkt->hdr.payload[0] >> 3;
+if (av_packet_add_side_data(cb_ctx->pkt, AV_PKT_DATA_AFD, afd, 1) < 0)
+av_free(afd);
+
+return 0;
+}
+
+static int cb_EIA_708B(void *callback_context, struct klvanc_context_s *ctx,
+   struct klvanc_packet_eia_708b_s *pkt)
+{
+struct vanc_cb_ctx *cb_ctx = (struct vanc_cb_ctx *)callback_context;
+decklink_cctx *cctx = (struct decklink_cctx *)cb_ctx->avctx->priv_data;
+struct decklink_ctx *decklink_ctx = (struct decklink_ctx *)cctx->ctx;
+
+uint16_t expected_cdp;
+uint8_t *cc;
+
+if (!pkt->checksum_valid)
+return 0;
+
+if (!pkt->header.ccdata_present)
+return 0;
+
+expected_cdp = decklink_ctx->cdp_sequence_num + 1;
+decklink_ctx->cdp_sequence_num = pkt->header.cdp_hdr_sequence_cntr;
+if (pkt->header.cdp_hdr_sequence_cntr != expected_cdp) {
+av_log(cb_ctx->avctx, AV_LOG_DEBUG,
+   "CDP counter inconsistent.  Received=0x%04x Expected=%04x\n",
+   pkt->header.cdp_hdr_sequence_cntr, expected_cdp);
+return 0;
+}
+
+cc = (uint8_t *)av_malloc(pkt->ccdata.cc_count * 3);
+if (cc == NULL)
+return AVERROR(ENOMEM);
+
+for (int i = 0; i < pkt->ccdata.cc_count; i++) {
+cc[3*i] = 0xf8 | (pkt->ccdata.cc[i].cc_valid ? 0x04 : 0x00) |
+  (pkt->ccdata.cc[i].cc_type & 0x03);
+cc[3*i+1] = pkt->ccdata.cc[i].cc_data[0];
+cc[3*i+2] = pkt->ccdata.cc[i].cc_data[1];
+}
+
+if (av_packet_add_side_data(cb_ctx->pkt, AV_PKT_DATA_A53_CC, cc, 
pkt->ccdata.cc_count * 3) < 0)
+av_free(cc);
+
+return 0;
+}
+
+static struct klvanc_callbacks_s callbacks =
+{
+.afd   = cb_AFD,
+.eia_708b  = cb_EIA_708B,
+.eia_608   = NULL,
+.scte_104  = NULL,
+.all   = NULL,
+.kl_i64le_counter  = NULL,
+};
+/* End: VANC Callbacks */
+
+/* Take one line of V210 from VANC, colorspace convert and feed it to the
+ * VANC parser. We'll expect our VANC message callbacks to happen on this
+ * same calling thread.
+ */
+static void klvanc_handle_line(AVFormatContext *avctx, struct klvanc_context_s 
*vanc_ctx,
+   unsigned char *buf, unsigned int uiWidth, 
unsigned int lineNr,
+   AVPacket *pkt)
+{
+/* Convert the vanc line from V210 to CrCB422, then vanc parse it */
+
+/* We need two kinds of type pointers into the source vbi buffer */
+/* TODO: What the hell is this, two ptrs? */
+const uint32_t *src = (const uint32_t *)buf;
+
+/* Convert Blackmagic pixel format to nv20.
+ * src pointer gets mangled during conversion, hence we need its own
+ * ptr instead of passing vbiBufferPtr.
+ * decoded_words should be atleast 2 * uiWidth.
+ */
+uint16_t decoded_words[16384];
+
+/* On output each pixel will be decomposed into three 16-bit words (one 
for Y, U, V) */
+memset(&decoded_words[0], 0, sizeof(decoded_words));
+uint16_t *p_anc = decoded_words;
+if (klvanc_v210_line_to_nv20_c(src, p_anc, sizeof(decoded_words), (uiWidth 
/ 6) * 6) < 0)
+return;
+
+if (vanc_ctx) {
+struct vanc_cb_ctx cb_ctx = {
+.avctx = avctx,
+.pkt = pkt
+};
+vanc_ctx->callback_context = &cb_ctx;
+int ret = klvanc_packet_parse(vanc_ctx, lineNr, decoded_words, 
sizeof(decoded_words) / (sizeof(unsigned short)));
+if (ret < 0) {
+/* No VANC on this line */
+}
+}
+}
+#endif
+
 HRESULT decklink_input_callback::VideoInputFrameArrived(
 IDeckLinkVideoInputFram

[FFmpeg-devel] [PATCH 4/6] Support encoding of Active Format Description (AFD) in libx264

2017-11-16 Thread Devin Heitmueller
If AFD side data is present, include it in an H.264 SEI payload when
encoding with libx264.

This is done in the same manner that we currently handle A53 closed
captions (where the business logic for constructing the SEI is in
libavcodec/utils.c), so it should be portable to the other encoder
types (i.e. videotoolbox, etc).

Signed-off-by: Devin Heitmueller 
---
 libavcodec/internal.h |  3 +++
 libavcodec/libx264.c  | 38 ++
 libavcodec/utils.c| 36 
 3 files changed, 73 insertions(+), 4 deletions(-)

diff --git a/libavcodec/internal.h b/libavcodec/internal.h
index d47ce0e..a2c7be4 100644
--- a/libavcodec/internal.h
+++ b/libavcodec/internal.h
@@ -408,6 +408,9 @@ int ff_side_data_set_encoder_stats(AVPacket *pkt, int 
quality, int64_t *error, i
 int ff_alloc_a53_sei(const AVFrame *frame, size_t prefix_len,
  void **data, size_t *sei_size);
 
+int ff_alloc_afd_sei(const AVFrame *frame, size_t prefix_len,
+ void **data, size_t *sei_size);
+
 /**
  * Get an estimated video bitrate based on frame size, frame rate and coded
  * bits per pixel.
diff --git a/libavcodec/libx264.c b/libavcodec/libx264.c
index 9c67c91..f0f3260 100644
--- a/libavcodec/libx264.c
+++ b/libavcodec/libx264.c
@@ -86,6 +86,7 @@ typedef struct X264Context {
 int forced_idr;
 int coder;
 int a53_cc;
+int afd;
 int b_frame_strategy;
 int chroma_offset;
 int scenechange_threshold;
@@ -275,6 +276,7 @@ static int X264_frame(AVCodecContext *ctx, AVPacket *pkt, 
const AVFrame *frame,
 x264_nal_t *nal;
 int nnal, i, ret;
 x264_picture_t pic_out = {0};
+int num_payloads = 0;
 int pict_type;
 
 x264_picture_init( &x4->pic );
@@ -323,10 +325,37 @@ static int X264_frame(AVCodecContext *ctx, AVPacket *pkt, 
const AVFrame *frame,
 } else {
 x4->pic.extra_sei.sei_free = av_free;
 
-x4->pic.extra_sei.payloads[0].payload_size = sei_size;
-x4->pic.extra_sei.payloads[0].payload = sei_data;
-x4->pic.extra_sei.num_payloads = 1;
-x4->pic.extra_sei.payloads[0].payload_type = 4;
+x4->pic.extra_sei.payloads[num_payloads].payload_size = 
sei_size;
+x4->pic.extra_sei.payloads[num_payloads].payload = 
sei_data;
+x4->pic.extra_sei.payloads[num_payloads].payload_type = 4;
+x4->pic.extra_sei.num_payloads++;
+num_payloads++;
+}
+}
+}
+
+/* Active Format Description */
+if (x4->afd) {
+void *sei_data;
+size_t sei_size;
+
+ret = ff_alloc_afd_sei(frame, 0, &sei_data, &sei_size);
+if (ret < 0) {
+av_log(ctx, AV_LOG_ERROR, "Not enough memory for AFD, 
skipping\n");
+} else if (sei_data) {
+x4->pic.extra_sei.payloads = 
av_realloc(x4->pic.extra_sei.payloads,
+
sizeof(x4->pic.extra_sei.payloads[0]) * (num_payloads + 1));
+if (x4->pic.extra_sei.payloads == NULL) {
+av_log(ctx, AV_LOG_ERROR, "Not enough memory for AFD, 
skipping\n");
+av_free(sei_data);
+} else {
+x4->pic.extra_sei.sei_free = av_free;
+
+x4->pic.extra_sei.payloads[num_payloads].payload_size = 
sei_size;
+x4->pic.extra_sei.payloads[num_payloads].payload = 
sei_data;
+x4->pic.extra_sei.payloads[num_payloads].payload_type = 4;
+x4->pic.extra_sei.num_payloads++;
+num_payloads++;
 }
 }
 }
@@ -892,6 +921,7 @@ static const AVOption options[] = {
 {"passlogfile", "Filename for 2 pass stats", OFFSET(stats), 
AV_OPT_TYPE_STRING, {.str=NULL}, 0, 0, VE},
 {"wpredp", "Weighted prediction for P-frames", OFFSET(wpredp), 
AV_OPT_TYPE_STRING, {.str=NULL}, 0, 0, VE},
 {"a53cc",  "Use A53 Closed Captions (if available)",  
OFFSET(a53_cc),AV_OPT_TYPE_BOOL,   {.i64 = 1}, 0, 1, VE},
+{"afd","Use Active Format Description (AFD) (if 
available)",OFFSET(afd),AV_OPT_TYPE_BOOL,   {.i64 = 1}, 0, 1, VE},
 {"x264opts", "x264 options", OFFSET(x264opts), AV_OPT_TYPE_STRING, 
{.str=NULL}, 0, 0, VE},
 { "crf",   "Select the quality for constant quality mode",
OFFSET(crf),   AV_OPT_TYPE_FLOAT,  {.dbl = -1 }, -1, FLT_MAX, VE },
 { "crf_max",   "In CRF mode, prevents VBV from lowering quality beyond 
this point.",OFFSET(crf_max), 

Re: [FFmpeg-devel] [PATCH 2/6] decklink: Introduce support for capture of multiple audio streams

2017-11-16 Thread Devin Heitmueller

> On Nov 16, 2017, at 7:22 PM, Derek Buitenhuis  
> wrote:
> 
> On 11/16/2017 6:34 PM, Devin Heitmueller wrote:
>> +uint8_t *audio_in = ((uint8_t *) audioFrameBytes) + 
>> audio_offset;
>> +for (int x = 0; x < pkt.size; x += sample_size) {
> 
> I realize this is C++, but I'm not sure if we still try to stick
> to our C style (aka no mixed variable decls) in our C++ too.
> 

I don’t have strong feelings either way.  I’m happy to jam this into a 
subsequent cleanup patch if nobody has an objection (it’s just much easier 
since I have about 15 commits after this one in my Git tree).

Devin



Re: [FFmpeg-devel] [PATCH 4/6] Support encoding of Active Format Description (AFD) in libx264

2017-11-16 Thread Devin Heitmueller
Hello Derek,

Thanks for taking the time to review these patches.  Comments below.

> On Nov 16, 2017, at 7:20 PM, Derek Buitenhuis  
> wrote:
> 
> On 11/16/2017 6:34 PM, Devin Heitmueller wrote:
> 
>> +/* Active Format Description */
>> +if (x4->afd) {
>> +void *sei_data;
>> +size_t sei_size;
>> +
>> +ret = ff_alloc_afd_sei(frame, 0, &sei_data, &sei_size);
>> +if (ret < 0) {
>> +av_log(ctx, AV_LOG_ERROR, "Not enough memory for AFD, 
>> skipping\n");
>> +} else if (sei_data) {
> 
> In an OOM situation, we always fail hard.

Ok.

> 
>> +x4->pic.extra_sei.payloads = 
>> av_realloc(x4->pic.extra_sei.payloads,
>> +
>> sizeof(x4->pic.extra_sei.payloads[0]) * (num_payloads + 1));
>> +if (x4->pic.extra_sei.payloads == NULL) {
>> +av_log(ctx, AV_LOG_ERROR, "Not enough memory for AFD, 
>> skipping\n");
> 
> This will leak the original x4->pic.extra_sei.payloads on failure, won't it?
> 
> Also, as above, we should fail hard here.

Ok.

> 
>> +/* country code (SCTE 128-1 Sec 8.1.1) */
>> +sei_data[0] = 181;
>> +sei_data[1] = 0;
>> +sei_data[2] = 49;
>> +
>> +/* country code (SCTE 128-1 Sec 8.1.2) */
>> +AV_WL32(sei_data + 3, MKTAG('D', 'T', 'G', '1'));
>> +
>> +/* country code (SCTE 128-1 Sec 8.2.5) */
>> +sei_data[7] = 0x41;
>> +sei_data[8] = 0xf0 | side_data->data[0];
> 
> I assume these values are supposed to always be the same? Excuse my 
> unfamiliarity
> with SCTE-128 - country codes sounds like something you wouldn't want to 
> hardcode.


For whatever reason, the spec explicitly calls for the country code to be set 
to these values.  Here’s the specific language from the spec:

itu_t_t35_country_code – A fixed 8-bit field, the value of which shall be 0xB5.
itu_t_35_provider_code – A fixed 16-bit field registered by the ATSC. The value 
shall be 0x0031.

(Note, the code in question was actually copied from the function directly 
above it which creates the SEI for A53 captions).

All that said, it looks like I did screw up the comments.  The Spec section 
references are correct but for some reason all three say “country code”, which 
is a typo.

I’ll clean up the OOM handling as you requested, as well as fix the comments in 
a V2 patch.

Just an FYI, the spec is freely available here in case you want to know more:

https://www.scte.org/documents/pdf/Standards/ANSI_SCTE%20128-1%202013.pdf 
<https://www.scte.org/documents/pdf/Standards/ANSI_SCTE%20128-1%202013.pdf>

Devin


Re: [FFmpeg-devel] [PATCH 6/6] decklink: Add support for SCTE-104 to decklink capture

2017-11-16 Thread Devin Heitmueller

> On Nov 16, 2017, at 7:35 PM, Derek Buitenhuis  
> wrote:
> 
> On 11/16/2017 6:34 PM, Devin Heitmueller wrote:
>> ---
>> libavcodec/avcodec.h|  1 +
>> libavcodec/codec_desc.c |  6 
>> libavdevice/decklink_common.h   |  6 
>> libavdevice/decklink_common_c.h |  1 +
>> libavdevice/decklink_dec.cpp| 64 
>> -
>> libavdevice/decklink_dec_c.c|  1 +
>> 6 files changed, 78 insertions(+), 1 deletion(-)
> 
> Needs a version bump.
> 
>> +static int setup_data(AVFormatContext *avctx)
>> +{
>> +struct decklink_cctx *cctx = (struct decklink_cctx *)avctx->priv_data;
>> +struct decklink_ctx *ctx = (struct decklink_ctx *)cctx->ctx;
>> +AVStream *st;
>> +int ret = 0;
>> +
>> +if (cctx->enable_scte_104) {
>> +st = avformat_new_stream(avctx, NULL);
>> +if (!st) {
>> +av_log(avctx, AV_LOG_ERROR, "Cannot add data stream\n");
>> +ret = AVERROR(ENOMEM);
>> +goto error;
>> +}
> 
> This is the only error path in the function, so the goto is superfluous.

Yeah, the goto was a product of some refactoring.  I will get rid of it for the 
V2 series.

Thanks,

Devin



Re: [FFmpeg-devel] [PATCH 1/2] libavdevice/decklink: Add support for EIA-708 output over SDI

2017-11-28 Thread Devin Heitmueller
Hello Marton,

Thanks for taking the time to review.  Most of the comments you’ve raised will 
be fixed and I’ll resubmit an updated patch.  Comments on other issues inline 
below.
>> 
>>/* Options */
>>int list_devices;
>> @@ -88,6 +93,7 @@ struct decklink_ctx {
>>DecklinkPtsSource audio_pts_source;
>>DecklinkPtsSource video_pts_source;
>>int draw_bars;
>> +int raw_format;
> 
> Since this header includes decklink headers, this can be BMDPixelFormat 
> instead of int, and you can use the decklink constants directly instead of 
> MKBETAG.

I used MKBETAG because that was what was being used in decklink_dec.cpp (and I 
wanted to be consistent).  That said, I have no objection to changing it.

> 
> For older decklink models (E.g. Decklink SDI, Decklink Duo 1), when you 
> capture in 8 bit mode, you can only query 8bit VANC. For output, can you 
> always use 10-bit VANC? Even if you use 8bit mode for video? Because if you 
> can't, then it might make sense to return silently here, or only warn to user 
> once, not for every frame (and maybe disable vanc_support?).

All decklink models require that VANC be in the same bit depth as video capture 
(i.e. with both older and newer models you cannot do 8-bit video with 10-bit 
VANC or vice-versa).  The only exception is the RGB formats which do VANC in 
10-bit YUV.  The decklink_construct_vanc() function is only ever called if the 
device is putting out 10-bit video, and thus your question about putting out 
10-bit VANC when doing 8-bit video isn’t an issue since we never hit that code 
path.  

In summary, 8-bit VANC isn’t supported in the module, and I don’t have any 
immediate plans to add it given how rare it is nowadays.  If somebody really 
cares about that use case, we can discuss further.

Devin



Re: [FFmpeg-devel] [PATCH 2/2] decklink: Add support for output of Active Format Description (AFD)

2017-11-28 Thread Devin Heitmueller
Hi Marton,

Comments inline.

>> +data = av_packet_get_side_data(pkt, AV_PKT_DATA_AFD, &size);
>> +if (data) {
>> +struct klvanc_packet_afd_s *pkt;
>> +uint16_t *afd;
>> +uint16_t len;
>> +
>> +ret = klvanc_create_AFD(&pkt);
>> +if (ret != 0)
>> +return AVERROR(ENOMEM);
>> +
>> +ret = klvanc_set_AFD_val(pkt, data[0]);
>> +if (ret != 0) {
>> +av_log(avctx, AV_LOG_ERROR, "Invalid AFD value specified: %d\n",
>> +   data[0]);
>> +klvanc_destroy_AFD(pkt);
>> +return AVERROR(EINVAL);
>> +}
>> +
>> +/* FIXME: Should really rely on the coded_width but seems like that
>> +   is not accessible to libavdevice outputs */
>> +if ((st->codecpar->width == 1280 && st->codecpar->height == 720) ||
>> +(st->codecpar->width == 1920 && st->codecpar->height == 1080))
>> +pkt->aspectRatio = ASPECT_16x9;
>> +else
>> +pkt->aspectRatio = ASPECT_4x3;
> 
> Does this work for SD 16x9? Shouldn't you also use st->sample_aspect_ratio 
> with some rounding to handle 704x576 16:9 and such mess?

Bear in mind that the “aspectRatio” field in question isn’t what the video 
should ultimately be rendered in.  It’s just the aspect ratio of the source 
video.  For cases where you’re doing SD with 16x9, the source video itself is 
in 4x3 aspect ratio, but the expected result should be in 16x9.  The fact that 
the video should be rendered in 16x9 is controlled by the call to 
klvanc_set_AFD_val() further up in the function.  SAR/PAR, etc aren’t a 
consideration for what drives the aspectRatio field in AFD.

In short, this routine could probably be replaced with something that just 
divides pixelwidth/pixelheight and determines whether it’s equal to 1.77 or 
some smaller value.
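
Something along these lines is all I had in mind (hypothetical helper, not code
from the patch):

    /* Treat coded dimensions at or above ~1.77:1 as a 16x9 raster; the
       integer compare avoids floating point. */
    static int coded_raster_is_16x9(int width, int height)
    {
        return width * 9 >= height * 16;
    }

    /* e.g.  pkt->aspectRatio = coded_raster_is_16x9(w, h) ? ASPECT_16x9
                                                           : ASPECT_4x3; */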

I know it’s counter-intuitive to have a detailed aspect ratio description with 
something like 16 possible values, and then to also have a single bit which 
indicates whether the original source video is “16x9” or “4x3” when any 
downstream device can just look at the real pixel width/height of the video.  
Complain to the ATSC, I guess.  :-)

Devin


Re: [FFmpeg-devel] [PATCH 1/2] libavdevice/decklink: Add support for EIA-708 output over SDI

2017-11-28 Thread Devin Heitmueller
Hello Marton,

> SDK says:
> 
> When capturing ancillary data with a 4K DeckLink device, the ancillary
> data will always be in the 10-bit YUV pixel format.
> 
> This also applies to 8 bit YUV captures according to my experience.
> 

Thanks for refreshing my memory.  I remember reading that text, but for some 
reason thought it was RGB formats, not 4K capture products.

That’s actually a really nice feature - if you’re not feeding a 10-bit encoder 
then it would avoid having to do 10-bit to 8-bit colorspace conversion in 
software in order for VANC to be properly preserved.  In earlier products you 
had to choose between 8-bit video, where VANC wouldn’t be properly preserved, 
and 10-bit video, where you then had to colorspace convert from V210 to an 
8-bit format.

In the future I’ll have to look and see if that’s exposed through a decklink 
attribute (or whether we have to hard-code the model info into the application 
to know which cards work this way).

Regards,

Devin


Re: [FFmpeg-devel] [PATCH 5/6] Add support for using libklvanc from within decklink capture module

2017-11-29 Thread Devin Heitmueller
Hello Derek,

Comments inline.

>> 
>> +afd[0] = pkt->hdr.payload[0] >> 3;
>> +if (av_packet_add_side_data(cb_ctx->pkt, AV_PKT_DATA_AFD, afd, 1) < 0)
>> +av_free(afd);
> 
> Is there a reason we shouldn't fail hard here?

Not really.  The parser will log an error if the callback returns a nonzero 
value, but beyond the return value isn’t actively used.  That said, no 
objection to having it return -1 for clarity.
> 
>> +static struct klvanc_callbacks_s callbacks =
>> +{
>> +.afd   = cb_AFD,
>> +.eia_708b  = cb_EIA_708B,
>> +.eia_608   = NULL,
>> +.scte_104  = NULL,
>> +.all   = NULL,
>> +.kl_i64le_counter  = NULL,
>> +};
> 
> I thought C++ didn't have designated initializers? Maybe my C++ is rusty.

Clang didn’t complain, and g++ only complains if you put them in a non-default 
order (i.e. "non-trivial designated initializers not supported").  The 
designated initializers improve readability but aren’t required (since I already 
put the items in the default order).  If there’s a portability concern then I 
can get rid of them.
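
For the record, the fallback would just be plain positional initialization in
the same field order (sketch; this relies on the same struct and callback
declarations as the patch):

    static struct klvanc_callbacks_s callbacks =
    {
        cb_AFD,        /* afd */
        cb_EIA_708B,   /* eia_708b */
        NULL,          /* eia_608 */
        NULL,          /* scte_104 */
        NULL,          /* all */
        NULL,          /* kl_i64le_counter */
    };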

> 
> Same for other occurrences.

I’m sorry, but what other occurrences?  I don’t see any other instances in this 
patch where designated initializers are used — or did I misunderstand your 
comment?
> 
>> +/* Convert the vanc line from V210 to CrCB422, then vanc parse it */
>> +
>> +/* We need two kinds of type pointers into the source vbi buffer */
>> +/* TODO: What the hell is this, two ptrs? */
>> +const uint32_t *src = (const uint32_t *)buf;
> 
> Is buf guaranteed to be properly aligned for this, or will cause aliasing 
> problems?

Hmm, good question.  The start of each line will always be aligned on a 48-byte 
boundary as a result of how the decklink module manages its buffers, but I 
agree that this block of code is a bit messy and needs some cleanup (hence the 
TODO).

I suspect the original routine was cribbed from OBE (with portions derived from 
ffmpeg’s v210dec), and the assembly version of the same function probably isn’t 
as forgiving (although libklvanc doesn’t provide an assembly implementation as 
this routine isn’t particularly performance sensitive).
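
For anyone following along, the unpack itself is straightforward - each
little-endian 32-bit v210 word holds three 10-bit components (sketch only; the
real routine also has to deal with the UYVY component ordering across groups of
words):

    #include <stdint.h>

    /* Each little-endian 32-bit v210 word carries three 10-bit components. */
    static void v210_word_to_10bit(uint32_t w, uint16_t out[3])
    {
        out[0] =  w        & 0x3ff;  /* bits 0-9   */
        out[1] = (w >> 10) & 0x3ff;  /* bits 10-19 */
        out[2] = (w >> 20) & 0x3ff;  /* bits 20-29 */
    }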

> 
>> +vanc_ctx->callback_context = &cb_ctx;
>> +int ret = klvanc_packet_parse(vanc_ctx, lineNr, decoded_words, 
>> sizeof(decoded_words) / (sizeof(unsigned short)));
> 
> Nobody should be typing 'short' in any C/C++ code in 2017..

Will fix.

> 
>> +if (ret < 0) {
>> +/* No VANC on this line */
>> +}
> 
> Huh?

The parser takes in the complete VANC lines, but it’s possible that those lines 
are blank and don’t contain any actual VANC packets.  That said, you’re right - 
a negative return should be treated as an error and the comment in question 
should only occur if the return value is zero (a positive return value is the 
number of packets parsed).  Will fix.
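
Concretely, the handling will probably end up along these lines (fragment
reusing the names from the hunk above):

    ret = klvanc_packet_parse(vanc_ctx, lineNr, decoded_words,
                              sizeof(decoded_words) / sizeof(decoded_words[0]));
    if (ret < 0)
        return ret;  /* genuine parse error, propagate it */
    /* ret == 0 simply means the line carried no VANC packets; a positive
       value is the number of packets handled via the callbacks. */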

> 
>> +#if CONFIG_LIBKLVANC
>> +klvanc_handle_line(avctx, ctx->vanc_ctx,
>> +   buf, videoFrame->GetWidth(), 
>> i, &pkt);
>> +#else
> 
> No error checking possible?

Will fix.

> 
>> }
>> +
>> vanc->Release();
> 
> Stray change.

Will fix.

> 
>> +#if CONFIG_LIBKLVANC
>> +if (klvanc_context_create(&ctx->vanc_ctx) < 0) {
>> +av_log(avctx, AV_LOG_ERROR, "Cannot create VANC library context\n");
>> +} else {
>> +ctx->vanc_ctx->verbose = 0;
>> +ctx->vanc_ctx->callbacks = &callbacks;
>> +}
>> +#endif
> 
> Should fail hard, no?

Will fix.

Thanks for reviewing,

Devin


Re: [FFmpeg-devel] [PATCH 5/6] Add support for using libklvanc from within decklink capture module

2017-11-29 Thread Devin Heitmueller
Hello James,

Thanks for reviewing.

>> +afd[0] = pkt->hdr.payload[0] >> 3;
>> +if (av_packet_add_side_data(cb_ctx->pkt, AV_PKT_DATA_AFD, afd, 1) < 0)
>> +av_free(afd);
> 
> For this, av_packet_new_side_data() seems more adequate than av_malloc()
> + av_packet_add_side_data().
> 
> Also, you should propagate the errors av_packet_{add,new}_side_data return.

Good catch.  Yes, I will move to using av_packet_new_side_data() and fix the 
return.
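
i.e. roughly this fragment (same names as the hunk above):

    uint8_t *afd = av_packet_new_side_data(cb_ctx->pkt, AV_PKT_DATA_AFD, 1);
    if (!afd)
        return AVERROR(ENOMEM);
    afd[0] = pkt->hdr.payload[0] >> 3;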

Thanks,

Devin



[FFmpeg-devel] Recent regression in VA-API compatibility (assertion in H.264 encode)

2017-12-01 Thread Devin Heitmueller
Hello,

It looks like a recent patch causes VA-API H.264 encode to stop working and an 
assertion to be thrown.  I ran a git bisect and narrowed it down to the 
following commit:

32a618a948c20f18db102d0b0976790222a57105 is the first bad commit
commit 32a618a948c20f18db102d0b0976790222a57105
Author: Mark Thompson 
Date:   Wed Oct 18 19:46:53 2017 +0100

vaapi_h264: Do not use deprecated header type

SEI headers should be inserted as generic raw data (the old specific
type has been deprecated in libva2).


When run with the above patch, I get the following output:

[h264_vaapi @ 0x37d0a20] Warning: some packed headers are not supported (want 
0xd, got 0xb).
[h264_vaapi @ 0x37d0a20] The encode compression level option is not supported 
with this VAAPI version.
ffmpeg: i965_drv_video.c:352: va_enc_packed_type_to_idx: Assertion `0' failed.

Here’s the vainfo output which provides the version info for the driver, va-api 
version, etc.  This is on a Haswell system running Centos 7.

libva info: VA-API version 0.34.0
libva info: va_getDriverName() returns 0
libva info: Trying to open /usr/lib64/dri/i965_drv_video.so
libva info: Found init function __vaDriverInit_0_34
libva info: va_openDriver() returns 0
vainfo: VA-API version: 0.34 (libva 1.2.1)
vainfo: Driver version: Intel i965 driver - 1.2.2

I’m using the following command line for testing:

./ffmpeg -y -vaapi_device /dev/dri/card0 -i /home/devin/inputfile.ts -vf 
'format=nv12,hwupload' -c:v h264_vaapi out.mp4

Any suggestions that could be offered would be greatly appreciated.  Likewise 
please let me know if there is any other information I can provide that would 
assist in getting this resolved.

Thanks,

Devin Heitmueller


Re: [FFmpeg-devel] Recent regression in VA-API compatibility (assertion in H.264 encode)

2017-12-01 Thread Devin Heitmueller
Hi Mark,

>> 
>> Here’s the vainfo output which provides the version info for the driver, 
>> va-api version, etc.  This is on a Haswell system running Centos 7.
>> 
>> libva info: VA-API version 0.34.0
>> libva info: va_getDriverName() returns 0
>> libva info: Trying to open /usr/lib64/dri/i965_drv_video.so
>> libva info: Found init function __vaDriverInit_0_34
>> libva info: va_openDriver() returns 0
>> vainfo: VA-API version: 0.34 (libva 1.2.1)
>> vainfo: Driver version: Intel i965 driver - 1.2.2
> 
> Upgrading to a version less than four years old might be a plan - I admit we 
> do notionally support that version because of old RHEL/CentOS, but it is not 
> well tested (as you are finding).

This is actually what you get with the very latest release of Centos 7.4 
(downloaded yesterday).  Hence while we could certainly argue that perhaps 
they’re shipping versions that are too old, it’s not like I’m running some 
archaic five-year-old copy of Centos I found a DVD for in the bottom of a 
drawer.  :-)

And don’t misunderstand, I’m not against saying “Centos is dumb and should 
bundle newer versions of the library/driver/whatever” - I’m just trying to make 
clear that this is what the experience will be of any non-technical user who 
just does a binary install from the most recent versions of one of the more 
popular distros.

> 
> Try this?  (Not tested, hardware which can run a version that old isn't 
> immediately to hand.)
> 
> --- a/libavcodec/vaapi_encode_h264.c
> +++ b/libavcodec/vaapi_encode_h264.c
> @@ -261,7 +261,8 @@ static int 
> vaapi_encode_h264_write_extra_header(AVCodecContext *avctx,
> return 0;
> 
> #if !CONFIG_VAAPI_1
> -} else if (priv->sei_cbr_workaround_needed) {
> +} else if (priv->sei_cbr_workaround_needed &&
> +   ctx->va_packed_headers & VA_ENC_PACKED_HEADER_SLICE) {
> // Insert a zero-length header using the old SEI type.  This is
> // required to avoid triggering broken behaviour on Intel platforms
> // in CBR mode where an invalid SEI message is generated by the
> 

Ok, will give this a try tonight and report back on my findings.

Thanks!

Devin


Re: [FFmpeg-devel] Recent regression in VA-API compatibility (assertion in H.264 encode)

2017-12-01 Thread Devin Heitmueller

>> Try this?  (Not tested, hardware which can run a version that old isn't 
>> immediately to hand.)
>> 
>> --- a/libavcodec/vaapi_encode_h264.c
>> +++ b/libavcodec/vaapi_encode_h264.c
>> @@ -261,7 +261,8 @@ static int 
>> vaapi_encode_h264_write_extra_header(AVCodecContext *avctx,
>>return 0;
>> 
>> #if !CONFIG_VAAPI_1
>> -} else if (priv->sei_cbr_workaround_needed) {
>> +} else if (priv->sei_cbr_workaround_needed &&
>> +   ctx->va_packed_headers & VA_ENC_PACKED_HEADER_SLICE) {
>>// Insert a zero-length header using the old SEI type.  This is
>>// required to avoid triggering broken behaviour on Intel platforms
>>// in CBR mode where an invalid SEI message is generated by the
>> 
> 
> Ok, will give this a try tonight and report back on my findings.

FYI:  this doesn’t appear to have had any effect - I still get the same assert 
message.

Devin


[FFmpeg-devel] [PATCH 0/4] avdevice/decklink: 10-bit video out and sources/sinks support

2017-09-26 Thread Devin Heitmueller
Hello all,

Below please find several patches which fix a couple of bugs as well
as add support for 10-bit video on output and the "-sources" and
"-sinks" arguments when specified by ffmpeg.c.

If you have any question/concerns, please don't hesitate to reply.

Thanks,

Devin Heitmueller

Devin Heitmueller (4):
  avdevice/decklink: Fix segfault when running -list_devices on OSX
  libavdevice/decklink: add support for -sources and -sinks arguments
  Add support for 10-bit output for Decklink SDI
  livavdevice/decklink: Don't allow any codecs but V210 and UYVY422

 libavdevice/decklink_common.cpp |  54 ++--
 libavdevice/decklink_common.h   |   2 +-
 libavdevice/decklink_dec.cpp|  22 ++-
 libavdevice/decklink_dec.h  |   1 +
 libavdevice/decklink_dec_c.c|   1 +
 libavdevice/decklink_enc.cpp| 134 
 libavdevice/decklink_enc.h  |   1 +
 libavdevice/decklink_enc_c.c|   1 +
 8 files changed, 182 insertions(+), 34 deletions(-)

-- 
2.13.2



[FFmpeg-devel] [PATCH 1/4] avdevice/decklink: Fix segfault when running -list_devices on OSX

2017-09-26 Thread Devin Heitmueller
The string is allocated with CFStringGetCString but was being
deallocated with free(), which would intermittently result in
a segmentation fault.  Use the correct function for freeing the
allocated CFString.

Signed-off-by: Devin Heitmueller 
---
 libavdevice/decklink_common.cpp | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/libavdevice/decklink_common.cpp b/libavdevice/decklink_common.cpp
index cbb591ce64..7745575d0e 100644
--- a/libavdevice/decklink_common.cpp
+++ b/libavdevice/decklink_common.cpp
@@ -84,7 +84,7 @@ static char *dup_cfstring_to_utf8(CFStringRef w)
 }
 #define DECKLINK_STRconst __CFString *
 #define DECKLINK_STRDUP dup_cfstring_to_utf8
-#define DECKLINK_FREE(s) free((void *) s)
+#define DECKLINK_FREE(s) CFRelease(s)
 #define DECKLINK_BOOL bool
 #else
 #define DECKLINK_STRconst char *
-- 
2.13.2



[FFmpeg-devel] [PATCH 4/4] livavdevice/decklink: Don't allow any codecs but V210 and UYVY422

2017-09-26 Thread Devin Heitmueller
Make sure that codecs other than V210 or wrapped avframes with
uyvy422 video are not passed to the decklink output (which would
result in undefined behavior).

Signed-off-by: Devin Heitmueller 
---
 libavdevice/decklink_enc.cpp | 4 
 1 file changed, 4 insertions(+)

diff --git a/libavdevice/decklink_enc.cpp b/libavdevice/decklink_enc.cpp
index 0d965699ec..8f08ded933 100644
--- a/libavdevice/decklink_enc.cpp
+++ b/libavdevice/decklink_enc.cpp
@@ -159,6 +159,10 @@ static int decklink_setup_video(AVFormatContext *avctx, 
AVStream *st)
" Only AV_PIX_FMT_UYVY422 is supported.\n");
 return -1;
 }
+} else if (c->codec_id != AV_CODEC_ID_V210) {
+av_log(avctx, AV_LOG_ERROR, "Unsupported codec type!"
+   " Only V210 and wrapped frame with AV_PIX_FMT_UYVY422 are 
supported.\n");
+return -1;
 }
 
 if (ff_decklink_set_format(avctx, c->width, c->height,
-- 
2.13.2



[FFmpeg-devel] [PATCH 2/4] libavdevice/decklink: add support for -sources and -sinks arguments

2017-09-26 Thread Devin Heitmueller
Add support for enumerating the sources/sinks via the ffmpeg
command line options, as opposed to having to create a real pipeline
and use the "-list_devices" option which does exit() after dumping
out the options.

Note that this patch preserves the existing "-list_devices" option,
but now shares common code for the actual enumeration.

Signed-off-by: Devin Heitmueller 
---
 libavdevice/decklink_common.cpp | 52 +
 libavdevice/decklink_common.h   |  2 +-
 libavdevice/decklink_dec.cpp| 22 -
 libavdevice/decklink_dec.h  |  1 +
 libavdevice/decklink_dec_c.c|  1 +
 libavdevice/decklink_enc.cpp| 22 -
 libavdevice/decklink_enc.h  |  1 +
 libavdevice/decklink_enc_c.c|  1 +
 8 files changed, 95 insertions(+), 7 deletions(-)

diff --git a/libavdevice/decklink_common.cpp b/libavdevice/decklink_common.cpp
index 7745575d0e..86d6fbb74b 100644
--- a/libavdevice/decklink_common.cpp
+++ b/libavdevice/decklink_common.cpp
@@ -37,6 +37,7 @@ extern "C" {
 #include "libavutil/imgutils.h"
 #include "libavutil/intreadwrite.h"
 #include "libavutil/bswap.h"
+#include "avdevice.h"
 }
 
 #include "decklink_common.h"
@@ -261,24 +262,67 @@ int ff_decklink_set_format(AVFormatContext *avctx, 
decklink_direction_t directio
 return ff_decklink_set_format(avctx, 0, 0, 0, 0, AV_FIELD_UNKNOWN, 
direction, num);
 }
 
-int ff_decklink_list_devices(AVFormatContext *avctx)
+int ff_decklink_list_devices(AVFormatContext *avctx,
+struct AVDeviceInfoList *device_list,
+int show_inputs, int show_outputs)
 {
 IDeckLink *dl = NULL;
 IDeckLinkIterator *iter = CreateDeckLinkIteratorInstance();
+int ret = 0;
+
 if (!iter) {
 av_log(avctx, AV_LOG_ERROR, "Could not create DeckLink iterator\n");
 return AVERROR(EIO);
 }
-av_log(avctx, AV_LOG_INFO, "Blackmagic DeckLink devices:\n");
+
 while (iter->Next(&dl) == S_OK) {
+IDeckLinkOutput *output_config;
+IDeckLinkInput *input_config;
 const char *displayName;
+AVDeviceInfo *new_device = NULL;
+int add = 0;
+
 ff_decklink_get_display_name(dl, &displayName);
-av_log(avctx, AV_LOG_INFO, "\t'%s'\n", displayName);
+
+if (show_outputs) {
+if (dl->QueryInterface(IID_IDeckLinkOutput, (void 
**)&output_config) == S_OK) {
+output_config->Release();
+add = 1;
+}
+}
+
+if (show_inputs) {
+if (dl->QueryInterface(IID_IDeckLinkInput, (void **)&input_config) 
== S_OK) {
+input_config->Release();
+add = 1;
+}
+}
+
+if (add == 1) {
+new_device = (AVDeviceInfo *) av_mallocz(sizeof(AVDeviceInfo));
+if (!new_device) {
+ret = AVERROR(ENOMEM);
+goto next;
+}
+new_device->device_name = av_strdup(displayName);
+new_device->device_description = av_strdup(displayName);
+if (!new_device->device_description || !new_device->device_name) {
+ret = AVERROR(ENOMEM);
+goto next;
+}
+
+if ((ret = av_dynarray_add_nofree(&device_list->devices,
+  &device_list->nb_devices, 
new_device)) < 0) {
+goto next;
+}
+}
+
+next:
 av_free((void *) displayName);
 dl->Release();
 }
 iter->Release();
-return 0;
+return ret;
 }
 
 int ff_decklink_list_formats(AVFormatContext *avctx, decklink_direction_t 
direction)
diff --git a/libavdevice/decklink_common.h b/libavdevice/decklink_common.h
index 749eb0f8b8..f81b33ada4 100644
--- a/libavdevice/decklink_common.h
+++ b/libavdevice/decklink_common.h
@@ -135,7 +135,7 @@ static const BMDVideoConnection 
decklink_video_connection_map[] = {
 HRESULT ff_decklink_get_display_name(IDeckLink *This, const char 
**displayName);
 int ff_decklink_set_format(AVFormatContext *avctx, int width, int height, int 
tb_num, int tb_den, enum AVFieldOrder field_order, decklink_direction_t 
direction = DIRECTION_OUT, int num = 0);
 int ff_decklink_set_format(AVFormatContext *avctx, decklink_direction_t 
direction, int num);
-int ff_decklink_list_devices(AVFormatContext *avctx);
+int ff_decklink_list_devices(AVFormatContext *avctx, struct AVDeviceInfoList 
*device_list, int show_inputs, int show_outputs);
 int ff_decklink_list_formats(AVFormatContext *avctx, decklink_direction_t 
direction = DIRECTION_OUT);
 void ff_decklink_cleanup(AVFormatContext *avctx);
 int ff_decklink_init_device(AVFormatContext *avctx, const char* name);
diff --git a/libavdevice/decklink_dec.cpp b/libavde

[FFmpeg-devel] [PATCH 3/4] Add support for 10-bit output for Decklink SDI

2017-09-26 Thread Devin Heitmueller
From: Devin Heitmueller 

Can be tested via the following command:

./ffmpeg -i foo.ts -f decklink -vcodec v210 'DeckLink Duo (1)'

Note that the 8-bit support works as it did before, and setting
the pix_fmt isn't required for 10-bit mode.  The code defaults to
operating in 8-bit mode when no vcodec is specified, for backward
compatibility.

Signed-off-by: Devin Heitmueller 
---
 libavdevice/decklink_enc.cpp | 110 ---
 1 file changed, 83 insertions(+), 27 deletions(-)

diff --git a/libavdevice/decklink_enc.cpp b/libavdevice/decklink_enc.cpp
index 0f654faa19..0d965699ec 100644
--- a/libavdevice/decklink_enc.cpp
+++ b/libavdevice/decklink_enc.cpp
@@ -44,20 +44,45 @@ extern "C" {
 class decklink_frame : public IDeckLinkVideoFrame
 {
 public:
-decklink_frame(struct decklink_ctx *ctx, AVFrame *avframe) :
-   _ctx(ctx), _avframe(avframe),  _refs(1) { }
-
-virtual long   STDMETHODCALLTYPE GetWidth  (void)  { 
return _avframe->width; }
-virtual long   STDMETHODCALLTYPE GetHeight (void)  { 
return _avframe->height; }
-virtual long   STDMETHODCALLTYPE GetRowBytes   (void)  { 
return _avframe->linesize[0] < 0 ? -_avframe->linesize[0] : 
_avframe->linesize[0]; }
-virtual BMDPixelFormat STDMETHODCALLTYPE GetPixelFormat(void)  { 
return bmdFormat8BitYUV; }
-virtual BMDFrameFlags  STDMETHODCALLTYPE GetFlags  (void)  { 
return _avframe->linesize[0] < 0 ? bmdFrameFlagFlipVertical : 
bmdFrameFlagDefault; }
+decklink_frame(struct decklink_ctx *ctx, AVFrame *avframe, AVCodecID 
codec_id, int height, int width) :
+_ctx(ctx), _avframe(avframe), _avpacket(NULL), _codec_id(codec_id), 
_height(height), _width(width),  _refs(1) { }
+decklink_frame(struct decklink_ctx *ctx, AVPacket *avpacket, AVCodecID 
codec_id, int height, int width) :
+_ctx(ctx), _avframe(NULL), _avpacket(avpacket), _codec_id(codec_id), 
_height(height), _width(width), _refs(1) { }
+
+virtual long   STDMETHODCALLTYPE GetWidth  (void)  { 
return _width; }
+virtual long   STDMETHODCALLTYPE GetHeight (void)  { 
return _height; }
+virtual long   STDMETHODCALLTYPE GetRowBytes   (void)
+{
+  if (_codec_id == AV_CODEC_ID_WRAPPED_AVFRAME)
+  return _avframe->linesize[0] < 0 ? -_avframe->linesize[0] : 
_avframe->linesize[0];
+  else
+  return ((GetWidth() + 47) / 48) * 128;
+}
+virtual BMDPixelFormat STDMETHODCALLTYPE GetPixelFormat(void)
+{
+if (_codec_id == AV_CODEC_ID_WRAPPED_AVFRAME)
+return bmdFormat8BitYUV;
+else
+return bmdFormat10BitYUV;
+}
+virtual BMDFrameFlags  STDMETHODCALLTYPE GetFlags  (void)
+{
+   if (_codec_id == AV_CODEC_ID_WRAPPED_AVFRAME)
+   return _avframe->linesize[0] < 0 ? bmdFrameFlagFlipVertical : 
bmdFrameFlagDefault;
+   else
+   return bmdFrameFlagDefault;
+}
+
 virtual HRESULTSTDMETHODCALLTYPE GetBytes  (void **buffer)
 {
+  if (_avframe) {
 if (_avframe->linesize[0] < 0)
 *buffer = (void *)(_avframe->data[0] + _avframe->linesize[0] * 
(_avframe->height - 1));
 else
 *buffer = (void *)(_avframe->data[0]);
+  } else {
+*buffer = (void *)(_avpacket->data);
+  }
 return S_OK;
 }
 
@@ -70,7 +95,10 @@ public:
 {
 int ret = --_refs;
 if (!ret) {
-av_frame_free(&_avframe);
+if (_codec_id == AV_CODEC_ID_WRAPPED_AVFRAME)
+av_frame_free(&_avframe);
+else
+av_packet_unref(_avpacket);
 delete this;
 }
 return ret;
@@ -78,6 +106,10 @@ public:
 
 struct decklink_ctx *_ctx;
 AVFrame *_avframe;
+AVPacket *_avpacket;
+AVCodecID _codec_id;
+int _height;
+int _width;
 
 private:
 std::atomic  _refs;
@@ -92,7 +124,10 @@ public:
 struct decklink_ctx *ctx = frame->_ctx;
 AVFrame *avframe = frame->_avframe;
 
-av_frame_unref(avframe);
+if (avframe)
+av_frame_unref(avframe);
+else
+av_packet_unref(frame->_avpacket);
 
 pthread_mutex_lock(&ctx->mutex);
 ctx->frames_buffer_available_spots++;
@@ -118,11 +153,14 @@ static int decklink_setup_video(AVFormatContext *avctx, 
AVStream *st)
 return -1;
 }
 
-if (c->format != AV_PIX_FMT_UYVY422) {
-av_log(avctx, AV_LOG_ERROR, "Unsupported pixel format!"
-   " Only AV_PIX_FMT_UYVY422 is supported.\n");
-return -1;
+if (c->codec_id == AV_CODEC_ID_WRAPPED_AVFRAME) {
+if (c->format != AV_PIX_FMT_UYVY422) {
+av_log(avctx, AV_LOG_ERROR, 

Re: [FFmpeg-devel] [PATCH 2/4] libavdevice/decklink: add support for -sources and -sinks arguments

2017-10-04 Thread Devin Heitmueller

> On Sep 30, 2017, at 4:02 PM, Marton Balint  wrote:
> 
> 
> 
> On Tue, 26 Sep 2017, Devin Heitmueller wrote:
> 
>> Add support for enumerating the sources/sinks via the ffmpeg
>> command line options, as opposed to having to create a real pipeline
>> and use the "-list_devices" option which does exit() after dumping
>> out the options.
>> 
>> Note that this patch preserves the existing "-list_devices" option,
>> but now shares common code for the actual enumeration.
>> 
>> Signed-off-by: Devin Heitmueller 
>> ---
>> libavdevice/decklink_common.cpp | 52 
>> +
>> libavdevice/decklink_common.h   |  2 +-
>> libavdevice/decklink_dec.cpp| 22 -
>> libavdevice/decklink_dec.h  |  1 +
>> libavdevice/decklink_dec_c.c|  1 +
>> libavdevice/decklink_enc.cpp| 22 -
>> libavdevice/decklink_enc.h  |  1 +
>> libavdevice/decklink_enc_c.c|  1 +
>> 8 files changed, 95 insertions(+), 7 deletions(-)
>> 
>> diff --git a/libavdevice/decklink_common.cpp 
>> b/libavdevice/decklink_common.cpp
>> index 7745575d0e..86d6fbb74b 100644
>> --- a/libavdevice/decklink_common.cpp
>> +++ b/libavdevice/decklink_common.cpp
>> @@ -37,6 +37,7 @@ extern "C" {
>> #include "libavutil/imgutils.h"
>> #include "libavutil/intreadwrite.h"
>> #include "libavutil/bswap.h"
>> +#include "avdevice.h"
>> }
>> #include "decklink_common.h"
>> @@ -261,24 +262,67 @@ int ff_decklink_set_format(AVFormatContext *avctx, 
>> decklink_direction_t directio
>>return ff_decklink_set_format(avctx, 0, 0, 0, 0, AV_FIELD_UNKNOWN, 
>> direction, num);
>> }
>> -int ff_decklink_list_devices(AVFormatContext *avctx)
>> +int ff_decklink_list_devices(AVFormatContext *avctx,
>> + struct AVDeviceInfoList *device_list,
>> + int show_inputs, int show_outputs)
>> {
>>IDeckLink *dl = NULL;
>>IDeckLinkIterator *iter = CreateDeckLinkIteratorInstance();
>> +int ret = 0;
>> +
>>if (!iter) {
>>av_log(avctx, AV_LOG_ERROR, "Could not create DeckLink iterator\n");
>>return AVERROR(EIO);
>>}
>> -av_log(avctx, AV_LOG_INFO, "Blackmagic DeckLink devices:\n");
>> +
>>while (iter->Next(&dl) == S_OK) {
> 
> This probably needs an additional && ret == 0 condition, if there was an 
> ENOMEM error, you want to return instantly instead of trying with the next 
> device, and ignoring the ENOMEM.
> 
>> +IDeckLinkOutput *output_config;
>> +IDeckLinkInput *input_config;
>>const char *displayName;
>> +AVDeviceInfo *new_device = NULL;
>> +int add = 0;
>> +
>>ff_decklink_get_display_name(dl, &displayName);
>> -av_log(avctx, AV_LOG_INFO, "\t'%s'\n", displayName);
>> +
>> +if (show_outputs) {
>> +if (dl->QueryInterface(IID_IDeckLinkOutput, (void 
>> **)&output_config) == S_OK) {
>> +output_config->Release();
>> +add = 1;
>> +}
>> +}
>> +
>> +if (show_inputs) {
>> +if (dl->QueryInterface(IID_IDeckLinkInput, (void 
>> **)&input_config) == S_OK) {
>> +input_config->Release();
>> +add = 1;
>> +}
>> +}
>> +
>> +if (add == 1) {
>> +new_device = (AVDeviceInfo *) av_mallocz(sizeof(AVDeviceInfo));
>> +if (!new_device) {
>> +ret = AVERROR(ENOMEM);
>> +goto next;
>> +}
>> +new_device->device_name = av_strdup(displayName);
>> +new_device->device_description = av_strdup(displayName);
>> +if (!new_device->device_description || 
>> !new_device->device_name) {
> 
> you might leak device_name here.
> 
>> +ret = AVERROR(ENOMEM);
>> +goto next;
>> +}
>> +
>> +if ((ret = av_dynarray_add_nofree(&device_list->devices,
>> +  &device_list->nb_devices, 
>> new_device)) < 0) {
> 
> you should free the struct on error here I think
> 
>> +goto next;
>> +}
>> +}
>> +
>> +

Re: [FFmpeg-devel] [PATCH 3/4] Add support for 10-bit output for Decklink SDI

2017-10-05 Thread Devin Heitmueller
Hello Marton,

Thanks for taking the time to provide feedback.

>> +  } else {
>> +*buffer = (void *)(_avpacket->data);
> 
> The DeckLink SDK requires a 128 byte alignment for data. I am thinking 
> AVPacket does not always provides that. Maybe we should simply ignore the SDK 
> requirement (if it works without it?) Can you test this somehow?

The SDK does expect the stride to be a multiple of 128 (which the v210 codec 
does), and the SDK does suggest that each line should start on a 128-byte 
aligned boundary.  I’ve worked with three or four different implementations on 
both the input and output side, and never had an issue where alignment was a 
problem.  That said, it’s possible that the issue is present on some less 
common model decklink card, or perhaps the implementation of some feature 
internal to the SDK has optimized assembly which expects the alignment.

In any case, it would be a pre-existing bug in libavcodec/v210enc.c, not 
something specific to the decklink output.

If someone wants to point me to an example of aligned allocation in libavcodec, 
I can take a look.  I think any such issue though would be separate from the 
content in this patch.
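
For completeness: as far as I know av_malloc() only guarantees 32/64-byte
alignment, so if the SDK's 128-byte recommendation ever turns out to matter it
would have to be something along these lines (sketch only, not proposing this
for the patch):

    #include <stdlib.h>
    #include <stdint.h>

    /* Return a buffer whose start address is 128-byte aligned. */
    static uint8_t *alloc_frame_buffer_128(size_t size)
    {
        void *p = NULL;
        if (posix_memalign(&p, 128, size) != 0)
            return NULL;
        return (uint8_t *)p;
    }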

The other issues you mentioned have been addressed and I’m testing the changes 
now.  I should have updated patches for you shortly.

Regards,

Devin


[FFmpeg-devel] [PATCH 1/2] libavdevice/decklink: add support for -sources and -sinks arguments

2017-10-05 Thread Devin Heitmueller
Add support for enumerating the sources/sinks via the ffmpeg
command line options, as opposed to having to create a real pipeline
and use the "-list_devices" option which does exit() after dumping
out the options.

Note that this patch preserves the existing "-list_devices" option,
but now shares common code for the actual enumeration.

Updated to reflect feedback from Marton Balink .

Signed-off-by: Devin Heitmueller 
---
 libavdevice/decklink_common.cpp | 89 ++---
 libavdevice/decklink_common.h   |  3 +-
 libavdevice/decklink_dec.cpp|  8 +++-
 libavdevice/decklink_dec.h  |  1 +
 libavdevice/decklink_dec_c.c|  1 +
 libavdevice/decklink_enc.cpp| 10 -
 libavdevice/decklink_enc.h  |  1 +
 libavdevice/decklink_enc_c.c|  1 +
 8 files changed, 104 insertions(+), 10 deletions(-)

diff --git a/libavdevice/decklink_common.cpp b/libavdevice/decklink_common.cpp
index c782171f2c..61d8ad86a9 100644
--- a/libavdevice/decklink_common.cpp
+++ b/libavdevice/decklink_common.cpp
@@ -37,6 +37,7 @@ extern "C" {
 #include "libavutil/imgutils.h"
 #include "libavutil/intreadwrite.h"
 #include "libavutil/bswap.h"
+#include "avdevice.h"
 }
 
 #include "decklink_common.h"
@@ -261,24 +262,100 @@ int ff_decklink_set_format(AVFormatContext *avctx, 
decklink_direction_t directio
 return ff_decklink_set_format(avctx, 0, 0, 0, 0, AV_FIELD_UNKNOWN, 
direction, num);
 }
 
-int ff_decklink_list_devices(AVFormatContext *avctx)
+int ff_decklink_list_devices(AVFormatContext *avctx,
+struct AVDeviceInfoList *device_list,
+int show_inputs, int show_outputs)
 {
 IDeckLink *dl = NULL;
 IDeckLinkIterator *iter = CreateDeckLinkIteratorInstance();
+int ret = 0;
+
 if (!iter) {
 av_log(avctx, AV_LOG_ERROR, "Could not create DeckLink iterator\n");
 return AVERROR(EIO);
 }
-av_log(avctx, AV_LOG_INFO, "Blackmagic DeckLink devices:\n");
-while (iter->Next(&dl) == S_OK) {
+
+while (ret == 0 && iter->Next(&dl) == S_OK) {
+IDeckLinkOutput *output_config;
+IDeckLinkInput *input_config;
 const char *displayName;
+AVDeviceInfo *new_device = NULL;
+int add = 0;
+
 ff_decklink_get_display_name(dl, &displayName);
-av_log(avctx, AV_LOG_INFO, "\t'%s'\n", displayName);
-av_free((void *) displayName);
+
+if (show_outputs) {
+if (dl->QueryInterface(IID_IDeckLinkOutput, (void 
**)&output_config) == S_OK) {
+output_config->Release();
+add = 1;
+}
+}
+
+if (show_inputs) {
+if (dl->QueryInterface(IID_IDeckLinkInput, (void **)&input_config) 
== S_OK) {
+input_config->Release();
+add = 1;
+}
+}
+
+if (add == 1) {
+new_device = (AVDeviceInfo *) av_mallocz(sizeof(AVDeviceInfo));
+if (!new_device) {
+ret = AVERROR(ENOMEM);
+goto next;
+}
+new_device->device_name = av_strdup(displayName);
+if (!new_device->device_name) {
+ret = AVERROR(ENOMEM);
+goto next;
+}
+
+new_device->device_description = av_strdup(displayName);
+if (!new_device->device_description) {
+av_freep(&new_device->device_name);
+ret = AVERROR(ENOMEM);
+goto next;
+}
+
+if ((ret = av_dynarray_add_nofree(&device_list->devices,
+  &device_list->nb_devices, 
new_device)) < 0) {
+av_freep(&new_device->device_name);
+av_freep(&new_device->device_description);
+av_freep(&new_device);
+goto next;
+}
+}
+
+next:
+av_freep(&displayName);
 dl->Release();
 }
 iter->Release();
-return 0;
+return ret;
+}
+
+/* This is a wrapper around the ff_decklink_list_devices() which dumps the
+   output to av_log() and exits (for backward compatibility with the
+   "-list_devices" argument). */
+void ff_decklink_list_devices_legacy(AVFormatContext *avctx,
+ int show_inputs, int show_outputs)
+{
+struct AVDeviceInfoList *device_list = NULL;
+int ret;
+
+device_list = (struct AVDeviceInfoList *) 
av_mallocz(sizeof(AVDeviceInfoList));
+if (!device_list)
+return;
+
+ret = ff_decklink_list_devices(avctx, device_list, show_inputs, 
show_outputs);
+if (ret == 0) {
+av_log(avctx, AV_LOG_INFO, "Blackmagic DeckLink %s devices:\n",
+   show_inputs 

[FFmpeg-devel] [PATCH 2/2] libavdevice/decklink: add support for 10-bit output for Decklink SDI

2017-10-05 Thread Devin Heitmueller
Can be tested via the following command:

./ffmpeg -i foo.ts -f decklink -vcodec v210 'DeckLink Duo (1)'

Note that the 8-bit support works as it did before, and setting
the pix_fmt isn't required for 10-bit mode.  The code defaults to
operating in 8-bit mode when no vcodec is specified, for backward
compatibility.

Updated to reflect feedback from Marton Balink 

Signed-off-by: Devin Heitmueller 
---
 libavdevice/decklink_enc.cpp | 112 ---
 1 file changed, 83 insertions(+), 29 deletions(-)

diff --git a/libavdevice/decklink_enc.cpp b/libavdevice/decklink_enc.cpp
index 0776741812..81df563b3b 100644
--- a/libavdevice/decklink_enc.cpp
+++ b/libavdevice/decklink_enc.cpp
@@ -44,20 +44,45 @@ extern "C" {
 class decklink_frame : public IDeckLinkVideoFrame
 {
 public:
-decklink_frame(struct decklink_ctx *ctx, AVFrame *avframe) :
-   _ctx(ctx), _avframe(avframe),  _refs(1) { }
-
-virtual long   STDMETHODCALLTYPE GetWidth  (void)  { 
return _avframe->width; }
-virtual long   STDMETHODCALLTYPE GetHeight (void)  { 
return _avframe->height; }
-virtual long   STDMETHODCALLTYPE GetRowBytes   (void)  { 
return _avframe->linesize[0] < 0 ? -_avframe->linesize[0] : 
_avframe->linesize[0]; }
-virtual BMDPixelFormat STDMETHODCALLTYPE GetPixelFormat(void)  { 
return bmdFormat8BitYUV; }
-virtual BMDFrameFlags  STDMETHODCALLTYPE GetFlags  (void)  { 
return _avframe->linesize[0] < 0 ? bmdFrameFlagFlipVertical : 
bmdFrameFlagDefault; }
-virtual HRESULTSTDMETHODCALLTYPE GetBytes  (void **buffer)
+decklink_frame(struct decklink_ctx *ctx, AVFrame *avframe, AVCodecID 
codec_id, int height, int width) :
+_ctx(ctx), _avframe(avframe), _avpacket(NULL), _codec_id(codec_id), 
_height(height), _width(width),  _refs(1) { }
+decklink_frame(struct decklink_ctx *ctx, AVPacket *avpacket, AVCodecID 
codec_id, int height, int width) :
+_ctx(ctx), _avframe(NULL), _avpacket(avpacket), _codec_id(codec_id), 
_height(height), _width(width), _refs(1) { }
+
+virtual long   STDMETHODCALLTYPE GetWidth  (void)  { 
return _width; }
+virtual long   STDMETHODCALLTYPE GetHeight (void)  { 
return _height; }
+virtual long   STDMETHODCALLTYPE GetRowBytes   (void)
+{
+  if (_codec_id == AV_CODEC_ID_WRAPPED_AVFRAME)
+  return _avframe->linesize[0] < 0 ? -_avframe->linesize[0] : 
_avframe->linesize[0];
+  else
+  return ((GetWidth() + 47) / 48) * 128;
+}
+virtual BMDPixelFormat STDMETHODCALLTYPE GetPixelFormat(void)
 {
-if (_avframe->linesize[0] < 0)
-*buffer = (void *)(_avframe->data[0] + _avframe->linesize[0] * 
(_avframe->height - 1));
+if (_codec_id == AV_CODEC_ID_WRAPPED_AVFRAME)
+return bmdFormat8BitYUV;
 else
-*buffer = (void *)(_avframe->data[0]);
+return bmdFormat10BitYUV;
+}
+virtual BMDFrameFlags  STDMETHODCALLTYPE GetFlags  (void)
+{
+   if (_codec_id == AV_CODEC_ID_WRAPPED_AVFRAME)
+   return _avframe->linesize[0] < 0 ? bmdFrameFlagFlipVertical : 
bmdFrameFlagDefault;
+   else
+   return bmdFrameFlagDefault;
+}
+
+virtual HRESULTSTDMETHODCALLTYPE GetBytes  (void **buffer)
+{
+if (_codec_id == AV_CODEC_ID_WRAPPED_AVFRAME) {
+if (_avframe->linesize[0] < 0)
+*buffer = (void *)(_avframe->data[0] + _avframe->linesize[0] * 
(_avframe->height - 1));
+else
+*buffer = (void *)(_avframe->data[0]);
+} else {
+*buffer = (void *)(_avpacket->data);
+}
 return S_OK;
 }
 
@@ -71,6 +96,7 @@ public:
 int ret = --_refs;
 if (!ret) {
 av_frame_free(&_avframe);
+av_packet_free(&_avpacket);
 delete this;
 }
 return ret;
@@ -78,6 +104,10 @@ public:
 
 struct decklink_ctx *_ctx;
 AVFrame *_avframe;
+AVPacket *_avpacket;
+AVCodecID _codec_id;
+int _height;
+int _width;
 
 private:
 std::atomic  _refs;
@@ -90,9 +120,11 @@ public:
 {
 decklink_frame *frame = static_cast(_frame);
 struct decklink_ctx *ctx = frame->_ctx;
-AVFrame *avframe = frame->_avframe;
 
-av_frame_unref(avframe);
+if (frame->_avframe)
+av_frame_unref(frame->_avframe);
+if (frame->_avpacket)
+av_packet_unref(frame->_avpacket);
 
 pthread_mutex_lock(&ctx->mutex);
 ctx->frames_buffer_available_spots++;
@@ -118,11 +150,18 @@ static int decklink_setup_video(AVFormatContext *avctx, 
AVStream *st)
 return -1;
 }
 
-if (c->format != AV_PIX_FMT_UY

[FFmpeg-devel] [PATCHv2 0/2] avdevice/decklink: 10-bit video out and sources/sinks support

2017-10-05 Thread Devin Heitmueller
Hello all,

Below please find several patches which add support for 10-bit video
on output and the "-sources" and "-sinks" arguments when specified by ffmpeg.c.

This patch series incorporates feedback provided from Marton Balint.
Note that patch 4 in the previous patch series was consolidated into
patch 3 per Marton's suggestion.

If you have any question/concerns, please don't hesitate to reply.

Thanks,

Devin Heitmueller

Devin Heitmueller (2):
  libavdevice/decklink: add support for -sources and -sinks arguments
  libavdevice/decklink: add support for 10-bit output for Decklink SDI

 libavdevice/decklink_common.cpp |  89 +++--
 libavdevice/decklink_common.h   |   3 +-
 libavdevice/decklink_dec.cpp|   8 ++-
 libavdevice/decklink_dec.h  |   1 +
 libavdevice/decklink_dec_c.c|   1 +
 libavdevice/decklink_enc.cpp| 122 ++--
 libavdevice/decklink_enc.h  |   1 +
 libavdevice/decklink_enc_c.c|   1 +
 8 files changed, 187 insertions(+), 39 deletions(-)

-- 
2.13.2



Re: [FFmpeg-devel] [PATCHv2 0/2] avdevice/decklink: 10-bit video out and sources/sinks support

2017-10-06 Thread Devin Heitmueller

> On Oct 6, 2017, at 5:13 AM, Moritz Barsnick  wrote:
> 
> On Thu, Oct 05, 2017 at 15:32:04 -0400, Devin Heitmueller wrote:
>> This patch series incorporates feedback provided from Marton Balint.
>> Note that patch 4 in the previous patch series was consolidated into
>> patch 3 per Marton's suggestion.
> 
> There's no patch 3 in the series you sent. And you misspelled Marton's
> surname in both patches' commit messages.

The previous series had four patches.  Patches 3/4 of that series were folded 
into one.  Because Patch 1 of the original series had already been accepted it 
was not included in the new series.  Hence patch 3/4 in the previous series 
became patch 2 in the new series.

Yes, I apparently misspelled Marton’s name and then cut/pasted it to the second 
patch.  Will resubmit.

Devin


[FFmpeg-devel] [PATCHv3 0/2] avdevice/decklink: 10-bit video out and sources/sinks support

2017-10-06 Thread Devin Heitmueller
Hello all,

Below please find several patches which add support for 10-bit video
on output and the "-sources" and "-sinks" arguments when specified by ffmpeg.c.

This patch series incorporates feedback provided from Marton Balint.
Note that patch 4 in the original patch series was consolidated into
patch 3 per Marton's suggestion.

Patch series V3 corrects a misspelling in Marton's name in the commit.

If you have any question/concerns, please don't hesitate to reply.

Thanks,

Devin Heitmueller

Devin Heitmueller (2):
  libavdevice/decklink: add support for -sources and -sinks arguments
  libavdevice/decklink: add support for 10-bit output for Decklink SDI

 libavdevice/decklink_common.cpp |  89 +++--
 libavdevice/decklink_common.h   |   3 +-
 libavdevice/decklink_dec.cpp|   8 ++-
 libavdevice/decklink_dec.h  |   1 +
 libavdevice/decklink_dec_c.c|   1 +
 libavdevice/decklink_enc.cpp| 122 ++--
 libavdevice/decklink_enc.h  |   1 +
 libavdevice/decklink_enc_c.c|   1 +
 8 files changed, 187 insertions(+), 39 deletions(-)

-- 
2.13.2



[FFmpeg-devel] [PATCHv3 2/2] libavdevice/decklink: add support for 10-bit output for Decklink SDI

2017-10-06 Thread Devin Heitmueller
Can be tested via the following command:

./ffmpeg -i foo.ts -f decklink -vcodec v210 'DeckLink Duo (1)'

Note that the 8-bit support works as it did before, and setting
the pix_fmt isn't required for 10-bit mode.  The code defaults to
operating in 8-bit mode when no vcodec is specified, for backward
compatibility.

Updated to reflect feedback from Marton Balint 

Signed-off-by: Devin Heitmueller 
---
 libavdevice/decklink_enc.cpp | 112 ---
 1 file changed, 83 insertions(+), 29 deletions(-)

diff --git a/libavdevice/decklink_enc.cpp b/libavdevice/decklink_enc.cpp
index 0776741812..81df563b3b 100644
--- a/libavdevice/decklink_enc.cpp
+++ b/libavdevice/decklink_enc.cpp
@@ -44,20 +44,45 @@ extern "C" {
 class decklink_frame : public IDeckLinkVideoFrame
 {
 public:
-decklink_frame(struct decklink_ctx *ctx, AVFrame *avframe) :
-   _ctx(ctx), _avframe(avframe),  _refs(1) { }
-
-virtual long   STDMETHODCALLTYPE GetWidth  (void)  { 
return _avframe->width; }
-virtual long   STDMETHODCALLTYPE GetHeight (void)  { 
return _avframe->height; }
-virtual long   STDMETHODCALLTYPE GetRowBytes   (void)  { 
return _avframe->linesize[0] < 0 ? -_avframe->linesize[0] : 
_avframe->linesize[0]; }
-virtual BMDPixelFormat STDMETHODCALLTYPE GetPixelFormat(void)  { 
return bmdFormat8BitYUV; }
-virtual BMDFrameFlags  STDMETHODCALLTYPE GetFlags  (void)  { 
return _avframe->linesize[0] < 0 ? bmdFrameFlagFlipVertical : 
bmdFrameFlagDefault; }
-virtual HRESULTSTDMETHODCALLTYPE GetBytes  (void **buffer)
+decklink_frame(struct decklink_ctx *ctx, AVFrame *avframe, AVCodecID 
codec_id, int height, int width) :
+_ctx(ctx), _avframe(avframe), _avpacket(NULL), _codec_id(codec_id), 
_height(height), _width(width),  _refs(1) { }
+decklink_frame(struct decklink_ctx *ctx, AVPacket *avpacket, AVCodecID 
codec_id, int height, int width) :
+_ctx(ctx), _avframe(NULL), _avpacket(avpacket), _codec_id(codec_id), 
_height(height), _width(width), _refs(1) { }
+
+virtual long   STDMETHODCALLTYPE GetWidth  (void)  { 
return _width; }
+virtual long   STDMETHODCALLTYPE GetHeight (void)  { 
return _height; }
+virtual long   STDMETHODCALLTYPE GetRowBytes   (void)
+{
+  if (_codec_id == AV_CODEC_ID_WRAPPED_AVFRAME)
+  return _avframe->linesize[0] < 0 ? -_avframe->linesize[0] : 
_avframe->linesize[0];
+  else
+  return ((GetWidth() + 47) / 48) * 128;
+}
+virtual BMDPixelFormat STDMETHODCALLTYPE GetPixelFormat(void)
 {
-if (_avframe->linesize[0] < 0)
-*buffer = (void *)(_avframe->data[0] + _avframe->linesize[0] * 
(_avframe->height - 1));
+if (_codec_id == AV_CODEC_ID_WRAPPED_AVFRAME)
+return bmdFormat8BitYUV;
 else
-*buffer = (void *)(_avframe->data[0]);
+return bmdFormat10BitYUV;
+}
+virtual BMDFrameFlags  STDMETHODCALLTYPE GetFlags  (void)
+{
+   if (_codec_id == AV_CODEC_ID_WRAPPED_AVFRAME)
+   return _avframe->linesize[0] < 0 ? bmdFrameFlagFlipVertical : 
bmdFrameFlagDefault;
+   else
+   return bmdFrameFlagDefault;
+}
+
+virtual HRESULTSTDMETHODCALLTYPE GetBytes  (void **buffer)
+{
+if (_codec_id == AV_CODEC_ID_WRAPPED_AVFRAME) {
+if (_avframe->linesize[0] < 0)
+*buffer = (void *)(_avframe->data[0] + _avframe->linesize[0] * 
(_avframe->height - 1));
+else
+*buffer = (void *)(_avframe->data[0]);
+} else {
+*buffer = (void *)(_avpacket->data);
+}
 return S_OK;
 }
 
@@ -71,6 +96,7 @@ public:
 int ret = --_refs;
 if (!ret) {
 av_frame_free(&_avframe);
+av_packet_free(&_avpacket);
 delete this;
 }
 return ret;
@@ -78,6 +104,10 @@ public:
 
 struct decklink_ctx *_ctx;
 AVFrame *_avframe;
+AVPacket *_avpacket;
+AVCodecID _codec_id;
+int _height;
+int _width;
 
 private:
 std::atomic  _refs;
@@ -90,9 +120,11 @@ public:
 {
 decklink_frame *frame = static_cast(_frame);
 struct decklink_ctx *ctx = frame->_ctx;
-AVFrame *avframe = frame->_avframe;
 
-av_frame_unref(avframe);
+if (frame->_avframe)
+av_frame_unref(frame->_avframe);
+if (frame->_avpacket)
+av_packet_unref(frame->_avpacket);
 
 pthread_mutex_lock(&ctx->mutex);
 ctx->frames_buffer_available_spots++;
@@ -118,11 +150,18 @@ static int decklink_setup_video(AVFormatContext *avctx, 
AVStream *st)
 return -1;
 }
 
-if (c->format != AV_PIX_FMT_UY

[FFmpeg-devel] [PATCHv3 1/2] libavdevice/decklink: add support for -sources and -sinks arguments

2017-10-06 Thread Devin Heitmueller
Add support for enumerating the sources/sinks via the ffmpeg
command line options, as opposed to having to create a real pipeline
and use the "-list_devices" option which does exit() after dumping
out the options.

Note that this patch preserves the existing "-list_devices" option,
but now shares common code for the actual enumeration.

Updated to reflect feedback from Marton Balint .

Signed-off-by: Devin Heitmueller 
---
 libavdevice/decklink_common.cpp | 89 ++---
 libavdevice/decklink_common.h   |  3 +-
 libavdevice/decklink_dec.cpp|  8 +++-
 libavdevice/decklink_dec.h  |  1 +
 libavdevice/decklink_dec_c.c|  1 +
 libavdevice/decklink_enc.cpp| 10 -
 libavdevice/decklink_enc.h  |  1 +
 libavdevice/decklink_enc_c.c|  1 +
 8 files changed, 104 insertions(+), 10 deletions(-)

diff --git a/libavdevice/decklink_common.cpp b/libavdevice/decklink_common.cpp
index c782171f2c..61d8ad86a9 100644
--- a/libavdevice/decklink_common.cpp
+++ b/libavdevice/decklink_common.cpp
@@ -37,6 +37,7 @@ extern "C" {
 #include "libavutil/imgutils.h"
 #include "libavutil/intreadwrite.h"
 #include "libavutil/bswap.h"
+#include "avdevice.h"
 }
 
 #include "decklink_common.h"
@@ -261,24 +262,100 @@ int ff_decklink_set_format(AVFormatContext *avctx, 
decklink_direction_t directio
 return ff_decklink_set_format(avctx, 0, 0, 0, 0, AV_FIELD_UNKNOWN, 
direction, num);
 }
 
-int ff_decklink_list_devices(AVFormatContext *avctx)
+int ff_decklink_list_devices(AVFormatContext *avctx,
+struct AVDeviceInfoList *device_list,
+int show_inputs, int show_outputs)
 {
 IDeckLink *dl = NULL;
 IDeckLinkIterator *iter = CreateDeckLinkIteratorInstance();
+int ret = 0;
+
 if (!iter) {
 av_log(avctx, AV_LOG_ERROR, "Could not create DeckLink iterator\n");
 return AVERROR(EIO);
 }
-av_log(avctx, AV_LOG_INFO, "Blackmagic DeckLink devices:\n");
-while (iter->Next(&dl) == S_OK) {
+
+while (ret == 0 && iter->Next(&dl) == S_OK) {
+IDeckLinkOutput *output_config;
+IDeckLinkInput *input_config;
 const char *displayName;
+AVDeviceInfo *new_device = NULL;
+int add = 0;
+
 ff_decklink_get_display_name(dl, &displayName);
-av_log(avctx, AV_LOG_INFO, "\t'%s'\n", displayName);
-av_free((void *) displayName);
+
+if (show_outputs) {
+if (dl->QueryInterface(IID_IDeckLinkOutput, (void 
**)&output_config) == S_OK) {
+output_config->Release();
+add = 1;
+}
+}
+
+if (show_inputs) {
+if (dl->QueryInterface(IID_IDeckLinkInput, (void **)&input_config) 
== S_OK) {
+input_config->Release();
+add = 1;
+}
+}
+
+if (add == 1) {
+new_device = (AVDeviceInfo *) av_mallocz(sizeof(AVDeviceInfo));
+if (!new_device) {
+ret = AVERROR(ENOMEM);
+goto next;
+}
+new_device->device_name = av_strdup(displayName);
+if (!new_device->device_name) {
+ret = AVERROR(ENOMEM);
+goto next;
+}
+
+new_device->device_description = av_strdup(displayName);
+if (!new_device->device_description) {
+av_freep(&new_device->device_name);
+ret = AVERROR(ENOMEM);
+goto next;
+}
+
+if ((ret = av_dynarray_add_nofree(&device_list->devices,
+  &device_list->nb_devices, 
new_device)) < 0) {
+av_freep(&new_device->device_name);
+av_freep(&new_device->device_description);
+av_freep(&new_device);
+goto next;
+}
+}
+
+next:
+av_freep(&displayName);
 dl->Release();
 }
 iter->Release();
-return 0;
+return ret;
+}
+
+/* This is a wrapper around the ff_decklink_list_devices() which dumps the
+   output to av_log() and exits (for backward compatibility with the
+   "-list_devices" argument). */
+void ff_decklink_list_devices_legacy(AVFormatContext *avctx,
+ int show_inputs, int show_outputs)
+{
+struct AVDeviceInfoList *device_list = NULL;
+int ret;
+
+device_list = (struct AVDeviceInfoList *) 
av_mallocz(sizeof(AVDeviceInfoList));
+if (!device_list)
+return;
+
+ret = ff_decklink_list_devices(avctx, device_list, show_inputs, 
show_outputs);
+if (ret == 0) {
+av_log(avctx, AV_LOG_INFO, "Blackmagic DeckLink %s devices:\n",
+   show_inputs 

[FFmpeg-devel] [PATCH RFC] libavdevice/decklink: Add support for EIA-708 output over SDI

2017-10-06 Thread Devin Heitmueller
From: Devin Heitmueller 

Hook in libklvanc and use it for output of EIA-708 captions over
SDI.  The bulk of this patch is just general support for ancillary
data for the Decklink SDI module - the real work for construction
of the EIA-708 CDP and VANC line construction is done by libklvanc.

Libklvanc can be found at: https://github.com/stoth68000/libklvanc

Signed-off-by: Devin Heitmueller 
---
 configure |   3 ++
 libavcodec/v210enc.c  |   8 +++
 libavdevice/decklink_common.h |   1 +
 libavdevice/decklink_enc.cpp  | 113 +++---
 4 files changed, 119 insertions(+), 6 deletions(-)

diff --git a/configure b/configure
index 391c141e7a..18647896b1 100755
--- a/configure
+++ b/configure
@@ -238,6 +238,7 @@ External library support:
   --enable-libgsm  enable GSM de/encoding via libgsm [no]
   --enable-libiec61883 enable iec61883 via libiec61883 [no]
   --enable-libilbc enable iLBC de/encoding via libilbc [no]
+  --enable-libklvanc   enable Kernel Labs VANC processing [no]
   --enable-libkvazaar  enable HEVC encoding via libkvazaar [no]
   --enable-libmodplug  enable ModPlug via libmodplug [no]
   --enable-libmp3lame  enable MP3 encoding via libmp3lame [no]
@@ -1603,6 +1604,7 @@ EXTERNAL_LIBRARY_LIST="
 libgsm
 libiec61883
 libilbc
+libklvanc
 libkvazaar
 libmodplug
 libmp3lame
@@ -6027,6 +6029,7 @@ enabled libx264   && { use_pkg_config libx264 
x264 "stdint.h x264.h" x26
 enabled libx265   && require_pkg_config libx265 x265 x265.h 
x265_api_get &&
  require_cpp_condition x265.h "X265_BUILD >= 68"
 enabled libxavs   && require libxavs "stdint.h xavs.h" 
xavs_encoder_encode -lxavs
+enabled libklvanc && require libklvanc libklvanc/vanc.h 
vanc_context_create -lklvanc
 enabled libxvid   && require libxvid xvid.h xvid_global -lxvidcore
 enabled libzimg   && require_pkg_config libzimg "zimg >= 2.3.0" zimg.h 
zimg_get_api_version
 enabled libzmq&& require_pkg_config libzmq libzmq zmq.h zmq_ctx_new
diff --git a/libavcodec/v210enc.c b/libavcodec/v210enc.c
index a6afbbfc41..44cc3c5c81 100644
--- a/libavcodec/v210enc.c
+++ b/libavcodec/v210enc.c
@@ -123,6 +123,7 @@ static int encode_frame(AVCodecContext *avctx, AVPacket 
*pkt,
 int aligned_width = ((avctx->width + 47) / 48) * 48;
 int stride = aligned_width * 8 / 3;
 int line_padding = stride - ((avctx->width * 8 + 11) / 12) * 4;
+AVFrameSideData *side_data = NULL;
 int h, w, ret;
 uint8_t *dst;
 
@@ -233,6 +234,13 @@ static int encode_frame(AVCodecContext *avctx, AVPacket 
*pkt,
 }
 }
 
+side_data = av_frame_get_side_data(pic, AV_FRAME_DATA_A53_CC);
+if (side_data && side_data->size) {
+uint8_t* buf = av_packet_new_side_data(pkt, AV_PKT_DATA_A53_CC, 
side_data->size);
+if (buf)
+memcpy(buf, side_data->data, side_data->size);
+}
+
 pkt->flags |= AV_PKT_FLAG_KEY;
 *got_packet = 1;
 return 0;
diff --git a/libavdevice/decklink_common.h b/libavdevice/decklink_common.h
index 6b2525fb53..285a244000 100644
--- a/libavdevice/decklink_common.h
+++ b/libavdevice/decklink_common.h
@@ -78,6 +78,7 @@ struct decklink_ctx {
 AVStream *audio_st;
 AVStream *video_st;
 AVStream *teletext_st;
+uint16_t cdp_sequence_num;
 
 /* Options */
 int list_devices;
diff --git a/libavdevice/decklink_enc.cpp b/libavdevice/decklink_enc.cpp
index 81df563b3b..3049e936a9 100644
--- a/libavdevice/decklink_enc.cpp
+++ b/libavdevice/decklink_enc.cpp
@@ -38,16 +38,20 @@ extern "C" {
 
 #include "decklink_common.h"
 #include "decklink_enc.h"
-
+#if CONFIG_LIBKLVANC
+#include "libklvanc/vanc.h"
+#include "libklvanc/vanc-lines.h"
+#include "libklvanc/pixels.h"
+#endif
 
 /* DeckLink callback class declaration */
 class decklink_frame : public IDeckLinkVideoFrame
 {
 public:
 decklink_frame(struct decklink_ctx *ctx, AVFrame *avframe, AVCodecID 
codec_id, int height, int width) :
-_ctx(ctx), _avframe(avframe), _avpacket(NULL), _codec_id(codec_id), 
_height(height), _width(width),  _refs(1) { }
+_ctx(ctx), _avframe(avframe), _avpacket(NULL), _codec_id(codec_id), 
_ancillary(NULL), _height(height), _width(width),  _refs(1) { }
 decklink_frame(struct decklink_ctx *ctx, AVPacket *avpacket, AVCodecID 
codec_id, int height, int width) :
-_ctx(ctx), _avframe(NULL), _avpacket(avpacket), _codec_id(codec_id), 
_height(height), _width(width), _refs(1) { }
+_ctx(ctx), _avframe(NULL), _avpacket(avpacket), _codec_id(codec_id), 
_ancillary(NULL), _height(height), _width(width), _refs(1) { }
 
 virtual long   STDMETHODCALLTYPE GetWidth  (void) 

Re: [FFmpeg-devel] [PATCH RFC] libavdevice/decklink: Add support for EIA-708 output over SDI

2017-10-06 Thread Devin Heitmueller
Hello Carl,

> On Oct 6, 2017, at 5:07 PM, Carl Eugen Hoyos  wrote:
> 
> 2017-10-06 18:56 GMT+02:00 Devin Heitmueller :
>> From: Devin Heitmueller 
>> 
>> Hook in libklvanc and use it for output of EIA-708 captions over
>> SDI.  The bulk of this patch is just general support for ancillary
>> data for the Decklink SDI module - the real work for construction
>> of the EIA-708 CDP and VANC line construction is done by libklvanc.
> 
> Nothing except the decklink device could use VANC?

You could absolutely have other SDI device types which can contain VANC.  This 
was a key reason that we put all the business logic for VANC processing in a 
separate library.

The goal behind developing libklvanc was to separate out VANC processing from 
device and application specific business logic.  This allows us to have a 
single parser and implementation of popular VANC protocols in a single library 
that can be reused by VLC, OBE, and ffmpeg.  It also allows the VANC processing 
to be shared across different device types, although admittedly there are not 
many vendors other than BlackMagic that are very popular.

The point of the remark in the commit message though was to observe that most 
of the code in the patch implements the decklink specific glue for accessing 
VANC lines.  The libklvanc library provides all the functions for 
parsing/generation of the VANC packets.

Devin

___
ffmpeg-devel mailing list
ffmpeg-devel@ffmpeg.org
http://ffmpeg.org/mailman/listinfo/ffmpeg-devel


Re: [FFmpeg-devel] [PATCH RFC] libavdevice/decklink: Add support for EIA-708 output over SDI

2017-10-06 Thread Devin Heitmueller

> 
> Sorry, what I meant was:
> Nothing inside FFmpeg except the decklink device could use
> VANC?

Ah, I understand now.

Yes, the decklink device is currently the only SDI device which is supported by 
libavdevice.  I’ve got a whole pile of patches coming which add support for a 
variety of protocols for both capture and output (e.g. EIA-708, SCTE-104, AFD, 
SMPTE 2038, etc).  But today the decklink module is the only device supported.

Would love to see more SDI devices supported and potentially interested in 
adding such support myself if we can find good candidates.  The DVEO/linsys 
cards are largely obsolete and the AJA boards are significantly more expensive 
than any of BlackMagic’s cards.  If anyone has good experiences with other 
vendors I would like to hear about it (and there may be an opportunity to 
extend libavdevice to support another SDI vendor).

Devin
___
ffmpeg-devel mailing list
ffmpeg-devel@ffmpeg.org
http://ffmpeg.org/mailman/listinfo/ffmpeg-devel


Re: [FFmpeg-devel] [PATCH] avdevice/decklink_dec: Added SCTE104 message decode from VANC

2017-10-13 Thread Devin Heitmueller
Hello Vishwanath,

> On Oct 13, 2017, at 1:42 AM, Dixit, Vishwanath  wrote:
> 
> Hi,
> 
> Please find the attached patch which adds support to decode SCTE-104 message 
> from VANC for decklink capture use case.
> 

I’ve got a couple of concerns about this approach.

Because I am doing a bunch of VANC related work, I looked at various VANC types 
and assessed their suitability for the use of AV_FRAME_SIDE_DATA versus 
generating new streams.  While side data is well suited for content that will 
ultimately be fed into the encoder (e.g. EIA-708 captions, AFD, etc), there are 
significant limitations for using side data for content types that may need to 
result in new stream creation.  Examples of this include SCTE-104 (which in 
many use cases will result in creation of an SCTE-35 stream), SMPTE 2038 (which 
again creates a corresponding elementary stream), and Teletext (which you see 
in the implementation today), etc.

Further, there are significant limitations in the ability to access side data 
from within a pipeline.  If the data is tied to the video frame as side data, the
only way to access it outside of the target encoder codec is through a video filter.
Video filters cannot create new streams, and hence creating pipelines which make use 
of SCTE-104 packets can be very difficult.

What you’ve submitted seems like half an actual use case - you’re ingesting the 
SCTE-104 packet and inserting it as frame side data, but what are you 
actually planning to do with the data?  Do your use cases involve 
transformation to SCTE-35?  Do you care about the decklink input exclusively, 
or do you need the decklink output working as well?

I’ve got patches queueing which implement SCTE-104 and SCTE-35 for the 
following use cases:

- Ingest SCTE-104 on the decklink input, and create an SCTE-35 stream in the TS 
(including proper timing)
- Decode a TS containing SCTE-35 and output the resulting data on the decklink 
output as SCTE-104
- Be able to ingest SCTE-104 from a network port (either TCP or UDP) and output 
the data into the decklink output (i.e. a SCTE-104 inserter)
- Be able to ingest SCTE-104 from a network port and insert the data into a TS 
as SCTE-35

The above cases require the flexibility to access the data from within the 
general pipeline as a stream, as opposed to the data being tied to the video 
frame as side-data.

It may be beneficial for the two of us to talk about use cases so we can avoid 
duplicated effort.  I already have patches which incorporate our mature VANC 
processing library for both the decklink input and output interfaces, and I’ve 
got trees which already implement EIA-708, AFD, SCTE-104, SCTE-35, and SMPTE 
2038.  I’ve been trickling the patches on the ML and the speed at which I’ve 
been providing them has been driven by how quickly they are reviewed and 
merged, not whether the work is already completed.

For your reference it may be worthwhile for you to look at our “libklvanc” 
library, which we are actively deploying in OBE and VLC and which already 
implements much of the stuff you’re in the process of writing from scratch.  My 
plan is to work on adding the glue in ffmpeg to leverage this library as well, 
given it’s code which is already in active use elsewhere and that work can be 
shared across multiple open source projects.

https://github.com/stoth68000/libklvanc 

I’m tied up working on multi-channel audio right now but I think there is 
definitely value in us syncing up to avoid duplicated effort and conflicting 
patches.  Let’s see where we can divide/conquer rather than re-implementing 
work the other may have already done.

Cheers,

Devin Heitmueller
LTN Global Communications


___
ffmpeg-devel mailing list
ffmpeg-devel@ffmpeg.org
http://ffmpeg.org/mailman/listinfo/ffmpeg-devel


Re: [FFmpeg-devel] [PATCH] avdevice/decklink_dec: Added SCTE104 message decode from VANC

2017-10-15 Thread Devin Heitmueller

> On Oct 15, 2017, at 6:36 PM, Moritz Barsnick  wrote:
> 
> On Fri, Oct 13, 2017 at 09:10:30 -0400, Devin Heitmueller wrote:
>> Video filters cannot create new streams
> 
> No? I thought they could.
> 

I couldn’t find a single filter which calls avformat_new_stream(), and the docs 
for avformat_new_stream() indicate that it can only be called by input format 
handlers during read_header(), or it can also be called during read_packet() if 
AVFMTCTX_NOHEADER is set by the demux.
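
To illustrate what I mean, here is a rough sketch of the demuxer-side pattern the
documentation describes (the names are made up and this is not code from any
existing demuxer, just the shape of it):

    #include "libavformat/avformat.h"

    /* Hypothetical demuxer read_header() creating a data stream, e.g. SCTE-35.
     * This is the only context where avformat_new_stream() is documented to be
     * safe to call, which is the limitation I'm describing. */
    static int xyz_read_header(AVFormatContext *s)
    {
        AVStream *st = avformat_new_stream(s, NULL);
        if (!st)
            return AVERROR(ENOMEM);
        st->codecpar->codec_type = AVMEDIA_TYPE_DATA;
        st->codecpar->codec_id   = AV_CODEC_ID_SCTE_35;
        return 0;
    }

There is no equivalent hook on the libavfilter side.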

Don’t confuse what I’m asking about with the ability to have a filter which has 
one input and multiple outputs.  These are targeted at raw audio and video 
frames, as opposed to streams.

I would be quite happy to be proven wrong, of course.  Just point me to an 
example and I’m happy to copy/reuse whatever approach others have taken.

Devin

___
ffmpeg-devel mailing list
ffmpeg-devel@ffmpeg.org
http://ffmpeg.org/mailman/listinfo/ffmpeg-devel


Re: [FFmpeg-devel] decklink 24/32 bit question

2017-10-17 Thread Devin Heitmueller
> 
> The decklink sdk only defines two BMDAudioSampleType values: 
> bmdAudioSampleType16bitInteger and bmdAudioSampleType32bitInteger. I don't 
> think there's an easy way to support a 24 bit input here. Generally in this 
> case I've used bmdAudioSampleType32bitInteger and then encode it at pcm_s24le.
> Dave Rice

For what it’s worth, I’ve got deinterleaving code in the works to handle 
capture of multiple pairs of audio (i.e. break 16 channels into 8 pairs and 
announce them as separate S16LE streams).  If we really thought 24-bit was 
desirable, that code could be adjusted accordingly (the hardware would still 
capture 32-bit, but the deinterleaver would put out S24LE).
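
To give a sense of what that looks like, something along these lines would do it
(purely illustrative, not the actual code in my tree):

    #include <stdint.h>

    /* Split a 16-channel interleaved 32-bit capture buffer into 8 stereo S16LE
     * pairs by keeping the top 16 bits of each sample.  Producing S24LE instead
     * would simply mean keeping 24 bits here. */
    static void deinterleave_pairs(const int32_t *src, int16_t *dst[8],
                                   int nb_samples)
    {
        for (int i = 0; i < nb_samples; i++) {
            for (int pair = 0; pair < 8; pair++) {
                dst[pair][i * 2]     = src[i * 16 + pair * 2]     >> 16;
                dst[pair][i * 2 + 1] = src[i * 16 + pair * 2 + 1] >> 16;
            }
        }
    }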

Devin
___
ffmpeg-devel mailing list
ffmpeg-devel@ffmpeg.org
http://ffmpeg.org/mailman/listinfo/ffmpeg-devel


Re: [FFmpeg-devel] decklink 24/32 bit question

2017-10-18 Thread Devin Heitmueller
Hello Marton,

> On Oct 18, 2017, at 3:02 PM, Marton Balint  wrote:
> 
> 
> 
> On Tue, 17 Oct 2017, Devin Heitmueller wrote:
> 
>>> > The decklink sdk only defines two BMDAudioSampleType values: 
>>> > bmdAudioSampleType16bitInteger and bmdAudioSampleType32bitInteger. I 
>>> > don't think there's an easy way to support a 24 bit input here. Generally 
>>> > in this case I've used bmdAudioSampleType32bitInteger and then encode it 
>>> > at pcm_s24le.
>>> Dave Rice
>> 
>> For what it’s worth, I’ve got deinterleaving code in the works to handle 
>> capture of multiple pairs of audio (i.e. break 16 channels into 8 pairs and 
>> announce them as separate S16LE streams).  If we really thought 24-bit was 
>> desirable, that code could be adjusted accordingly (the hardware would still 
>> capture 32-bit, but the deinterleaver would put out S24LE).
> 
> Breaking 8/16 channels to stereo streams can be done by an audio filter (by 
> using "asplit" to multiply the source to 4 outputs and then "pan" or 
> "channelmap" on each output to select the proper source channels), so I don't 
> think direct support of splitting channels in the decklink device is 
> acceptable.

So using an audio filter sounds like a great idea and was my initial instinct.  
However, when I dug into the ffmpeg API interfaces I discovered that audio 
filters cannot output anything but audio samples.  This prevents me from doing 
detection of compressed audio over SDI (i.e. S302M) during the probing phase 
(i.e. for Dolby-E or AC-3), since the output would be something other than 
uncompressed audio samples.

It also means you cannot do a simple use case for having the decklink demux 
announce 8 streams which can be easily fed to eight different encoders through 
the standard map facility.  You would have to probe the input, and use 
filter_complex to insert an audio filter which deinterleaves the audio, and 
then manually instantiate a series of audio encoders mapped to each output.

The approach I’ve proposed will “just work” in the ffmpeg command line use 
case.  Run the command, say “-map 0”, and eight streams will be fed to eight 
encoders (or pass through with the copy codec for compressed audio) and 
inserted into the transport stream.
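
To make the comparison concrete, the filter-based approach would look roughly like
this for just the first two pairs (the device name and encoder choice are made up
for illustration):

    ./ffmpeg -f decklink -i 'DeckLink Duo (1)' -filter_complex \
        '[0:a]asplit=2[x0][x1];[x0]pan=stereo|c0=c0|c1=c1[a0];[x1]pan=stereo|c0=c2|c1=c3[a1]' \
        -map '[a0]' -map '[a1]' -c:a aac out.ts

whereas with the demux announcing the streams it collapses to:

    ./ffmpeg -f decklink -i 'DeckLink Duo (1)' -map 0 -c:a aac out.ts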

I’m not against refactoring the existing S302 codec to create a demux to detect 
audio and create the streams with either raw or compressed codec type based on 
detection.  In that case the decklink demux would spawn an S302 sub-demux for 
each of the streams.  However in any case it would still require the audio to 
be de-interleaved in order to detect the S302 packets.
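
As a rough illustration of why (a sketch, not the actual detection code): once you
have a deinterleaved S16 pair you can look for the SMPTE 337M data-burst preamble
to tell that the "PCM" is really carrying Dolby-E or AC-3:

    #include <stdint.h>

    /* Scan an interleaved stereo S16 pair for the SMPTE 337M Pa/Pb sync words
     * (16-bit mode).  Real code also has to handle 20/24-bit modes, alignment,
     * and the Pc/Pd words that identify the payload type and length. */
    static int looks_like_data_burst(const int16_t *pair, int nb_samples)
    {
        for (int i = 0; i + 1 < nb_samples * 2; i++) {
            if ((uint16_t)pair[i] == 0xF872 && (uint16_t)pair[i + 1] == 0x4E1F)
                return 1;
        }
        return 0;
    }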

Now I welcome someone to better design the filter interface to allow filters 
which can take in raw audio and output compressed data.  Likewise I welcome 
someone to introduce improvements which allow filters to create new streams.  
However in the absence of either of these rather large redesigns of ffmpeg’s 
internals, the approach I’m proposing is the only thing I could come up with 
which allows for these decisions to be made at the probing phase.

I’m also not against the notion of invoking a filter from inside the demux  if 
such a deinterleaving filter exists (and then have the decklink demux create 
the streams based on passing the data through the filter).  That would allow 
for some code reuse but is still a hack to overcome the limitations of ffmpeg’s 
probing framework.

I’m also not against the notion of creating a demux which is fed all eight 
audio streams as one blob and having that demux create the streams (and then 
invoking that demux from inside the decklink demux).  While that might lend 
itself to a bit of code sharing if we had other SDI input cards we wanted to 
support (assuming they provide audio in the same basic format), it still 
wouldn’t be an audio filter and I’m not confident the extra abstraction is 
worth the effort.

Devin
___
ffmpeg-devel mailing list
ffmpeg-devel@ffmpeg.org
http://ffmpeg.org/mailman/listinfo/ffmpeg-devel


Re: [FFmpeg-devel] [PATCH RFC] libavdevice/decklink: Add support for EIA-708 output over SDI

2017-10-18 Thread Devin Heitmueller

> On Oct 6, 2017, at 12:56 PM, Devin Heitmueller  
> wrote:
> 
> From: Devin Heitmueller 
> 
> Hook in libklvanc and use it for output of EIA-708 captions over
> SDI.  The bulk of this patch is just general support for ancillary
> data for the Decklink SDI module - the real work for construction
> of the EIA-708 CDP and VANC line construction is done by libklvanc.
> 
> Libklvanc can be found at: https://github.com/stoth68000/libklvanc
> 
> Signed-off-by: Devin Heitmueller 

Any additional technical comments related to this patch, or should I resubmit 
for merge?

Devin
___
ffmpeg-devel mailing list
ffmpeg-devel@ffmpeg.org
http://ffmpeg.org/mailman/listinfo/ffmpeg-devel


Re: [FFmpeg-devel] decklink 24/32 bit question

2017-10-18 Thread Devin Heitmueller
Hi Doug,

> On Oct 18, 2017, at 4:15 PM, Douglas Marsh  wrote:
> 
> I am not really sure I follow. I am not sure supporting 24-bit is a big 
> issue. A sample size of 32-bit should work fine for most folks. I can only 
> think of people (in the output stream) converting to 24-bits (via truncate or 
> dither*) to save disk space or pre-processing for some other step 
> [compression] (but video is really the bit-hog). I only mentioned 24-bits 
> because the ADC/DACs are mentioned at supporting PCM 24-bits natively meaning 
> the PCI card is (assuming) padding the LSB (hence truncate is more logical 
> for any conversion 32->24). AS for what comes in digitally over SDI or HDMI 
> is too assumed to only support PCM 24-bits (but it is subject to standards 
> that can update).
> 
> Making the workflow (stream capturing) of 32-bits is simpler, and moving any 
> bit-depth conversion to the output stream side. Only concerns would be CPU 
> processing (of which truncating bits is very fast and logical due to the 
> assumed padding).

I think you and I are on the same page.  It wasn't clear to me what would 
prompt someone to say they want 24-bit audio as opposed to 32 (which is way 
easier to work with because of alignment).  That said, if you had such a use 
case I think this could be done.

In any case, I'm all about not adding functionality nobody cares about.

Thanks for adding this functionality, as I need it to reliably do compressed 
audio detection (which is next on my list after support for multi-channel audio 
is working).

Devin

___
ffmpeg-devel mailing list
ffmpeg-devel@ffmpeg.org
http://ffmpeg.org/mailman/listinfo/ffmpeg-devel


Re: [FFmpeg-devel] [PATCH RFC] libavdevice/decklink: Add support for EIA-708 output over SDI

2017-10-18 Thread Devin Heitmueller
Hi Deron,

> I was going to actually test this with some old broadcast equipment I have 
> just dying for a purpose, but I don't see how to generate AV_PKT_DATA_A53_CC 
> side packet data except using the Decklink capture. I have A53 documentation, 
> but it just refers to CEA-708 (or SMPTE 334, or ... what an unraveling ball 
> of yarn it is. Looks like I could spend a months income on standards just 
> trying to learn how this is encoded).

Yeah.  You could certainly spend a good bit of cash if you had to buy the 
individual specs.  Worth noting that the ATSC specs are freely available though 
on their website, and the CEA-708 is largely described in the FCC specification 
(not a substitute for the real thing, but good enough for the casual reader).  
SMPTE has a “digital library” where you can get access to *all* their specs 
with a subscription of around $600/year.  It’s not ideal for a 
non-professional, but for people who *need* the specs it’s way cheaper than 
buying them piecemeal for $120/spec.

> 
> On a side note, can AV_PKT_DATA_A53_CC be used for something besides CEA-708? 
> Not sure I understand the line between A53 CC encoding (which is at least in 
> part what this generates, right?) and CEA-708 (which is what this takes, 
> right?) and why this side data is called A53_CC?
> 
> I know these questions are outside the scope that you were asking…
> 
No problem.  I should really write a primer on this stuff since there are a 
whole bunch of specs which are inter-related.  Briefly….

CEA-708 is what non-technical people typically consider to be “digital closed 
captions”.  They represent the standard that replaces old fashioned NTSC closed 
captions, which were described in EIA/CEA-608.  The spec describes what could 
be characterized as a protocol stack of functionality, including transport 
through presentation layers (i.e. how the captions are constructed, rules for 
how to render them on-screen, etc).  

CEA-708 also includes a construct for tunneling old CEA-608 packets.  In fact, 
most CEA-708 streams are really just up-converted from CEA-608, since the FCC 
requires both to be supported and 608 is a subset in functionality of 708.  On 
the other hand, you can’t typically down convert 708 to 608 since there are a 
bunch of formatting codes in 708 which have no corresponding capability in 608. 
 If you’re using VLC or most other applications, they will claim to render 708 
captions, but they’re really just rendering the 608 captions contained in 708.

One component of the CEA-708 spec describes a “CDP”, which is “Caption 
Distribution Packet”.  This is a low-level packet format which includes not 
just multiple caption streams but also timecodes and service data (e.g. caption 
languages, etc).  CDP packets can be sent over a number of different physical 
transports, including old-fashioned serial ports.

SMPTE 334M describes how to transport CEA-708 CDP packets over an SDI link in 
the VANC area of the frame.

A53 refers to the ATSC A/53 specification, which basically refers to how 
digital TV is transmitted over-the-air.  One part of that spec includes how to 
embed CEA-708 captions into an MPEG2 transport stream.  The A/53 spec basically 
says how to embed the CEA-708 caption bytes into an MPEG-2 stream, and then 
refers you to CEA-708 for the details of what to do with those bytes.

Both the CEA-708 CDP format and A/53 come down to a series of three byte 
packets which contain the actual captioning data.  This corresponds to what is 
being serialized in AV_PKT_DATA_A53_CC.  In order to encode an SDI feed into an 
MPEG-2 stream, you would need to deconstruct the CDP, extract the captioning 
bytes, and load them into the side data packet.  Once that’s done, the avpacket 
is handed off to an H.264/MPEG-2 video encoder, which knows how to take those 
captioning bytes and embed them into the compressed video (using the MPEG-2 
user_data field if it’s MPEG-2 video, or the SEI field if it’s H.264).
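
If it helps, each group of "captioning bytes" is really a three-byte construct, and
packing one looks roughly like this (an illustrative sketch, not code from any
particular patch):

    #include <stdint.h>

    /* One cc_data triplet as carried in AV_PKT_DATA_A53_CC / AV_FRAME_DATA_A53_CC
     * side data: five marker bits, cc_valid, cc_type (0/1 = CEA-608 field 1/2,
     * 2/3 = CEA-708 DTVCC data/packet start), then the two payload bytes.  The
     * side data is simply cc_count of these triplets back to back. */
    static void pack_cc_triplet(uint8_t dst[3], int cc_valid, int cc_type,
                                uint8_t d1, uint8_t d2)
    {
        dst[0] = 0xF8 | ((cc_valid & 1) << 2) | (cc_type & 3);
        dst[1] = d1;
        dst[2] = d2;
    }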

That series of three-byte packets is essentially the “lowest common 
denominator” representing the captioning data (assuming you only care about 
closed captions and not timecodes or service info).  I have use cases where 
this stuff should really be preserved, and am weighing the merits of 
introducing a new side data format for the CDP which preserves all the info, 
and then encoders can extract what they need.  There are plusses/minuses to 
this approach and it’s still under consideration.

I hope that gives you a bit more background.

Cheers,

Devin
___
ffmpeg-devel mailing list
ffmpeg-devel@ffmpeg.org
http://ffmpeg.org/mailman/listinfo/ffmpeg-devel


Re: [FFmpeg-devel] [PATCH RFC] libavdevice/decklink: Add support for EIA-708 output over SDI

2017-10-18 Thread Devin Heitmueller
Hi Dave,

> 
> The President of AJA has publicly stated an intent to add an open license to 
> their SDK, https://twitter.com/ajaprez/status/910100436224499713.

This is certainly good news.  Looking at AJA’s offering is on my TODO list but 
I just haven’t found the time to pick up a card and dig into their SDK.

> I’m glad to hear that handling other VANC data is in the works, I’m 
> particularly interested in VITC and EIA-608 with inputs.

I am certainly interested in supporting VITC, assuming you’re talking about 
SMPTE 12M/RP-188 time codes.  EIA-608 wouldn’t be very hard to do, although 
there isn’t much equipment out there which does CEA-608 but not CEA-708.  Maybe 
some old legacy pre-HD equipment.

It would actually be pretty easy to do EIA-608 and just shoe-horn it in as 
AV_FRAME_DATA_A53_CC side data.  You would just need to compute the correct 
value for cc_count based on the frame rate, create an array of bytes of size 
[cc_count][3], and then fill the first two entries with the EIA-608 byte pairs. 
 Creating a new side-data type would seem like the reasonable approach at first 
glance, but then you have to duplicate all the logic on the insertion side for 
the various encoder codecs.
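
Roughly speaking, the shoe-horn would look something like this (a sketch of the
idea with made-up names, not code from an actual patch):

    #include <string.h>
    #include "libavutil/frame.h"

    /* Wrap a single CEA-608 field-1 byte pair in A53 side data.  cc_count is
     * dictated by the frame rate (roughly 600/fps, e.g. 20 triplets at 29.97 fps,
     * 10 at 59.94 fps); the unused triplets are flagged cc_valid=0. */
    static int attach_608_pair(AVFrame *frame, AVRational frame_rate,
                               uint8_t b1, uint8_t b2)
    {
        int cc_count = (600 * frame_rate.den) / frame_rate.num;
        AVFrameSideData *sd = av_frame_new_side_data(frame, AV_FRAME_DATA_A53_CC,
                                                     cc_count * 3);
        if (!sd)
            return AVERROR(ENOMEM);

        memset(sd->data, 0, cc_count * 3);
        sd->data[0] = 0xFC;          /* marker bits + cc_valid, cc_type 0 (608 field 1) */
        sd->data[1] = b1;
        sd->data[2] = b2;
        for (int i = 1; i < cc_count; i++)
            sd->data[i * 3] = 0xF8;  /* padding triplets: cc_valid = 0 */
        return 0;
    }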

That said, supporting both 608 and 708 creates an unfortunate side effect - you 
have to write logic to decide which takes precedence if both are in an SDI 
frame (or expose configuration options to let the user specify).  Any SDI 
capture routine would have to choose one or the other, since downstream codecs 
general don’t have the capacity to insert both into a transport stream.  My 
inclination would probably be to simply ignore EIA-608 if there are also 708 
VANC packets present, but I can imagine you would also want a config option to 
allow the user to override that behavior.

Devin
___
ffmpeg-devel mailing list
ffmpeg-devel@ffmpeg.org
http://ffmpeg.org/mailman/listinfo/ffmpeg-devel


Re: [FFmpeg-devel] [PATCH RFC] libavdevice/decklink: Add support for EIA-708 output over SDI

2017-10-29 Thread Devin Heitmueller

> On Oct 25, 2017, at 2:23 PM, Marton Balint  wrote:
> 
> 
> On Fri, 6 Oct 2017, Devin Heitmueller wrote:
> 
>> From: Devin Heitmueller <dheitmuel...@kernellabs.com>
>> 
>> Hook in libklvanc and use it for output of EIA-708 captions over
>> SDI.  The bulk of this patch is just general support for ancillary
>> data for the Decklink SDI module - the real work for construction
>> of the EIA-708 CDP and VANC line construction is done by libklvanc.
>> 
>> Libklvanc can be found at: https://github.com/stoth68000/libklvanc 
> 
> Sorry for the delay, I had little time lately. In general I think it is OK to 
> put VANC functionality into a library, but libklvanc does not seem like a 
> very mature one, it has some pretty generic function names without 
> namespacing, e.g. "generate_vanc_line". Or it is using simple printf for the 
> dumper functions. You plan to work on these kind of issues to make it more 
> like a "stable" generic library?

Yeah, the name spacing and logging are known issues and have been on my todo 
list for a while.  The focus has been on the core functionality for VANC 
management and protocol support, and there clearly needs to be a bit more polish 
in some of the peripheral areas.

Thanks for providing feedback.  I will incorporate your suggestions and submit 
a revised patch this week.

Regards,

Devin

___
ffmpeg-devel mailing list
ffmpeg-devel@ffmpeg.org
http://ffmpeg.org/mailman/listinfo/ffmpeg-devel


[FFmpeg-devel] [PATCH 0/1] Add ETC1 encoder/decoder

2017-02-22 Thread Devin Heitmueller
This patch adds support for encoding and decoding the Ericsson Texture
Compression 1 (ETC1) format.  This includes support for the PKM file
format.

Note the actual codec uses Google's ETC1 implementation from Android
(Apache 2 licensed).  The only changes I made were a couple of very shallow
 fixes to compile in C90 mode (cases where variables were being declared
in-line with code).

Devin Heitmueller (1):
  Add support for Ericsson Texture Compression 1 (ETC1)

 libavcodec/Makefile |   2 +
 libavcodec/allcodecs.c  |   1 +
 libavcodec/avcodec.h|   1 +
 libavcodec/codec_desc.c |   7 +
 libavcodec/etc1.c   | 707 
 libavcodec/etc1.h   | 114 
 libavcodec/etc1dec.c|  81 ++
 libavcodec/etc1enc.c|  90 ++
 libavcodec/utils.c  |   3 +-
 libavformat/img2.c  |   1 +
 libavformat/img2enc.c   |   2 +-
 11 files changed, 1007 insertions(+), 2 deletions(-)
 create mode 100644 libavcodec/etc1.c
 create mode 100644 libavcodec/etc1.h
 create mode 100644 libavcodec/etc1dec.c
 create mode 100644 libavcodec/etc1enc.c

-- 
1.9.1

___
ffmpeg-devel mailing list
ffmpeg-devel@ffmpeg.org
http://ffmpeg.org/mailman/listinfo/ffmpeg-devel


[FFmpeg-devel] [PATCH 1/1] Add support for Ericsson Texture Compression 1 (ETC1)

2017-02-22 Thread Devin Heitmueller
This patch adds support for encoding/decoding ETC1 compressed
textures.  This includes support for the PKM file format.

Example usage:

./ffmpeg -i input_image.jpg new.pkm
./ffmpeg -i new.pkm foo.jpg

Signed-off-by: Devin Heitmueller 
---
 libavcodec/Makefile |   2 +
 libavcodec/allcodecs.c  |   1 +
 libavcodec/avcodec.h|   1 +
 libavcodec/codec_desc.c |   7 +
 libavcodec/etc1.c   | 707 
 libavcodec/etc1.h   | 114 
 libavcodec/etc1dec.c|  81 ++
 libavcodec/etc1enc.c|  90 ++
 libavcodec/utils.c  |   3 +-
 libavformat/img2.c  |   1 +
 libavformat/img2enc.c   |   2 +-
 11 files changed, 1007 insertions(+), 2 deletions(-)
 create mode 100644 libavcodec/etc1.c
 create mode 100644 libavcodec/etc1.h
 create mode 100644 libavcodec/etc1dec.c
 create mode 100644 libavcodec/etc1enc.c

diff --git a/libavcodec/Makefile b/libavcodec/Makefile
index a1ce264..f5eec15 100644
--- a/libavcodec/Makefile
+++ b/libavcodec/Makefile
@@ -280,6 +280,8 @@ OBJS-$(CONFIG_EIGHTSVX_EXP_DECODER)+= 8svx.o
 OBJS-$(CONFIG_EIGHTSVX_FIB_DECODER)+= 8svx.o
 OBJS-$(CONFIG_ESCAPE124_DECODER)   += escape124.o
 OBJS-$(CONFIG_ESCAPE130_DECODER)   += escape130.o
+OBJS-$(CONFIG_ETC1_ENCODER)+= etc1enc.o etc1.o
+OBJS-$(CONFIG_ETC1_DECODER)+= etc1dec.o etc1.o
 OBJS-$(CONFIG_EVRC_DECODER)+= evrcdec.o acelp_vectors.o lsp.o
 OBJS-$(CONFIG_EXR_DECODER) += exr.o
 OBJS-$(CONFIG_FFV1_DECODER)+= ffv1dec.o ffv1.o
diff --git a/libavcodec/allcodecs.c b/libavcodec/allcodecs.c
index f12a54d..cd3a662 100644
--- a/libavcodec/allcodecs.c
+++ b/libavcodec/allcodecs.c
@@ -184,6 +184,7 @@ void avcodec_register_all(void)
 REGISTER_DECODER(EIGHTSVX_FIB,  eightsvx_fib);
 REGISTER_DECODER(ESCAPE124, escape124);
 REGISTER_DECODER(ESCAPE130, escape130);
+REGISTER_ENCDEC (ETC1,  etc1);
 REGISTER_DECODER(EXR,   exr);
 REGISTER_ENCDEC (FFV1,  ffv1);
 REGISTER_ENCDEC (FFVHUFF,   ffvhuff);
diff --git a/libavcodec/avcodec.h b/libavcodec/avcodec.h
index 5616fb0..bf86210 100644
--- a/libavcodec/avcodec.h
+++ b/libavcodec/avcodec.h
@@ -388,6 +388,7 @@ enum AVCodecID {
 AV_CODEC_ID_DXV,
 AV_CODEC_ID_SCREENPRESSO,
 AV_CODEC_ID_RSCC,
+AV_CODEC_ID_ETC1,
 
 AV_CODEC_ID_Y41P = 0x8000,
 AV_CODEC_ID_AVRP,
diff --git a/libavcodec/codec_desc.c b/libavcodec/codec_desc.c
index 35846c0..de7695d 100644
--- a/libavcodec/codec_desc.c
+++ b/libavcodec/codec_desc.c
@@ -1199,6 +1199,13 @@ static const AVCodecDescriptor codec_descriptors[] = {
 },
 
 {
+.id= AV_CODEC_ID_ETC1,
+.type  = AVMEDIA_TYPE_VIDEO,
+.name  = "etc1",
+.long_name = NULL_IF_CONFIG_SMALL("ETC1 (Ericsson Texture Compression) image"),
+.props = AV_CODEC_PROP_LOSSY,
+},
+{
 .id= AV_CODEC_ID_G2M,
 .type  = AVMEDIA_TYPE_VIDEO,
 .name  = "g2m",
diff --git a/libavcodec/etc1.c b/libavcodec/etc1.c
new file mode 100644
index 000..e28d83c
--- /dev/null
+++ b/libavcodec/etc1.c
@@ -0,0 +1,707 @@
+// Copyright 2009 Google Inc.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+//
+
+// This is a fork of the AOSP project ETC1 codec. The original code can be found
+// at the following web site:
+// https://android.googlesource.com/platform/frameworks/native/+/master/opengl/include/ETC1/
+
+//
+
+#include "etc1.h"
+
+//#include 
+#include 
+
+/* From http://www.khronos.org/registry/gles/extensions/OES/OES_compressed_ETC1_RGB8_texture.txt
+
+ The number of bits that represent a 4x4 texel block is 64 bits if
+  is given by ETC1_RGB8_OES.
+
+ The data for a block is a number of bytes,
+
+ {q0, q1, q2, q3, q4, q5, q6, q7}
+
+ where byte q0 is located at the lowest memory address and q7 at
+ the highest. The 64 bits specifying the block is then represented
+ by the following 64 bit integer:
+
+ int64bit = 256*(256*(256*(256*(256*(256*(256*q0+q1)+q2)+q3)+q4)+q5)+q6)+q7;
+
+ ETC1_RGB8_OES:
+
+ a) bit layout in bits 63 through 32 if diffbit = 0
+
+ 63 62 61 60 59 58 57 56 55 54 53 52 51 50 49 48
+ -

[FFmpeg-devel] FFmpeg table at NAB

2024-04-16 Thread Devin Heitmueller
Hello all,

I wasn't looking to start trouble, but I didn't see any discussion of
this on the mailing list so wanted to bring it to the developer
community's attention.

I attended the NAB conference and went by the "ffmpeg" booth on
Sunday.  What I found was a single table with the official ffmpeg
banner hanging right next to a banner for the GPAC project, and two
salespeople from GPAC handing out marketing literature and trying to
educate me on why I should use their framework for my next project.

I'm not saying that GPAC shouldn't be able to have a table at the
conference, but it feels pretty misleading to have an "ffmpeg" booth
listed in the conference materials, with a table prominently
displaying the ffmpeg logo, with zero people from ffmpeg and people
pushing users to use an alternative framework that some might actually
considered to be a competitor to ffmpeg.

Devin

-- 
Devin Heitmueller, Senior Software Engineer
LTN Global Communications
o: +1 (301) 363-1001
w: https://ltnglobal.com  e: devin.heitmuel...@ltnglobal.com
___
ffmpeg-devel mailing list
ffmpeg-devel@ffmpeg.org
https://ffmpeg.org/mailman/listinfo/ffmpeg-devel

To unsubscribe, visit link above, or email
ffmpeg-devel-requ...@ffmpeg.org with subject "unsubscribe".

