Re: [FFmpeg-user] OVERLAY_CUDA and PGS Subtitle burn

2020-09-22 Thread Panda Sing Cool
Hi Dennis,

thanks for the link. I have rebuilt ffmpeg with the latest versions of every
source I could find, updated to the latest CUDA 11 patch 3 and the NV
headers...
Also included the latest Vulkan SDK and its ffmpeg support.

overlay_cuda and overlay_opencl show the same issue: the video appears but no
subtitles.
overlay_vulkan -> ffmpeg crashes outright with a dump.

:(

Any change on your side?
On Fri, 18 Sep 2020 at 22:46, Dennis Mungai  wrote:

> On Thu, 17 Sep 2020 at 03:29, Panda Sing Cool 
> wrote:
>
> > Hi,
> >
> > Changed the input format:
> > Video -> yuv420p
> > Sub -> yuva420p (to include the alpha channel)
> >
> > Now the video is showing, but still no subtitles.
> > Still get the error message:
> > Error while add the frame to buffer source(Internal bug, should not have
> > happened).
> >
> >
> > ./ffmpeg -threads 1 -loglevel info -nostdin -y -fflags +genpts-fastseek \
> >    -ss 00:00:00 -t 00:00:15 \
> >    -extra_hw_frames 3 -vsync 0 -async 0 -filter_threads 1 -filter_complex_threads 1 \
> >    -init_hw_device cuda=cuda -hwaccel cuda -filter_hw_device cuda -hwaccel_output_format cuda \
> >    -i input.mkv \
> >    -filter_complex \
> >      "[0:v]scale_npp=w=-1:h=720:interp_algo=lanczos:format=yuv420p[vid]; \
> >       [0:s]format=yuva420p,hwupload[sub]; \
> >       [vid][sub]overlay_cuda[v]" \
> >    -map "[v]" -map 0:a \
> >    -force_key_frames "expr:gte(t,n_forced*5)" \
> >    -c:v h264_nvenc -preset:v slow -profile:v high -level:v 51 \
> >    -rc:v cbr_hq -rc-lookahead:v 32 -refs:v 16 -cq:v 16 -bf:v 3 -b:v 2000K \
> >    -minrate:v 2000K -maxrate:v 2000k -bufsize:v 8M -coder:v cabac -b_ref_mode:v middle \
> >    -c:a libfdk_aac -ac 2 -ar 48000 -b:a 128k \
> >    output.mkv
> >
> >
> >
> >
> > On Thu, 17 Sep 2020 at 07:08, Panda Sing Cool 
> > wrote:
> >
> > > Hi everyone,
> > >
> > > I'm trying to use the OVERLAY_CUDA filter to burn PGS subtitles over a
> > > video, and the result is not working.
> > > I might misunderstand the usage of this filter, so some help is
> > > welcome.
> > >
> > > The result of this command is a black screen with audio; using the
> > > 'standard' overlay filter works fine, but it is slow...
> > >
> > > Notice this message at the end of the log file (ffmpeg version
> > > N-99194-g142ae27b1d, compiled myself from git):
> > > Error while add the frame to buffer source(Internal bug, should not have
> > > happened).
> > >
> > > Thanks for any help.
> > >
> > >
> > >
> >
> ***
> > >
> > > ./ffmpeg -threads 1 -loglevel info -nostdin -y -fflags +genpts-fastseek \
> > >    -ss 00:00:00 -t 00:01:00 \
> > >    -extra_hw_frames 3 -vsync 0 -async 0 -filter_threads 1 -filter_complex_threads 1 \
> > >    -init_hw_device cuda=cuda -hwaccel cuda -filter_hw_device cuda -hwaccel_output_format cuda \
> > >    -i input.mkv \
> > >    -filter_complex \
> > >      "[0:v]scale_npp=w=-1:h=720:interp_algo=lanczos:format=nv12[vid]; \
> > >       [0:s]format=nv12,hwupload_cuda[sub]; \
> > >       [vid][sub]overlay_cuda[v]" \
> > >    -map "[v]" -map 0:a \
> > >    -force_key_frames "expr:gte(t,n_forced*5)" \
> > >    -c:v h264_nvenc -preset:v slow -profile:v high -level:v 51 \
> > >    -rc:v cbr_hq -rc-lookahead:v 32 -refs:v 16 -cq:v 16 -bf:v 3 -b:v 2000K \
> > >    -minrate:v 2000K -maxrate:v 2000k -bufsize:v 8M -coder:v cabac -b_ref_mode:v middle \
> > >    -c:a libfdk_aac -ac 2 -ar 48000 -b:a 128k \
> > >    output.mkv
> > >
> > >
> > > **** LOG ****
> > >
> > > ffmpeg version N-99194-g142ae27b1d Copyright (c) 2000-2020 the FFmpeg
> > > developers
> > >   built with gcc 10 (GCC)
> > >   configuration: --prefix=/home/users/work/ffmpeg_build
> > > --pkg-config-flags=--static --extra-libs=-lpthread --extra-libs=-lm
> > > --bindir=/home/users/work/ffmpeg_build/bin --enable-gpl
> > --enable-libfdk_aac
> > > --enable-libfreetype --enable-libmp3lame --enable-libopus
> --enable-libvpx
> > > --enable-libx264 --enable-libx265 --enable-vulkan --enable-nonfree
> > > --enable-libnpp --enable-nvenc --enable-cuvid --enable-libass
> > > --enable-libfontconfig --enable-libfreetype --enable-libfribidi
> > > --enable-cuda
> > >   libavutil      56. 59.100 / 56. 59.100
> > >   libavcodec     58.106.100 / 58.106.100
> > >   libavformat    58. 56.100 / 58. 56.100
> > >   libavdevice    58. 11.102 / 58. 11.102
> > >   libavfilter     7. 87.100 /  7. 87.100
> > >   libswscale      5.  8.100 /  5.  8.100
> > >   libswresample   3.  8.100 /  3.  8.100
> > >   libpostproc    55.  8.100 / 55.  8.100
> > > Input #0, matroska,webm, from 'input.mkv':
> > >   Metadata:
> > > encoder : libebml v1.3.10 + libmatroska v1.5.2
> > > creation_time   : 2020-08-13T06:58:46.00Z
> > >   Duration: 00:57:53.06, start: 0.00, bitrate: 12993 kb/s
> > > Chapter #0:0: start 0.00, end 508.424583
> > > Metadata:
> > >   title   : Chapter 01
> 

Re: [FFmpeg-user] bwdif filter question

2020-09-22 Thread Mark Filipak (ffmpeg)

On 09/22/2020 04:20 AM, Edward Park wrote:


Not so, Ted. The following two definitions are from the glossary I'm preparing 
(and which cites H.262).


Ah, okay, I thought that was a bit weird. I assumed it was a typo, but I saw
H.242 and thought two different types of "frames" were being mixed. Before
saying anything: if the side project you mentioned is a layman's-glossary-type
reference, I think you should base it on the definitions section instead of
the bitstream definitions, just my $.02.


H.242 was indeed a typo... Oh, wait! Doesn't (H.222+H.262)/2 = H.242? :-)

I'm not sure what you mean by "definitions section", but I don't believe in "layman's" glossaries. I 
believe novices can comprehend structures at a codesmith's level if the structures are precisely 
represented. The novices who can't comprehend the structures need to learn. If they don't want to 
learn, then they're not really serious. This video processing stuff is for serious people. That 
written, what is not reasonable, IMHO, is to expect novices to learn codesmith-jargon and 
codesmith-shorthand. English has been around for a long time and it includes everything that is needed.


I would show you some of my mpegps parser documentation and some of my glossary stuff, but 90% of it
is texipix diagrams and/or spreadsheet-style plaintext tables that are formatted far wider than 70
characters/line, so they won't paste into email.


-snip-

Since you capitalize "AVFrames", I assume that you cite a standard of some 
sort. I'd very much like to see it. Do you have a link?


This was the main info I was trying to add: it's not a standard of any kind -
quite the opposite, actually, since technically its declaration could be
changed in a single commit, but I don't think that is a common occurrence.
AVFrame is a struct that is used to abstract/implement all frames in the many
different formats ffmpeg handles. It is noted that its size could change as
fields are added to the struct.

There's documentation generated for it here: 
https://www.ffmpeg.org/doxygen/trunk/structAVFrame.html


Oh, Thank You! That's going to help me to communicate/discuss with the 
developers.


H.262 refers to "frame pictures" and "field pictures" without clearly delineating them. I am 
calling them "pictures" and "halfpictures".


I thought ISO 13818-2 was basically the identical standard, and it gives pretty 
clear definitions imo, here are some excerpts. (Wall of text coming up… 
standards are very wordy by necessity)


--snip 13818-2 excerpts--

To me, that's all descriptions, not definitions. To me, it's vague and ambiguous. To me, it's sort 
of nebulous.


Standards don't need to be wordy. The *more* one writes, the greater the chance of mistakes and 
ambiguity. Write less, not more.


Novices aren't dumb, they're just ignorant. By your use of "struct" in your reply, I take it that 
you're a 'C' codesmith -- I write assy & other HLL & hardware description languages like VHDL & 
Verilog, but I've never written 'C'. I've employed 'C' codesmiths, therefore, I'm a bit conversant 
with 'C', but just a bit.


What I've noted is that codesmiths generally don't know how to write effective English. Writing well
constructed English is difficult and time consuming at first, as difficult as learning to
effectively use any syntax that requires knowledge and experience. There are clear rules, but most
codesmiths don't know them, especially if English is their native language. They write like they
speak: conversationally. And when others don't understand what's written, rather than revise
smaller, the grammar-challenged revise longer, thinking that yet-another-perspective is what's
needed. That produces ambiguity, because different perspectives promote ambiguity. IMHO, there
should be just one perspective: structure. Usage is the place for description, but that's not
(or shouldn't be) in the domain of a glossary.



So field pictures are decoded fields, and frame pictures are decoded frames?
Not sure if I understand 100%, but I think it's pretty clear: "two field
pictures comprise a coded frame." IIRC field pictures aren't decoded into
separate fields, because two frames in one packet make something explode within
FFmpeg.


Well, packets are just how transports chop up a stream in order to send it, piecewise, over a
packetized medium. They don't matter. I think that, for mpegps, you start at 'sequence_header_code'
(i.e. x00 00 01 B3) and proceed from there, through the transport packet headers, throwing the
packet headers out, until encountering the next 'sequence_header_code' or the 'sequence_end_code'
(i.e. x00 00 01 B7).
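
To make that concrete, here is a minimal, hypothetical C sketch (not FFmpeg code, just an
illustration of the two start codes named above) that scans a raw buffer for them; real MPEG-PS
handling would also have to strip the pack/PES headers as described above:

    /* Hypothetical sketch: scan a raw MPEG video buffer for
     * sequence_header_code (00 00 01 B3) and sequence_end_code (00 00 01 B7). */
    #include <stdint.h>
    #include <stdio.h>
    #include <stddef.h>

    static void scan_start_codes(const uint8_t *buf, size_t len)
    {
        for (size_t i = 0; i + 3 < len; i++) {
            if (buf[i] == 0x00 && buf[i + 1] == 0x00 && buf[i + 2] == 0x01) {
                if (buf[i + 3] == 0xB3)
                    printf("sequence_header_code at offset %zu\n", i);
                else if (buf[i + 3] == 0xB7)
                    printf("sequence_end_code at offset %zu\n", i);
                i += 2;   /* resume scanning after the 00 00 01 prefix */
            }
        }
    }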


I don't know how frames are passed from the decoder to a filter inside ffmpeg. I don't know whether 
the frames are in the form of decoded samples in a macroblock structure or are just a glob of bytes. 
Considering the differences between 420 & 422 & 444, I think that the frames passed from the decoder 
must have some 

Re: [FFmpeg-user] 5% of audio samples missing when capturing audio on a mac

2020-09-22 Thread Carl Zwanzig

On 9/22/2020 8:42 AM, Carl Zwanzig wrote:

(which some of the things I was recalling, too)

which -corrects-

sigh, I shouldn't post before the coffee takes hold.

z!


Re: [FFmpeg-user] 5% of audio samples missing when capturing audio on a mac

2020-09-22 Thread Carl Zwanzig

On 9/22/2020 8:29 AM, Edward Park wrote:

I might be making up the history behind it, but 44.1kHz was basically
just workable, with 20kHz assumed to be the “bandwidth” limit of sound
intended for people to hear, 40kHz would be needed to encode sound
signals that dense, and the extra 4.1kHz would help get rid of artifacts
due to aliasing - and probably the biggest factor was the CD.


My recollection is that you're substantially correct - tradeoffs of the number
of bits on a CD, human hearing (most people can't actually hear up to
20kHz), however:


"The official Philips history says this capacity was specified by Sony 
executive Norio Ohga to be able to contain the entirety of Beethoven's Ninth 
Symphony on one disc.[25] This is a myth according to Kees Immink, as the 
EFM code format had not yet been decided in December 1979, when the decision 
to adopt the 120 mm was made. The adoption of EFM in June 1980 allowed 30 
percent more playing time that would have resulted in 97 minutes for 120  mm 
diameter or 74 minutes for a disc as small as 100  mm. Instead, however, the 
information density was lowered by 30 percent to keep the playing time at 74 
minutes"

(which some of the things I was recalling, too)


As for 30/1.001 - that's an artifact of NTSC analog television trying to
fit color information into a b/w signal and then later applying SMPTE
timecode to the resulting frame rate. There's a good explanation at
https://en.wikipedia.org/wiki/SMPTE_timecode#Drop-frame_timecode
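
For the curious, the drop-frame bookkeeping is mechanical enough to sketch in a
few lines of C (hypothetical helper; frame numbers 0 and 1 are skipped at the
start of each minute, except every tenth minute):

    /* Hypothetical sketch: convert a 30000/1001 fps frame count to SMPTE
     * drop-frame timecode. 17982 = labeled frames per 10-minute block
     * (18000 nominal minus 18 dropped); 1798 = frames per dropped minute. */
    #include <stdio.h>

    static void df_timecode(long f, char out[12])
    {
        long d = f / 17982;
        long m = f % 17982;
        if (m < 2) m = 2;   /* frames 0,1 of each 10-min block are never dropped */
        f += 18 * d + 2 * ((m - 2) / 1798);
        sprintf(out, "%02ld:%02ld:%02ld;%02ld",
                (f / 108000) % 24, (f / 1800) % 60, (f / 30) % 60, f % 30);
    }

Feeding it frame 1800 (one real minute in) yields 00:01:00;02, i.e. labels ;00
and ;01 were skipped, which keeps the timecode within about a frame of wall
clock.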


z!

Re: [FFmpeg-user] 5% of audio samples missing when capturing audio on a mac

2020-09-22 Thread Edward Park
Hi,

>> 48000 is certainly a much nicer number when you compare it with the common 
>> video framerates (24, 30/1.001, etc. all divide cleanly)
> 
> Can you explain this? I'm trying to get (30/1.001) or the rounded 29.97 to 
> divide 48k cleanly or be a clean ratio but I don't see it. Maybe that with 
> 30/1.001 it's got a denominator of 5, which is pretty small?

Compared to 44.1kHz? 48kHz is 48000 samples per second, and 29.97 (30/1.001)
fps is, more precisely, 30000/1001 (≈29.97) frames per second - flip that
around and you get 1001/30000 seconds duration for each frame.

For each frame there are 1601.6 (1600 × 1.001) samples. For 59.94fps, 800.8;
for film, 2002 per frame. The 1.001 factor might seem a bit ugly, but that's
kind of why 48 whole kilohertz works much better.

If you think about an MPEG-TS system clock timebase of 1/90000, for example,
common video or film framerates generally come out to an integer number of
1/90000-second "ticks." A 29.97fps frame is 3003 "ticks", which also matches
the 1601.6-sample duration. The fractions of samples might make it look like
the ratio is not easy to work with, but at 48kHz, one sample has a duration of
exactly 1.875 "ticks", i.e. 15/8 = 90000/48000.

If you replace 48000 with 44100, the numbers aren’t as nice. (Sometimes not 
even rational? Not sure what combo does that though)

I might be making up the history behind it, but 44.1kHz was basically just 
workable, with 20kHz assumed to be the “bandwidth” limit of sound intended for 
people to hear, 40kHz would be needed to encode sound signals that dense, and 
the extra 4.1kHz would help get rid of artifacts due to aliasing - and probably 
the biggest factor was the CD. I’m sure they could have pressed much more 
density into the medium, but the laser tech that was commercially viable at the 
time to put in players for the general consumer sort of made 44.1kHz a decent 
detent in the sampling frequency dial in an imaginary sample rate-to-cost 
estimating machine.

If you actually do the calculations with 44.1kHz, the ratios you get aren't
*too* bad; instead of factors like 2^3 or 3×5, you get something like 3×49.
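
If anyone wants to check the arithmetic, here is a quick throwaway C sketch
that prints the exact samples-per-frame ratios for both rates at the
NTSC-family frame rates:

    /* Quick check of the ratios above: samples per frame is exactly
     * rate * 1001 / num for a frame rate of num/1001 fps. */
    #include <stdio.h>

    int main(void)
    {
        const int rates[]   = { 48000, 44100 };
        const int fps_num[] = { 24000, 30000, 60000 };
        for (int r = 0; r < 2; r++)
            for (int f = 0; f < 3; f++) {
                long long n = (long long)rates[r] * 1001;
                printf("%5d Hz @ %7.3f fps: %lld/%d = %.3f samples per frame\n",
                       rates[r], fps_num[f] / 1001.0, n, fps_num[f],
                       (double)n / fps_num[f]);
            }
        return 0;
    }

48000 gives 2002, 1601.6 and 800.8; 44100 gives 1839.3375, 1471.47 and 735.735.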

Regards,
Ted Park


Re: [FFmpeg-user] HLS stream delivery problem

2020-09-22 Thread Edward Park
Hi,

> Do you know how to fix this?
> This is my code with ffmpeg, Y:\ is drive letter to the HLS server 
> (WebDAV)

WebDAV is good for lightweight collaboration for old-school workgroups, like
maybe wiki pages. It's basically lots of HTTP requests simulating a locally
mounted drive; maybe there is head-of-line (HOL) blocking going on. If it is a
transport issue, it would be impossible to find out without something like a
SPAN capture.

This doesn't sound like a problem with ffmpeg at all - unless a different
WebDAV share mounted in the same manner doesn't suffer from the same problem,
or the problem occurs even when you save to a directly connected storage
device.


Regards,
Ted Park


Re: [FFmpeg-user] bwdif filter question

2020-09-22 Thread Mark Filipak (ffmpeg)

On 09/22/2020 05:59 AM, Nicolas George wrote:

Mark Filipak (ffmpeg) (2020-09-21):

Not so, Ted. The following two definitions are from the glossary I'm preparing
(and which cites H.262).


Quoting yourself does not prove you right.


You are correct. That's why H.262 is in the definition. I'm not quoting myself. 
I'm quoting H.262.




Re: [FFmpeg-user] SCTE-35 implementation already (bounty)

2020-09-22 Thread Devin Heitmueller
Hi there,

On Tue, Sep 22, 2020 at 7:23 AM Dennis Mungai  wrote:
> To clarify: Devin's ffmpeg branch (linked above) delivers the first two
> objectives. However, these patches need to be forward-ported to git master,
> as they also require significant fix-ups to be applied to the mpegts muxer.

To expand on this a bit, the patches are off a branch I did a couple
of years ago which I am running in production.  The patches can be
forward ported to master, but they are dependent on a few other
commits earlier in that branch.  This includes some changes to the
MPEG-TS demux, a change to stash the source PTS value as AVPacket side
data, and some changes to libavformat/mux.c to treat SCTE-35 packets
as sparse so that the mux framework doesn't stall waiting for packets.
There might be a couple of other misc patches in there as well which
would need to be ported, but I haven't looked in a while so I might
not be thinking of them.

> That way, a downstream packager can use the SCTE-35 payloads to insert the
> relevant splices to the HLS and DASH manifests.

It's worthwhile to note that the patch series today doesn't require
any actual parsing of the SCTE-35 payload beyond modifying the
pts_adjust field (which is at a fixed offset of every packet).  In
order to construct HLS manifests containing SCTE-35 you would have to
actually decode the SCTE-35 payload to extract certain fields
(depending on which form of SCTE-35 over HLS is being implemented --
IIRC there are at least three different methods).  In-house we do this
with libklscte35 and my long term thought would be to get that library
merged upstream as a dependency (as we did for libklvanc), but there
is cleanup required on the library API itself before any patches using
it in ffmpeg would be accepted upstream.
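
For anyone reading along, "fixed offset" refers to the splice_info_section
layout: in an unencrypted section, the 33-bit pts_adjustment field occupies the
low bit of byte 4 plus bytes 5 through 8. A rough, hypothetical C sketch of
such a rewrite (note that the section's trailing CRC_32 would have to be
recomputed afterwards; that part is omitted):

    /* Hypothetical helper: add a delta (in 90 kHz ticks) to the 33-bit
     * pts_adjustment field of an SCTE-35 splice_info_section. Assumes the
     * standard unencrypted layout; the trailing CRC_32 must be redone. */
    #include <stdint.h>
    #include <stddef.h>

    static int scte35_add_pts_adjustment(uint8_t *sec, size_t len, uint64_t delta)
    {
        if (len < 9 || sec[0] != 0xFC)   /* 0xFC = splice_info_section table_id */
            return -1;
        uint64_t pts = ((uint64_t)(sec[4] & 0x01) << 32) |
                       ((uint64_t)sec[5] << 24) | ((uint64_t)sec[6] << 16) |
                       ((uint64_t)sec[7] <<  8) |  (uint64_t)sec[8];
        pts = (pts + delta) & 0x1FFFFFFFFULL;    /* wrap at 33 bits, like PTS */
        sec[4] = (sec[4] & 0xFE) | (uint8_t)(pts >> 32);
        sec[5] = (uint8_t)(pts >> 24);
        sec[6] = (uint8_t)(pts >> 16);
        sec[7] = (uint8_t)(pts >>  8);
        sec[8] = (uint8_t)pts;
        return 0;
    }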

It's probably also worth noting that the patch series Dennis
referenced has a minor bug in the math that results in the splice
point being off by 1-2 frames, even if not transcoding the actual
video.  It's probably fine for most applications but it's on my TODO
list to go through the math and chase that down.

Regarding the ability to force a keyframe at the splice point, this is
harder than one might expect.  We've implemented the code to adjust
the PTS of the SCTE-35 packet as a BSF on the output side, but in
order to influence the behavior of the encoder we would have to
implement a feedback mechanism to notify the encoder (which is further
upstream in the pipeline) on which frame to force the keyframe.  The
ffmpeg frameworks don't really provide an easy way to propagate
information from a BSF back to an upstream video filter (in general
metadata only flows downstream).   My thought was to implement a video
filter which listens on a UDP socket, and then have the BSF do the
math to calculate the actual PTS of the splice point, and then send an
out-of-band message via UDP back to the video filter to set the
appropriate AVFrame flags to force a key frame when appropriate. **

** Note, I haven't actually prototyped this part yet, so don't know if
it will actually work or if there are some unexpected gotcha related
to what the PTS values are at various points in the pipeline.

Devin

-- 
Devin Heitmueller, Senior Software Engineer
LTN Global Communications
o: +1 (301) 363-1001
w: https://ltnglobal.com  e: devin.heitmuel...@ltnglobal.com

Re: [FFmpeg-user] SCTE-35 implementation already (bounty)

2020-09-22 Thread Dennis Mungai
On Tue, 22 Sep 2020, 13:43 Dennis Mungai,  wrote:

> On Tue, 22 Sep 2020, 13:12 andrei ka,  wrote:
>
>> btw, the tsduck suite can extract the PTS of SCTE-35 events; chances are you
>> could build a filter_complex_script with these positions to insert keyframes
>> 
>>
>>
>> On Wed, Sep 2, 2020 at 6:36 AM MediaStream  wrote:
>>
>> > Looking for SCTE-35 pass through implementation:
>> >
>> > 1. Extract SCTE-35 from MPEG-TS.
>> > 2. Translate timing of the original SCTE-35 events to match timing in
>> the
>> > output file appropriately or keep timing as is.
>> > 3. Signal encoder to force key frames at the boundaries of the SCTE-35
>> > event.
>> > 4. Inject SCTE-35 messaging into output MP4 files as emsg and into MPD
>> and
>> > M3U8 manifests if/as needed.
>> >
>> > More info:
>> > https://www.w3.org/TR/media-timed-events/#scte-35
>> >
>> >
>> https://www.tvtechnology.com/opinions/scte10435-and-beyond-a-look-at-ad-insertion-in-an-ott-world
>> >
>> > source TS file with SCTE-35 also saved as ES and XML:
>> > https://www.mediafire.com/folder/zv20csqeo1ixt/
>> >
>> > Bounty of $2,500.00 USD (up to Oct 2020)
>>
>
> So can gstreamer's mpegts muxer implementation.
>

To clarify: Devin's ffmpeg branch (linked above) delivers the first two
objectives. However, these patches need to be forward-ported to git master,
as they also require significant fix-ups to be applied to the mpegts muxer.

That way, a downstream packager can use the SCTE-35 payloads to insert the
relevant splices to the HLS and DASH manifests.

>

Re: [FFmpeg-user] HLS stream delivery problem

2020-09-22 Thread andrei ka
(The disk-I/O check means: use the dd utility (cygwin) - write some bytes in
parallel with your TS creation (with dd if=/dev/random of=Y... in windoze
speak), or do it some other way, and check whether your drops are only
CPU-induced and not disk-I/O-induced.)

On Tue, Sep 22, 2020 at 12:26 PM andrei ka  wrote:

> hi
> sometimes it helps to separate encoding & packaging - encode to a
> multicast and read that multicast from another process and repackage it.
> Color conversions can eat CPU. Why do you need a second scaling to 1080
> after specifying that you're grabbing at that resolution? Your input should
> already be scaled... Can your i7 2600 stand single-resolution encoding
> without choking? Also, as you're writing the TS to Y:, run dd on it and
> check whether you have enough disk I/O speed left...
>
>
> On Tue, Sep 22, 2020 at 10:37 AM Theo Kooijmans  wrote:
>
>>
>> I have the FFmpeg encoding working fine on a Win10 PC, but I have problems
>> with uploading HLS files to an HLS server.
>> This is what I see with CPU load when I transcode from a live
>> Decklink card to a local folder:
>> I get an even load on the CPU, without big peaks.
>>
>> [inline screenshot: even CPU load]
>>
>>
>> The overview below is from when I try to upload to a drive letter in
>> Windows which is connected to the HLS server.
>> There are now high peaks in CPU load, and memory slowly rises to its
>> limits.
>>
>> Do you know how to fix this?
>> This is my code with ffmpeg, Y:\ is drive letter to the HLS server
>> (WebDAV)
>>
>> ffmpeg -thread_queue_size 5096 -f dshow -video_size 1920x1080 \
>>   -pixel_format uyvy422 -rtbufsize 128M -framerate 25.00 \
>>   -i video="Decklink Video Capture (2)":audio="Decklink Audio Capture (2)" \
>>   -c:v libx264 -crf 21 \
>>   -filter_complex "[v:0]split=2[vtemp001][vtemp002];[vtemp001]scale=w=960:h=540[vout001];[vtemp002]scale=w=1920:h=1080[vout002]" \
>>   -preset veryfast -g 25 -sc_threshold 0 \
>>   -map [vout001] -c:v:0 libx264 -b:v:0 2000k \
>>   -map [vout002] -c:v:1 libx264 -b:v:1 6000k \
>>   -map a:0 -c:a aac -b:a 128k -ac 2 \
>>   -map a:0 -c:a aac -b:a 128k -ac 2 \
>>   -f hls -hls_time 6 -hls_segment_size 600 \
>>   -hls_flags +delete_segments+split_by_time \
>>   -master_pl_name index.m3u8 -var_stream_map "v:0,a:0 v:1,a:1" \
>>   "Y:\stream_%v.m3u8"
>>
>>
>> [inline screenshot: CPU load with high peaks]
>>
>>
>>
>> Regards,
>> Theo Kooijmans
>>
>>

Re: [FFmpeg-user] SCTE-35 implementation already (bounty)

2020-09-22 Thread Dennis Mungai
On Tue, 22 Sep 2020, 13:12 andrei ka,  wrote:

> btw, the tsduck suite can extract the PTS of SCTE-35 events; chances are you
> could build a filter_complex_script with these positions to insert keyframes
> 
>
>
> On Wed, Sep 2, 2020 at 6:36 AM MediaStream  wrote:
>
> > Looking for SCTE-35 pass through implementation:
> >
> > 1. Extract SCTE-35 from MPEG-TS.
> > 2. Translate timing of the original SCTE-35 events to match timing in the
> > output file appropriately or keep timing as is.
> > 3. Signal encoder to force key frames at the boundaries of the SCTE-35
> > event.
> > 4. Inject SCTE-35 messaging into output MP4 files as emsg and into MPD
> and
> > M3U8 manifests if/as needed.
> >
> > More info:
> > https://www.w3.org/TR/media-timed-events/#scte-35
> >
> >
> https://www.tvtechnology.com/opinions/scte10435-and-beyond-a-look-at-ad-insertion-in-an-ott-world
> >
> > source TS file with SCTE-35 also saved as ES and XML:
> > https://www.mediafire.com/folder/zv20csqeo1ixt/
> >
> > Bounty of $2,500.00 USD (up to Oct 2020)
>

So can gstreamer's mpegts muxer implementation.

>

Re: [FFmpeg-user] HLS stream delivery problem

2020-09-22 Thread andrei ka
hi
sometimes it helps to separate encoding & packaging - encode to a multicast
and read that multicast from another process and repackage it. Color
conversions can eat CPU. Why do you need a second scaling to 1080 after
specifying that you're grabbing at that resolution? Your input should be
already scaled... Can your i7 2600 stand single-resolution encoding without
choking? Also, as you're writing the TS to Y:, run dd on it and check whether
you have enough disk I/O speed left...


On Tue, Sep 22, 2020 at 10:37 AM Theo Kooijmans  wrote:

>
> I have the FFmpeg encoding working fine on a Win10 PC, but I have problems
> with uploading HLS files to an HLS server.
> This is what I see with CPU load when I transcode from a live Decklink card
> to a local folder:
> I get an even load on the CPU, without big peaks.
>
> [inline screenshot: even CPU load]
>
>
> The overview below is from when I try to upload to a drive letter in Windows
> which is connected to the HLS server.
> There are now high peaks in CPU load, and memory slowly rises to its
> limits.
>
> Do you know how to fix this?
> This is my code with ffmpeg, Y:\ is drive letter to the HLS server
> (WebDAV)
>
> ffmpeg -thread_queue_size 5096 -f dshow -video_size 1920x1080 \
>   -pixel_format uyvy422 -rtbufsize 128M -framerate 25.00 \
>   -i video="Decklink Video Capture (2)":audio="Decklink Audio Capture (2)" \
>   -c:v libx264 -crf 21 \
>   -filter_complex "[v:0]split=2[vtemp001][vtemp002];[vtemp001]scale=w=960:h=540[vout001];[vtemp002]scale=w=1920:h=1080[vout002]" \
>   -preset veryfast -g 25 -sc_threshold 0 \
>   -map [vout001] -c:v:0 libx264 -b:v:0 2000k \
>   -map [vout002] -c:v:1 libx264 -b:v:1 6000k \
>   -map a:0 -c:a aac -b:a 128k -ac 2 \
>   -map a:0 -c:a aac -b:a 128k -ac 2 \
>   -f hls -hls_time 6 -hls_segment_size 600 \
>   -hls_flags +delete_segments+split_by_time \
>   -master_pl_name index.m3u8 -var_stream_map "v:0,a:0 v:1,a:1" \
>   "Y:\stream_%v.m3u8"
>
>
> [inline screenshot: CPU load with high peaks]
>
>
>
> Regards,
> Theo Kooijmans
>
>

Re: [FFmpeg-user] SCTE-35 implementation already (bounty)

2020-09-22 Thread andrei ka
btw, the tsduck suite can extract the PTS of SCTE-35 events; chances are you
could build a filter_complex_script with these positions to insert keyframes



On Wed, Sep 2, 2020 at 6:36 AM MediaStream  wrote:

> Looking for SCTE-35 pass through implementation:
>
> 1. Extract SCTE-35 from MPEG-TS.
> 2. Translate timing of the original SCTE-35 events to match timing in the
> output file appropriately or keep timing as is.
> 3. Signal encoder to force key frames at the boundaries of the SCTE-35
> event.
> 4. Inject SCTE-35 messaging into output MP4 files as emsg and into MPD and
> M3U8 manifests if/as needed.
>
> More info:
> https://www.w3.org/TR/media-timed-events/#scte-35
>
> https://www.tvtechnology.com/opinions/scte10435-and-beyond-a-look-at-ad-insertion-in-an-ott-world
>
> source TS file with SCTE-35 also saved as ES and XML:
> https://www.mediafire.com/folder/zv20csqeo1ixt/
>
> Bounty of $2,500.00 USD (up to Oct 2020)

Re: [FFmpeg-user] bwdif filter question

2020-09-22 Thread Nicolas George
Mark Filipak (ffmpeg) (2020-09-21):
> Not so, Ted. The following two definitions are from the glossary I'm preparing
> (and which cites H.262).

Quoting yourself does not prove you right.

-- 
  Nicolas George



Re: [FFmpeg-user] bwdif filter question

2020-09-22 Thread Edward Park
Hello,

>> I'm not entirely aware of what is being discussed, but progressive_frame =
>> !interlaced_frame kind of sent me back a bit. I do remember the discrepancy
>> you noted in some telecined material, so I'll just quickly paraphrase from
>> what we looked into before; hopefully it'll be relevant.
>> The AVFrame interlaced_frame flag isn't completely unrelated to mpeg
>> progressive_frame, but it's not a simple inverse either; it's very
>> context-dependent. With mpeg video, it seems it is an interlaced_frame if it
>> is not progressive_frame ...
> 
> Not so, Ted. The following two definitions are from the glossary I'm preparing
> (and which cites H.262).

Ah, okay, I thought that was a bit weird. I assumed it was a typo, but I saw
H.242 and thought two different types of "frames" were being mixed. Before
saying anything: if the side project you mentioned is a layman's-glossary-type
reference, I think you should base it on the definitions section instead of
the bitstream definitions, just my $.02. I read over what I wrote and I don't
think it helps at all, so let me try again: I am saying that there are the
"frames" in the context of a container, and a different kind of video "frame"
that has width and height dimensions. (When I wrote "picture frames" I meant
physical wooden picture frames for photo prints, but with terms like "frame
pictures" in play, that was not very effective in hindsight.)

> Since you capitalize "AVFrames", I assume that you cite a standard of some 
> sort. I'd very much like to see it. Do you have a link?

This was the main info I was trying to add: it's not a standard of any kind -
quite the opposite, actually, since technically its declaration could be
changed in a single commit, but I don't think that is a common occurrence.
AVFrame is a struct that is used to abstract/implement all frames in the many
different formats ffmpeg handles. It is noted that its size could change as
fields are added to the struct.

There's documentation generated for it here: 
https://www.ffmpeg.org/doxygen/trunk/structAVFrame.html
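
For a taste of how it's typically used, a minimal allocation sketch against the
public libavutil API (the format and dimensions here are arbitrary, just for
illustration):

    /* Minimal AVFrame allocation sketch (hypothetical values). */
    #include <libavutil/frame.h>
    #include <libavutil/pixfmt.h>

    int make_frame(void)
    {
        AVFrame *f = av_frame_alloc();        /* zeroed struct, no data buffers yet */
        if (!f)
            return -1;
        f->format = AV_PIX_FMT_YUV420P;       /* pick a pixel format ... */
        f->width  = 1280;                     /* ... and dimensions ... */
        f->height = 720;
        if (av_frame_get_buffer(f, 0) < 0) {  /* ... then allocate the planes */
            av_frame_free(&f);
            return -1;
        }
        /* f->data[0..2] now point at the Y, Cb and Cr planes */
        av_frame_free(&f);
        return 0;
    }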

> H.262 refers to "frame pictures" and "field pictures" without clearly 
> delineating them. I am calling them "pictures" and "halfpictures".

I thought ISO 13818-2 was basically the identical standard, and it gives pretty 
clear definitions imo, here are some excerpts. (Wall of text coming up… 
standards are very wordy by necessity)

> 6.1.1. Video sequence
> 
> The highest syntactic structure of the coded video bitstream is the video 
> sequence.
> 
> A video sequence commences with a sequence header which may optionally be 
> followed by a group of pictures header and then by one or more coded frames. 
> The order of the coded frames in the coded bitstream is the order in which 
> the decoder processes them, but not necessarily in the correct order for 
> display. The video sequence is terminated by a sequence_end_code. At various 
> points in the video sequence a particular coded frame may be preceded by 
> either a repeat sequence header or a group of pictures header or both. (In 
> the case that both a repeat sequence header and a group of pictures header 
> immediately precede a particular picture, the group of pictures header shall 
> follow the repeat sequence header.)
> 
> 6.1.1.1. Progressive and interlaced sequences
> This specification deals with coding of both progressive and interlaced 
> sequences.
> 
> The output of the decoding process, for interlaced sequences, consists of a 
> series of reconstructed fields that are separated in time by a field period. 
> The two fields of a frame may be coded separately (field-pictures).
> Alternatively the two fields may be coded together as a frame 
> (frame-pictures). Both frame pictures and field pictures may be used in a 
> single video sequence.
> 
> In progressive sequences each picture in the sequence shall be a frame 
> picture. The sequence, at the output of the decoding process, consists of a 
> series of reconstructed frames that are separated in time by a frame period.
> 
> 6.1.1.2. Frame
> 
> A frame consists of three rectangular matrices of integers; a luminance 
> matrix (Y), and two chrominance matrices (Cb and Cr).
> 
> The relationship between these Y, Cb and Cr components and the primary 
> (analogue) Red, Green and Blue Signals (E’R, E’G and E’B), the chromaticity
> of these primaries and the transfer characteristics of the source frame may 
> be specified in the bitstream (or specified by some other means). This 
> information does not affect the decoding process.
> 
> 6.1.1.3. Field
> 
> A field consists of every other line of samples in the three rectangular 
> matrices of integers representing a frame.
> 
> A frame is the union of a top field and a bottom field. The top field is the 
> field that contains the top-most line of each of the three matrices. The 
> bottom field is the other one.
> 
> 6.1.1.4. Picture
> 
> A reconstructed picture is obtained by decoding a