Re: [FFmpeg-user] Trying to Reduce Sizes of Movies Ripped with MakeMKV

2020-10-25 Thread Ted Park
Hi,

> I have been using the following command to recompact the Blu-Ray MKV files:
> 
> ffmpeg -y -hwaccel cuvid -c:v h264_cuvid -vsync 0 -i in.mkv -map 0 -codec:v
> h264_nvenc -codec:a copy -codec:s copy -max_muxing_queue_size 4096 out.mkv
> 
> That command does two things for me. Since I have a halfway decent graphics
> card (Nvidia geforce RTX 2060), it gives me the hardware acceleration I
> desire. It also retains all subtitles when recompacting the files.

I wonder if the BD/DVD ripping software you are using can be made to transcode, 
so you wouldn't have to do this. (It doesn't sound like digitizing an exact 
lossless copy is a priority.)

> That command works very well for Blu-Rays. It can reduce a 40-50GB MKV file
> to about 7GB. The problem is, that only works on files produced by ripping
> Blu-Rays. If I try it on DVD files, I get errors.

Does this happen with all DVD files?
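One guess worth checking (an assumption on my part, since the actual error output wasn't posted): DVD video is MPEG-2, so forcing the H.264-only decoder with -c:v h264_cuvid would fail on it. A sketch of the same command without the forced decoder, letting ffmpeg pick one (file names are placeholders):

```shell
in="dvd_rip.mkv"; out="out.mkv"   # placeholder names
# -hwaccel nvdec requests NVIDIA hardware decoding generically, for any codec
cmd="ffmpeg -y -hwaccel nvdec -i $in -map 0 -c:v h264_nvenc -c:a copy -c:s copy -max_muxing_queue_size 4096 $out"
echo "$cmd"
```

This keeps everything else from the Blu-ray command (mapping, stream copies) unchanged.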

> I am using ffmpeg version n4.1.4 on a Ubuntu Mate 20.04 system, Intel X64
> PC. Everything is up-to-date (I run apt update/upgrade regularly).

The "up-to-date" release in most distros' default package sources is usually at 
least a few months old; try downloading a static build if you can't seem to 
build from source for some reason. https://www.ffmpeg.org/download.html#build-linux

> BTW, I noticed that, by default, ffmpeg on Ubuntu 20.04 is installed with
> Snap. I hate Snap. I had nothing but problems with MakeMKV installed with
> Snap, and had to compile MakeMKV from scratch.

What is snap?

> I tried doing that with
> ffmpeg, but I couldn't get it to install absolutely everything I needed. I
> ended up with no Nvidia hardware support, for example.

Try using that anyway. If it works, then your solution is probably getting a 
newer version of FFmpeg; after that you can work on configuring a build with 
hardware acceleration support, or finding a static build that includes it.

Regards,
Ted Park
___
ffmpeg-user mailing list
ffmpeg-user@ffmpeg.org
https://ffmpeg.org/mailman/listinfo/ffmpeg-user

To unsubscribe, visit link above, or email
ffmpeg-user-requ...@ffmpeg.org with subject "unsubscribe".

Re: [FFmpeg-user] need help with making an "in sync" screen recording.

2020-09-25 Thread Ted Park
Hi,

I have a feeling specifying the capture region as an input option, instead of
cropping with a filter, will help. Also, do you really need the scale? If you
do, scaling the encoded output in a separate step will probably be
indistinguishable.


Regards,
Ted Park

Re: [FFmpeg-user] OVERLAY_CUDA and PGS Subtitle burn

2020-09-16 Thread Ted Park
Hi,

> i'm trying to use the OVERLAY_CUDA function to burn PGS titles over a video
>

Try ffmpeg -i input.mkv -filter_complex '[0:v][0:s]overlay_cuda'

(Not completely sure it works with overlay_cuda, but that's how you would do
it with overlay.)
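Spelled out as a full command (a hedged sketch: the file names and the NVENC encoder choice are placeholders, and as said above I haven't verified that overlay_cuda accepts a decoded subtitle stream the way plain overlay does):

```shell
in="input.mkv"; out="burned.mkv"  # placeholder names
# [0:v] is the video, [0:s] the PGS subtitle stream decoded to bitmaps
cmd="ffmpeg -i $in -filter_complex '[0:v][0:s]overlay_cuda' -c:v h264_nvenc -c:a copy $out"
echo "$cmd"
```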


Regards,

Ted Park

Re: [FFmpeg-user] Correct conversion of yuvj420p?

2020-09-11 Thread Ted Park
Hi,

>> So, is the file now "AV_PIX_FMT_YUVJ420P" or "AV_PIX_FMT_YUV420P + set
>> color_range"?

Obviously those names are not relevant until the file meets the av* tools, but 
they are the same format. I think yuvj420p was originally meant to be a separate 
variant of yuv420p that was not broadcast safe, coming from early CCD sources, 
but everything is digital nowadays and powerful enough to shift the range if 
necessary.

>> My problem is, that I have literally hundreds (actually more than 1000+) of
>> these H.264/yuvj420p files that are to be auto-converted to archival FFV1,
>> but because of the "j" the "pix_fmt +" option cannot be used, which throws
>> all those files into error - and I'd like to fix this :)

setrange affects the frames, not any end-result SPS, I think. Leaving it out 
doesn't set the parameters? Does it convert to limited range by default?

Also, what are some of the benefits of re-encoding footage for archival? I can 
maybe think of being able to detect partial corruption, and possibly an increase 
in data/bitrate, but not much else.

Re: [FFmpeg-user] Correct conversion of yuvj420p?

2020-09-07 Thread Ted Park
Hi,

> > Incompatible pixel format 'yuvj420p' for codec 'ffv1', auto-selecting
> > format 'yuv420p'
> > [swscaler @ 0x71c9440] deprecated pixel format used, make sure you
> > did set range correctly
> 
> The comment in
> [libavutil/pixfmt.h](https://github.com/FFmpeg/FFmpeg/blob/master/libavutil/pixfmt.h#L78)
> says:
> 
> > AV_PIX_FMT_YUVJ420P, ///< planar YUV 4:2:0, 12bpp, full scale (JPEG),
> > deprecated in favor of AV_PIX_FMT_YUV420P and setting color_range
> 
> The source video is: yuvj420p(pc, smpte170m/bt709/bt709)
> The output video is: yuv420p(pc, smpte170m/bt709/bt709)
I think it means it will force full-range (0-100%) levels output in yuv420p 
format if you ask for yuvj420p, with the understanding the levels won't 
necessarily be broadcast safe.

> What would be the right commandline to losslessly convert this to FFV1?
> 
> I've tried, but so far I get differing streamhash MD5s for the video -
> regardless if I set the color range :(
Isn't it unavoidable, since the ranges are not the same? Though maybe if you go 
from broadcast range and force setrange=pc in a filter they might match? I'm 
not very sure. One thing I'd mention just in case: the color range option only 
sets what the header claims its contents are, for the encoders that accept it.
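As a concrete sketch (file names are placeholders, and whether the stream hashes then match is exactly what's in question here):

```shell
in="in.mp4"; out="out.mkv"  # placeholder names
# setrange tags the frames as full range; -pix_fmt yuv420p avoids the
# deprecated yuvj420p label on the FFV1 output
cmd="ffmpeg -i $in -vf setrange=pc -pix_fmt yuv420p -c:v ffv1 -c:a copy $out"
echo "$cmd"
```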

Regards,
Ted

Re: [FFmpeg-user] Convert DNG

2020-09-07 Thread Ted Park

Hi,

> Let's get back to the matter at hand-
> 
> The results from searching around for "cinemadng ffmpeg" does suggest that
> cinemadng is not fully implemented in ffmpeg. Yes? No? What are the limits?
I would imagine that's because the material is often raw sensor data, as in the 
OP's case (film scans).

Regards,
Ted

Re: [FFmpeg-user] build error at libavformat/udp.o

2020-08-31 Thread Ted Park
Hi,

> I will try to explain one step at a time.

Can you explain one step at a time, as in the steps you actually took (one
command line at a time)? Maybe it's because I've never cross-compiled ffmpeg,
but your mentioning cmake makes me think you are mixing up the build system;
for instance, there's no info about your C environment.

> 3. build error message, when I have used git head:
> 
> /ffmpeg/configure: /bin/sh^M: bad interpreter: No such file or directory


Did you do anything after checking out the repository and before running
configure (other than cd'ing into the directory)? I think I've seen that
carriage return when a super long list of arguments was given, but the issue
is probably something else.
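For what it's worth, a literal ^M in a "bad interpreter" message usually means the script has DOS (CRLF) line endings, e.g. from a checkout with core.autocrlf enabled. A quick way to reproduce and strip them, using a demo file standing in for ./configure:

```shell
# Create a demo script with CRLF endings, as a mangled ./configure would have
printf '#!/bin/sh\r\necho hello\r\n' > demo.sh
# Strip the carriage returns; the fixed script then runs normally
tr -d '\r' < demo.sh > demo_fixed.sh
chmod +x demo_fixed.sh
./demo_fixed.sh   # prints "hello"
```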

Regards,
Ted

Re: [FFmpeg-user] Compilation of shared libraries for macOS

2020-08-02 Thread Ted Park
Hi,

How do you use a specific library version? Do you have two different versions
of the app, or are you simply swapping different dylibs in /usr/local/lib?

Regards,
Ted Park

Re: [FFmpeg-user] Compilation of shared libraries for macOS

2020-08-02 Thread Ted Park
On Sun, Aug 2, 2020 at 1:40 PM Екатерина  wrote:

> Hi,
>
> I have a problem with compilation of shared libraries on macOS Catalina to
> include FFmpeg 4.3.1 dylibs for my app video for a simple video playback:


> I compile successfully all dylibs. But my app crashes when I try to open
> any video file:
> 
> What am I doing wrong? With Zeranoe's dylibs everything works fine. I would
> be grateful for any advice.


Hi,

What do you use to write the app? If it's an actual .app package, then I
think you can put the dylib you want in a directory like
.app/Contents/Frameworks and link against that included copy, so the app
doesn't depend on the installed version.
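Roughly, with install_name_tool (all names here are hypothetical; the real install names depend on how the dylibs were built, so check them with otool -L first):

```shell
app="MyPlayer.app/Contents/MacOS/MyPlayer"                  # hypothetical app binary
old="/usr/local/lib/libavcodec.58.dylib"                    # install name the binary recorded
new="@executable_path/../Frameworks/libavcodec.58.dylib"    # bundled copy instead
cmd="install_name_tool -change $old $new $app"
echo "$cmd"
```

This has to be repeated for each FFmpeg dylib the binary links against.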

Regards,
Ted Park


Re: [FFmpeg-user] Can I convert from .ram to .mp4?

2020-06-17 Thread Ted Park
Hi,

>> NERDfirst, the guy giving a youtube tutorial on FFmpeg
>> <https://www.youtube.com/watch?v=MPV7JXTWPWI>said that one way to use
>> FFmpeg was to have the infile and outfile in the same directory as the
>> FFmpeg.exe. Are you saying he's wrong, then, that one canNOT have those
>> three files in the same directory?

I don't know what the guy in the video said, but I think that would only make 
sense if you had a folder owned by you (a regular user, not root or admin or 
whatever it is on Windows) with a static ffmpeg.exe in it. If it is a folder on 
your PATH environment variable, then most likely you only want executable 
binaries there, not media files.

This is just my personal opinion, but more often than not, people on this list 
include contributors to the FFmpeg project (basically the authors of the 
program), who are more knowledgeable and experienced than a random YouTuber you 
might find.

Best regards,
Ted Park

Re: [FFmpeg-user] Issue in video compression

2020-06-11 Thread Ted Park
Hi,

> The description of preset file that has been used is as follows:-
> 
> vcodec=libx264
> vprofile=main
> level=30
> b:v=80
> bufsize=80
> 
> But the output .mp4 file is not getting compressed. 

My first thought is that the b:v=80 in the preset file is leading to a 
constant bit-rate encode. Is the file in question about an hour and two minutes 
long? If it's close, then I would say that is what's happening. If there is a 
file-size constraint, you should encode using multiple passes; or, if you just 
want a somewhat smaller file to work with, you can use constant-quality (vs. 
constant-bitrate) encoding settings, using -crf to control the output file 
size by trial and error instead of explicitly specifying the video bitrate.
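The arithmetic behind that guess: at a constant bitrate, size is roughly bitrate x duration / 8, so for a size constraint you invert it to pick the two-pass target bitrate. The numbers below are purely illustrative:

```shell
target_mib=700                            # illustrative size budget
duration_s=$((62 * 60))                   # ~1h02m, the duration guessed above
target_bits=$((target_mib * 1024 * 1024 * 8))
video_bps=$((target_bits / duration_s))   # leaves no room for audio/container overhead
echo "$video_bps"
```

In practice you would subtract the audio bitrate from the budget before dividing.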

Regards,
Ted Park

Re: [FFmpeg-user] Cant find proper code for using complex filter and overlay with VAAPI

2020-06-07 Thread Ted Park
Hi there,

> Always i got error:
> >Impossible to convert between the formats supported by the filter
> 'Parsed_format_' and the filter 'auto_scaler_' Error reinitializing
> filters! Failed to inject frame into filter network: Function not
> implemented

There is an automatic format conversion being attempted that is failing. What 
is the input format of x11grab? The command is too complicated right now to 
investigate efficiently. Can you try a simple screen capture and a single 
overlay with increased verbosity (-loglevel debug)?

Regards,
Ted Park

Re: [FFmpeg-user] delay time in live ultrasound converter

2020-04-18 Thread Ted Park
I don't know where I can find bats nearby, so I couldn't try it, but how
does it work? The book makes it sound like you can use any mic, even one
built into a laptop? I suppose that's plausible looking at a typical mic's
frequency response graph; they are just cut off at 20kHz, and don't roll
off after 20kHz like I thought they would. But what about the sample rate?
At 44.1kHz, doesn't that mean anything over 22kHz is more aliasing or
harmonic distortion than an actual recording of bat sounds?

On Sat, Apr 18, 2020 at 11:37 AM Michael Koch
 wrote:
>
> Am 18.04.2020 um 16:52 schrieb Michael Glenn Williams:
> > The subject line about ultrasound caught me eye on this thread that woke up
> > from last year.
> > Can anyone tell us what the original interest in ffmpeg and ultrasound is?
>
> Well, you can use FFmpeg to convert ultrasound to lower frequencies, for
> example if you want to hear bats. I have described that in my book, see
> chapters 3.14 and 3.19 (but the chapter numbers may change in future)
> www.astro-electronic.de/FFmpeg_Book.pdf
>
> Michael

Re: [FFmpeg-user] hw_decode.c on osx?

2020-04-18 Thread Ted Park
Hi,

> videotoolbox decodes to CMSampleBuffer, which is CoreMedia's generic buffer
> wrapper class.  a couple levels down, it's probably a CVPixelBuffer.  if it's
> working for you, i'd be curious to know what hardware and what os version
> you're running on, and what type of file you're feeding to hw_decode.


Oh okay, makes sense. The one I tried it on had a W5700; I tried again on a 
dual-Nehalem Xserve with no GPU (almost; it has a GT120 for the console/desktop 
monitor) and got similar errors:

xserve:~ admin$ $OLDPWD/hw_decode videotoolbox hd-bars-h264.mov hd-bars-h264.raw
[h264 @ 0x7fb74d001800] Failed setup for format videotoolbox_vld: hwaccel 
initialisation returned error.
Failed to get HW surface format.
[h264 @ 0x7fb74d001800] decode_slice_header error
[h264 @ 0x7fb74d001800] no frame!
Error during decoding

xserve:~ admin$ $OLDPWD/hw_decode videotoolbox hd-bars-hevc.mp4 hd-bars-hevc.raw
[hevc @ 0x7fd8a8018a00] get_buffer() failed
[hevc @ 0x7fd8a8018a00] videotoolbox: invalid state
[hevc @ 0x7fd8a8018a00] hardware accelerator failed to decode picture
Error during decoding
Error during decoding

The difference would be that I didn't expect it to work in the first place, I 
guess. Do you know whether hardware decoding works in ffplay? It's harder to 
tell for Mac frameworks, IMO; I'd try attaching to ffplay and seeing if you can 
get it to use a hardware decoder. Which GPU does the machine have?

Regards,
Ted Park


Re: [FFmpeg-user] ffmpeg architecture question

2020-04-17 Thread Ted Park
Hi,

> "split[A][B],[A]select='eq(mod((n+1)\,5)\,3)'[C],[B]datascope,null[D],interleave"
> 
> Though the [B][D] branch passes every frame that is presented at [B], 
> datascope does not appear for frames 2 7 12 17 etc.
> 
> That reveals that traversal of ffmpeg filter complexes is not recursive.

I'm pretty sure "filter_complex" came from "filtergraph that is not simple 
(i.e., complex)" rather than a complex of filters. There's no nesting of 
filters; in what context are you referring to recursion? I've been trying to 
understand the 55 telecine filter script you came up with, and to eliminate as 
many splits as possible. Do you mean how the datascope doesn't appear for the 
frames selected? Identical timestamps might be the issue again; maybe 
setpts=PTS+1 would make them show up? Or does interleave take identical-dts 
input and order it according to some rule?

Regards,
Ted Park


Re: [FFmpeg-user] hw_decode.c on osx?

2020-04-17 Thread Ted Park
Hi,

> i have ffmpeg building successfully on osx (10.14.6; xcode 11.3.1),
> and am using the generated libraries in a personal video project i'm
> working on.  now i'm ready to start thinking about implementing hardware
> decoding ...  i tried adapting some of the code from hw_decode.c, but 
> ran into runtime errors.  thinking that it was my code that was at
> fault, i then compiled hw_decode by itself, but i get the same errors
> even there.  is anyone aware of hw_decode.c not working on osx?

The example's working fine for me. (I think... I don't know what kind of format 
videotoolbox decodes to, but if I squint and tilt my head a little the hex dump 
looks like it might be raw pixel buffer bytes of some sort, lol.)
Did you read the README in the folder the example is in? I'm wondering whether, 
when you say you compiled hw_decode by itself, you literally mean just the 
compile step using cc.
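For reference, the examples' Makefile pulls its compiler flags from pkg-config; compiling hw_decode.c by hand needs the same flags, something like the following sketch (the library list is assumed from what the example uses, and the echoed fallback flags are only illustrative):

```shell
# Use pkg-config flags if available, otherwise fall back to bare -l flags
libs="$(pkg-config --cflags --libs libavformat libavcodec libavutil 2>/dev/null || echo '-lavformat -lavcodec -lavutil')"
cmd="cc hw_decode.c -o hw_decode $libs"
echo "$cmd"
```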

Regards,
Ted Park


Re: [FFmpeg-user] Flutter FFmpeg

2020-04-17 Thread Ted Park
Hi,

>> Someone else recently uploaded an iPhone recording that qtfaststart 
>> completely fubar'ed
> 
> Could you point me to that file, I must have missed it.


Found it, but I didn't mean to say that; it's some part of iOS or some app that 
is messing it up, and qtfaststart doesn't recognize the fubar'ed "mov".


> Begin forwarded message:
> 
> From: Peter Alderson 
> Subject: Re: [FFmpeg-user] Converting Quicktime (.mov ) from iPhone to mp4
> Date: March 25, 2020 at 06:11:15.000GMT-4
> To: FFmpeg user questions 
> Reply-To: FFmpeg user questions 
> 

> The Command / Output is...
> 
> $ ffmpeg -i .mov -vcodec h264 -acodec mp2 test-output-21.mp4
> ffmpeg version 4.2.2 Copyright (c) 2000-2019 the FFmpeg developers
>  built with Apple clang version 11.0.0 (clang-1100.0.33.17)
>  configuration: --prefix=/usr/local/Cellar/ffmpeg/4.2.2_2 --enable-shared 
> --enable-pthreads --enable-version3 --enable-avresample --cc=clang 
> --host-cflags= --host-ldflags= --enable-ffplay --enable-gnutls --enable-gpl 
> --enable-libaom --enable-libbluray --enable-libmp3lame --enable-libopus 
> --enable-librubberband --enable-libsnappy --enable-libtesseract 
> --enable-libtheora --enable-libvidstab --enable-libvorbis --enable-libvpx 
> --enable-libwebp --enable-libx264 --enable-libx265 --enable-libxvid 
> --enable-lzma --enable-libfontconfig --enable-libfreetype --enable-frei0r 
> --enable-libass --enable-libopencore-amrnb --enable-libopencore-amrwb 
> --enable-libopenjpeg --enable-librtmp --enable-libspeex --enable-libsoxr 
> --enable-videotoolbox --disable-libjack --disable-indev=jack
>  libavutil  56. 31.100 / 56. 31.100
>  libavcodec 58. 54.100 / 58. 54.100
>  libavformat58. 29.100 / 58. 29.100
>  libavdevice58.  8.100 / 58.  8.100
>  libavfilter 7. 57.100 /  7. 57.100
>  libavresample   4.  0.  0 /  4.  0.  0
>  libswscale  5.  5.100 /  5.  5.100
>  libswresample   3.  5.100 /  3.  5.100
>  libpostproc55.  5.100 / 55.  5.100
> [mov,mp4,m4a,3gp,3g2,mj2 @ 0x7fe4de00c000] moov atom not found
> .mov: Invalid data found when processing input
> $ 
> 
> The test file can be found at 
> https://s3-eu-west-1.amazonaws.com/app.netframe.io/another-test.mov 


At some point in the file everything is shifted 16 bits (long is 32, isn't it?), 
and it actually looks like the moov atom is cut short based on the length 
field, not appended with other top-level atoms as I originally thought.


Regards,
Ted Park


Re: [FFmpeg-user] propagation of frames in a filter complex

2020-04-17 Thread Ted Park
Hi,

> A native 60Hz TV runs at 60Hz (or 60/1.001Hz).

Yes, apparently some do. But most display controllers I've seen for LCD panels 
were capable of operating to timings that produce 50Hz, 48Hz, 25, and so on 
lower vertical refresh rates. I thought only a 60Hz interlaced CRT would only 
run at 60Hz (or the mains frequency for some units) and nothing else. But I 
recently bought and returned a small monitor that was 30/60/120Hz only, or at 
least only indicated as such via EDID/DDC.

> Does this explain why a p60, 55 telecine viewed on a 120Hz TV would look 
> worse than p24 viewed on a 120Hz TV?
That wouldn't be surprising, but I'm comparing the result of the same 
processing, just switching between a 120Hz timing and a 60Hz timing. I think 
it's because my laptop is not processing it fast enough to reach 60fps, though.

Regards,
Ted Park


Re: [FFmpeg-user] Copy across core DTS audio

2020-04-17 Thread Ted Park
Hi,

> thank you again for taking the time to help me.  Also thanks for the useful
> tips.  This is steep learning curve to code for FFMPEG, but starting to
> understand the syntax now.  So I tried your solution and unfortunately I get
> the same error.  As you suggest, here is the full console output.
> 
> I:\>ffmpeg -i x.mkv -map 0:0 -map 0:2 -map 0:1 -map 0:3 -c copy -bsf:a:1
> dca_core output.mkv


As Moritz mentioned, in output options the streams are indexed following their 
order in the output file; after swapping the two streams so that the DTS track 
comes first, it is the first audio stream (index 0). Hence the suggestion to 
use -bsf:a:0.
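Put together, the whole command would look like this (hedged: the stream layout is assumed from the thread, with the DTS track mapped first among the audio streams):

```shell
in="x.mkv"; out="output.mkv"   # placeholder names
# -bsf:a:0 indexes audio streams of the OUTPUT, so after the -map reordering
# it hits the DTS track only, leaving the other audio stream untouched
cmd="ffmpeg -i $in -map 0:0 -map 0:2 -map 0:1 -map 0:3 -c copy -bsf:a:0 dca_core $out"
echo "$cmd"
```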

Regards,
Ted Park


Re: [FFmpeg-user] Testing transcode speeds on Raspberry Pi 4

2020-04-17 Thread Ted Park
Hi,

>>ffmpeg -y -i SourcePath -b:v 9000k -c:v h264_omx DestPath
>> 
>> The difference is the -s flag.  Why would leaving that out result in an 
>> error?
> 
> I guess your hardware does not support 4k encoding.
> 
> Carl Eugen


> Makes sense, though until seeing your and Ted's notes I was taking for 
> granted the false notion "if it can display 4k it can no doubt encode 4k"
> Never take anything for granted.
> 
> The overall goal of these tests is to see if / when it makes financial sense 
> to use RPIs as a render farm high res video.
> So far the answer points toward 'not likely'.

I think I did the same thing, or similar at least. I thought the specs said 4K 
decode & encode, but I might have been misreading "2 4K displays & 4K decode"; 
for hardware encoding it says 1080p60 max. Apparently the SoC on the Pi 4 is 
basically the same as the one on the 3B+, with better thermal & power 
management; not as significant an upgrade as I thought when I got it.

For anything up to 1920x1080, though, I feel like there must be a hardware-
accelerated scaler that can output the same format the encoder takes as input. 
If you have a way to manage workers and segment the transcoding job, plus some 
way of organizing the units (basically a chassis and power distribution 
solution) and a "mini-fleet" management layer, a portable mini render farm on a 
dolly is feasible for something like a "dailies farm in a backpack". (The 
"reference" hw-accelerated encoding tool included in Raspbian produced output 
that didn't look as good as I expected from the bitrate.)

Regards,
Ted Park


Re: [FFmpeg-user] propagation of frames in a filter complex

2020-04-17 Thread Ted Park
Hey,

>>> What do you mean?
>>> 
>>> split[A]select='not(eq(mod(n+1\,5)\,3))'   [C]interleave
>>> [B]split[D]select='eq(mod(n+1\,5)\,2)'[F]blend[D]
>>> [E]select='eq(mod(n+1\,5)\,3)'[G]
>>> 
>>> I created the filtergraph by hand. I don't know what folks expect, but 
>>> since duplicating pads (e.g., "[A] [A]") just takes up space, I didn't 
>>> duplicate them. I don't know whether that relates to "reuse filter pad 
>>> labels". Could you put some 'meat' on your 'bones'?
>> I meant using D twice.
> 
> How would I use D twice? What do you have in mind? Can you draw a picture? 
> I'm a picture-guy.
> 
>> I thought it might create a cycle or something but since the first pair of 
>> [D] were linked to each other I guess that means you could use it again for 
>> blend's output pad.
> 
> D already is 'blend's output pad.

I'm not really sure how else to put it; I think you might just be missing it 
because it's familiar to you. Anyway, it's not that important, but if you look 
at each pad in the diagram, [D] is used as the split filter's output pad before 
that. I was just commenting that I didn't know you could label an output pad, 
connect it to an input pad, and then use the same label again later on.

>> For the actual filter though, should it look better on a 60Hz vertical 
>> refresh panel than 120Hz? I don't fully understand the rationale, but I was 
>> curious and tried it, on film material I can't tell the difference but 
>> animation looks horrible at 120Hz (like I'm dizzy, like a slow motion blur 
>> effect at regular speed?) but it's fine at 60Hz.
> 
> A p24-to-p120 is a no brainer: 120/24 = 5, therefore, a simple 5x frame 
> repeat. But what about folks who don't have 120Hz TVs?
> 
> A p24-to-p60 is problematic: 60/24 = 2.5, therefore, a telecine.
> 
> A p24-to-p30-to-p60 is a 23 pull-down telecine followed by 2x frame repeat. 
> It's awful.
> 
> A p24-to-p60 55 pull-down telecine is far superior, but players and TVs don't 
> do that -- at least, MPV and/or my TV don't do it.

Yes, now I understand that it would reduce the stuttering you can get. I didn't 
know there were 60Hz display controllers that weren't capable of switching to 
48Hz (or 96Hz for 120Hz displays) until recently. But I don't understand why 
the same 60fps result looks so much worse when the refresh rate is set to 
120Hz. Maybe it is because I am trying to view the filtered result in real 
time instead of writing to disk and playing back.

Regards,
Ted Park


Re: [FFmpeg-user] ffmpeg architecture question

2020-04-17 Thread Ted Park
Hi,

> no, I meant replace [F][G]blend[D] by [G][F]blend[D] and leave everything 
> else as it is.


I thought the latter was the intended order (or maybe it's just the order my 
brain read it in). The other one results in a ton of duplicate-timestamp 
errors, and the correction cancels something out; it looks closer to the 
original 24/1.001.

Regards,
Ted Park


Re: [FFmpeg-user] Copy across core DTS audio

2020-04-17 Thread Ted Park
Hi,

> thank you for your quick response.
> I noticed a typo in my original script
> I think I had previously tried your solution.
> Note, here by mapping the DD stream (and swapping with the DTS) I still get
> the error.  If I map the DTS stream twice  I get the core twice.  So your
> observation is correct, it is applying the filter to both audio streams.  I
> can't seem to specify this filter to just one stream.
> 
> 
> ffmpeg -i x.mkv -map 0:0 -map 0:2 -map 0:1 -map 0:3 -c:v copy -c:a:2 copy
> -bsf:a dca_core -c:a:1 copy -c:s copy output.mkv


I meant try doing the same thing as with the codec copy to specify the stream 
for the filter. So here, it would be -bsf:a:0 dca_core. (And I think when you 
specify the stream type, the number that comes after is the index counting only 
that type of stream, so -c:a:1 and -c:a:0. But since everything is copy you 
could just put -c copy and specify -bsf:a:0 here.)

Regards,
Ted Park


Re: [FFmpeg-user] propagation of frames in a filter complex

2020-04-17 Thread Ted Park
Hi,

> What do you mean?
> 
> split[A]select='not(eq(mod(n+1\,5)\,3))'   [C]interleave
> [B]split[D]select='eq(mod(n+1\,5)\,2)'[F]blend[D]
> [E]select='eq(mod(n+1\,5)\,3)'[G]
> 
> I created the filtergraph by hand. I don't know what folks expect, but since 
> duplicating pads (e.g., "[A] [A]") just takes up space, I didn't duplicate 
> them. I don't know whether that relates to "reuse filter pad labels". Could 
> you put some 'meat' on your 'bones'?

I meant using D twice. I thought it might create a cycle or something, but 
since the first pair of [D] were linked to each other, I guess that means you 
could use it again for blend's output pad.
For the actual filter though, should it look better on a 60Hz vertical-refresh 
panel than at 120Hz? I don't fully understand the rationale, but I was curious 
and tried it: on film material I can't tell the difference, but animation looks 
horrible at 120Hz (like I'm dizzy; like a slow-motion blur effect at regular 
speed?) while it's fine at 60Hz.

Regards,
Ted Park


Re: [FFmpeg-user] Copy across core DTS audio

2020-04-17 Thread Ted Park
Hi,

> ffmpeg -i x.mkv -map 0:0 -map 0:2 -map 0:2 -map 0:3 -c:v copy -c:a copy
> -bsf:a dca_core -c:a:1 copy -c:s copy output.mkv
> 
> When ever I try and include the DD track, FFMPEG comes back to say that the
> dca-core filter is not compatible with DD.
> I'm sure it is a simple syntax error that I have, so any help would be much
> appreciated

-bsf:a applies to all audio streams, thus the error. You can specify a single 
stream the same way you specified -c:a:1 copy (though for that, -c:a copy 
alone should be enough, I think).

Regards,
Ted Park


Re: [FFmpeg-user] rtsp stream interruption

2020-04-16 Thread Ted Park
Hi,

> sorry for 99, it was just a crazy test that i forgot to remove. I will try to 
> be more clear, i'm trying to capture each frame from a rtsp stream,so  it 
> works for something like 1 hour or a little more, then it suddenly stops, the 
> same thing happen when i try on vlc, i open the stream on vlc client and 
> after some time rtsp stream stops, i dont think its a camera problem because 
> it happen on another camera

It's doing a lot more than that: the log says it decoded ~100MB of input and wrote 
~10GB of output somewhere. First, is that something the system can handle, 
memory- and storage-wise? Second, what are you trying to achieve with -r and the 
fps filter? It doesn't look like they're doing anything other than duplicating 
massive numbers of frames, each one just as big as the one before, with no 
temporal compression.

Regards,
Ted Park

Re: [FFmpeg-user] Testing transcode speeds on Raspberry Pi 4

2020-04-16 Thread Ted Park
Hey,

> Well, without the -vf scale=1280:720,format=yuv420p I get the following error
> [swscaler @ 0x222c910] deprecated pixel format used, make sure you did set 
> range correctly
> [h264_omx @ 0x1b31010] Using OMX.broadcom.video_encode
> [h264_omx @ 0x1b31010] OMX error 80001000
> [h264_omx @ 0x1b31010] err 80001018 (-2147479528) on line 561
> Error initializing output stream 0:0 -- Error while opening encoder for 
> output stream #0:0 - maybe incorrect parameters such as bit_rate, rate, width 
> or height
> [aac @ 0x1b50d90] Qavg: 7615.820
> [aac @ 0x1b50d90] 2 frames left in the queue on closing
> Conversion failed!
> 
> 
> Full output...
> ffmpeg version 4.1.4-1+rpt7~deb10u1 Copyright (c) 2000-2019 the FFmpeg 
> developers
>  built with gcc 8 (Raspbian 8.3.0-6+rpi1)
>  configuration: --prefix=/usr --extra-version='1+rpt7~deb10u1' 
> --toolchain=hardened --incdir=/usr/include/arm-linux-gnueabihf --enable-gpl 
> --disable-stripping --enable-avresample --disable-filter=resample 
> --enable-avisynth --enable-gnutls --enable-ladspa --enable-libaom 
> --enable-libass --enable-libbluray --enable-libbs2b --enable-libcaca 
> --enable-libcdio --enable-libcodec2 --enable-libflite --enable-libfontconfig 
> --enable-libfreetype --enable-libfribidi --enable-libgme --enable-libgsm 
> --enable-libjack --enable-libmp3lame --enable-libmysofa --enable-libopenjpeg 
> --enable-libopenmpt --enable-libopus --enable-libpulse --enable-librsvg 
> --enable-librubberband --enable-libshine --enable-libsnappy --enable-libsoxr 
> --enable-libspeex --enable-libssh --enable-libtheora --enable-libtwolame 
> --enable-libvidstab --enable-libvorbis --enable-libvpx --enable-libwavpack 
> --enable-libwebp --enable-libx265 --enable-libxml2 --enable-libxvid 
> --enable-libzmq --enable-libzvbi --enable-lv2 --enable-omx --enable-openal 
> --enable-opengl --enable-sdl2 --enable-omx-rpi --enable-mmal --enable-neon 
> --enable-rpi --enable-libdc1394 --enable-libdrm --enable-libiec61883 
> --enable-chromaprint --enable-frei0r --enable-libx264 --enable-shared 
> --libdir=/usr/lib/arm-linux-gnueabihf --cpu=arm1176jzf-s --arch=arm
> WARNING: library configuration mismatch

That's kind of wacky — how did you install ffmpeg? The error apparently 
indicates insufficient resources 
(https://github.com/raspberrypi/userland/blob/a246147c21ae5be92ad1b85199b5b0bb447e0544/interface/vmcs_host/khronos/IL/OMX_Core.h#L142).
This was an RPi 4, right? 4B? How much memory did you split to the GPU?

I thought it could do 4K encode in hardware, but maybe that was only decode. In 
any case, I'd fix that library mismatch warning even if only as a cosmetic 
improvement — either reinstall from apt or use a static build.
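If the memory split turns out to be the problem, it can be raised in the Pi's boot configuration — a sketch; the right value depends on what else the board is doing:

```shell
# /boot/config.txt on Raspberry Pi OS -- give the VideoCore GPU more RAM.
# 256 MB is a common choice when using the hardware encoder; reboot afterwards.
gpu_mem=256
```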

I don't think there's much difference whether you specify -s or use scale 
filter if all you're doing in the end is scaling the frame size.

Regards,
Ted Park

Re: [FFmpeg-user] rtsp stream interruption

2020-04-16 Thread Ted Park
Hi,

> i'd like to know how to configure rtsp as a input, beacause all the time it 
> stops after some aleatory time, how to make it stable ?
> i'm using these options:
> 
>"-rtsp_transport", "tcp",
>"-i", rtspUrl,
>"-r", "15",
>"-b:v", "1M",
>"-maxrate", "1M",
>"-bufsize", "2M",
>"-vf", "fps=480/60",
>"-f", "image2pipe",
>"-loglevel", "99",
>"-",

Why does it say it stopped? -loglevel doesn't define a level 99; I don't know 
what happens when you pass 99, but you might end up with the bitwise AND 99 & 56 
(56 being the most verbose level, if my memory serves).
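The named levels are the safer way to ask for maximum verbosity (input/output names here are placeholders):

```shell
# Named loglevels run from "quiet" up to "trace"; -v is a synonym for -loglevel.
ffmpeg -loglevel debug -rtsp_transport tcp -i rtsp://camera/stream out.mkv
ffmpeg -v trace -rtsp_transport tcp -i rtsp://camera/stream out.mkv
```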

Regards,
Ted Park

Re: [FFmpeg-user] propagation of frames in a filter complex

2020-04-16 Thread Ted Park
Hey,

> Filter graph:
> 
> split[A]select='not(eq(mod(n+1\,5)\,3))'   [C]interleave
> [B]split[D]select='eq(mod(n+1\,5)\,2)'[F]blend[D]
> [E]select='eq(mod(n+1\,5)\,3)'[G]
> 
> What I expected/hoped:
> 
> split[A] 0 1 _ 3 4 [C]interleave 0 1 B 3 4  //5 frames
> [B]split[D] _ 1 _ _ _ [F]blend[D]   |
> [E] _ _ 2 _ _ [G]   blend of 1+2
> 
> What appears to be happening:
> 
> split[A] 0 1 _ 3 4 [C]interleave 0 1 _ 3 4  //4 frames
> [B]split[D] _ _ _ _ _ [F]blend[D]
> [E] _ _ 2 _ _ [G]
> 
> The behavior is as though because frame n+1==1 can take the [A][C] path, it 
> does take it & that leaves nothing left to also take the [B][D][F] path, so 
> blend never outputs.
> 
> I've used 'datascope' in various parts of the filter graph in an attempt to 
> confirm this on my own. It's difficult because my test video doesn't display 
> frame #s.
> 
> If that indeed is the behavior, then ...
> 
> I need a way to duplicate a frame, # n+1%5==1 in this case, so that the 
> 'blend' operates.

Huh, I didn't know you could reuse filter pad labels. I've translated the 
graph2dot output to SVG, though it doesn't look like it'll help much.
I think graphmonitor would be more helpful for this kind of 
split-chained-to-split complex graph.
But I think you could use the select filter's multiple-output option, where the 
expression's value selects which pad the frame is sent to. (It isn't evaluated 
as a boolean in that case: if the value is 3.x the frame goes to output pad 3, 
if it's 0.x it goes to pad 0, and so on.)
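A sketch of that multi-output form — the expression and labels are illustrative, not a drop-in fix for your graph:

```shell
# select with n=2 outputs: the expression's value picks the output pad.
# Frames where mod(n,5)==1 go to pad 1 ([picked]), everything else to pad 0.
ffmpeg -i in.mkv -filter_complex \
  "select=n=2:e='eq(mod(n\,5)\,1)'[rest][picked]" \
  -map "[rest]" rest.mkv -map "[picked]" picked.mkv
```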

Regards,
Ted Park

Re: [FFmpeg-user] concat demuxer filter_complex (fade)

2020-04-16 Thread Ted Park
Hi,

> Actually I don't see why it should be complicated. Maybe we're writing about 
> different things. You're still writing about the concat demuxer but I already 
> came to the conclusion that its easier to use the concat filter if you want 
> to insert transitions, since otherwise you'd have to preprocess the images 
> into videos and already add the transition at this point. With the concat 
> filter, you can use one filterchain to add the transitions and concatenate 
> the images in one step (no additional video files in the preprocessing 
> process)

Oh, yeah, you're right — I assumed the original approach would be the one in the 
OP. But I meant it's complicated because ffmpeg didn't have a transition filter 
for video until recently, so you're left managing everything yourself: when to 
start reading/queuing frames for clip 2, when you can stop looping the first 
part, and so on. One filter vs. one filterchain.
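With a recent enough build, the new xfade filter handles that bookkeeping itself — a sketch with placeholder inputs and made-up timings:

```shell
# Crossfade into clip2 over the last second of clip1
# (offset = clip1 length minus 1s; here clip1 is assumed to be 10s long).
# Requires a build new enough to include xfade.
ffmpeg -i clip1.mkv -i clip2.mkv -filter_complex \
  "[0:v][1:v]xfade=transition=fade:duration=1:offset=9[v]" \
  -map "[v]" out.mkv
```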

Regards,
Ted Park

Re: [FFmpeg-user] Testing transcode speeds on Raspberry Pi 4

2020-04-15 Thread Ted Park
Hey,

> An attempt to do some hardware acceleration...
>> ffmpeg -i path/to/source.mp4 -c:v h264_omx -b:v 9000k -vf 
>> scale=1280:720,format=yuv420p path/to/dest.mp4
> Gave the exact same same results

That sounds to me like the encoder is waiting for filtered frames in both 
cases. Did you try without the -vf ?

Regards,
Ted Park

Re: [FFmpeg-user] Flutter FFmpeg

2020-04-15 Thread Ted Park
Hi,

> ```
>  int resultCode = await _ffmpeg.executeWithArguments([
>"-noautorotate",
>"-i",
>fileToCompress.path,
>"-movflags",
>"+faststart",
>"-vcodec",
>"libx264",
>"-crf",
>"",
>'-preset:v',
>'',
>"-vf",
>"format=yuv420p",
>result.path
>  ]);
> ```
> 
> **Current behaviour**
> This works well on android (testing with a oneplus 6) as it always seems to 
> compress the file to <20MB in a reasonable time. However when i test this on 
> IOS (Iphone SE) the imagepicker seems to compress the video first e.g. 150MB 
> to 25MB. Which I then attempt to compress the 25MB again with above command 
> with params "veryfast" and 18.
> 
> The file always gets larger that the original 25mb and it takes forever to 
> execute (>3min). Similar behaviour is experienced when I use different params 
> e.g. 24 and faster.

Someone else recently uploaded an iPhone recording that qtfaststart completely 
fubar'ed, I think something to do with the export/share option adding a bunch 
of top-level metadata atoms after the moov? 

Could you possibly try importing the file doing your best not to let iOS alter 
it, and compare it to the file you transferred using the original method? If 
you're on a mac, a USB cable and Image Capture app should work, but drag the 
file somewhere, instead of importing it into Photos or other library.

Regards,
Ted Park

Re: [FFmpeg-user] concat demuxer filter_complex (fade)

2020-04-15 Thread Ted Park
Hi,

> I learned, that it's possible to define multiple inputs and reference then
> later in the filter_complex to the different inputs like this:
> 
> ffmpeg -i vid1.mkv -i vid2.mkv -i vid3.mkv -filter_complex
> "[0:v]fade=t=in:st=0:d=1[v0]; [1:v]fade=t=in:st=0:d=1[v1];
> [2:v]fade=t=in:st=0:d=1[v2];
> [v0][0:a][v1][1:a][v2][2:a]concat=n=3:v=1:a=1[v][a]" -map "[a]" -map "[v]"
> out.mkv
> 
> But is this possible with using the concat demuxer (providing the files by
> writing them into a file) too? And if yes how?
> (Goal is to specify the videos through a file, using the concat demuxer, but
> apply the fade filter to each of the video files before concatenating them.
> Is it possible to make this in one step?)

> Can you please elaborate what you mean by fragile and complicated?


I mean, you've got to admit that is complicated, if nothing else.

Anyway, I remembered that in an ffconcat file you can declare streams within the 
single virtual input the concat demuxer presents. I've only ever seen it used to 
pick specific elementary streams out of VOB files, though, so I'm not sure 
whether it works with other demuxers. Maybe someone else knows?
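The directives I was thinking of look roughly like this (filenames and stream IDs are made up; whether this works outside MPEG-PS/VOB inputs is exactly the part I'm unsure about):

```
ffconcat version 1.0
# Declare the streams of the virtual input up front...
stream
exact_stream_id 0x1e0
stream
exact_stream_id 0x1c0
# ...then list the files to concatenate.
file part1.vob
file part2.vob
```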

Regards,
Ted Park

Re: [FFmpeg-user] ffplay options for setting audio volume and locating player windows on the screen

2020-04-15 Thread Ted Park
Hi,

> Have you tried the good ole tried and true X / XQuartz -geometry options?
> I have not used OSX in a very long time (since the switch to Quartz), so
> there may be some differences between the old Xorg and XFree nomenclature,
> but I imagine those types of options would still be there for initial
> window placements and sizes.


Unfortunately X11 is entirely deprecated on macOS and no longer even offered as 
an optional install. The latest XQuartz release is something like five years old 
and ships server version 1.17 or so. You can build it yourself with the help of 
ports etc., but a lot of things don't work and it's a pain in the neck in 
general.

I'm pretty sure the static build uses AppKit.

Regards,
Ted Park

Re: [FFmpeg-user] Muxing multiple files and concatenating those outputs

2020-04-15 Thread Ted Park
Hi,

> Yeah, when we record the calls, the directory structure preceeding those
> names is /MM/DD/HH and the filenames are MIN_SEC_MSEC.codec(side)


I mean some phones specifically put g729a, and I assume it's the same for g729b, 
so I started imagining filenames ending all over the place: g729aa, g729ba, etc.

> I was really looking for just syntax to group commands, so I could use the
> merge filter output as direct input for the concatenation.

If you mean the afifo I inserted, you could probably drop those if the machine 
is fast enough, or the calls short enough. A different approach might be to pad 
each file in a call (or use the cue filter) and mix them all together. But 
grouping commands isn't really something ffmpeg does, except for things like 
image sequences or preparing segmented delivery media.
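The padding-and-mixing idea might look something like this — filenames and the 8-second delay are made up; you'd compute the offset from the filenames or a call log:

```shell
# Lay the second call after the first by delaying it, then mix.
# adelay takes milliseconds per channel; amix sums the inputs.
ffmpeg -i call1.wav -i call2.wav -filter_complex \
  "[1:a]adelay=8000|8000[d];[0:a][d]amix=inputs=2:duration=longest" out.wav
```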

> The man page and web searches I tried came up empty, but I figured someone 
> may know some magic sauce I could not find.


Maybe you've been searching for the wrong terms? It sounds to me like what you 
are looking for is closer to shell features: parameter substitution, filename 
generation, and so on.

If portability is not an issue, some shells have more features than others, 
though there's a different learning curve to each one. 

Or there's always xargs. Yeah, probably xargs. I can't really tell, but the 
associated filenames seem pretty much arbitrary — do you parse the date/time in 
the filename to decide which ones to put together, or is there a call log to 
reference?

Regards,
Ted Park

Re: [FFmpeg-user] ffplay options for setting audio volume and locating player windows on the screen

2020-04-15 Thread Ted Park
Hey,

> Thank you for responding. Regarding -volume, i'm just putting 30
> without the %.
> Regarding -top and -left, my version of ffplay (the latest MAC OSX download
> from the website)
> Doesn't seem to know these options.
> 
> I'm confused why generic options like these would vary from
> build/distribution to build?

Whoa, that's really weird. You could say they vary precisely because they're 
such generic parameters: ffplay mostly doesn't interact with the hardware 
directly. Drawing frames in a window and playing sound through the system audio 
server are handled by an external library — my build uses SDL2 for this, and I 
thought that was the most common case when building ffplay for macOS. That said, 
most of the common parameters are "mapped" to the same ffmpeg option, so the 
commands stay as consistent as possible even across builds using different 
components.

The options do vary from build to build, because each build will probably have 
some meaningful difference in the source code; the project is active and keeps 
changing.
The banner that shows up by default when you run any ffmpeg command tells you 
almost everything there is to know about a build's configuration. Since there 
are practically infinite variations of the software, people tend to assume 
you're running a fairly recent build unless you provide that banner.
A lot of people think they are downloading the latest version when they download 
the latest release, but releases are actually fairly old (relatively speaking), 
since by nature there are only a few a year, if that.
The options I mentioned go back a couple of years though, so I'm not sure — 
maybe they stopped working at some point? Do post the output from the ffplay 
command when you get the chance.

Regards,
Ted Park

Re: [FFmpeg-user] ffplay options for setting audio volume and locating player windows on the screen

2020-04-14 Thread Ted Park
Hi,

> I tried the -w and -h options, which were not found by ffplay. The -x and
> -y options seem to be the options that set the width and heoght, not the
> location on the screen. Can anyone tell me how to position the windows in
> different places on the screen?
> 
> I also tried the -volume option to start out at 30% volume, but that option
> also wasn't found.
I'm not sure if it's the same across all libraries/OSes, but for me -left and 
-top set the distance from the left and top edges of the screen, respectively. 
And when you specify the volume, try leaving out the "%" sign.
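For example (size and position values picked arbitrarily):

```shell
# Place a 640x360 ffplay window 100 px from the left and 50 px from the top
# of the screen, starting at 30% volume (note: no "%" sign on the value).
ffplay -x 640 -y 360 -left 100 -top 50 -volume 30 input.mp4
```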

Regards,
Ted Park

Re: [FFmpeg-user] Muxing multiple files and concatenating those outputs

2020-04-14 Thread Ted Park
Hi,

> Example using these files (suffix denotes codec - a and b are each side of
> a call):
> 
> 18_17_248.g729a
> 18_17_248.g729b
> 19_18_440.g711a
> 19_18_440.g711b
> 20_01_886.g729a
> 20_01_886.g729b
> 
That is really confusing...

> Current method to concatenate and transcode to wav putting caller on left
> and callee on right:
> 
> ffmpeg -f g729 -i 18_17_248.g729a -f g729 -i 18_17_248.g729b
> -filter_complex "[0:a][1:]amerge[aout]" -map "[aout]" out_1.wav
> 
> ffmpeg -ar 8000 -y -f mulaw -i 19_18_440.g711a -ar 8000 -f mulaw -i
> 19_18_440.g711b -filter_complex "[0:a][1:a]amerge[aout]" -map "[aout]"
> out_2.wav
> 
> ffmpeg -f g729 -i 20_01_886.g729a -f g729 -i 20_01_886.g729b
> -filter_complex "[0:a][1:]amerge[aout]" -map "[aout]" out_3.wav
> 
> ffmpeg out_1.wav out_2.wav out_3.wav final.wav
> 
> My goal is to syntactically accomplish this with one ffmpeg string (not
> pipes, ; or && bash syntax).
> 
> I would appreciate any insight - I have tried everything I can find / think
> of without success.
> 
The final command is supposed to concatenate the out_?.wav files into 
final.wav, right?

The way you have them written makes it easy to combine into one command, though 
I have to wonder what benefit (if any) you expect from that. If you're scripting 
it, it really shouldn't matter; and if you're doing it by hand, doesn't it just 
make one big command that has to be repeated in full if it fails? Anyhow,

> ffmpeg -f g729 -i 18_17_248.g729a -f g729 -i 18_17_248.g729b -ar 8000 -y -f 
> mulaw -i 19_18_440.g711a -ar 8000 -f mulaw -i
> 19_18_440.g711b -f g729 -i 20_01_886.g729a -f g729 -i 20_01_886.g729b 
> -filter_complex 
> '[0][1]amerge,afifo[l1];[2][3]amerge,afifo[l2];[4][5]amerge,afifo[l3];[l1][l2][l3]concat=v=0:a=1:n=3'
>  

I think this will do what you want; the individual afifo filters on the amerge 
outputs might not be needed.

Regards,
Ted Park

Re: [FFmpeg-user] concat demuxer filter_complex (fade)

2020-04-14 Thread Ted Park
Hi,

The filter is not in the version of ffmpeg you are using; you will need a more 
recent version for it to be there. Try downloading a recent static build from 
the website, or compile it yourself. (As far as I can tell, it's not in any 
release version yet.)

Regards,
Ted Park

Re: [FFmpeg-user] Failed to open codec in avformat_find_stream_info

2020-04-08 Thread Ted Park
Hi,

> What am I doing wrong?
> 
> I searched but could find no resolution to this error.

Maybe my last reply didn’t go through. Try using the bluray scheme to access 
the file.

>> Begin forwarded message:
>> 
>> From: Ted Park 
>> Subject: Re: [FFmpeg-user] Failed to open codec in avformat_find_stream_info
>> Date: April 2, 2020 at 13:55:49.000GMT-4
>> To: FFmpeg user questions 
>> 
>> Hi Mark,
>> 
>>> The test...
>>> Failure. And now I have 2 issues:
>>> 1, "Failed to open codec in avformat_find_stream_info" is still there, and
>>> 2, I realized I need '-map 0' and now get "No wav codec tag found for codec 
>>> pcm_bluray".
>>> Oh, dear.
>>> 
>>> pcm_bluray either doesn't exist, or it's not in the documentation. I assume 
>>> this is LPCM, and I do want that. I'm copying through all audio, '-c:a 
>>> copy', so I don't understand why ffmpeg would need the codec and would be 
>>> looking for codec pcm_bluray.
>> 
>> Well pcm isn’t really a codec the same way mp3 is, I’m guessing but all the 
>> decoder does is probably just parse the LPCM format info like bit depth, 
>> rate and channel layout.
>> 
>>> And I still don't understand what '-map 0' does, but I'll try to find it in 
>>> the docs and read it again. I keep forgetting about it.
>>> 
>>> 
>>> C:\>ffmpeg -i IN.M2TS -vf "telecine=pattern=,bwdif=mode=send_frame" 
>>> -avoid_negative_ts 1 -c:v libx264 -crf 20 -preset slower -tune film -c:a 
>>> copy -c:s copy -map 0 OUT.MKV
>> 
>> 
>> I’m not sure if it’s you or me, but I feel like something is definitely 
>> getting lost in communication…
>> 
>> Referring back to the command in your original message:
>>> C:\CMD & tiny apps\ffmpeg>ffmpeg -i B:\BDMV\STREAM\0.m2ts -vf 
>>> "telecine=pattern=,bwdif=mode=send_frame" -avoid_negative_ts 1 
>>> -analyzeduration 8000 -probesize 8000 -c:v libx265 -crf 20 -preset 
>>> medium -c:a copy -c:s copy C:\AVOut\8.MKV
>> 
>> For ffmpeg to figure out how to read a m2ts file as found on a pressed 
>> commercial BD-ROM designed to play in regular BD players and not used as a 
>> simple data disc, you need to prefix the mount point or root of the 
>> filesystem with bluray:, and possibly also specify which part to read, with 
>> parameters such as angle, playlist, and chapter.
>> 
>> Again I am not sure if escaping or quoting is needed but assuming you can 
>> “cd” to “B:\” in the windows command line and it shows the BDMV folder when 
>> you list the directory entries, I think the command would look something 
>> like this?
>> > ffmpeg -i bluray:B:\ -vf "telecine=pattern=,bwdif=mode=send_frame" 
>> > -avoid_negative_ts 1 -analyzeduration 8000 -probesize 8000 -c:v 
>> > libx265 -crf 20 -preset medium -c:a copy -c:s copy C:\AVOut\8.MKV
>> 
>> 
>> And that would let ffmpeg navigate to whatever the default program is on the 
>> disc. I’m not sure if it is always the main program or if it uses the first 
>> playlist if you don’t specify which one.
>> But you basically need to do this if the elementary streams have not yet 
>> been separated from the menu, studio/distributor logo hero sequence, etc. 
>> because if it was copied directly from the UDF (decryption having been 
>> handled by that nifty tool) the media is still in the very closely 
>> interleaved Blu-Ray specific format as necessitated by the bitrates and 
>> typical player capability.
>> 
>> To be honest I actually don’t know if I’ve ever had a computer bluray drive, 
>> I authored an image for reference, to check the errors I get if I do the 
>> same thing (I didn’t put any Sony subtitles in there so that’s missing but 
>> you can see the difference when you use the bluray protocol vs regular file. 
>> Even with copy protection set to completely open it is necessary.
>> 
>> kumowoon1025@rfarm1 Movies % ls TITLE
>> BDMVCERTIFICATE
>> 
>> So if TITLE was the root folder of the disc file structure, I would need to 
>> do this to read all playlists:
>> 
>> kumowoon1025@rfarm1 Movies % ffmpeg 
>> TITLE/BDMV/PLAYLIST/*.mpls(^P:-i:P#bluray:TITLE#^P:-playlist:^@:t:r:) 
>> ffmpeg -playlist 0 -i bluray:TITLE -playlist 1 -i bluray:TITLE 
>> -playlist 2 -i bluray:TITLE -playlist 3 -i bluray:TITLE -playlist 
>> 4 -i bluray:TITLE 
>> ffmpeg version git-2020-04-02-7039045 Copyright (c) 2000-2020 the FFmpeg 
>> 

Re: [FFmpeg-user] AMD GPU + Vaapi encoding problem

2020-04-08 Thread Ted Park
> On Apr 8, 2020, at 15:56, Colin Bitterfield  wrote:
> 
> I tried VAAPI with a RADEON 580 on Linux. It was intermittent at best.
Maybe it’s because it’s the last RX Polaris card they released before moving 
on — the video encoding/decoding hardware changed. But then again it is the 
exact same GPU as the 480, so I wonder what’s up with that. Do you mean 
performance was intermittent?

> I wound up using Vulkan with success.
> 
> This is the hwaccel command I was using for transcoding.
> 
> time ffmpeg -hide_banner -stats -hwaccel vulkan -init_hw_device vulkan:0 
> -threads 16 -i input.mp4 -vf scale=854:480 -c:v  libopenh264  -c:a copy 
> -quality quality -b:v 2M -crf 23 -maxrate 2M -bufsize 6M -slice_mode dyn 
> -max_nal_size 65535 -allow_skip_frames 1 output_gpu_3.mp4 -y

What were you comparing this command’s execution time against? I don’t know 
what Vulkan support looks like (with Apple cutting it off in favor of their own 
API and all…), but I thought those ffmpeg options, the way you used them, would 
only apply to the decoder — and the encoder used is Cisco’s libopenh264. A bit 
confused here.

Regards,
Ted Park

Re: [FFmpeg-user] AMD GPU + Vaapi encoding problem

2020-04-08 Thread Ted Park
Hello,

> Okay, i've used latest static build from git of amd64 arch, it can't find
> option -vaapi_device,
> and this is the output (ran from the extracted archive path):
> $ ./ffmpeg -vaapi_device /dev/dri/renderD128 -f x11grab -video_size
> 1920x1080 -i :0 -vf 'format=nv12,hwupload' -c:v h264_vaapi -b:v 7M
> -profile:v main -bf 0 output.mp4
> ffmpeg version N-52056-ge5d25d1147-static https://johnvansickle.com/ffmpeg/
> Copyright (c) 2000-2020 the FFmpeg developers
>  built with gcc 8 (Debian 8.3.0-6)
>  configuration: --enable-gpl --enable-version3 --enable-static
> --disable-debug --disable-ffplay --disable-indev=sndio
> --disable-outdev=sndio --cc=gcc --enable-fontconfig --enable-frei0r
> --enable-gnutls --enable-gmp --enable-libgme --enable-gray --enable-libaom
> --enable-libfribidi --enable-libass --enable-libvmaf --enable-libfreetype
> --enable-libmp3lame --enable-libopencore-amrnb --enable-libopencore-amrwb
> --enable-libopenjpeg --enable-librubberband --enable-libsoxr
> --enable-libspeex --enable-libsrt --enable-libvorbis --enable-libopus
> --enable-libtheora --enable-libvidstab --enable-libvo-amrwbenc
> --enable-libvpx --enable-libwebp --enable-libx264 --enable-libx265
> --enable-libxml2 --enable-libdav1d --enable-libxvid --enable-libzvbi
> --enable-libzimg
>  libavutil  56. 42.101 / 56. 42.101
>  libavcodec 58. 76.100 / 58. 76.100
>  libavformat58. 42.100 / 58. 42.100
>  libavdevice58.  9.103 / 58.  9.103
>  libavfilter 7. 77.100 /  7. 77.100
>  libswscale  5.  6.101 /  5.  6.101
>  libswresample   3.  6.100 /  3.  6.100
>  libpostproc55.  6.100 / 55.  6.100
> Unrecognized option 'vaapi_device'.
> Error splitting the argument list: Option not found


The option might not be there anymore; try -init_hw_device instead, and use 
-hwaccels to list what hardware support the build has. Also, can you confirm 
this is the same machine (or at least similar hardware) that ran the earlier 
command?

Regards,
Ted Park

Re: [FFmpeg-user] ffmpeg low bitrate with with qulaity

2020-04-08 Thread Ted Park
Hi,

> So, as a first step, just try reducing your 5 Mb/s to 2 Mb/s and see
> what happens.
> 
> Secondly, encoding is basically a trade-off between encoding speed,
> resulting size (file size or bit rate), and quality. Up to certain
> limits, you can sacrifice one for the other. E.g. you can get a higher
> compression rate out of some codecs by putting more CPU/GPU (or more
> time, if you're no realtime) into encoding.

Yeah, you just need to try different settings and decide what is acceptable for 
yourself; results vary wildly depending on the nature of the media as well.
This might be a prejudice I have, but until GPUs started shipping dedicated 
encode/decode blocks, they were certainly faster but didn’t compress as well 
while maintaining quality. That’s just how I remember it — the difference is 
practically gone now, or even surpassed by the onboard encoders in newer 
rendering-focused models — but I still lean towards multiple passes on the CPU 
for predictable results.
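The multi-pass approach I mean looks roughly like this, carrying over the 6M target from your command (filenames are placeholders):

```shell
# Pass 1 only writes rate-control statistics; pass 2 uses them to hit
# the average bitrate target much more predictably than a single pass.
ffmpeg -y -i input.mkv -c:v libx264 -b:v 6M -pass 1 -an -f null /dev/null
ffmpeg -i input.mkv -c:v libx264 -b:v 6M -pass 2 \
       -c:a aac -b:a 128k output.mp4
```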

>> yadif_cuda=mode=0:-1:0,scale_npp=1920:1080 -r 25 -b:v 6M -bf 2 -g 150 -c:a
>> aac -b:a 128k -ar 48000 -strict -2 -f flv
> 
> I don't see how you achieve 5 Mb/s is you encode with "-b:v 6M" and "-b:a
> 128k", that's more than 6.1 Mb/s.

I think it’s because it’s a single-pass encode, and the encoder simply didn’t 
find a need for the full target bitrate by the time it finished; it also has a 
6-second GOP.

Regards,
Ted Park

Re: [FFmpeg-user] Works in ffmpeg-git, doesn't in ffmpeg 1:4.2.2-5

2020-04-07 Thread Ted Park
> On Apr 7, 2020, at 03:22, Carl Zwanzig  wrote:
> 
> On 4/6/2020 6:59 PM, Kai Hendry wrote:
>> For several months as an Archlinux user, kms screen capture has been broken 
>> in the standard ffmpeg package.
> 
>> The resolution of the ticket was "invalid", but the problem remains, that 
>> the ffmpeg release doesn't work.
>> So how can I help get a stable release out?
> 
> If you're using an _archlinux_ package, you need to talk to the archlinux 
> folks who make it. The ffmpeg project only provides the builds on it's web 
> own site.  ffmpeg is another example where using the distro's package is 
> probably not a good idea.
> 
> z!

Are you talking about how Debian’s ffmpeg packages got hijacked years ago?

I think the ffmpeg in the default sources is mostly fine nowadays, if a little 
slow to update.

Regards,
Ted Park

Re: [FFmpeg-user] Upmixing Dolby Pro Logic audio - still not possible?

2020-04-06 Thread Ted Park
Hi,

> It is not that simple, it involves some complicated operations.


I thought the goal was to separate the surround channels matrixed into the 
stereo track by Dolby Surround/DPL, not to up-mix to a layout with more speakers 
than there are channels. Or are you saying that the decoding itself involves 
complicated operations?

I mean, the operations do involve complex terms, and if you find a DPL decoder 
plugin/library then it probably is better than doing it on your own. I was just 
saying that all the information needed to implement a perfect decoder for the 
original Dolby Stereo matrix-encoded formats is out there. In fact, I don’t 
think you can buy these decoders from Dolby anymore; they will refer you to one 
of the online virtual encoding/decoding houses and tell you that their software 
implementation is just as good as their rackmount units.

And as they transitioned from one algorithm/coding strategy/technology to 
another, they were pretty generous about letting reference material circulate 
freely; if you can’t find a Dolby white paper, it’s probably only because it’s 
so old, not because it’s proprietary. That’s all I meant, since the OP found 
what was there to be lacking. Not that writing a decoder/dematrixer(?) is a 
weekend project.

> Yes. The audio track in question is simply Dolby Pro Logic, not even
> DPL2.

>> ...you could get the matrix coefficients by
>> googling for them and do the math and get the separated channels by
>> hand.
> 
> Any pointers to guides on doing this would be greatly appreciated.
> 
> Thanks!

This is more of a conceptual overview rather than a complete specification of 
the stream, but it goes into some detail on how the encoder operates, and in 
turn, how a decoder would work, in inverse.
https://media.kumowoon1025.com/dolby/Assets/US/Doc/Professional/208_Dolby_Surround_Pro_Logic_Decoder.pdf

Also check out Dolby’s GitHub <https://github.com/DolbyLaboratories>, though I 
doubt you’ll find anything that old.

Regards,
Ted Park


Re: [FFmpeg-user] Please help confirm if this hard, system crash is reproducible

2020-04-06 Thread Ted Park

>> ffplay -f lavfi -i "anoisesrc[a1];sine,[a1]amix"
>> ffplay version 4.2.2 Copyright (c) 2003-2019 the FFmpeg developers
> 
> Apart from "is the issue reproducible with current FFmpeg git head?"
> The problem with ffplay-specific issues is that ffplay depends on an
> external library that can easily be the issue for such (hard to debug)
> crashes.
> That being said: valgrind shows nothing unusual for above command line.
> 
> Carl Eugen

I mean, it was kind of stale; the build I encountered it on was one commit 
behind master, I think.
I switched to the default binary installation in Homebrew just to see if it was 
my build configuration. But anyway, you’re probably right; I’m starting to 
think it’s some hardware issue, or the mess I’ve made of my SDL2 installation. 
I don’t think there’s anything that could reliably cause this kind of failure 
with any typical build of ffmpeg.

> is this a correct syntax if you specify only one input for amix? I'm not 
> sure. Does it mean amix gets [a1] as the first input and sine as the second 
> input, or vice versa? Did you try
> 
> "anoisesrc[a1];sine[a2];[a1][a2]amix”
> 
> 
> Michael

I did, and nope, nothing happened. This was on OS X 10.14 (the first machine 
was running 10.15.5), so I might try yet another machine, but I’ll leave that 
for later if I end up not being able to fix the issue on this one.

If anyone has ffmpeg on 10.15.3~5, try it out; at this point I think it’s just 
me and not a bug.

Regards,
Ted Park


Re: [FFmpeg-user] Upmixing Dolby Pro Logic audio - still not possible?

2020-04-06 Thread Ted Park
Hi,

> I see no mention of Dolby Pro Logic on that page, and according to this
> comment[1] that filter uses its own (necessarily heuristic) algorithm;
> it does not use the Pro Logic channel information embedded in the
> source.
> 
> I'm interested in doing this in a "lossless" fashion, where the
> resulting multi-channel audio is identical to that produced by a
> Dolby Pro Logic-enabled AVR.


Do you mean the old DPL, or even DPLII, without the various enhancements Dolby 
put on it to extend patents?
If so, I think all there is to it is doing the reverse of the matrixing 
applied to the channels (I don’t think there is any metadata unless it’s one 
of the IIz, IIx, etc. variants).

And the matrices have been reverse engineered pretty much to perfection, so if 
you really don’t want to get a reference decoder (they aren’t that expensive, 
though you might want an empty equipment rack handy), you could look up the 
matrix coefficients, do the math, and separate the channels by hand.
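For illustration, here is a minimal sketch of that passive dematrix in Python, assuming the commonly cited 1/√2 sum/difference coefficients. This is a toy model, not a Pro Logic implementation; a real decoder also applies steering logic, phase shifting, band-limiting, and a surround delay, none of which are modeled here.

```python
import math

# Passive dematrix of a Dolby Surround Lt/Rt pair into L, R, C, S.
# The 1/sqrt(2) coefficients are the commonly cited values; a real
# Pro Logic decoder adds steering logic, phase shifting, band-limiting
# and a surround delay, none of which are modeled here.
K = 1 / math.sqrt(2)

def dematrix(lt, rt):
    """Return (left, right, center, surround) sample lists."""
    left = list(lt)                                   # L passes through from Lt
    right = list(rt)                                  # R passes through from Rt
    center = [K * (l + r) for l, r in zip(lt, rt)]    # sum signal -> C
    surround = [K * (l - r) for l, r in zip(lt, rt)]  # difference signal -> S
    return left, right, center, surround

# A center-panned source encodes as identical Lt/Rt, so the
# difference (surround) channel comes out silent.
l, r, c, s = dematrix([0.5, -0.25], [0.5, -0.25])
```

You would run this per-sample over decoded PCM; the lack of steering is exactly why passive decoding has poor channel separation compared to a real decoder.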

Regards,
Ted Park


Re: [FFmpeg-user] Please help confirm if this hard, system crash is reproducible

2020-04-06 Thread Ted Park
Hello,

>> % ffplay -f lavfi -i "anoisesrc[a1];sine,[a1]amix"
> 
> is this a correct syntax if you specify only one input for amix? I'm not 
> sure. Does it mean amix gets [a1] as the first input and sine as the second 
> input, or vice versa?

It is vice versa. I don’t know if it is proper syntax, something I picked up, 
or a shorthand I found to work, but I have thought of the comma as the same as 
a semicolon, with all unlabeled outputs on the left implicitly mapped to any 
unlabeled inputs on the right, and any necessary resamplers auto-inserted in 
between.

> Did you try

> "anoisesrc[a1];sine[a2];[a1][a2]amix”

I will try it on a secondary machine (I was not expecting that crash the first 
time x_x) and be sure to report back.

Regards,
Ted Park


[FFmpeg-user] Please help confirm if this hard, system crash is reproducible

2020-04-06 Thread Ted Park
0KB sq=0B f=0/0   
   8.75 M-A:  0.000 fd=   0 aq=  364KB vq=0KB sq=0B f=0/0   
   8.78 M-A:  0.000 fd=   0 aq=  380KB vq=0KB sq=0B f=0/0   
   8.81 M-A:  0.000 fd=   0 aq=  380KB vq=0KB sq=0B f=0/0   
   8.85 M-A:  0.000 fd=   0 aq=  380KB vq=0KB sq=0B f=0/0   
   8.88 M-A:  0.000 fd=   0 aq=  380KB vq=0KB sq=0B f=0/0   
   8.91 M-A:  0.000 fd=   0 aq=  380KB vq=0KB sq=0B f=0/0   
   8.94 M-A:  0.000 fd=   0 aq=  380KB vq=0KB sq=0B f=0/0   
   8.97 M-A:  0.000 fd=   0 aq=  380KB vq=0KB sq=0B f=0/0   
   9.00 M-A:  0.000 fd=   0 aq=  380KB vq=0KB sq=0B f=0/0   
   9.03 M-A:  0.000 fd=   0 aq=  380KB vq=0KB sq=0B f=0/0   
   9.06 M-A:  0.000 fd=   0 aq=  380KB vq=0KB sq=0B f=0/0   
   9.09 M-A:  0.000 fd=   0 aq=  364KB vq=0KB sq=0B f=0/0   
   9.12 M-A:  0.000 fd=   0 aq=  380KB vq=0KB sq=0B f=0/0   
   9.15 M-A:  0.000 fd=   0 aq=  380KB vq=0KB sq=0B f=0/0   
   9.18 M-A:  0.000 fd=   0 aq=  380KB vq=0KB sq=0B f=0/0   
   9.21 M-A:  0.000 fd=   0 aq=  380KB vq=0KB sq=0B f=0/0   
   9.24 M-A:  0.000 fd=   0 aq=  380KB vq=0KB sq=0B f=0/0   
   9.28 M-A:  0.000 fd=   0 aq=  380KB vq=0KB sq=0B f=0/0   

Please try running the command on a system you can afford to lose for a few 
minutes, or, if you see a glaring problem with it, please let me know.

Thanks in advance,
Ted Park


Re: [FFmpeg-user] Need help splitting stereo FLAC into two separate files

2020-04-05 Thread Ted Park
Hi,
> As the others speculated: Under Windows, "'" is not a command line
> quotation character. If parts of the command line need to be quoted in
> order to be collated, you need to use the double quotation mark '"'.
> The single quotation mark is passed directly to ffmpeg on Windows,
> making your filter argument unparsable.

Yet another thing I learn about Windows… This is surprising to me; I remember 
fighting a machine that would not bind to AD because it had a single quote 
(iirc) in its machine name. Does the single quote character have any special 
significance in the Windows command line? (And while I’m asking, where do 
back-ticks and apostrophes fall?)

So is the ffescape tool compiled differently to work properly on/for Windows?

Regards,
Ted Park


Re: [FFmpeg-user] telecine pattern 5555 - Judder-free, 60 FPS telecine (?)

2020-04-05 Thread Ted Park
Hi,

I’m a bit lost here, to be honest. Do you have a plasma panel or something else 
that is not progressive-scan? It just sounds like what happens when you watch 
interlaced content made from a progressive source at a progressive framerate, 
like the typical “video cam” effect put on film camera footage.

> If telecine is performed in advance by the user and it's a 5-5-5-5 pull-down 
> telecine, there is no judder on playback on a 60Hz TV because there is no 
> cadence and because the video is already 60fps.
Like here, you mean 60 fields per second, right? Or is this a 60Hz LCD panel? 
(Also, is it just convention to use 4 film frames when denoting telecine 
patterns? Because, as you say, this is just repeating each field 5 times; maybe 
5-5 is necessary for the interlaced “frame”.)

> A 5-5-5-5 pull-down telecine does have combing, but the combing is 2 frames 
> out of every 10 frames (i.e., 20%). The combing (C) versus progressive (P) 
> frames looks like this:
> [A/a__][B/b__][C/c__][D/d__]   ...original p24 video
> [A/a_][A/a_][A/b_][B/b_][B/b_][C/c_][C/c_][C/d_][D/d_][D/d_]   ...5-5-5-5 telecine
> P P C P P P P C P P   20% combing @ 12Hz
> A-AB-BC-CD-D  judder free
> <1/24s><1/24s><1/24s><1/24s>
This is what makes me unsure whether you actually have non-progressive-scan 
viewing equipment… Those “combing” frames are just the regular way an NTSC TV 
scans from one picture to the next, aren’t they? When you say the results of 
your experiment look good, do you mean they resemble what the 240+Hz TVs play 
in store demo mode? If so, I think that’s largely personal preference. If you 
like buttery-smooth movement in everything, then you may have found a telecine 
pattern that works for you, but some people (purists, as I am tempted to call 
them) insist on the cadence, if not the judder, that is indicative of shutter 
angle and similar analog traits of the source.
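Out of curiosity, the field arithmetic of the quoted 5-5-5-5 pattern can be sketched in a few lines. This is only a toy model (each film frame held for five 60 Hz field periods), not anything ffmpeg does, but it reproduces the two-combed-frames-in-ten figure:

```python
# Toy model of 5-5-5-5 pull-down: each 24p film frame is held for five
# consecutive 60 Hz field periods; even field indices are top fields,
# odd ones are bottom fields. Pairing fields back into 60p frames shows
# where the combed (mixed-source) frames land.
def pulldown_5555(n_film_frames):
    n_fields = 5 * n_film_frames
    tops = [f // 5 for f in range(0, n_fields, 2)]     # source frame per top field
    bottoms = [f // 5 for f in range(1, n_fields, 2)]  # source frame per bottom field
    frames = list(zip(tops, bottoms))
    combed = [i for i, (t, b) in enumerate(frames) if t != b]
    return frames, combed

frames, combed = pulldown_5555(4)  # film frames A, B, C, D -> ten 60p frames
```

With four film frames this yields combed output frames at indices 2 and 7 (frames 3 and 8, counting from 1), i.e. 20% combing, matching the quoted pattern and the “frames 3, 8, 13, 18” series mentioned below.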

> Those are the facts. I don't think 5-5-5-5 telecine to 60fps has ever been 
> done before. Some people are opposed to it as though it is religious heresy. 
> That's not going to stop me.
> 
> Earlier today I posted a request for help with crafting a special 
> minterpolate filter that operates solely on the combed frames (frames 3, 8, 
> 13, 18, etc.). That post is titled
> "minterpolate only frames 3, 8, 13, 18, etc.".
> I hope that someone will respond to it. But even without decombing, I 
> consider the 60fps video to be superior to any 2-3-2-3 pull-down telecine 
> because I hate judder.
You probably mean a different filter; minterpolate literally interpolates 
between/around frames, like drawing the middle pages of a flip book when given 
the first and last scenes. Maybe I am missing something, but you can’t operate 
solely on certain frames. You can start at a certain frame, I guess, but I 
would think an interlaced frame is a bad candidate for this.

>> ... if asked if I knew how so take this with a grain of salt but I think the 
>> general “every encode reduces information” caveat applies, especially as 
>> you’re doing some image processing in between.
>> You might say that technically, it is a reversible process, but multiple 
>> iterations of de/compression with an efficiency-focused codec such as H.264 
>> will quickly show diminishing returns despite any novel techniques to 
>> enhance perceived quality that you use in between decompression and 
>> compression.
> 
> Once transcoded to 60fps, why would I ever reverse it? If I ever get a 120Hz 
> TV, I'll simply drag out the original disc and remux to a p24 MKV container 
> with zero loss. In the meantime I will have a 60fps video to watch that's as 
> good as the original and is free of telecine judder.

I meant the transcoding part. It’s already been mastered onto the DVD/BD with 
H.264. But if you’re going to keep the original, I guess there’s no harm.

Regards,
Ted Park


Re: [FFmpeg-user] Need help splitting stereo FLAC into two separate files

2020-04-05 Thread Ted Park
Hi,

> [AVFilterGraph @ 01d1ff964340] No such filter:
> 'channelsplit=channel_layout=2.0[FL][FR]'
> Error initializing complex filters.
> Invalid argument

> [AVFilterGraph @ 02963fbe4340] No such filter:
> 'channelsplit=channel_layout=2.[FL][FR]'
> Error initializing complex filters.
> Invalid argument

> [AVFilterGraph @ 01a429734300] No such filter:
> 'channelsplit=channel_layout=[FL][FR]'
> Error initializing complex filters.
> Invalid argument

That is weird. Maybe single quotes work differently in your command line, or 
are those apostrophes or something? Try moving the first quote to after the 
equals sign, e.g. channelsplit='channel…..[FL][FR]'

But even after you get channelsplit working, the right channel layout name is 
probably stereo, and then you might have to move the channels from left and 
right to center (maybe aformat=cl=mono will suffice). Take a look at the 
-map_channel option; it might be simpler for you.
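Conceptually, channelsplit is just de-interleaving. A rough plain-Python sketch (with made-up sample data) of what it does to a stereo stream:

```python
# De-interleave stereo PCM: [L0, R0, L1, R1, ...] becomes one
# front-left and one front-right stream, which is conceptually what
# channelsplit does to a stereo input inside ffmpeg.
def split_stereo(interleaved):
    fl = interleaved[0::2]  # even indices carry the left samples
    fr = interleaved[1::2]  # odd indices carry the right samples
    return fl, fr

fl, fr = split_stereo([1, -1, 2, -2, 3, -3])
```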

Regards,
Ted Park


Re: [FFmpeg-user] telecine pattern 5555 - Judder-free, 60 FPS telecine (?)

2020-04-05 Thread Ted Park

>> Even if that were true (which I have no idea to be honest) are you sure 
>> you’re not assuming a raw uncompressed source and no compression afterwards 
>> either?
> 
> Frames 3, 8, 13, 18, etc. are combed. The rest of the frames are progressive. 
> The progressive frames display the effects of decombing.
> 
> The source is a 24fps progressive H.264 AVC video. I'm transcoding it to, 
> among other test transcodes, 5-5-5-5 pull-down @ 60fps progressive, H.264 
> AVC. I'm not sure what you mean by "assuming a raw uncompressed source and no 
> compression afterwards", but I hope I've responded appropriately.

I mean, I know what combing artifacts look like, but I wouldn’t know how to 
deliberately produce them in an encode if asked, so take this with a grain of 
salt; still, I think the general “every encode reduces information” caveat 
applies, especially as you’re doing some image processing in between.

You might say that technically it is a reversible process, but multiple 
iterations of de/compression with an efficiency-focused codec such as H.264 
will quickly show diminishing returns, despite any novel techniques you use 
between decompression and compression to enhance perceived quality.
 
I liken it to one of those pitch-shifting or retiming plugins/effects in audio 
apps; you can use those to maintain pitch while you stretch the duration of the 
recording but if you make edits and “freeze” it as part of a submix, it’s not 
going to sound the same when you shrink it back down to its original time.

I don’t know if there would be other limitations if you kept the raw YUV video 
throughout, but then, as efficient as H.264 is, you’d be talking something like 
~100,000 kbps in that case :/

Regards,
Ted Park


Re: [FFmpeg-user] telecine pattern 5555 - Judder-free, 60 FPS telecine (?)

2020-04-04 Thread Ted Park
Hey,

>> Apart from the telecine process damaging the image...
> 
> Yes, telecine damages the output image.
> 
> I assume you agree that a telecine that produces
> 20% combing @ 12Hz & no cadence (i.e., 5-5-5-5 pull-down in the raw frames)
> is better than
> 40% combing @ 6Hz & 2-3-2-3 cadence (i.e., 4-6-4-6 pull-down by the TV).
> 
>> ...and the deinterlacer permanently ruining it?
> 
> Does a deinterlacer ruin the image? If the picture is progressive and if all 
> the deinterlacer does is deinterlace?
> 
> But it appears that 'bwdif=mode=send_frame' is doing more than deinterlace. 
> It appears to be decombing, even for the 8 of 10 frames that are not combed.

Even if that were true (which I have no idea to be honest) are you sure you’re 
not assuming a raw uncompressed source and no compression afterwards either?


> Carl Eugen,
> are you a developer or a user? I ask that question innocently because I 
> really don't know.
> 
> I looked at the libavfilter source code. I searched for telecine. Who's name 
> do I see in vf_telecine.c? "Paul B Mahol". I had no idea that Paul was a 
> developer -- why would I?

Maybe not for that filter/file, but as someone *trying* to contribute, I can 
attest to Carl being an active maintainer; his name is hard to miss in the 
commit history.

Regards,
Ted Park


Re: [FFmpeg-user] Question on Segment TImes

2020-04-04 Thread Ted Park
Hi,

> OK: That solved the timestamp problem and created a missing stream.
> 
> Does anyone one know the magic option for getting all three streams into MOV.
> 
> If I use this to create a DV file (from DVR-DV) it works and creates this 
> file:

> Input #0, dv, from ‘output.dv’:
>   Metadata:
> timecode : 00:00:00:00
>   Duration: 00:00:10.01, start: 0.00, bitrate: 28771 kb/s
> Stream #0:0: Video: dvvideo, yuv411p, 720x480 [SAR 8:9 DAR 4:3], 25000 
> kb/s, 29.97 fps, 29.97 tbr, 29.97 tbn, 29.97 tbc
> Stream #0:1: Audio: pcm_s16le, 32000 Hz, stereo, s16, 1024 kb/s
> Stream #0:2: Audio: pcm_s16le, 32000 Hz, stereo, s16, 1024 kb/s
> 
> Three streams
> 
> If I just change the container to “.mov”

> I get one stream:
> 
> nput #0, mov,mp4,m4a,3gp,3g2,mj2, from ‘output.mov’:
>   Metadata:
> major_brand : qt
> minor_version : 512
> compatible_brands: qt
> encoder : Lavf58.35.101
>   Duration: 00:00:10.01, start: 0.00, bitrate: 28773 kb/s
> Stream #0:0: Video: dvvideo (dvc / 0x20637664), yuv411p, 720x480 [SAR 8:9 
> DAR 4:3], 28771 kb/s, 29.97 fps, 29.97 tbr, 1000k tbn, 29.97 tbc (default)
> Metadata:
>   handler_name : VideoHandler
I think this has to do with AVFoundation’s “raw data” option and how audio and 
video are stored on the actual metallic sublimate tape. IIRC, the 
video/audio/timecode “streams” do not exist as discrete entities; they are all 
interleaved into an atomic “DV” tape format (hence the 1M reported tbn). I’m 
not sure how AVFoundation splits them up, isolates the detected timecode, or 
decides to join all the streams together, but I presume it’s similar to the 
audio fixup options in the FCPX import media window.

I’m curious whether this would work, but have you tried artificially increasing 
the video size (via framerate, frame size, etc.) to make it into what would be 
considered HDV, or even DVCPRO, and then tried adding the audio streams? 
Apparently it supports multiple streams (DVCPRO50, that is).


> I have tried every variation of -map.
> If I send the output to pipe and then to a file, all of the streams are their 
> for DV
> 
> If I do the import with Final Cut Pro X then I get this from the file. Also, 
> FCPX seems to be able to find the original timestamp from when the media  was 
> created.

I don't know if -map will work as usual with DV from a tape deck. I also don't 
know how AVFoundation actually archives the data; I would really get a build 
with native IEEE 1394 device control/communication library support, since if 
you use AVFoundation I doubt the problems in FCPX are going to go away, or at 
least not all of them.



> On Apr 4, 2020, 2:11 AM -0700, Gyan Doshi , wrote:
>> 
>> 
>> On 04-04-2020 11:59 am, Colin Bitterfield wrote:
>>> I am trying to segment split a stream coming in from AVFOUNDATION
>>> 
>>> ffmpeg -benchmark_all -stats -loglevel debug -copyts \
>>>  -f avfoundation -capture_raw_data true -pix_fmt 0rgb -i DV-VCR -q 0 \
>>>  -map 0 -c:v copy -c:a copy -segment_time 00:00:10 \
>>>  -f segment  374_%03d.dv -y
>>> 
>>> _ I have tried various combinations of “-increment_tc” and 
>>> "-reset_timestamps 0"
>>> 
>>> The video splits flawless at the time requ
>> 
>> .dv is a bare bones container and does not support timestamps. Try MOV.

My understanding is that DV timestamps and timecodes are pretty much the same 
thing, since the tape uses either LTC or VITC, which encodes the date 
(timestamp) as well as the timecode, on a “striped” tape anyway. It’s how your 
deck knows when one shot ends and another begins. So the container doesn't 
support timestamps as metadata, but I don't think they are lost; they are just 
embedded in the DV stream in a way that’s a little harder to access.
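As a rough illustration of why the embedded timecode is still usable even without container timestamps: converting a SMPTE-style HH:MM:SS:FF timecode into an absolute frame count is simple arithmetic. This sketch covers the non-drop-frame case only; DV/NTSC decks normally use drop-frame timecode, which skips frame numbers 00 and 01 at every minute boundary except each tenth minute.

```python
# Convert a non-drop-frame SMPTE timecode string to an absolute frame
# count at an integer frame rate. Drop-frame (the flavor used for
# 29.97 fps NTSC/DV) needs an extra correction and is not handled here.
def tc_to_frames(tc, fps=30):
    h, m, s, f = (int(x) for x in tc.split(":"))
    return ((h * 60 + m) * 60 + s) * fps + f
```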


Regards,
Ted Park


Re: [FFmpeg-user] Timestamps are unset & Error writing trailer

2020-04-03 Thread Ted Park
Hi,

I don’t think it builds by default, but there is a tool, “dvd2concat”, in the 
tools folder; it doesn’t really work in general, but since your setup can 
transparently bypass CSS on DVDs, it might work if you want to give it a try. 
IIRC libdvdread and libdvdcss are dependencies. Basically, you point it at the 
DVD mount point and it generates an ffconcat file containing a detailed 
description of the layout of the DVD filesystem: the actual elementary stream 
IDs, which file they are in, the byte range of the file they occupy, and how 
they are “striped” across multiple VOBs, since the way they track is not 
always intuitive.

You can feed that file to the concat demuxer, specifying -safe 0 and adding 
subfile to protocol_whitelist, to sort of seek in a DVD, if it all works as 
it’s meant to.

I’m only suggesting this based on the assumption that your system can 
seemingly pretend CSS/AACS isn’t implemented, and even then, I wouldn’t bet on 
it working 100% perfectly.

Regards,
Ted Park


Re: [FFmpeg-user] Drop frames during framemd5 calculation of DPX files

2020-04-02 Thread Ted Park
Hi,

> The image2 demuxer (that is used for dpx files) has never
> heard of images with a framerate and therefore doesn't
> try to read it.
> Afaict, the dpx decoder uses the framerates.
I see… That would explain the weird 24 tbr 25 tbn 24 tbc, if I understand 
correctly?


>> Isn’t the framerate the film source original if it’s there?
> 
> I don't know for sure but if you are talking about scans,
> this simply doesn't matter.

I was actually asking because I was wondering if there were any systems out 
there that save the scanned fps instead.

Regards,
Ted Park


Re: [FFmpeg-user] Convert audio stream to AC4

2020-04-02 Thread Ted Park
Hi,

>> I would like to convert an audio stream of a movie into AC4 for a Samsung TV.
> 
> Do you have a sample file that plays with your TV?
> We don't have many so far.


Do you mean you don’t have many that you are free to distribute because of 
copyright/patents or in general?
Or is there some additional requirement for the stream to play on a Samsung TV?

Regards,
Ted Park


Re: [FFmpeg-user] Failed to open codec in avformat_find_stream_info

2020-04-02 Thread Ted Park
ration: 00:00:00.03, start: 600.00, bitrate: 13257 kb/s
  Program 1 
Stream #3:0[0x1011]: Video: h264 (High) (HDMV / 0x564D4448), yuv420p(tv, 
bt709, top first), 1920x1080 [SAR 1:1 DAR 16:9], 29.97 tbr, 90k tbn, 59.94 tbc
[mpegts @ 0x7fdd3482e000] probed stream 0 failed
[mpegts @ 0x7fdd3482e000] Could not find codec parameters for stream 0 
(Unknown: none ([145][0][0][0] / 0x0091)): unknown codec
Consider increasing the value for the 'analyzeduration' and 'probesize' options
Input #4, mpegts, from 'TITLE/BDMV/STREAM/4.m2ts':
  Duration: N/A, start: 600.00, bitrate: N/A
  Program 1 
Stream #4:0[0x1400]: Unknown: none ([145][0][0][0] / 0x0091)
Input #5, mpegts, from 'TITLE/BDMV/STREAM/5.m2ts':
  Duration: 00:00:00.03, start: 600.00, bitrate: 13257 kb/s
  Program 1 
Stream #5:0[0x1011]: Video: h264 (High) (HDMV / 0x564D4448), yuv420p(tv, 
bt709, top first), 1920x1080 [SAR 1:1 DAR 16:9], 29.97 tbr, 90k tbn, 59.94 tbc
[mpegts @ 0x7fdd3484a200] probed stream 0 failed
[mpegts @ 0x7fdd3484a200] Could not find codec parameters for stream 0 
(Unknown: none ([145][0][0][0] / 0x0091)): unknown codec
Consider increasing the value for the 'analyzeduration' and 'probesize' options
Input #6, mpegts, from 'TITLE/BDMV/STREAM/6.m2ts':
  Duration: N/A, start: 600.00, bitrate: N/A
  Program 1 
Stream #6:0[0x1400]: Unknown: none ([145][0][0][0] / 0x0091)
Input #7, mpegts, from 'TITLE/BDMV/STREAM/7.m2ts':
  Duration: 00:00:00.03, start: 600.00, bitrate: 11784 kb/s
  Program 1 
Stream #7:0[0x1011]: Video: h264 (High) (HDMV / 0x564D4448), yuv420p(tv, 
bt709, top first), 1920x1080 [SAR 1:1 DAR 16:9], 29.97 tbr, 90k tbn, 59.94 tbc
[mpegts @ 0x7fdd34864600] probed stream 0 failed
[mpegts @ 0x7fdd34864600] Could not find codec parameters for stream 0 
(Unknown: none ([145][0][0][0] / 0x0091)): unknown codec
Consider increasing the value for the 'analyzeduration' and 'probesize' options
Input #8, mpegts, from 'TITLE/BDMV/STREAM/8.m2ts':
  Duration: N/A, start: 600.00, bitrate: N/A
  Program 1 
Stream #8:0[0x1400]: Unknown: none ([145][0][0][0] / 0x0091)
At least one output file must be specified


> 
> On Apr 1, 2020, at 00:51, Carl Zwanzig  wrote:
> 
> On 3/31/2020 7:52 PM, Ted Park wrote:
>> Mail acting weird as always :| If anyone could suggest a good (less randomly 
>> behaving) alternative to the default mail app on Mac I’d be much obliged.
> 
> Thunderbird? Its served me well for years (I'm on a pc, but friends use it on 
> mac and linux).
> 
> z!


Thanks for that tip! I did not know Thunderbird was still alive and kicking; 
their friends at Firefox should help them get some exposure. I’ve only just 
started setting it up, but it checks all the boxes for me :D

Regards,
Ted Park


Re: [FFmpeg-user] Failed to open codec in avformat_find_stream_info

2020-04-01 Thread Ted Park
Hey there,

> Of course you are considering that possibility. No issue there. One word: 
> AnyDVD-HD.


That is one interesting tool… So it handles decryption on the fly, and you can 
access the disc as transparent UDF, if I understand correctly. It looks like 
the maintainer is pretty diligent about keeping a database of encryption keys, 
as well as keeping the tool’s host keys valid whenever they are “poisoned” by 
a user. It seems unreal; I’m sure it’s worth its price tag :o

Did you make an exact copy of the disc, like to a disc image file? Or are you 
reading straight from the disc, with the tool in the middle? The folder/file 
hierarchy is one of the few things that is well known and relatively 
consistent in format; that’s how libbluray finds the playlist. It then 
executes the playlist to determine which media files, and which byte ranges 
within them, to read.

Either way, even if the copy protection has been defeated, if you have the 
intact disc filesystem you will need to specify the root BDMV directory/mount 
point and, if it isn’t auto-detected, the playlist, using the bluray: scheme 
and its parameters, so ffmpeg can use libbluray to seek in the BD-specific 
transport streams.

You know how DVD menus, content, and subpictures/subtitles are basically one 
giant continuously running program, implemented in special DVD “machine code” 
and run on a tiny virtual machine? Well, Blu-rays are different, but they are 
definitely no less complex. The “playlist” isn’t a plaintext listing; it at 
least partially consists of machine code that drives the Blu-ray “VM”, 
complete with state machines and registers just like a DVD has.

Essentially, the .MTS or .m2ts files in a BDMV system aren’t simple MPEG 
transport streams, so trying to decode them as such won’t usually work.

Regards,
Ted Park


Re: [FFmpeg-user] Failed to open codec in avformat_find_stream_info

2020-04-01 Thread Ted Park
Hi,

> Oh, I see. So you think the inability to get the subtitle streams is related 
> to the component that accesses the BD? Right? If so, I'm a bit mystified 
> because I don't see how ffmpeg would know which MPLS (playlist file) relates 
> to the M2TSs that I concatenated or even where the MPLS was located on the 
> disc.
> 
> Does that mean I should also point ffmpeg to the particular MPLS? I think I 
> know how to suss out the particular MPLS, but how would I point ffmpeg to it? 
> Or am I completely missing the point -- it would be the first time in the 
> last minute or so. :-)

I was assuming it was a commercial pressed Blu-ray. If so, it is most likely 
copy-protected, so that, for example, you can’t just drag the m2ts files from 
the disc and drop them on your desktop to get a perfect backup or working 
copy. If you point at a file on a BDMV disc as if it were on your unencrypted 
drive, the data is *supposed* to be scrambled, or further encrypted, depending 
on the studio. Or it can refuse/fail to read, which is also what happens if it 
hits a region restriction (not as much of a problem as on DVDs, but they 
exist). Software exists, such as libbluray, that is aware of this storage 
scheme and handles access to the disc accordingly.

And of course, like practically any copy protection scheme in history, the DRM 
has been reverse-engineered, its private keys leaked, and its algorithms 
cracked, and you can configure your drive with libraries and keys to 
circumvent it.

The general idea is that unless this is a blank Blu-ray that you burned a BDMV 
filesystem onto, it’s going to stop you from reading it like a normal thumb 
drive. You said that you concatenated the m2ts files, though, so I’m a little 
confused; from the command line, I thought you were just pointing at a path on 
a BDMV disc. If you pull the files off a commercial Blu-ray disc as if it were 
a regular block device and simply concatenate them by filename sequence, they 
were probably copied in their original mangled state, with no info about, 
e.g., where the main program is or whether multiple angles are available, 
which would be in the playlist file.


If this is something you authored yourself, then the DRM aspect most likely 
doesn’t apply, but the part about the data structure on the BDMV disc does, if 
it was authored to play in regular Blu-ray players. Those m2ts files are a bit 
different from typical MPEG program streams.

Regards,
Ted Park

___
ffmpeg-user mailing list
ffmpeg-user@ffmpeg.org
https://ffmpeg.org/mailman/listinfo/ffmpeg-user

To unsubscribe, visit link above, or email
ffmpeg-user-requ...@ffmpeg.org with subject "unsubscribe".

Re: [FFmpeg-user] Failed to open codec in avformat_find_stream_info

2020-03-31 Thread Ted Park
Hi,

>> Mail acting weird as always :| If anyone could suggest a good (less randomly 
>> behaving) alternative to the default mail app on Mac I’d be much obliged.
>> Anyway I was typing that I’m pretty sure 50M is the default for probesize so 
>> starting way down at 5M probably didn’t help.
>> Regards,
>> Ted Park
> 
> Per https://ffmpeg.org/ffmpeg-all.html#Format-Options,
> 
> "probesize integer (input)
> Set probing size in bytes, i.e. the size of the data to analyze to get stream 
> information. A higher value will enable detecting more information in case it 
> is dispersed into the stream, but will increase latency. Must be an integer 
> not lesser than 32. It is 500 by default.”
Oh wow, my mistake; I must have misread it by a zero. I've thought it was 50MB 
forever...


>> bluray
>>Read BluRay playlist.
>>The accepted options are:
>>angle
>>BluRay angle >
>>chapter
>>Start chapter (1...N)
>>playlist
>>Playlist to read (BDMV/PLAYLIST/?.mpls)
>>Examples:
>>Read longest playlist from BluRay mounted to /mnt/bluray:
>>bluray:/mnt/bluray
>>Read angle 2 of playlist 4 from BluRay mounted to /mnt/bluray, start 
>> from chapter 2:
>>-playlist 4 -angle 2 -chapter 2 bluray:/mnt/bluray
> 
> I must confess that I don't know what that's about.

It's like any other protocol: just as you could use concat:INPUT1|INPUT2|… to 
literally concatenate the inputs (at the byte level, I think), or 
crypto:encryptedfile.ts with key parameters to decrypt, for BD-ROMs you would 
use bluray:/mnt/bd or bluray:/dev/loop0 or whatever your Blu-ray device is, and 
provide the playlist in the BDMV filesystem along with the other parameters to 
specify what to read from the disc. For the mount point on Windows, maybe it 
just takes a drive letter? 
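As a concrete sketch of those options (the mount point and playlist number are hypothetical):

```shell
# Read playlist 4, angle 2, starting from chapter 2, of a BD mounted
# at /mnt/bluray, and remux it without re-encoding
ffmpeg -playlist 4 -angle 2 -chapter 2 -i bluray:/mnt/bluray -c copy out.mkv
```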



Regards,
Ted Park


Re: [FFmpeg-user] Failed to open codec in avformat_find_stream_info

2020-03-31 Thread Ted Park
Mail acting weird as always :| If anyone could suggest a good (less randomly 
behaving) alternative to the default mail app on Mac I’d be much obliged.

Anyway I was typing that I’m pretty sure 50M is the default for probesize so 
starting way down at 5M probably didn’t help.
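For what it's worth, both probing options are input options and go before -i; a sketch with hypothetical values and filenames:

```shell
# Let the demuxer read more data before deciding on stream parameters
# (100M means 100 MB for -probesize and ~100 s for -analyzeduration,
# which is specified in microseconds)
ffmpeg -probesize 100M -analyzeduration 100M -i input.m2ts -c copy probed.mkv
```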

Regards,
Ted Park


Re: [FFmpeg-user] Failed to open codec in avformat_find_stream_info

2020-03-31 Thread Ted Park
Hi,

> C:\CMD & tiny apps\ffmpeg>ffmpeg -i B:\BDMV\STREAM\0.m2ts -vf 
> "telecine=pattern=,bwdif=mode=send_frame" -avoid_negative_ts 1 
> -analyzeduration 8000 -probesize 8000 -c:v libx265 -crf 20 -preset 
> medium -c:a copy -c:s copy C:\AVOut\8.MKV

Not sure how this translates to Windows' model of devices, or how AACS is 
handled, but here is the man page section on libbluray, which is implemented as 
a protocol.

bluray
   Read BluRay playlist.

   The accepted options are:

   angle
   BluRay angle

   chapter
   Start chapter (1...N)

   playlist
   Playlist to read (BDMV/PLAYLIST/?.mpls)

   Examples:

   Read longest playlist from BluRay mounted to /mnt/bluray:

   bluray:/mnt/bluray

   Read angle 2 of playlist 4 from BluRay mounted to /mnt/bluray, start 
from chapter 2:

   -playlist 4 -angle 2 -chapter 2 bluray:/mnt/bluray



Regards,
Ted Park


Re: [FFmpeg-user] Drop frames during framemd5 calculation of DPX files

2020-03-31 Thread Ted Park
Hi,

>> can you explain what causes misdetected
>> frame rates?
> 
> The dpx files store a frame rate that is ignored by the demuxer (and 
> therefore unused for the default output frame rate) but used to calculate 
> timestamps on decoding, leading to frame drops.

I did not know that. Why would it ignore the frame rate? Isn't the frame rate 
stored in the file the original film-source rate, if it's there?

Regards,
Ted Park


Re: [FFmpeg-user] How to compress .MOV file compatible to Canon camera

2020-03-31 Thread Ted Park

> Here is the atom structure of the original file from the camera:
> 
> $ MP4Box -v MVI_1324.MOV
> [iso file] Starting to parse a top-level box at position 0
> [iso file] Read Box type ftyp size 24 start 0
> [iso file] Starting to parse a top-level box at position 24
> [iso file] Read Box type moov size 98280 start 24
...
> [iso file] Read Box type 4CF2D149 size 3078257872 start 9452
> [iso file] Delete box type UNKN
> [iso file] Delete box type UNKN
> [iso file] Read Box type mvhd size 108 start 65628
???...
> [iso file] Unknown box type free in parent moov
> [iso file] Starting to parse a top-level box at position 98304
> [iso file] Read Box type mdat size 84830220 start 98304
> [iso file] Delete box type ftyp
> [iso file] Delete box type moov
> [iso file] Delete box type udta
> [iso file] Delete box type UNKN
> [iso file] Delete box type UNKN
> [iso file] Delete box type UNKN
> [iso file] Delete box type mvhd
> [iso file] Delete box type trak
> [iso file] Delete box type tkhd
> [iso file] Delete box type mdia
> [iso file] Delete box type mdhd
> [iso file] Delete box type hdlr
> [iso file] Delete box type minf
> [iso file] Delete box type vmhd
> [iso file] Delete box type hdlr
> [iso file] Delete box type dinf
> [iso file] Delete box type dref
> [iso file] Delete box type alis
> [iso file] Delete box type stbl
> [iso file] Delete box type stsd
> [iso file] Delete box type avc1
> [iso file] Delete box type colr
> [iso file] Delete box type gama
> [iso file] Delete box type avcC
> [iso file] Delete box type stts
> [iso file] Delete box type stss
> [iso file] Delete box type stsc
> [iso file] Delete box type stsz
> [iso file] Delete box type stco
> [iso file] Delete box type trak
> [iso file] Delete box type tkhd
> [iso file] Delete box type mdia
> [iso file] Delete box type mdhd
> [iso file] Delete box type hdlr
> [iso file] Delete box type minf
> [iso file] Delete box type smhd
> [iso file] Delete box type hdlr
> [iso file] Delete box type dinf
> [iso file] Delete box type dref
> [iso file] Delete box type alis
> [iso file] Delete box type stbl
> [iso file] Delete box type stsd
> [iso file] Delete box type raw
> [iso file] Delete box type chan
> [iso file] Delete box type stts
> [iso file] Delete box type stsc
> [iso file] Delete box type stsz
> [iso file] Delete box type stco
> [iso file] Delete box type UNKN
> [iso file] Delete box type mdat


This doesn't look too promising in the first place… I'm not sure what MP4Box 
does when you just give it a file and nothing else in its arguments, but there 
is a command to parse the atom structure of QT files, and I remember it being 
returned in XML format.
It's reading box types with 8-character names, nearly 3 GB in size, that are 
not the media data, so I don't think the output is that trustworthy. Maybe it 
has a hard time reading vendor-defined custom types or something.

As for how to manage the offsets, that is one of the uses of the pretty big 
free box located before the mdat; I'm also guessing you can delete the 
null-byte regions on update, if necessary, to keep the offsets the same.

But I tried changing the file names/numbers, and for the same file it would 
play under one number but not another, so I really think there's some external 
factor involved.

Regards,
Ted Park


Re: [FFmpeg-user] Drop frames during framemd5 calculation of DPX files

2020-03-30 Thread Ted Park
Hi,

> I tried: ffmpeg -start_number 86400 -i finalDPX_forDCP\Respire%08d.dpx
> *-vsync* 0 -f framemd5
> allframes_md5.txt
> 
> I still get a similar result without the stated drop frames. framemd5
> checksum spits out 867 frames less than total frames.

Shouldn't that be -vsync drop, since the issue is the misdetected frame rate?
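A sketch of that suggestion, reusing the quoted command (the DPX path is adapted from the original post; -vsync drop discards the demuxer timestamps instead of dropping or duplicating frames to match them):

```shell
# Hash all frames without timestamp-driven frame dropping
ffmpeg -start_number 86400 -i finalDPX_forDCP/Respire%08d.dpx \
  -vsync drop -f framemd5 allframes_md5.txt
```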

Regards,
Ted Park


Re: [FFmpeg-user] BT.709 -color_primaries -color_trc -colorspace ?

2020-03-30 Thread Ted Park
Hi,

> I did search for answers, but this subject is apparently too esoteric.
> 
> 1, Do I need to explicitly specify BT.709 for an encoder or does ffmpeg 
> default to it?

It would depend on your target format; I don't think ffmpeg itself makes any 
assumptions as to defaults. The encoder might, or might not.

I am pretty sure most encoders just indicate the colorspace of the content in 
the video parameters; they don't do any conversions, so it's inferred by the 
consumer from the context and/or content. 


> 2, Should I specify '-color_primaries' or '-color_trc' or '-colorspace'?
> 
> Codec documentation (https://ffmpeg.org/ffmpeg-codecs.html) references to 
> 'bt709' [1]
> -color_primaries 0
> -color_trc 0
> -colorspace 1
> [1] The integers I use assume that the documentation lists are ascending from 
> zero.

You shouldn't make that assumption… In this case they apparently mirror ISO/IEC 
23001-8, and it looks like bt709 is 1 for all three.
If you are going to specify BT.709, I think specifying -colorspace would 
suffice; otherwise, if the encoder took some random combination and went with 
it, you'd end up with a pretty unusual color model.
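A sketch of tagging all three explicitly (filenames hypothetical; as discussed, these flags only label the stream, they don't convert anything):

```shell
# Signal BT.709 primaries, transfer characteristics, and matrix
# in the output stream, without any color conversion
ffmpeg -i input.mov -c:v libx264 \
  -color_primaries bt709 -color_trc bt709 -colorspace bt709 \
  -c:a copy tagged.mp4
```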


> Aside from the integer assigned, what's the difference between the 
> '-color_primaries' & '-color_trc' & '-colorspace' directives?

-color_primaries indicates the color primaries of the source. They used to be 
called phosphors; basically, what counts as the primary colors in the 
colorspace.

The trc in -color_trc stands for transfer characteristics. It chooses the 
characteristic response, roughly how much of an increase in brightness a given 
increase in signal voltage corresponds to. These have been represented by gamma 
functions since the CRT days, though now there are ones that are very different 
and not just a simple gamma function (e.g. SMPTE ST 2084, HLG).

Colorspace, I would say, specifies the other two, or at least limits them to a 
few alternatives based on NTSC or PAL. I think -colorspace bt709 precludes the 
need to set primaries and trc, but I could be wrong.


> As you can see below, the transcode 'Output' status doesn't say what it 
> defaults to.

Yeah, from that I’m thinking neither libx265 nor ffmpeg has any defaults. Left 
to be inferred by the decoder.

Regards,
Ted Park


Re: [FFmpeg-user] Copy all subtitle streams? Possible?

2020-03-29 Thread Ted Park
Hey,

> '-c:a copy' or '-acodec copy' will copy (all?) audio tracks. I have 2 
> questions:
> Do I need to also specify a '-map' directive? and
> Is there an equivalent directive for copying subtitle streams?

IIRC, -codec only specifies the codec; stream selection is done by -map. The 
exception is when you explicitly specify an encoder for a stream type and there 
is at least one stream that matches that specifier: then ffmpeg will select one 
stream of that type as well, according to the normal stream selection rules, in 
addition to the ones already selected.

So at most it might stream-copy one audio track to the output that wasn't 
selected before, which for many input files would have been all of them anyway. 
But I don't think there is a muxer that selects only a video stream by default 
and also has a default audio encoder. I think it's more of a convenience 
feature for subtitles: if you specify a subtitle format, you obviously want at 
least one subtitle track included in the output, and you probably picked a 
compatible format at that.
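So, to copy every stream explicitly instead of relying on default selection (filenames hypothetical):

```shell
# -map 0 selects all streams of input 0; each type is then stream-copied,
# so every audio and subtitle track survives
ffmpeg -i in.mkv -map 0 -c:v copy -c:a copy -c:s copy out.mkv
```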

Regards,
Ted Park


[FFmpeg-user] Is there a common build configuration with 10+bpc rendering capability in ffplay?

2020-03-28 Thread Ted Park
Good morning,

So macOS 10.15.4 is finally out of beta, and it now has the ability to output 
HDR video signals without having to use a serial video breakout box or 
anything. The 8 bpc firmware limit has been lifted: even if you didn't buy 
Apple's XDR display, you just plug a compatible HDR display into an HDMI or 
USB/DP port on a post-2018 machine and you get an HDR10 signal, just as it has 
worked on Windows for a while now.

So that's great: it eliminates the asinine setup of workflows that had their 
entire image-processing pipeline in DCI or BT.2020, only to have the monitoring 
output tone-mapped to SDR conformance and then put up on HDR displays.

But that is a handful of apps: FCPX, Resolve, etc. I'm finding that taking 
advantage of this with ffplay is going to be more involved; with SDL2 + OpenGL 
it looks limited to sRGB, and with X11, the latest port of the X server and 
XF86V only support up to 24-bit depth. I'm not at HDR functions yet; I'm simply 
trying to get a 2+30-bit packed ARGB format. This has actually been the default 
at the framebuffer level in many cases for a while now, but the LSBs were not 
cared about except for a few short routes in the iMac Pro and its wide-gamut 
display, as well as a couple of the newer MBPs.

Would it be possible to get just the wider color range with SDL2? What is its 
behavior in windows, is it limited to getting yuv420p on all platforms? Or 
would a significantly reworked/different library be needed? (again I’m only 
thinking of 10bpc, sdr at the moment)

Regards,
Ted Park


Re: [FFmpeg-user] drawtext, suggestion for improvement

2020-03-28 Thread Ted Park
Hi,

> On Mar 27, 2020, at 17:58, Reino Wijnsma  wrote:
> 
> Hello Michael,
> 
> On 2020-03-26T20:28:02+0100, Michael Koch  wrote:
>> How to reproduce:
>> Use Windows Notepad to create a UTF-8 text file which contains only the word 
>> "test".
>> This is how the file looks in a hex editor:
>> EF BB BF 74 65 73 74
>> 
>> Now run this command line and have a look at the output image.
>> 
>> C:\Users\mKoch\Desktop>c:\ffmpeg\ffmpeg -f lavfi -i color=c=white -vf 
>> drawtext=textfile=test.txt -frames 1 -y out.png
>> ffmpeg version git-2020-03-23-ba698a2 Copyright (c) 2000-2020 the FFmpeg 
>> developers
>>  built with gcc 9.2.1 (GCC) 20200122
>>  [...]
> Your FFmpeg binary looks very recent, but what about the FontConfig library 
> you compiled this FFmpeg binary with?
> 
> I was only able to reproduce this with one of the very first FFmpeg binaries 
> I compiled myself (N-86393, dated 2017-06-06 and compiled with FontConfig 
> 2.12.1).
> The next FFmpeg binary I compiled (N-86763, dated 2017-07-12 and compiled 
> with FontConfig 2.12.4), and all that come after it, don't show this behavior.
> 
> I think updating FontConfig (and recompiling FFmpeg) would most likely solve 
> your problem.
> 
> -- Reino
> 

That is bizarre. Is there no way to change that behavior (in Notepad)? A byte 
order mark in UTF-8 is a bit confounding; I wonder if there is some history 
behind it.

Manually entering the code point in a UTF-8 document goes so far as to crash 
the character viewer pane on my Mac; I'll have to test on more systems. In the 
drawtext filter it seems to depend on the actual font and its glyph table: some 
fonts render it as literally a "zero-width non-breaking space", which is 
indistinguishable from it not being there, and some do the missing-glyph box 
thing.

Regards,
Ted Park


Re: [FFmpeg-user] Copying a EIA-608 subtitle stream in an m4v

2020-03-27 Thread Ted Park
Hi,

> Sure. I don't necessarily require *literal* removal of some data from a file. 
> But I'm looking for a process that will *logically* amount to removal of some 
> data. This process could look like:
> 
> mv filename.m4v filename-bak.m4v
> some-command -from filename-bak.m4v -to filename.m4v
> 
> The process is successful if the result of the process looks like what you 
> would expect from literal removal.
Yes, I didn’t think so either, and I don’t think you’ve been misconstrued in 
that way. In the first place you very rarely gouge out a chunk of bytes from a 
file on-disk, it is going to be copied to and from memory, kind of like the 
process you illustrated.

The main point is that ffmpeg will in almost all cases disassemble the 
multiplexed file into its constituent elementary streams and pieces of 
metadata, then reassemble them to produce its output. So apart from the default 
stream selection and metadata mapping behavior, which usually picks one video 
stream and one audio stream, you need to specify everything you want kept.

> -map_metadata 0 doesn't help—the metadata is still stripped (some well-known 
> tags are preserved, but only a few). It appears ffmpeg is not willing to put 
> unrecognized tags in the output when copying from m4v to mov.

Several people have reported the creation_time not being copied, but, as I can 
only guess, I think the metadata you are referring to is extensive comment- or 
synopsis-type strings? If it reads like the jacket cover of a book, or some 
library catalog record, it was probably stored in its own "box" at the same 
level as the other streams in some sense, instead of as short strings in the 
headers for each track, which is the default. I'd suggest trying -movflags 
+use_metadata_tags. The uncut console output would really be helpful in the 
absence of a sample (given its copyright status). Why leave out which tags were 
omitted?? ._.
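A sketch of that suggestion (filenames hypothetical; use_metadata_tags asks the mov muxer to carry arbitrary metadata tags instead of only the well-known ones):

```shell
# Remux while keeping global metadata, including unrecognized tags
ffmpeg -i in.m4v -map 0 -map_metadata 0 \
  -movflags use_metadata_tags -c copy out.mov
```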

> Stream #0:4(eng): Data: bin_data (tx3g / 0x67337874), 0 kb/s
>Metadata:
>  creation_time   : 2016-07-15T19:12:05.00Z
>  handler_name: Core Media Text
>Stream #0:5(und): Video: mjpeg (Baseline) (jpeg / 0x6765706A), 
> yuvj444p(pc, bt470bg/unknown/unknown, progressive), 640x360 [SAR 72:72 DAR 
> 16:9], 8 kb/s, SAR 65536:62805 DAR 1048576:565245, 0.0042 fps, 1 tbr, 1k tbn, 
> 1k tbc (attached pic) (timed thumbnails)
>Metadata:
>  rotate  : 0
>  creation_time   : 2016-07-15T19:12:05.00Z
>  handler_name: Core Media Video
>Side data:
>  displaymatrix: rotation of -0.00 degrees

Again, only a guess, but I think this might be a chapter/scene-titles track, 
judging from the timed thumbnails disposition on the image track? I don't think 
I've seen it done like that; it reminds me of MKV chapters. I'm curious about 
it now; I don't remember iTunes files being this way around 2016. Does the 
movie have working chapter markers? What application saved a file in this 
format?

Regards,
Ted Park


Re: [FFmpeg-user] Copying a EIA-608 subtitle stream in an m4v

2020-03-27 Thread Ted Park
Hi,

> Okay, using 'mov' is progress—it outputs a usable file without error.
> 
> (-strict seems to have no effect.)
> 
> To restate my goal clearly: I want to process some m4v files, removing an 
> unwanted "cover art" image appearing as mjpeg streams.
> 
> Some problems with the "output to mov" approach:
> 
> 1) I'd really like a video in the original file format when I'm done 
> (m4v/mp4), not a mov file. I think mov files are less widely compatible, and 
> in any case I'm going for in-place modification.

You are correct, more or less. QuickTime File Format, or ISO base media file 
format, is what the long name for "mov" files would be, and it serves as a 
template of sorts that many other formats (especially the ISO/IEC family) are 
based on, including MP4.

m4v is just an alternate extension for mp4 files that Apple used to use on the 
content they distributed indicating certain profile requirements or DRM 
presence.

Now, the original mov files were a big superset including mp4 files, and also 
pretty much any A/V content, even stuff like interactive games (when QuickTime 
had that brushed metal look and had a 4 figure price tag for server+client 
license). But this is far from what a typical QuickTime file contains nowadays, 
to the point where if the media uses a supported codec, and you did not use any 
quicktime features to edit the file, more often than not you can replace the 
extension with mp4 or m4v, and it will be recognized by the application that 
consumes it.

> 2) It strips a number of metadata tags (I counted about 13). I want to remove 
> the mjpeg stream without perturbing the rest of the file.

This can be done: add -map_metadata 0 for the global file-level metadata and 
-map_metadata:s: 0:s: for stream-level metadata. As I'll explain, you can't 
remove the mjpeg stream in place, but you can copy everything except it. Add 
-map 0 to copy everything, then add -map -0:# (where # is the stream index of 
the cover image) to deselect it.
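Putting those flags together (the index 5 matches the mjpeg attached picture in the probe output quoted earlier in the thread; adjust it to your file):

```shell
# Copy everything except the cover-art stream at index 5,
# carrying over the global metadata
ffmpeg -i in.m4v -map 0 -map -0:5 -map_metadata 0 -c copy out.mov
```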

> 3) Testing on another file (also Apple-encoded and supported), the 'mov' 
> output chokes on another stream:
> 
>   Stream #0:4(eng): Data: bin_data (tx3g / 0x67337874), 0 kb/s
>   Metadata:
> creation_time   : 2016-07-15T19:12:05.00Z
> handler_name: Core Media Text

This I am curious about; tx3g literally indicates MPEG-4 Part 17 text streams, 
with decoder name "mov_text". Do you have a sample you can provide?


> It sounds like, at this point, the answer is "ffmpeg is not the tool for 
> you", but I'm happy to hear other suggestions.

I mean, it does seem like there are easier alternatives if this is all you are 
trying to do; FFmpeg covers a lot of ground. But one thing that FFmpeg doesn't 
do (I believe) is "in-place modification." Personally I don't think you should 
do this anyway on binary files you don't want to risk breaking, but 
mov/mp4/m4v has a solid enough definition of its structure that tools exist 
which work on the files in place.

If by other suggestions you meant alternative tools, gpac, l-smash, mp4box are 
some tools that can work on ISO-family format files, if not in-place, piece by 
piece. If you have a Mac and are okay with a GUI solution, Subler is a very 
popular tool for apple-device-compatible-file editing.


Regards,
Ted Park


Re: [FFmpeg-user] How to change overlay parameter at runtime

2020-03-27 Thread Ted Park
Morning,

> realtime filter is irrelevant here since you're using -re.  sendcmd parses 
> its argument once at init, so can't be used to add or change arguments.


But wouldn't it make a difference if zmq were being used to send commands? Or 
do you mean because it's reading in real time? I think you could set up a 
filterchain that reads input in real time but outputs (and 
encodes/renders/writes) frames with polynomial PTS.

What I’ve always wanted to add to the examples section is a sort of 
libavfilter/libavformat REPL using zmq, with the filtergraph dot graph 
visualized in the terminal, and maybe some real time monitoring capability, 
though that might need to automatically render fractionally (which I’m lost on) 
in some situations, viz fps drops on sending command.

Not that I have any of the hard part of this done yet, but recently I noticed 
rabbitmq support in the configure script, in addition to zmq. I’ve never heard 
of rabbitmq, can someone comment on the differences between them? Is it more 
popular? It’d be hard to be easier or lighter than 0mq but if anyone is 
familiar with it I’d be much obliged.


Regards,
Ted Park


Re: [FFmpeg-user] First 10 seconds don't work well

2020-03-27 Thread Ted Park
Hello,

> I give an example to be more precise:
> ffmpeg -i input.mp4 -ss 60 -to 600 -c copy output.mp4.
Thanks for the example, but it is lacking if you are going for precision. The 
console output has far more useful information; you should post that as well.

> output.mp4 first 10 seconds is not visible, but the audio yes. After about 10 
> seconds of output.mp4, both are present. It's not dependent of output.mp4 
> lenght or duration.
I'm thinking it would depend on the start point you choose, since there is only 
a certain point every ~20 seconds (a keyframe) that you can seek to in the 
video.

You can try putting the -ss option before the input to see if it provides the 
behavior you are looking for.
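A sketch of the input-side seek (filenames hypothetical; with -c copy, seeking on the input snaps to a keyframe, so the head of the output should be decodable):

```shell
# Seek before demuxing, then copy 540 s (60 s to 600 s) of streams
ffmpeg -ss 60 -i input.mp4 -t 540 -c copy output.mp4
```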

Regards,
Ted Park


Re: [FFmpeg-user] FFmpeg not making videos unique after re-encode

2020-03-27 Thread Ted Park
Hi,

> In case you rendering the same project in premiere pro 2 times (the raw
> code of output file in one place have differences, which makes raw code not
> totally same). When rendering 2 times same project in FFmpeg, 2 output
> files have Totally Same Raw Code.

I didn’t know this happened, I thought tags about the encoded time etc were 
added but apparently not. Same build ffmpeg, same parameters, same metadata, 
software encoder, and I get bit identical output as well (at least for simple 
files).

AME almost definitely adds the encode time at least, plus whatever ID it adds 
to associate the file with the bin or project. If both were encoded by the CPU 
only (specifying 10+ bpc on a machine with a not-so-great GPU is one way to 
force this) with the same parameters, I think the result would be the same once 
you sanitize the output of metadata.

Though with AME I don’t know how to ensure the same encoding parameters… Since 
not all of them are available for me to specify I assume at least some of them 
are managed by the program.

Regards,
Ted Park


Re: [FFmpeg-user] screen cast windows 10 desktop with audio

2020-03-26 Thread Ted Park
Hi,

>  Duration: N/A, start: 1584387532.163434, bitrate: 1006131 kb/s
>Stream #0:0: Video: bmp, bgra, 1366x768, 1006131 kb/s, 29.97 fps, 29.25
> tbr, 1000k tbn, 1000k tbc
> Guessed Channel Layout for Input Stream #1.0 : stereo
> Input #1, dshow, from 'audio=Microphone (Realtek High Definition Audio)':
>  Duration: N/A, start: 123897.214000, bitrate: 1411 kb/s
>Stream #1:0: Audio: pcm_s16le, 44100 Hz, stereo, s16, 1411 kb/s

Why not use dshow for both? 
Anyhow, do you mean that if you don't stream the audio, the video alone works 
fine?

Regards,
Ted Park


Re: [FFmpeg-user] Make Video Unique by ffmpeg

2020-03-26 Thread Ted Park
Hey,

> Does FFMPEG encoding the video?
I don’t know, is it? Show us the command you typed in and the output.

> Because if encode one video twice with same
> arguments, it's giving the exactly same file. For example, if rendering on
> Adobe Premiere 2 times same file it gives 2 unique video files. Is it
> possible to make Unique videos after encoding by ffmpeg? If yes, than how?
How are you determining that they are exactly the same? Identical as in they 
hash to the same digest? That would have to be a pretty contrived scenario, 
IMO, with the metadata that is inserted, parallel processing, and all.

On the other hand you say that Premiere produces 2 different outputs, so I 
doubt you mean they are different visually, and I'm a bit confused as to how 
you evaluated the files for "uniqueness."
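For the digest comparison, something like sha256sum works; a minimal sketch with two byte-identical dummy files standing in for the two encodes:

```shell
# Byte-identical files hash to the same digest; different bytes anywhere
# (e.g. an embedded encode timestamp) would change it completely
printf 'same bytes' > a.bin
printf 'same bytes' > b.bin
sha256sum a.bin b.bin
```

Run the same comparison on the two output videos to check whether they really are identical.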

Regards,
Ted Park


Re: [FFmpeg-user] Question on AVFoundation and importing from a DV-VCR

2020-03-26 Thread Ted Park
Hi,

> I have been trying to get a direct import from a Sony DSR-45 DV Deck. Using 
> OS X Final Cut Pro hot annoying with various issues.
> 
> Does anyone have suggestions or a method for doing this better? I have 
> 500-600 more DV tapes to ingest.
> 
> Using this command brings in a 12GB DVVIDEO files

Do you mean how it reaches the end of the tape and refuses to finish saving, 
and if you force-quit, the files aren't usable?? And does FCPX seem to lose 
control of the deck for no reason at all sometimes? I had a hell of a time and 
ended up using Boot Camp and Premiere Pro as a workaround; macOS Catalina 
doesn't seem to like DV at all and hates HDV only marginally less.

Scene detection for splitting doesn't seem too appealing; see whether more 
direct access to the deck gives you more options with libdc1394. (Personally I 
would either downgrade to Mojave and use Final Cut there, or install Windows 10 
and Premiere with their trial/NFR licenses and archive all the DV tapes you 
have now. Even on Mojave the newer versions of FCPX are finicky with tape decks 
:/)


Regards,
Ted Park


Re: [FFmpeg-user] "Invalid argument" when running ffmpeg.exe with a pipe as video input

2020-03-26 Thread Ted Park
Hi,

I do not know much about Windows, but I remember that named pipes have tripped 
people up many times; they are not simple FIFO pipes as on Linux, but a more 
transactional, object-oriented construct. Not sure if it's any help, but look 
into the different types of pipes you can create and how each end opens them 
(permission-wise).

Regards,
Ted Park


Re: [FFmpeg-user] How to slow down a video

2020-03-26 Thread Ted Park
Hi,

> for video: -vf setpts=0.5*PTS
> for audio: -af atempo=2


Actually, he wanted to slow down the video, so you probably meant the 
reciprocal of this: setpts=2*PTS and atempo=0.5.
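i.e., something like (filenames hypothetical):

```shell
# Half speed: double the video timestamps, halve the audio tempo
ffmpeg -i in.mp4 -vf "setpts=2*PTS" -af "atempo=0.5" out.mp4
```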

Regards,
Ted Park


Re: [FFmpeg-user] Copying a EIA-608 subtitle stream in an m4v

2020-03-26 Thread Ted Park
Hi,

> I'm trying to process an m4v video with the following subtitle stream:
> 
>Stream #0:3(eng): Subtitle: eia_608 (c608 / 0x38303663), 1920x1080, 0 kb/s 
> (default)
>Metadata:
>  creation_time   : 2014-03-29T03:43:15.00Z
>  handler_name: Apple Closed Caption Media Handler
> 
> When I play the video in QuickTime Player, the subtitles are available for 
> selection and display properly.

Keep in mind that closed captions and subtitles have subtle differences. Even 
though they seem more like subtitles in formats like this, where the captions 
are carried as a separate stream, they are still captions, not text or bitmap 
subtitles.


> If I copy the stream, I get an error:

> [ipod @ 0x7f8074810600] Could not find tag for codec eia_608 in stream #0, 
> codec not currently supported in container
> Could not write header for output file #0 (incorrect codec parameters ?): 
> Invalid argument

> (I also tried with extension mp4, behavior is the same.)

Try with extension mov, and/or -strict -2.


> If I transcode the stream, I get a number of errors, and the resulting 
> subtitles are garbage:

>  Stream #0:3 -> #0:0 (eia_608 (cc_dec) -> mov_text (native))

> [Closed caption Decoder @ 0x7fb894079800] Data Ignored since exceeding screen 
> width
>Last message repeated 1529 times
> size=   2kB time=00:07:20.53 bitrate=   0.0kbits/s speed=1.2e+04x
> video:0kB audio:0kB subtitle:0kB other streams:0kB global headers:0kB muxing 
> overhead: 2963.333252%
> 
> ~
> 
> (To test the output, I used a slightly different invocation, copying the 
> video/audio streams, and played the file in QuickTime Player. Subtitles are 
> gibberish.)
> 
> How can I get ffmpeg to play nicely with these subtitles?

I think this might be a bug? I didn’t think the captions decoder could decode 
in a way that makes a direct conversion like EIA-608 -> text subtitles possible; 
I think it should say that rather than do this. I bet the gibberish is like what 
you get from running cat on a compiled executable.


> Suggestions:
> - Is there an option to override the "codec not currently supported" error? 
> Can't ffmpeg just copy the bits and ignore the content?
> - Is there another path/tool for transcoding the eia_608 subtitle stream 
> correctly?
> - All I really want at the moment is to strip a "cover art" image appearing 
> as stream 0:4. Is there another way to accomplish that?
- The error is codec not currently supported _in container_, so you can dump the 
data or use a container that works. Broadcast captioning is usually delivered as 
Scenarist .scc files.
- Builds with libzvbi support can extract teletext pages to side data; then you 
can assemble the subtitle track from that, I think. Transcoding is tricky 
because EIA-608 is not just text, or even multiple pages of text; it is 
comparable to the machine language used in DVD subtitles.
- Try mapping all input streams, negative-mapping the stream you don’t want, and 
stream copying into QuickTime (.mov). -strict -2 might be needed.
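
A sketch of that last suggestion (untested; it assumes the cover art really is stream 0:4 as in your probe output, and the filenames are placeholders):

```shell
# Map every stream, then drop stream 0:4 (the cover art) with a negative map,
# and stream-copy everything else into a QuickTime container.
# -strict -2 relaxes the muxer's strictness for codecs it considers experimental.
ffmpeg -i in.m4v -map 0 -map -0:4 -c copy -strict -2 out.mov
```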


Regards,
Ted Park


Re: [FFmpeg-user] Drop frames during framemd5 calculation of DPX files

2020-03-26 Thread Ted Park
Hi,

> frame=20837 fps=5.2 q=-0.0 Lsize=1628kB time=00:14:28.20 bitrate=
> 15.4kbits/s dup=0 *drop=867* speed=0.216x
> video:1107194832kB audio:0kB subtitle:0kB other streams:0kB global
> headers:0kB muxing overhead: unknown
867 frames dropped in 868 seconds: maybe ffmpeg has estimated the frame rate 
1 fps too low? Try explicitly specifying an input frame rate 1 fps higher than 
what is detected.
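
For example, if ffmpeg detects 24 fps but the material is 25, forcing the rate on the input side would look something like this (an untested sketch; it assumes the DPX sequence is read through the image2 demuxer, and the filename pattern is a placeholder):

```shell
# -framerate before -i sets the input rate explicitly, so frames are not
# dropped to match a misdetected timing.
ffmpeg -framerate 25 -i frames_%06d.dpx -f framemd5 output.framemd5
```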

Regards,
Ted Park


Re: [FFmpeg-user] How to assessment the memory will use when motion interpolation on a 4k video?

2020-03-26 Thread Ted Park
Hi,

> Hi all
> I have a 4k video, resolution: 4320x2880, as a result form EDSR-pytorch

>  Duration: 00:00:03.00, start: 0.00, bitrate: 9673 kb/s
>Stream #0:0(und): Video: h264 (High) (avc1 / 0x31637661), yuv420p,
> 4320x2880, 9669 kb/s, 25 fps, 25 tbr, 12800 tbn, 50 tbc (default)

As far as data rate goes, a ballpark estimate: counting uncompressed Y’UV 4:2:0 
8 bpc as 3/2 bytes per pixel (luma is just 1 byte; for chroma, 1 byte per 4 
pixels per channel => 1 + (1/4)*2 => 3/2 bytes per pixel), you have 
4320 × 2880 ≈ 12,400,000 pixels per frame, 12.4MP × 3/2 B ≈ 18.7MB per 
frame, and at 25 fps input that is about 467MB/s, or roughly 3.7Gbps.
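
That back-of-the-envelope math can be checked in a couple of lines of shell arithmetic (using the exact 4320x2880 resolution from your ffprobe output):

```shell
# Rough uncompressed data rate for 8-bit 4:2:0 at 4320x2880, 25 fps.
# 4:2:0 is 3/2 bytes per pixel: 1 luma byte + 2 chroma bytes per 4 pixels.
width=4320; height=2880; fps=25
frame_bytes=$(( width * height * 3 / 2 ))
rate_bits=$(( frame_bytes * fps * 8 ))
echo "bytes per frame: $frame_bytes"   # 18662400, ~18.7 MB
echo "bits per second: $rate_bits"     # 3732480000, ~3.7 Gbps
```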


> I want to use motion interpolate it to 60fps, so command like this:
> ffmpeg -i test.mp4 -filter "minterpolate=fps=60" output.mp4

> It always failed with "Cannot allocate memory", so my question:
> How to assessment the memory usuage for that operation?

On the other hand, interpolation involves making up frames by referencing other 
frames close by (time-wise), so with only 3 seconds of input nearly all of them 
might be kept in memory, plus the interpolated frames waiting to be encoded. And 
with the filtering/decoding/encoding all being multithreaded, I think it could 
take maybe 1.5 × (# of cores) GB, or some multiple of that, from the start.


> ffprobe version 3.4.6-0ubuntu0.18.04.1 Copyright (c) 2007-2019 the FFmpeg
> developers

Could be something else though. Does the same command work on lower data rate 
movies? Try using a newer build as well.

Regards,
Ted Park


Re: [FFmpeg-user] Using convolution files with ffmpeg

2020-03-25 Thread Ted Park
Hi,

>> I currently use MPD (https://www.musicpd.org/) to play music. MPD can use
>> ffmpeg to decode the audio coming in (local or cloud). Since ffmpeg can
>> handle convolution files in wav format, I was looking for a way to "pass"
>> these files to ffmpeg. I would like to implement the following scenario:
>> 1. Have one convolution file per sampling rate (44.1kHz, 48kHz, 88,2kHz,
>> 96kHz, 176.4kHz and 192kHz)2. Store the convolution files locally to
>> ffmpeg3. When ffmpeg is used, it will use the convolution file that matches
>> with the incoming file's sampling rate
>> I want this to work as a global setting without the need to do it manually
>> for every audio file. I know that minimserver has implemented this but it is
>> a closed source project.
>> Any ideas?
> 
> You will need to use FFmpeg audio convolution filter, afir.
> And write some kind of script for processing of all files.

What is a convolution file? When you get a venue with a weird frequency 
response and you come in before the sound check to record spectrum sweeps from 
the speakers with a figure-8 mic, the tool I used had a convolution function 
that would generate a model of the venue to serve as a general starting point 
for mixing. Is it something similar?

Regards,
Ted Park


Re: [FFmpeg-user] Screenrecording, audio is sped up when other process starts

2020-03-24 Thread Ted Park
Hi,

> I "disabled" audio from Pygame by
> 
> import os
> os.putenv('SDL_AUDIODRIVER', 'dummy')
> os.putenv('SDL_AUDIODEV', '/dev/null')
> 
> which worked. Thanks for help!

Oh… I guess I was wrong; I feel like there must be a way to configure the 
sound from within pygame instead of setting an environment variable...


> How, however, could I ensure something like this not happening when I
> launch another program who would like access to my default sound driver?

Yeah, I suppose more likely than not you will want to record sound coming from 
the game also.


>> Also, does the card have 4 separate streams? If you have no particular
>> reason to get all 4 channels maybe there is a way to only record one.
> It's an external sound card with 4 separate inputs, yes. The reason for my
> not specifying hardware numbers is that those tend to reset when I unplug
> the card.

That’s understandable for a removable interface.

I did see that your sound card outputs 4 channels, but what about the stream 
count? I think they present as subdevices if they are actual discrete streams, 
but not if they are coupled channels.

> There's that, and the raise in pitch in my voice throughout the rest of the
> clip.
> 
> Either way, I have something that works now, so many thanks!

I… you’re welcome, but if you bought a dedicated multichannel audio card or 
interface, you should be able to route multiple sources when/if you need to in 
the future…

If you made sure to use the same settings (sample rate, format, different 
streams/subdevices) you could actually get exclusive control and still share 
the hardware device. I could be wrong, but a 4-channel card is probably going 
to have at least two streams available (maybe with separate clocks, though 
usually not), or even 4 discrete ones.

You could also not use a hardware device at all and instead use ALSA’s 
software-converted/mixed/resampled plugins/devices; sharing input/output will 
be much easier that way. I believe you can also set a system default mixer or 
plugin; unfortunately, the only thing I remember is that I couldn’t figure out 
how to set up ALSA the way I wanted and gave up halfway through :/

But see if $ aplay -L lists software devices or system defaults that seem 
adequate for you; then you could try whether using those names instead gives 
satisfactory results, since the audio format resampling will be handled for you 
by ALSA even if clients use different configurations.

Regards,
Ted Park


Re: [FFmpeg-user] Screenrecording, audio is sped up when other process starts

2020-03-24 Thread Ted Park
Hi,

> With the risk of sounding ignorant ... how would I "grab hardware device
> exclusively"?

Using an account that is part of the audio group and specifying an actual 
hardware device should be enough to do this. I am not sure what the difference 
between cards and devices is to ALSA, tbh, but I think using hw plus an index 
to specify the device makes it clear it’s not a mixer or anything else.

Also, does the card have 4 separate streams? If you have no particular reason 
to get all 4 channels maybe there is a way to only record one. 

> I can't with my limiting googling capabilities find out how to not enable
> the sound mixer in pygame.
> 
> However, the "glitch" persists even after completely closing the python
> process.

But maybe I was wrong in assuming that contention over controlling the device 
was the problem; iirc, the way to not enable the sound mixer in pygame is simply 
to not enable it… So if there’s no sound, I don’t think that is the issue.

I thought the glitch was the short high pitched noise in the video at 13 
seconds in, is it observable anywhere else in that clip?

Regards,
Ted Park


Re: [FFmpeg-user] Screenrecording, audio is sped up when other process starts

2020-03-24 Thread Ted Park
Hi,

If you only need mic audio, grab the hardware device exclusively or make sure 
pygame doesn’t touch the same card/device.
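
Something along these lines (a sketch; the card/device numbers are assumptions, check arecord -l for yours):

```shell
# Open ALSA card 1, device 0 directly (hw:card,device), bypassing dmix and
# other software plugins, so pygame and ffmpeg don't contend for a shared device.
ffmpeg -f alsa -i hw:1,0 -c:a flac recording.flac
```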

Regards,
Ted Park


Re: [FFmpeg-user] Converting Quicktime (.mov ) from iPhone to mp4

2020-03-24 Thread Ted Park
Hi,

> FFmpeg version CVS, Copyright (c) 2000-2004 Fabrice Bellard
> Mac OSX universal build for ffmpegX

Are you running a PPC Mac? I wonder how you were led to download this.

Regards,
Ted Park


Re: [FFmpeg-user] Understanding how to use FFmpeg on macOS

2020-03-23 Thread Ted Park
Hi,

You don’t need binaries to be signed to run on Catalina. If you can run ffmpeg 
manually, other programs probably will be able to as well. If apps have the 
hardened runtime enabled (which is to say, all notarized apps), then Gabry is 
right: usually they can’t use anything other than what is included in the app 
and the system libraries, but an app that asks for an external library location 
presumably has the entitlement granting it that exception.

As far as support goes, the QuickTime framework has been deprecated on macOS and 
is unavailable in Catalina. The replacement is AVFoundation, which dropped 
support for a lot of features in the format, including some that are still in 
use (basically any function/codec not available in 64-bit). How did you convert 
the MTS to a MOV? I am pretty sure you have to change the underlying structure 
of the video to convert from one to the other.

Regards,
Ted Park


Re: [FFmpeg-user] FFMpeg and H.323

2020-03-17 Thread Ted Park
Hi,

> nothing in FFmpeg is (by itself) a videoconferencing software.

Ah, right, thank you. I couldn’t think of the word “videoconferencing”; it was 
on the tip of my tongue (or fingers).

It makes it easier to explain the context as an analogy to the regular telephone 
network, which happens to be described by H.324. The H.3xx series all describe 
how audiovisual terminals network with each other.

H.323 describes the videoconferencing equivalent of the PSTN for telephones. It 
specifies how addresses are resolved to route the call, the signaling protocol 
used to set up the connection, etc. It doesn’t specify how the media is 
packaged, it describes how terminals negotiate those details.

> I thought H.323 was a packaging a bit like HLS might be, or Fragmented
> MP4.  The hope is to be able to integrate a camera system generating H.264
> into Zoom and other web-conferencing systems which require H.323 to work.
So you have a camera system with built-in H.264. ffmpeg could 
compress/transcode the required audio and stream it over RTP.

Everything else is beyond ffmpeg. H.323 configuration commands show up in stuff 
like branch routers with application/service integration, dedicated 
conferencing gateways, and more recently, software implementations on general 
servers.

> So what you're saying is I'd need to generate my own communications handler
> that manages the H.323 traffic, and passing the H.264 stream to that
> handler to pass on to the endpoint?
Not that you need to build one yourself, that would be a pretty big project, 
but yes, a video stream is only a small part of the system. You mentioned Zoom, 
that’s a possible vendor that could provide the “everything else”. Tandberg 
(Cisco) is also a big name.

Regards,
Ted Park


Re: [FFmpeg-user] FFMpeg and H.323

2020-03-17 Thread Ted Park
Hi,

> Is it possible for ffmpeg to produce a stream conforming to H.323?  As I
> understand it H.323 supports H.264 video and G.711 or OPUS audio.  I have
> an H.264 video stream, so would need to re-encode the audio, but then it
> needs packaging as H.323 and I haven't found anything on the web that does
> this yet.

I’m not surprised, H.323 covers infrastructure at a scope that is on a 
different level than ffmpeg, or any other single application for that matter.

Since it’s not a single standard I don’t really know what to say it supports, 
but it stipulates that all endpoint (terminal) equipment be capable of at least 
G.711, and of H.261 if it has video capability. Any additional codec support is 
negotiated via H.245 by the connecting equipment. H.264 is commonly implemented, 
as is Speex (which I think you mean when you say Opus), but neither capability 
is required.

Can you tell us more about the situation where you need to encode AV streams 
usable in an H.323 system out of band? There isn’t really a “packaging” step to 
speak of, and if you are creating a software-based implementation, the most 
ffmpeg is going to help you with is RTP. H.323 is more of a protocol than a 
format.

Speaking generally, I guess you could say ffmpeg can produce a stream that 
conforms to H.323, (by encoding mu-law/a-law and optionally H.261 and using 
RTP) but anything else is going to depend on (all) the equipment facilitating 
session communication.

Regards,
Ted Park


Re: [FFmpeg-user] Facing issues in streaming videos

2020-03-17 Thread Ted Park
Hi,

So it starts out like this:
> top - 11:02:19 up 16:48,  3 users,  load average: 0.04, 0.01, 0.00

> KiB Mem : 16423264 total, 14990164 free,   642272 used,   790828 buff/cache
> KiB Swap:   999420 total,   999420 free,0 used. 15408016 avail Mem
> 
>  PID USER  PR  NIVIRTRESSHR S  %CPU %MEM TIME+ COMMAND
> 5979 root  20   0   76104  23360  10300 R   2.2  0.1   0:00.50 ffmpeg

And looks like this in an hour..
> top - 11:59:05 up 17:45,  3 users,  load average: 0.82, 0.56, 0.24

> KiB Mem : 16423264 total, 11981684 free,   645936 used,  3795644 buff/cache
> KiB Swap:   999420 total,   999420 free,0 used. 15363132 avail Mem
> 
>  PID USER  PR  NIVIRTRESSHR S  %CPU %MEM TIME+ COMMAND
> 5979 root  20   0   76104  23360  10300 S   2.0  0.1   1:09.99 ffmpeg
> 6305 root  20   0   49120   4044   3188 R   0.3  0.0   0:00.02 top
> 
> 
> In one hour of duration it used memory is gradually increasing.
> 
> Kindly give us solution for this.

In ~1hr, buff/cache usage (probably mostly buffers) increased by 
(3795644-790828)KiB = 3004816KiB,
an average of about 900KiB/s.

I assume this only happens when streaming with ffmpeg?

It does look like they are closely related if so,
> And console output of the streaming as follows
> 
> 
> 
> root@TESTING-FFMPEG:/var/www/html/hls/live/mobile/testing#  ffmpeg -threads
> 1 -i udp://231.1.1.108:1026 -c:v copy -c:a copy -f mpegts
> /home/user/ffmpeg-4.2.2/Ajk_live_Telecast.m3u8

> Output #0, mpegts, to '/home/user/ffmpeg-4.2.2/Ajk_live_Telecast.m3u8':
>  Metadata:
>encoder : Lavf58.29.100
>Stream #0:0: Video: h264 (High) ([27][0][0][0] / 0x001B), yuv420p(top
> first), 1920x1080 [SAR 1:1 DAR 16:9], q=2-31, 25 fps, 50 tbr, 90k tbn, 90k
> tbc
>Stream #0:1: Audio: mp2 ([3][0][0][0] / 0x0003), 48000 Hz, stereo,
> fltp, 128 kb/s
> Stream mapping:
>  Stream #0:0 -> #0:0 (copy)
>  Stream #0:1 -> #0:1 (copy)
> 
> Stream #0:1 -> #0:1 (copy)
> Press [q] to stop, [?] for help
> frame=  447 fps=0.0 q=-1.0 size=7680kB time=00:00:07.80
> bitrate=8059.7kbits/s speed=15.4x
> frame=  477 fps=474 q=-1.0 size=8448kB time=00:00:08.30
> bitrate=8332.0kbits/s speed=8.25x

> frame= 1281 fps= 89 q=-1.0 size=   22784kB time=00:00:21.70
> bitrate=8598.8kbits/s speed=1.51x
> frame= 1290 fps= 88 q=-1.0 Lsize=   23163kB time=00:00:21.85
> bitrate=8681.7kbits/s speed= 1.5x


The stream bitrate reaches ~1000kB/s, which is similar, so it’s probably the 
disk buffer that’s taking up the memory.

But if you look at the available memory, it only drops from 15408016 KiB to 
15363132 KiB, about 44MB. If you start another program that requires a lot of 
memory, that buffer would probably be cleared immediately.

The other situation, where you are downloading the stream and starting a dozen 
transcoding jobs, is different; in that case that memory is actually taken.

What I mean is this doesn’t seem out of the ordinary at all.


Regards,
Ted Park


Re: [FFmpeg-user] Selecting MPEG TS child streams for filter_complex

2020-03-17 Thread Ted Park
Hello,

> Is there any reason, then, not to always put -filter_complex as the first 
> parameters of the command line? The Synopsis line in the docs shows global 
> options first but most of the doc examples don't.
> 
> Example-
> ffmpeg -i video.mkv -i image.png -filter_complex '[0:v][1:v]overlay[out]' 
> -map '[out]' out.mkv"
> 
> Or is there another reason for listing inputs, then -filter_complex, then 
> outputs?

Unless they were precomposed filter “script” files (and even then), I think 
having the inputs before the filter, followed by the outputs, is the intuitive 
way to structure the command, if that counts as a reason.

Iirc, “complex” filtergraphs, as opposed to the older/simpler -vf and -af 
filters, have implicit input and output pads for each chain that can be 
remapped pretty freely, not according to their position.


Regards,
Ted Park


Re: [FFmpeg-user] Remove everything but a single color (range)

2020-03-17 Thread Ted Park
u want. So all you need to do is invert the alpha values.

There’s probably a way to invert the values of just a single plane/channel, but 
I don't know how. Instead, there’s a filter that inverts (takes the 1’s 
complement of, I believe) all pixel values, called “negate”.
Since it inverts values, obviously if you use it twice in series, the result is 
more or less identical to the input.
This is useful because it has an option, “negate_alpha”, that controls whether 
the filter works on the alpha channel or not. We want to invert just the alpha 
values; we can achieve this by inverting everything, then inverting everything 
except the alpha back:
ffmpeg -i $INPUT -filter_complex 
"format=yuva444p,chromakey=0xC8438A:yuv=1,negate=negate_alpha=1,negate=negate_alpha=0" $OUTPUT

That should give you the yellow box sort of rotoscoped out. You might still 
have to refine the mask, which you can do by isolating the alpha plane as a 
separate movie then working on that. 

> Here's the complete output for ffmpeg colorhold,chromakey:
> 
> $ ffmpeg -i test.mkv -vf 
> "colorhold=color=0xe6e65c:similarity=0.25:blend=0.0,chromakey=color=black:similarity=.1"
>  test-highlight.mkv

Really stumped on what chromakey does when you give it black, which is 
essentially no chroma. Does it just do the smart thing and key out everything 
with no color?
I think that would actually be the logical way to do this: remove color 
information other than the one you want, then extract just the areas with 
color. But if you were to do this, you would need much more complex filtering. 
In YUV you could check where U and V are both 0, but in RGB I think it varies. 
Flip-flopping the colors is kinda hacky but easier, and probably faster too.

> Can anyone explain to me why this displays only the yellow bar with 
> everything else black:
> 
>  ffplay -f lavfi -i 'smptehdbars=duration=5' -vf 
> "colorhold=color=0xbcc906:similarity=0.25,chromakey=color=black:similarity=.2"
> 
> but this displays all bars in gray scale except the yellow bar which is in 
> color:
> 
>  ffmpeg -f lavfi -i 'smptehdbars=duration=5' -vf 
> "colorhold=color=0xbcc906:similarity=0.25,chromakey=color=black:similarity=.2"
>  TEST-smptehdbars-yellow-only.mkv
> 
> What I want is ONLY the yellow bar and everything else black, as in the 
> ffplay command.  How can I get the same thing with ffmpeg?

A rudimentary explanation, but I think it’s that the renderer used by ffplay 
draws on a black screen, while libx264 uses a white canvas.

Regards,
Ted Park


Re: [FFmpeg-user] Facing issues in streaming videos

2020-03-16 Thread Ted Park
Hi,

> When CPU memory utilization is below
> 
> top - 15:31:22 up 6 days,  2:39,  4 users,  load average: 0.00, 0.00, 0.00
> Tasks: 376 total,   2 running, 275 sleeping,   0 stopped,   1 zombie
> %Cpu(s):  0.0 us,  0.1 sy,  0.0 ni, 99.8 id,  0.0 wa,  0.0 hi,  0.0 si,
> 0.0 st
> KiB Mem : 16423264 total,   174636 free,   969200 used, 15279428 buff/cache
> KiB Swap:   999420 total,   964224 free,35196 used. 15005416 avail Mem
> 
>  PID USER  PR  NIVIRTRESSHR S  %CPU %MEM TIME+ COMMAND
> 9694 user  20   0   63104  22116  10460 R   1.7  0.1   0:04.65 ffmpeg
> 9752 root  20   0   49224   3804   2984 R   0.3  0.0   0:00.67 top
> 17063 root  20   0  402016  68548  37332 S   0.3  0.4   2:18.72 Xorg
> 17647 user  20   0 1489812 110572  71540 S   0.3  0.7  20:47.17 compiz

I don’t know how accurate top is here, but doesn't that indicate that by the 
time you launched ffmpeg, only ~1% of memory was free?

Regards,
Ted Park


Re: [FFmpeg-user] How to compress .MOV file compatible to Canon camera

2020-03-16 Thread Ted Park
Hi,

>> There’s a huge user data box in the moov, upon a quick glance it has the 
>> camera model, firmware version, etc. I have to imagine it is used somehow.
> 
> Same question:
> Is the (original) file still playable if you edit this atom?

I’m not sure if you were asking about playback on the camera, but if you zero 
out the udta atom and change its type to “free”, then it plays with no problem 
in standard players; it’s supposed to be non-specific metadata.


>> I now inserted the following:
>> 1. "CEAP" to ftyp (0x18 instead 0x14 bytes)
>> 2. moov atom with qt-faststart
>> 2. udta atom from original at the start of moov atom (increases it from 
>> 0x1340E to 0x1344A)
>> Result:
>> Instead of a big "?" I now see a the preview picture on the camera. 
>> Unfortunately I still can't play the video because of "Not identified 
>> Picture".
>> So we are a little step closer to the solution.
>> Any additional ideas?
I took another look and saw EXIF, TIFF, and thumbnail images. I don’t know 
whether the camera refuses to play files without this data; you could test that 
by doing what I described to a working movie: replace the udta box with null 
bytes and change its tag to free.

The file that works was very simple, it had a moov with some proprietary data, 
then the header with codec setup, etc. and free space until the media, which 
extends all the way to the end of the file.

If there is no wide marker/atom in the file that plays, I would try to make 
sure it’s not in the file you're producing. It might be absent because the 
camera might not handle it (maybe some limitation in its specs precludes 64-bit 
size fields).

I took a look at the DCIM hierarchy on a canon point and shoot I found, and 
there seems to be some sort of index type metadata stored separately in a 
folder named (in my case) CANONMSC. 
I don’t know what it is, or if it’s used, but see if deleting those files makes 
everything break.

>> Yes, it would be a great help of a good tool to show and edit other atoms.
>> Which tool could be this?
>> Which ffmpeg loglevel command would show the atoms, even with less nice 
>> format?
> As I see from the qt-faststart output, I see, that there are some other atoms 
> patched:
> $ qt-faststart MVI_1324_copy_git.mov MVI_1324_copy_git.MOV
> ftyp  0 20
> wide 20 8
> mdat 28 84830220
> moov   84830248 13326
>  patching stco atom...
>  patching stco atom...
>  writing ftyp atom...
>  writing moov atom...
>  copying rest of file...
> 
> So maybe I have to patch them again after inserting the udta atom, but how?

For ISO/MP4-type formats, if you want really fine-grained control you most 
likely need to use one of the tools specific to that format, probably GPAC or 
L-SMASH. But while this lets you control more specific attributes, it also 
means you need to know what to change to get the result you want. Off the top 
of my head, MP4Box and AtomicParsley are programs that do this. If you have a 
PPC Mac, there’s an ancient version of “Atom Inspector” for QuickTime that 
Apple provides.

Just curious, but what are you trying to do? Are you looking to play footage 
from other cameras? Or watch completely unrelated sources, like maybe saving a 
movie to watch it on your camera??

Regards,
Ted Park


Re: [FFmpeg-user] Remove everything but a single color (range)

2020-03-15 Thread Ted Park
Hello,

> Is it possible to "remove" everything in a video except a specific color (or 
> maybe a range... ie close to a specific color)?
> 
> By "remove" I mean covert every that is NOT the color(s) I want to black or 
> transparent.
> 
> I have a video that contains a yellowish box that moves about the screen. I 
> want to isolate ONLY the yellowish box.  Everything else should become black 
> or transparent.  
I think the colorkey filter can still do this. Basically you want the converse 
of what color keying does, right? It should work the same (that is, the 
opposite) way, and you just have to invert the alpha value.

> The specific color is "fbed54" according to the color picker in Gimp.
> 
> FYI. My knowledge of "colors" in general is extremely limited... let's put it 
> this way, I'm pretty sure RGB stands for Red Green Blue... that's about the 
> extent of it.   So, a specific example would be very helpful.

RGB does stand for red, green, and blue, and the color “#FBED54” is a base-16 
representation of how much of the three primary colors make up that specific 
color. The three components are represented by two hexadecimal (base-16) digits 
each, with a hidden radix point on the left.
So in this case, red has a value of (0.)FB, which is 15/16 + 11/256 ≈ 98.0%, 
green would be 14/16 + 13/256 ≈ 92.6%, and so on.
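
If you want to check that arithmetic yourself, a quick shell sketch (the 16#FB notation is bash syntax for hex 0xFB):

```shell
# Convert each two-hex-digit component of #FBED54 to a percentage of 256.
for c in FB ED 54; do
  awk -v v=$((16#$c)) -v name="$c" 'BEGIN { printf "%s -> %.1f%%\n", name, v * 100 / 256 }'
done
# FB -> 98.0%, ED -> 92.6%, 54 -> 32.8%
```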

The thing is, video usually uses a different scheme for representing/storing 
color, so using RGB values to key the subject out might not be ideal. But 
still, try using colorkey or colorhold with higher similarity values first.
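
As a starting point, something like this (an untested sketch; the filenames are placeholders and the similarity value will need tuning):

```shell
# colorhold keeps pixels near #FBED54 in color and desaturates everything else;
# raise similarity to widen the range of yellows that are kept.
ffmpeg -i in.mkv -vf "colorhold=color=0xFBED54:similarity=0.3" out.mkv
```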

Regards,
Ted Park


Re: [FFmpeg-user] How to compress .MOV file compatible to Canon camera

2020-03-14 Thread Ted Park
Hi,
>> Did the OP have the chance to upload some working camera-generated preview 
>> files?
> 
> I didn't ask for one because I don't have the necessary hardware to test...


I was the one who asked for them; I was thinking I would inspect them for some 
properties, such as whether the bitstream actually conforms to the profile and 
level indicated, and which features are used in the previews, or noticeably 
absent, to guess what the requirements of the hardware decoder are.

So there is no doubt that the camera uses a hardware decoder then?

I don’t have the hardware to test either; I mean, I have Canon cameras, but 
chances are not the one the OP has :p

Regards,
Ted Park


Re: [FFmpeg-user] How to compress .MOV file compatible to Canon camera

2020-03-14 Thread Ted Park
Hi,

>> Here it is: https://cloud.disroot.org/s/fBeePRoA4JGZNMB

Ah, yes I did, thank you. Sorry, I must have made a mental note to download it 
after 15 minutes and then forgotten about it, because I have the attention span 
of a goldfish.

Regards,
Ted Park

