Re: [FFmpeg-user] Speeding up a script.

2017-08-06 Thread DopeLabs
you can search the ffmpeg-all man pages or online documentation for 'threads'

-filter_complex_threads nb_threads (global)
Defines how many threads are used to process a filter_complex graph. Similar to 
filter_threads but used for -filter_complex graphs only. The default is the 
number of available CPUs.

-filter_threads nb_threads (global)
Defines how many threads are used to process a filter pipeline. Each pipeline 
will produce a thread pool with this many threads available for parallel 
processing. The default is the number of available CPUs.

threads integer (decoding/encoding,video)
Set the number of threads to be used, in case the selected codec implementation 
supports multi-threading.

you can also set -preset ultrafast.

then there are the hevc threading options (man x265).

Threading, performance:
   --threads 
  Number of threads for thread pool (0: detect CPU core count, default)

   -F/--frame-threads 
  Number of concurrently encoded frames. 0: auto-determined by core count

   --[no-]wpp
  Enable Wavefront Parallel Processing. Default enabled

   --[no-]pmode
  Parallel mode analysis. Default disabled

   --[no-]pme
  Parallel motion estimation. Default disabled

   --[no-]asm 
  Override CPU detection. Default: auto
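taken together, the options above might be combined like this (a sketch, not the poster's command: input.mp4, the filter graph, and the thread counts are placeholders, and it uses libx265 so the x265 pool option applies):

```shell
# Hypothetical example: 8 decoder/encoder threads, 8 threads for the
# -filter_complex graph, and an 8-thread pool passed through to x265.
ffmpeg -y -threads 8 -i input.mp4 \
  -filter_complex_threads 8 \
  -filter_complex "[0:v]hue=s=0[out]" -map "[out]" \
  -c:v libx265 -preset ultrafast -x265-params pools=8 \
  output.mp4
```

note that hevc_nvenc (as in the original script) encodes on the GPU, so the x265 threading options would not apply there.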

 
> On Aug 6, 2017, at 6:16:01 AM, Evert Vorster  wrote:
> 
> Hi there.
> I am using a quite convoluted filter in ffmpeg.
> -
> #!/bin/bash
> #This will split, defish, blend and re-assemble Samsung Gear 360 video
> map_dir="/data/Projects/RemapFilter"
> ffmpeg -y -i "$1" \
> -i $map_dir/lx.pgm -i $map_dir/ly.pgm -loop 1 \
> -i $map_dir/Alpha-Map.png \
> -i $map_dir/rx.pgm -i $map_dir/ry.pgm \
> -c:v hevc_nvenc -rc constqp -qp 26 -cq 26 \
> -filter_complex \
> "[0:v]eq=contrast=0.8:brightness=-0.01:gamma=0.7:saturation=0.8[bright]; \
> [bright]split=2[in1][in2]; \
> [in1]crop=in_w/2:in_h:0:in_h[l_crop];\
> [in2]crop=in_w/2:in_h:in_w/2:in_h[r_crop]; \
> [3]alphaextract[alf]; \
> [l_crop]vignette=angle=PI/4.6:mode=backward[l_vignette]; \
> [l_vignette][1][2]remap[l_remap]; \
> [r_crop]vignette=angle=PI/4.8:mode=backward[r_vignette]; \
> [r_vignette][4][5]remap[r_remap]; \
> [l_remap]crop=in_w:1920:0:(in_h-1920)/2[l_rm_crop]; \
> [r_remap]crop=in_w:1920:0:(in_h-1920)/2[r_rm_crop]; \
> [l_rm_crop][alf]alphamerge[l_rm_crop_a]; \
> [l_rm_crop_a]split=2[l_rm_crop1][l_rm_crop2]; \
> [l_rm_crop1]crop=in_w/2:in_h:0:0[l_rm_crop_l]; \
> [l_rm_crop2]crop=in_w/2:in_h:in_w/2:0[l_rm_crop_r]; \
> [0:v][r_rm_crop]overlay=(1920-(2028/2)):0[ov1]; \
> [ov1][l_rm_crop_l]overlay=((1920+2028/2)-(2028-1920)):0[ov2]; \
> [ov2][l_rm_crop_r]overlay=0:0[out]" \
> -map [out] -map 0:a "$1_Remapped.mp4"
> -
> 
> When this runs, only one of my CPUs is showing any activity.
> Is there a way of telling ffmpeg to process these steps in the filter in
> parallel?
> 
> Kind regards,
> Evert Vorster
> 
> Isometrix Acquistion Superchief
> ___
> ffmpeg-user mailing list
> ffmpeg-user@ffmpeg.org
> http://ffmpeg.org/mailman/listinfo/ffmpeg-user
> 
> To unsubscribe, visit link above, or email
> ffmpeg-user-requ...@ffmpeg.org with subject "unsubscribe".


Re: [FFmpeg-user] displaying real time text on output

2017-08-06 Thread DopeLabs
yea, right now i have a cron-fired script that will query the audio streaming 
server for the title, and if it's different from the title that's in the txt 
file, atomically updates it.

it works just fine, but if i can get it all natively within ffmpeg that would 
be preferred. =]

oh.. and the code example i provided should place the text at the bottom 
center, not the top. sorry for the typo...


> On Aug 6, 2017, at 2:50:19 PM, tasos  wrote:
> 
> Hello.
> Thank you very much, I will give it a try!
> Well you could use an external program that would read the metadata and write 
> to your txt file.
> But i suppose there's another solution by using ffmpeg and not an external 
> program.
> I hope someone will tell you how to do it.
> Thanks again!
> 
> 
> On 8/7/2017 12:43 AM, DopeLabs wrote:
>> you can use the drawtext filter.
>> 
>> i have several drawtext filters i run, displaying audio track titles, site 
>> name, original air date (or 'LIVE' if live).
>> 
>> the trick to having the text update mid stream or mid encode is to specify 
>> textfile="/path/file.txt":reload=1
>> 
>> reload=1 will cause ffmpeg to check the file every frame.
>> 
>> you need to update the file atomically or ffmpeg may partially read the 
>> file, or even fail.
>> 
>> you can achieve this by not editing the file with an editor or even using 
>> echo "text" > file.txt...
>> 
>> but instead save a new temp file, then mv tempfile.txt file.txt
>> 
>> here is an example of one that i use:
>> 
>> __
>> 
>> "drawtext=fontfile=/path/font.ttf:fontsize=28:fix_bounds=true:fontcolor=gray:alpha=.5:textfile=/path/file.txt:reload=1:y=(h-text_h)-5:x=(w-text_w)/2"
>> __
>> 
>> 
>> this creates semi-transparent gray text at the top center of the frame, and 
>> will auto-scale if the amount of text exceeds the width of the frame, so text 
>> won't get truncated.
>> 
>> if you are using this in a livestream scenario, you can see the text updates 
>> in real time.
>> 
>> 
>> 
>> i have not tried this when the input or output are not live streams...
>> 
>> i would suspect it would be kind of hard to get the timing right since the 
>> input/output can be read/written faster than real time.
>> 
>> 
>> on a side note:
>> im currently trying to figure out if its possible to use text expansion to 
>> pull metadata values from within the stream, and use in drawtext output.
>> 
>> an example would be an mp3 audio live stream such as icecast or shoutcast, 
>> when song titles change in the stream, to display that value in drawtext 
>> output which updates mid-stream.
>> 
>> it looks like it could be done but im just struggling a bit trying to figure 
>> out the correct syntax, metadata keys, etc.
>> 
>> so if anyone has any insight on that. im all ears.
>> 
>> -DL
>> 
>>> On Aug 6, 2017, at 11:39:43 AM, tasos  wrote:
>>> 
>>> Hello.
>>> I know that this may be a stupid question but is it possible while encoding 
>>> to apply a filter that displays text for example?
>>> I mean ffmpeg is already running and produces output.
>>> I don't want to take that output and apply the text filter there. I want to 
>>> apply it on the already running ffmpeg process.
>>> Thanks!
> 
> 


Re: [FFmpeg-user] displaying real time text on output

2017-08-06 Thread tasos

Hello.
Thank you very much, I will give it a try!
Well you could use an external program that would read the metadata and 
write to your txt file.
But i suppose there's another solution by using ffmpeg and not an 
external program.

I hope someone will tell you how to do it.
Thanks again!


On 8/7/2017 12:43 AM, DopeLabs wrote:

you can use the drawtext filter.

i have several drawtext filters i run, displaying audio track titles, site 
name, original air date (or 'LIVE' if live).

the trick to having the text update mid stream or mid encode is to specify 
textfile="/path/file.txt":reload=1

reload=1 will cause ffmpeg to check the file every frame.

you need to update the file atomically or ffmpeg may partially read the file, 
or even fail.

you can achieve this by not editing the file with an editor or even using echo 
"text" > file.txt...

but instead save a new temp file, then mv tempfile.txt file.txt

here is an example of one that i use:

__

"drawtext=fontfile=/path/font.ttf:fontsize=28:fix_bounds=true:fontcolor=gray:alpha=.5:textfile=/path/file.txt:reload=1:y=(h-text_h)-5:x=(w-text_w)/2"
__


this creates semi-transparent gray text at the top center of the frame, and 
will auto-scale if the amount of text exceeds the width of the frame, so text 
won't get truncated.

if you are using this in a livestream scenario, you can see the text updates in 
real time.



i have not tried this when the input or output are not live streams...

i would suspect it would be kind of hard to get the timing right since the 
input/output can be read/written faster than real time.


on a side note:
im currently trying to figure out if its possible to use text expansion to pull 
metadata values from within the stream, and use in drawtext output.

an example would be an mp3 audio live stream such as icecast or shoutcast, when 
song titles change in the stream, to display that value in drawtext output 
which updates mid-stream.

it looks like it could be done but im just struggling a bit trying to figure 
out the correct syntax, metadata keys, etc.

so if anyone has any insight on that. im all ears.

-DL


On Aug 6, 2017, at 11:39:43 AM, tasos  wrote:

Hello.
I know that this may be a stupid question but is it possible while encoding to 
apply a filter that displays text for example?
I mean ffmpeg is already running and produces output.
I don't want to take that output and apply the text filter there. I want to 
apply it on the already running ffmpeg process.
Thanks!





Re: [FFmpeg-user] displaying real time text on output

2017-08-06 Thread DopeLabs
you can use the drawtext filter.

i have several drawtext filters i run, displaying audio track titles, site 
name, original air date (or 'LIVE' if live).

the trick to having the text update mid stream or mid encode is to specify 
textfile="/path/file.txt":reload=1

reload=1 will cause ffmpeg to check the file every frame. 

you need to update the file atomically or ffmpeg may partially read the file, 
or even fail.

you can achieve this by not editing the file with an editor or even using echo 
"text" > file.txt... 

but instead save a new temp file, then mv tempfile.txt file.txt
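a minimal sketch of that atomic update (file names are placeholders; the temp file must be on the same filesystem so mv is a rename, not a copy):

```shell
#!/bin/sh
# Write the new title to a temp file first, then rename it into place.
# rename() is atomic on the same filesystem, so ffmpeg's reload=1 never
# sees a half-written file.
printf '%s\n' "Artist - New Title" > /tmp/title.txt.tmp
mv /tmp/title.txt.tmp /tmp/title.txt
```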

here is an example of one that i use:

__

"drawtext=fontfile=/path/font.ttf:fontsize=28:fix_bounds=true:fontcolor=gray:alpha=.5:textfile=/path/file.txt:reload=1:y=(h-text_h)-5:x=(w-text_w)/2"
__


this creates semi-transparent gray text at the top center of the frame, and 
will auto-scale if the amount of text exceeds the width of the frame, so text 
won't get truncated.

if you are using this in a livestream scenario, you can see the text updates in 
real time.



i have not tried this when the input or output are not live streams... 

i would suspect it would be kind of hard to get the timing right since the 
input/output can be read/written faster than real time.  


on a side note:
im currently trying to figure out if its possible to use text expansion to pull 
metadata values from within the stream, and use in drawtext output. 

an example would be an mp3 audio live stream such as icecast or shoutcast, when 
song titles change in the stream, to display that value in drawtext output 
which updates mid-stream.

it looks like it could be done but im just struggling a bit trying to figure 
out the correct syntax, metadata keys, etc. 

so if anyone has any insight on that. im all ears.

-DL

> On Aug 6, 2017, at 11:39:43 AM, tasos  wrote:
> 
> Hello.
> I know that this may be a stupid question but is it possible while encoding 
> to apply a filter that displays text for example?
> I mean ffmpeg is already running and produces output.
> I don't want to take that output and apply the text filter there. I want to 
> apply it on the already running ffmpeg process.
> Thanks!


Re: [FFmpeg-user] Speeding up a script.

2017-08-06 Thread Paul B Mahol
On 8/6/17, Evert Vorster  wrote:
> Hi there.
> I am using a quite convoluted filter in ffmpeg.
> -
> #!/bin/bash
> #This will split, defish, blend and re-assemble Samsung Gear 360 video
> map_dir="/data/Projects/RemapFilter"
> ffmpeg -y -i "$1" \
> -i $map_dir/lx.pgm -i $map_dir/ly.pgm -loop 1 \
> -i $map_dir/Alpha-Map.png \
> -i $map_dir/rx.pgm -i $map_dir/ry.pgm \
> -c:v hevc_nvenc -rc constqp -qp 26 -cq 26 \
> -filter_complex \
> "[0:v]eq=contrast=0.8:brightness=-0.01:gamma=0.7:saturation=0.8[bright]; \
>  [bright]split=2[in1][in2]; \
>  [in1]crop=in_w/2:in_h:0:in_h[l_crop];\
>  [in2]crop=in_w/2:in_h:in_w/2:in_h[r_crop]; \
>  [3]alphaextract[alf]; \
>  [l_crop]vignette=angle=PI/4.6:mode=backward[l_vignette]; \
>  [l_vignette][1][2]remap[l_remap]; \
>  [r_crop]vignette=angle=PI/4.8:mode=backward[r_vignette]; \
>  [r_vignette][4][5]remap[r_remap]; \
>  [l_remap]crop=in_w:1920:0:(in_h-1920)/2[l_rm_crop]; \
>  [r_remap]crop=in_w:1920:0:(in_h-1920)/2[r_rm_crop]; \
>  [l_rm_crop][alf]alphamerge[l_rm_crop_a]; \
>  [l_rm_crop_a]split=2[l_rm_crop1][l_rm_crop2]; \
>  [l_rm_crop1]crop=in_w/2:in_h:0:0[l_rm_crop_l]; \
>  [l_rm_crop2]crop=in_w/2:in_h:in_w/2:0[l_rm_crop_r]; \
>  [0:v][r_rm_crop]overlay=(1920-(2028/2)):0[ov1]; \
>  [ov1][l_rm_crop_l]overlay=((1920+2028/2)-(2028-1920)):0[ov2]; \
>  [ov2][l_rm_crop_r]overlay=0:0[out]" \
> -map [out] -map 0:a "$1_Remapped.mp4"
> -
>
> When this runs, only one of my CPUs is showing any activity.
> Is there a way of telling ffmpeg to process these steps in the filter in
> parallel?

Have you checked which filter takes most of the processing time?

>
> Kind regards,
> Evert Vorster
>
> Isometrix Acquistion Superchief

Re: [FFmpeg-user] Documentation for Ffmpeg remap filter

2017-08-06 Thread Paul B Mahol
On 8/6/17, Evert Vorster  wrote:
> Hi there.
>
> I have been able to use ffmpeg to convert dual fisheye samsung gear 360
> footage to equirectangular footage with ffmpeg.
>
> The example program given in the manual does an OK job of showing what the remap
> filter is capable of; however, to really get the best from it the .pgm maps
> need to be created with something that takes lens characteristics into
> account.
>
> Passing the -c parameter to Panotool's nona generates the required
> translation maps in .tif format, and then it's a doddle to convert to .pgm
> with ImageMagick.

FFmpeg can read tif format just fine last time I checked.

>
> Is there a way to update the manual a little for other people who want to
> follow in my footsteps?
>
> For the curious, here is the contents of the script that I use.
> --
> #!/bin/bash
> #This will split, defish, blend and re-assemble Samsung Gear 360 video
> map_dir="/data/Projects/RemapFilter"
> ffmpeg -y -i "$1" \
> -i $map_dir/lx.pgm -i $map_dir/ly.pgm -loop 1 \
> -i $map_dir/Alpha-Map.png \
> -i $map_dir/rx.pgm -i $map_dir/ry.pgm \
> -c:v hevc_nvenc -rc constqp -qp 26 -cq 26 \
> -filter_complex \
> "[0:v]eq=contrast=0.8:brightness=-0.01:gamma=0.7:saturation=0.8[bright]; \
>  [bright]split=2[in1][in2]; \
>  [in1]crop=in_w/2:in_h:0:in_h[l_crop];\
>  [in2]crop=in_w/2:in_h:in_w/2:in_h[r_crop]; \
>  [3]alphaextract[alf]; \
>  [l_crop]vignette=angle=PI/4.6:mode=backward[l_vignette]; \
>  [l_vignette][1][2]remap[l_remap]; \
>  [r_crop]vignette=angle=PI/4.8:mode=backward[r_vignette]; \
>  [r_vignette][4][5]remap[r_remap]; \
>  [l_remap]crop=in_w:1920:0:(in_h-1920)/2[l_rm_crop]; \
>  [r_remap]crop=in_w:1920:0:(in_h-1920)/2[r_rm_crop]; \
>  [l_rm_crop][alf]alphamerge[l_rm_crop_a]; \
>  [l_rm_crop_a]split=2[l_rm_crop1][l_rm_crop2]; \
>  [l_rm_crop1]crop=in_w/2:in_h:0:0[l_rm_crop_l]; \
>  [l_rm_crop2]crop=in_w/2:in_h:in_w/2:0[l_rm_crop_r]; \
>  [0:v][r_rm_crop]overlay=(1920-(2028/2)):0[ov1]; \
>  [ov1][l_rm_crop_l]overlay=((1920+2028/2)-(2028-1920)):0[ov2]; \
>  [ov2][l_rm_crop_r]overlay=0:0[out]" \
> -map [out] -map 0:a "$1_Remapped.mp4"
> 
> I hope this helps somebody.
>
> Kind regards
> --
> Evert Vorster
> Isometrix Acquistion Superchief

[FFmpeg-user] [hw] strange stream behaviour

2017-08-06 Thread hw
Hi,

there are video streams that show strange behaviour when recorded with
ffmpeg: ffmpeg will record the stream for an apparently random amount of
time, ranging from one minute to ten minutes, and then stop recording.
Once it stops recording, ffmpeg continues to run indefinitely, neither
receiving nor recording anything.

I have programmed my recording application that uses ffmpeg to overcome
this problem such that it monitors the size of the recorded file.  If
the size doesn't change for at least eight seconds, the ffmpeg process
that is supposed to record it is killed and a new one is started to
record the same stream.

When the recording is finished, I have a number of files (chunks) that
were recorded, and I can use ffmpeg to concatenate these files.  When
watching the concatenated version of the recording, the video that has
been recorded shows "backjumps": That means there are a few seconds that
were at the end of chunk N which also are at the beginning of chunk
N+1.  How many seconds are overlapping varies.

This is, of course, annoying.  It is also puzzling because I used to
think that the recording stops because there sometimes is insufficient
bandwidth to continue to record.  This doesn't seem to be the case,
though:


The stream being recorded is an hls stream accessed via the http URL of
an m3u8 playlist.

There are such streams that can be played with ffplay without
interruption for hours.  (I haven't tried recording those yet.  Since
the streams that can not be recorded continuously also cut off when
being played with ffplay, I can assume it would be possible to record
the streams that play uninterrupted as continuously as they play.)


Now I'm assuming that the problem isn't insufficient bandwidth or an
issue with ffmpeg, but more likely an incompatible or troubled server.
That hls provides the "stream" (I wouldn't call that a stream because
"stream" implies a continuous flow) in small chunks can explain the
annoying overlaps I am observing:

When the recording is restarted, it starts with a chunk that already has
been received and continues with the next chunks provided, some of which
also have already been recorded.

It can take about 10 seconds for ffmpeg to be restarted.  I have not
observed a single missing chunk, not even with a two hour movie
scattered over thirty single files.  There have only been overlaps.

If there was a bandwidth problem, I would expect chunks to be missing.
Since no chunks are missing even after a rather lengthy break in
recording, all chunks seem to be available on the server for more than
ten seconds.

So what if the time between the server telling ffmpeg 'use this next
chunk' and the availability of the chunk on the server is somewhat long?
Is there a time window within which ffmpeg tries to get the next chunk,
and is that window too short?  Is there an option to make that window
longer, or a place in the source where I could make it longer?

What else might cause the interruptions?

Why is ffmpeg unaware that recording has ceased and doesn't quit when
that happens?  It is silly that the control program needs to poll the
size of the output file every eight seconds to figure out if something
is still being recorded or not.  Doing that isn't really feasible,
either, because it is not guaranteed that eight seconds is always the
right polling interval.

Why doesn't ffmpeg quit after it has been recording for the amount of
time specified with the '-t' option?  For example:


ffmpeg -n -loglevel 8 -i http://example.com/foo.m3u8 -codec copy -t 00:10:00 -f 
mpegts example


Ffmpeg does not stop after the ten minutes have passed when the
recording has stopped before that.  That doesn't make any sense because
ffmpeg obviously either figures it is still recording --- in which case
it needs to stop after ten minutes --- or it figures it is not
recording, in which case it should quit before the ten minutes have
passed, or at least when they have passed.  Having it run indefinitely
is not the right choice.

So what can I do to record without interruptions, or at least without
the annoying overlaps?
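[For what it's worth, ffmpeg's HTTP protocol layer has reconnect and timeout options that may help with stalled HLS reads — a sketch, not a confirmed fix; availability depends on the build, and the timeout value here is an arbitrary choice:]

```shell
# -reconnect* tell the HTTP reader to retry dropped connections;
# -rw_timeout (in microseconds) makes a stalled read error out instead
# of blocking forever, so ffmpeg exits rather than hanging silently.
ffmpeg -n -loglevel 8 \
  -reconnect 1 -reconnect_streamed 1 -reconnect_delay_max 10 \
  -rw_timeout 10000000 \
  -i http://example.com/foo.m3u8 -codec copy -t 00:10:00 \
  -f mpegts example
```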

[FFmpeg-user] Documentation for Ffmpeg remap filter

2017-08-06 Thread Evert Vorster
Hi there.

I have been able to use ffmpeg to convert dual fisheye samsung gear 360
footage to equirectangular footage with ffmpeg.

The example program given in the manual does an OK job of showing what the remap
filter is capable of; however, to really get the best from it the .pgm maps
need to be created with something that takes lens characteristics into
account.

Passing the -c parameter to Panotool's nona generates the required
translation maps in .tif format, and then it's a doddle to convert to .pgm
with ImageMagick.
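For example, the conversion step might look like this (a sketch; the file names match the script below, but the exact nona output names and the need for 16-bit depth are assumptions):

```shell
# Convert nona's TIFF displacement maps to the PGM files the remap
# filter reads; -depth 16 keeps the full map precision.
convert lx.tif -depth 16 lx.pgm
convert ly.tif -depth 16 ly.pgm
convert rx.tif -depth 16 rx.pgm
convert ry.tif -depth 16 ry.pgm
```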

Is there a way to update the manual a little for other people who want to
follow in my footsteps?

For the curious, here is the contents of the script that I use.
--
#!/bin/bash
#This will split, defish, blend and re-assemble Samsung Gear 360 video
map_dir="/data/Projects/RemapFilter"
ffmpeg -y -i "$1" \
-i $map_dir/lx.pgm -i $map_dir/ly.pgm -loop 1 \
-i $map_dir/Alpha-Map.png \
-i $map_dir/rx.pgm -i $map_dir/ry.pgm \
-c:v hevc_nvenc -rc constqp -qp 26 -cq 26 \
-filter_complex \
"[0:v]eq=contrast=0.8:brightness=-0.01:gamma=0.7:saturation=0.8[bright]; \
 [bright]split=2[in1][in2]; \
 [in1]crop=in_w/2:in_h:0:in_h[l_crop];\
 [in2]crop=in_w/2:in_h:in_w/2:in_h[r_crop]; \
 [3]alphaextract[alf]; \
 [l_crop]vignette=angle=PI/4.6:mode=backward[l_vignette]; \
 [l_vignette][1][2]remap[l_remap]; \
 [r_crop]vignette=angle=PI/4.8:mode=backward[r_vignette]; \
 [r_vignette][4][5]remap[r_remap]; \
 [l_remap]crop=in_w:1920:0:(in_h-1920)/2[l_rm_crop]; \
 [r_remap]crop=in_w:1920:0:(in_h-1920)/2[r_rm_crop]; \
 [l_rm_crop][alf]alphamerge[l_rm_crop_a]; \
 [l_rm_crop_a]split=2[l_rm_crop1][l_rm_crop2]; \
 [l_rm_crop1]crop=in_w/2:in_h:0:0[l_rm_crop_l]; \
 [l_rm_crop2]crop=in_w/2:in_h:in_w/2:0[l_rm_crop_r]; \
 [0:v][r_rm_crop]overlay=(1920-(2028/2)):0[ov1]; \
 [ov1][l_rm_crop_l]overlay=((1920+2028/2)-(2028-1920)):0[ov2]; \
 [ov2][l_rm_crop_r]overlay=0:0[out]" \
-map [out] -map 0:a "$1_Remapped.mp4"

I hope this helps somebody.

Kind regards
-- 
Evert Vorster
Isometrix Acquistion Superchief

[FFmpeg-user] Speeding up a script.

2017-08-06 Thread Evert Vorster
Hi there.
I am using a quite convoluted filter in ffmpeg.
-
#!/bin/bash
#This will split, defish, blend and re-assemble Samsung Gear 360 video
map_dir="/data/Projects/RemapFilter"
ffmpeg -y -i "$1" \
-i $map_dir/lx.pgm -i $map_dir/ly.pgm -loop 1 \
-i $map_dir/Alpha-Map.png \
-i $map_dir/rx.pgm -i $map_dir/ry.pgm \
-c:v hevc_nvenc -rc constqp -qp 26 -cq 26 \
-filter_complex \
"[0:v]eq=contrast=0.8:brightness=-0.01:gamma=0.7:saturation=0.8[bright]; \
 [bright]split=2[in1][in2]; \
 [in1]crop=in_w/2:in_h:0:in_h[l_crop];\
 [in2]crop=in_w/2:in_h:in_w/2:in_h[r_crop]; \
 [3]alphaextract[alf]; \
 [l_crop]vignette=angle=PI/4.6:mode=backward[l_vignette]; \
 [l_vignette][1][2]remap[l_remap]; \
 [r_crop]vignette=angle=PI/4.8:mode=backward[r_vignette]; \
 [r_vignette][4][5]remap[r_remap]; \
 [l_remap]crop=in_w:1920:0:(in_h-1920)/2[l_rm_crop]; \
 [r_remap]crop=in_w:1920:0:(in_h-1920)/2[r_rm_crop]; \
 [l_rm_crop][alf]alphamerge[l_rm_crop_a]; \
 [l_rm_crop_a]split=2[l_rm_crop1][l_rm_crop2]; \
 [l_rm_crop1]crop=in_w/2:in_h:0:0[l_rm_crop_l]; \
 [l_rm_crop2]crop=in_w/2:in_h:in_w/2:0[l_rm_crop_r]; \
 [0:v][r_rm_crop]overlay=(1920-(2028/2)):0[ov1]; \
 [ov1][l_rm_crop_l]overlay=((1920+2028/2)-(2028-1920)):0[ov2]; \
 [ov2][l_rm_crop_r]overlay=0:0[out]" \
-map [out] -map 0:a "$1_Remapped.mp4"
-

When this runs, only one of my CPUs is showing any activity.
Is there a way of telling ffmpeg to process these steps in the filter in
parallel?

Kind regards,
Evert Vorster

Isometrix Acquistion Superchief

[FFmpeg-user] Using ffmpeg to sanitize video with CyberLink PowerDirector 14

2017-08-06 Thread Paul Sheer
Hi there,

I use ffmpeg as follows to transfer video from my Samsung Galaxy Note
2 and Samsung Galaxy Note 4 to PowerDirector 14:

ffmpeg.exe -i samsung-raw-video.mp4 \
  -qscale:v 1 -r 30 -codec:v msmpeg4  cooked-import-video.wmv

Now YouTube wants 16:9 video, and 1280x720 res seems like a good
compromise between quality and storage. I really see no need for
higher resolution, but anything lower starts to get closer to "regular
TV" video, which is not acceptable. Also, 1280x720 is a sort-of
standard resolution on a Samsung Galaxy Note. Similarly for 30fps. So
this is my reason for shooting all my video in 1280x720@30fps.

The problem is that many video editing suites get my audio out of sync
with the video -- this can really make your hair turn gray.  I was
told this is because those suites don't understand variable-frame-rate
encoding. Is that the reason?  So the trick is to convert the video
into "ANYTHING" that is 30fps and 1280x720 and not lose quality.
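[One way to force a constant frame rate while keeping the resolution is a sketch like the following — this is not the poster's command; the near-lossless x264 settings and file names are assumptions:]

```shell
# Resample the variable-frame-rate phone footage to a constant 30 fps
# (the fps filter duplicates/drops frames as needed), encoding to a
# near-lossless H.264 intermediate that editors can ingest.
ffmpeg -i samsung-raw-video.mp4 \
  -vf fps=30 \
  -c:v libx264 -crf 12 -preset fast \
  -c:a copy intermediate.mp4
```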

I had no idea what "arbitrary" video format would be importable into
PowerDirector. I looked through the plethora of different formats
supported by ffmpeg. Some do not allow the specific resolution of
1280x720@30fps.

After trying many formats I found that

-qscale:v 1 -r 30 -codec:v msmpeg4  output.wmv

works well.


So that is the background. I have been doing all my videos like this.
My procedure is to process all the raw footage with the above options
as a first step before I start editing. Here is my channel:

   https://www.youtube.com/channel/UCmbbkAng3soRYK9xV53Qu5g

So my problem is solved.


So these are my questions please:

What really is the "proper" way of processing video from Android with
this problem of the "variable-frame-rate (??)" un-syncing the audio?
How do other people do it?
What is a good intermediate format that does not alter the resolution,
quality, and uses 30 fps?
What is special about "msmpeg4" that it seems like the only format I
could find that worked well for importing into PowerDirector and also
met my criteria?
Why does ffmpeg reduce the resolution down to worse-than-VHS quality
when I omit the "qscale" option?
Surely retaining the quality should be the default when converting to
a different format?

Thanks

Paul