[FFmpeg-user] Scaling a video with multiple different resolutions

2017-01-12 Thread Alex Speller
I have a webm file that has multiple different resolutions in it (it's a
screen capture of a window that changes dimensions).

https://www.dropbox.com/s/ptueirabmmht0fr/4be7fdb7-d7e9-41b4-ba26-e20a3eeb6026.webm?dl=0

Is there any way to "normalize" the dimensions and output a video with a
constant resolution, e.g. adding black borders around the picture when the
resolution changes?

The way the video looks when played on the Dropbox page above (in Chrome or
Firefox) is what I'd like to produce. I've tried experimenting with the
scale, setsar and setdar filters, but I can't figure out how to get output
that isn't simply distorted.
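Something along these lines is what I have in mind (the 1280x720 target is
just an example, and I'm not sure how ffmpeg copes with the mid-stream
resolution changes in the webm):

```shell
# Fit each frame inside a fixed 1280x720 canvas, preserving aspect
# ratio, then pad the remainder with black and reset the SAR.
ffmpeg -i input.webm \
  -vf "scale=1280:720:force_original_aspect_ratio=decrease,\
pad=1280:720:(ow-iw)/2:(oh-ih)/2,setsar=1" \
  output.webm
```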

Any help appreciated, thanks!
___
ffmpeg-user mailing list
ffmpeg-user@ffmpeg.org
http://ffmpeg.org/mailman/listinfo/ffmpeg-user

To unsubscribe, visit link above, or email
ffmpeg-user-requ...@ffmpeg.org with subject "unsubscribe".

Re: [FFmpeg-user] Question about video compositing

2017-01-09 Thread Alex Speller
Ah, thanks a lot for the suggestion, but I should have been clearer: I need
to do this in an automated fashion for arbitrary sets of videos, so it has
to be a command-line tool (or a library, I guess) that I can integrate into
an automated pipeline in my app.

Thanks,
Alex

On Tue, Jan 10, 2017 at 3:37 AM Steve Boyer  wrote:

> > Any suggestions on if either of these approaches is better, or any
> > alternatives? Thanks!
> >
>
> Hi! I've done something similar, but I ended up using a non-linear video
> editor, specifically kdenlive. It supports keyframe animation, so combining
> that with fade-in/fade-from-black and fade-out/fade-to-black (audio and
> video filters) lets you combine multiple tracks into a single output video,
> with animations/fades when a stream ends. The downsides: kdenlive, despite
> being the best video editor on Linux (IMHO), is a little buggy; rendering
> and output are entirely CPU-based; for stability it is recommended to use a
> single thread; you have to place and time everything manually; and you need
> a GUI to do it all.
>
> I'd be happy to help with suggestions if you go this route, but understand
> if you want to go a different way (and I'd be interested if anyone has
> other suggestions how this can be accomplished via FFmpeg or CLI tools).
>
> Steve
>
>

[FFmpeg-user] Question about video compositing

2017-01-09 Thread Alex Speller
I have a question about video compositing. I’ve included the text of the
question below, but I’ve also put it in a gist with nicer formatting
here: https://gist.github.com/alexspeller/aefdd5a6d7100d28d0bbc4838527f797

I have multiple mp4 video files and I want to composite them into a
single video. The files are of different lengths, and each also has an
audio track.

The tricky thing is, I want the layout to change depending on how many
streams are currently visible.

As a concrete example, say I have 3 video files:

| File  | Duration | Start | End |
|---|--|---|-|
| a.mp4 | 30s  | 0s| 30s |
| b.mp4 | 10s  | 10s   | 20s |
| c.mp4 | 15s  | 15s   | 30s |


So at t=0 seconds, I want the video to look like this:

```
+-----------------------------+
|                             |
|                             |
|                             |
|            a.mp4            |
|                             |
|                             |
|                             |
+-----------------------------+
```

At t=10s, I want the video to look like this:

```
+--------------------+--------+
|                    |        |
|                    | a.mp4  |
|                    |        |
|                    +--------+
|       b.mp4                 |
|                             |
|                             |
|                             |
+-----------------------------+
```

At t=15s, I want the video to look like this:

```
+--------------------+--------+
|                    |        |
|                    | a.mp4  |
|                    |        |
|                    +--------+
|       b.mp4        |        |
|                    | c.mp4  |
|                    |        |
|                    +--------+
|                             |
+-----------------------------+
```

And at t=20s until the end, I want the video to look like this:

```
+--------------------+--------+
|                    |        |
|                    | a.mp4  |
|                    |        |
|                    +--------+
|       c.mp4                 |
|                             |
|                             |
|                             |
+-----------------------------+
```

Ideally there would be some animated transitions between the states,
but that's not essential.

I have found two possible approaches that might work, but I'm not sure
which is best. The first is using
[filters](https://trac.ffmpeg.org/wiki/Create%20a%20mosaic%20out%20of%20several%20input%20videos)
to achieve the result, but I'm not sure whether it will cope well with (a)
the changing layouts and (b) keeping the audio free of artefacts when the
layout changes.

The other approach I thought of would be exporting all frames to
images, building new frames with imagemagick, and then layering the
new frames on top of the audio like in [this blog
post](https://broadcasterproject.wordpress.com/2010/05/18/how-to-layerremix-videos-with-free-command-line-tools/).
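Roughly, that approach would look like this (frame rates, sizes and paths
are placeholders, and the per-frame compositing step would be repeated in a
loop over the numbered files):

```shell
# 1. Dump each input to numbered PNG frames at a fixed rate.
ffmpeg -i a.mp4 -r 25 frames_a/%05d.png
ffmpeg -i b.mp4 -r 25 frames_b/%05d.png

# 2. Build one composite frame with ImageMagick (positions are placeholders;
#    the parentheses resize only a.mp4's frame, not the whole image list).
convert -size 1280x720 canvas:black \
  frames_b/00001.png -geometry +0+0 -composite \
  \( frames_a/00001.png -resize 320x180 \) -geometry +960+0 -composite \
  out/00001.png

# 3. Reassemble the composited frames over the audio.
ffmpeg -r 25 -i out/%05d.png -i a.mp4 -map 0:v -map 1:a -shortest out.mp4
```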

Any suggestions on whether either of these approaches is better, or any
alternatives? Thanks!
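For the filter approach, my current rough sketch is below. The canvas size,
positions and timings are placeholders, I haven't verified the audio
behaviour, and the picture-in-picture state for c.mp4 between 15s and 20s
is omitted for brevity:

```shell
# 1280x720 black canvas; b.mp4/c.mp4 are shifted to start at 10s/15s via
# setpts (video) and adelay (audio, in milliseconds per channel).
# a.mp4 is full-screen until 10s, then a small top-right overlay.
ffmpeg -i a.mp4 -i b.mp4 -i c.mp4 -filter_complex "
  color=black:size=1280x720:duration=30[bg];
  [0:v]split[af][as];
  [af]scale=1280:720[afull];
  [as]scale=320:180[asmall];
  [1:v]scale=1280:720,setpts=PTS+10/TB[b];
  [2:v]scale=1280:720,setpts=PTS+15/TB[c];
  [bg][afull]overlay=enable='lt(t,10)'[v0];
  [v0][b]overlay=enable='between(t,10,20)'[v1];
  [v1][c]overlay=enable='gte(t,20)'[v2];
  [v2][asmall]overlay=x=W-w:y=0:enable='gte(t,10)'[v];
  [1:a]adelay=10000|10000[ba];
  [2:a]adelay=15000|15000[ca];
  [0:a][ba][ca]amix=inputs=3[aud]" \
  -map "[v]" -map "[aud]" -t 30 out.mp4
```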