Hi,

As far as we know (and here in Vigo we have been using ffmpeg in our
encoders for years), the -threads option is present in all the major ffmpeg
versions. Josh, I don't know if you mean that multithreading doesn't work
with all the codecs, but adding "-threads 0" won't do any harm anyway (and
I believe it works for most of them). BTW, a value of "0" means that ffmpeg
will calculate the optimal number of threads for the encoding, and we also
use that argument all the time with good results.
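For reference, a typical invocation looks something like this (the filenames
and codec choices are only illustrative, not taken from an actual Matterhorn
workflow):

```shell
# "-threads 0" lets ffmpeg pick the thread count itself; with a
# thread-capable codec (e.g. libx264) it can use the available cores.
ffmpeg -i input.mov -threads 0 -vcodec libx264 output.mp4
```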

Just my two cents.

Happy New Year :P

Rubén


2010/12/31 Josh Holtzman <[email protected]>

> On Fri, Dec 31, 2010 at 5:54 AM, Brian Bolt <[email protected]> wrote:
>
>> I have a distributed install running 1.0.1.  The Admin and Engage servers
>> each have 16 cores (Intel).  The Worker server has 24 cores (AMD).  My first
>> one minute capture took about five minutes to process.  My second capture is
>> an hour long, and at 1 hour into processing it is currently encoding the
>> presentation (screen) for preview.
>>
>> I expected that the Worker server would endure the greatest processing
>> load, and 'top' shows that ffmpeg is using 95 - 110% cpu.  However, when I
>> run mpstat -P ALL, all processors are greater than 95% idle.  Based on what
>> I saw in my VM test environment, I anticipated that a 1 hour capture would
>> take well under 15 minutes to process.
>>
>
> Unless you tell ffmpeg to use multiple threads, it won't.  I've seen
> references to a "-threads [n]" option, but from what I understand, it's
> codec dependent.  All of those cores you've got in your worker will help
> you process more videos concurrently, but they won't help you process *one*
> video any faster than a single-core machine.
>
> We're actively investigating gstreamer as a replacement for ffmpeg, which
> should enable multithreaded encoding.  You can follow the discussion on the
> matterhorn (dev) list.
>
>
>> On another note, the amount of space consumed is very large: 345 MB for
>> the one minute capture and 15.6 GB for the one hour capture.  Note that I
>> don't yet have the workflow configured for streaming, so I anticipate
>> that these numbers would be even larger if streaming were in the mix.
>>
>> [r...@server matterhorn]# du -h --max-depth=1
>> 25M     ./streams
>> 8.8G    ./files
>> 6.5G    ./workspace
>> 68K     ./server1
>> 2.5K    ./inbox
>> 78K     ./searchindex
>> 22M     ./downloads
>> 16G     .
>>
>
> Are ./workspace and ./files on the same volume?  If so, the 6.5G in
> workspace should be hard links, and therefore these won't actually consume
> space on the drive.  Of the 8.8G in ./files, do you have any idea how large
> the source files are?  If you're capturing large source files, there's not
> much you can do to save space here.  If the problem is that there are
> multiple copies of the same file, you can tweak the workflow (e.g. remove
> unneeded encoding formats such as AVI) to reduce the storage requirements.
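The hard-link behaviour Josh describes is easy to verify directly. A generic
sketch (the paths are illustrative; `stat -c` is the GNU form):

```shell
# A hard link adds a second name for the same inode, so the data
# occupies disk space only once, however many names point at it.
dir=$(mktemp -d)
dd if=/dev/zero of="$dir/source.bin" bs=1M count=10 2>/dev/null
ln "$dir/source.bin" "$dir/workspace-copy.bin"
stat -c '%h' "$dir/source.bin"   # link count: 2 (two names, one inode)
du -sh "$dir"                    # ~10M, not 20M: shared blocks counted once
rm -rf "$dir"
```

If ./workspace and ./files sit on different volumes, hard links are not
possible and the files really are duplicated.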
>
> Josh
>
>
>>
>> Thoughts and ideas are greatly appreciated.
>>
>> Thanks,
>> Brian
>>
>>
>
> _______________________________________________
> Matterhorn-users mailing list
> [email protected]
> http://lists.opencastproject.org/mailman/listinfo/matterhorn-users
>
>
