On Wed, Jan 26, 2000 at 01:19:20AM -0700, Mark Taylor wrote:
> But it is possible to parallelize at the frame level
> as Acy suggests: each cpu encodes 100 frames,
> but with a 2 frame overlap because of the above problems.
> i.e. cpu0: encodes frames 0-99
> cpu1: encodes frames 98-199 (and discards frames 98 and 99)
> cpu2: encodes frames 198-299 (discards frames 198, 199)
> etc...
> And each block of 100 frames would have to make sure the bitreservoir was
> not used for the first 3 frames.
Please excuse my ignorance of the MPEG format: does each frame always correspond to a fixed number of input samples? I've had an on-the-side project to build a very basic distributed MP3 encoder which simply splits the original .pcm file into smaller (e.g. 30-second) chunks, runs some number of lame/mp3enc/etc. processes, one per chunk, and recombines the output back into the final .mp3.
If each MPEG frame always covers a set number of samples in the original .pcm, then this splitting should be straightforward. It's not perfectly efficient, but I plan to make the chunk size arbitrary, so that you can split the file into 30-second, 60-second, 5-second, etc. chunks depending on how big the original file is and how many computers (processors) you have available. I wouldn't even need an option in the MP3 encoder to discard the extra encoded frames, because ideally this front-end will be intelligent enough to know about frames and manipulate the final stream coming out of the encoder.
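For what it's worth, here is the sort of chunk-boundary arithmetic I have in mind for the front-end, assuming the answer to my question below is yes and an MPEG-1 Layer III frame always covers 1152 samples per channel. All the names and numbers here are made up for illustration, nothing is taken from lame's sources:

/* Round each PCM chunk down to a whole number of frames so every chunk
 * starts exactly on a frame boundary in the original .pcm.
 * SAMPLES_PER_FRAME is an assumption (MPEG-1 Layer III). */
#include <stdio.h>

#define SAMPLES_PER_FRAME 1152

int main(void)
{
    long sample_rate   = 44100;  /* example source rate */
    long chunk_seconds = 30;     /* user-selected chunk length */

    long frames_per_chunk  = (sample_rate * chunk_seconds) / SAMPLES_PER_FRAME;
    long samples_per_chunk = frames_per_chunk * SAMPLES_PER_FRAME;

    printf("%ld frames per chunk, %ld samples (%.3f s) per chunk\n",
           frames_per_chunk, samples_per_chunk,
           (double)samples_per_chunk / sample_rate);
    return 0;
}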
But all of this is possible only if each frame represents a consistent number of samples. From my limited knowledge, I am assuming that this _is_ the case, even with VBR.
Can somebody please verify this for me? :)
--
: Andre Pang <[EMAIL PROTECTED]> - Purruna Pty Ltd - ph# 0411.882299 :
: #ozone - http://www.vjolnir.org/ozone/ :
--
MP3 ENCODER mailing list ( http://geek.rcc.se/mp3encoder/ )