On 13.09.2013 09:27, Tim E. Real wrote:
> On September 12, 2013 07:18:37 PM Florian Jung wrote:
>> Hi,
>>
>> I wonder whether MusE can use multiple CPU cores for audio processing,
>> e.g. for plugins or my AudioStretchers.
>>
>> Audio stretching is placed in void AudioPrefetch::prefetch(bool doSeek).
>> Instead of reading from the file, I'm now reading from the AudioStream,
>> which reads from the file, handles MP3/codecs, and handles stretching
>> internally.
>>
>> I guess that we can easily make the AudioPrefetch multithreaded (just
>> create multiple objects, and instead of iterating through
>> song->allWavetracks(), do some kind of load balancing).
>>
>> Comments on this?
>>
>> But can we support multithreading for the audio effects as well?
>>
>> My thoughts on this:
>> - It would be cool!
>> - Instead of having Audio::process1() iterate through all tracks, let it
>>   communicate with its "worker threads".
>> - Each of them may call AudioTracks::copyData on just some of the
>>   AudioTracks.
>> - If inter-dependencies occur, solve them using mutexes (*).
>> - Communication with the workers works like this: each worker has
>>   a "new data available" semaphore and an "I'm done" semaphore.
>>   Audio::process1 will feed the data into the workers, and then release
>>   the "new data available" semaphore. This will cause the worker to start.
>> - After all workers are running, Audio::process will wait for all
>>   workers' "done" semaphores (*); when all are done, it continues.
>>
>> You probably thought "whoa, no, we may not use mutexes or semaphores
>> inside the realtime thread!" (*).
>>
>> Are you sure? I think this rule only applies when the mutex is shared
>> between a "realtime" thread and a "non-realtime" thread.
>> However, our workers are "realtime" threads as well. So the worst thing
>> that can happen is:
>>
>> No parallelisation is possible; all workers except one are waiting for a
>> mutex. Then we have effectively serialized the work, and we have what
>> Audio::process1() is doing right now anyway (with a little added
>> overhead, though).
>>
>> A worker thread may not call any "slow" system calls that might block,
>> nor may it do any bad worst-case operations.
>>
>> Note that this is a plan for the future; I have plenty to do right now
>> (audiostreams :)). But I'd like to know whether this is possible, or
>> whether I made a mistake in thinking.
>>
>> Please comment :)
>> Cheers,
>> flo
>
> Yeah, I've wondered how we might leverage multi-core CPU power.
> I don't know a lot about doing that. (Still a single-core CPU here.)
>
> But I was thinking:
> So far we've talked about the two available mechanisms in MusE:
>
> 1) The "do some light work within one cycle" with msgXXX functions.
Let me note one thing here: I am trying to *remove* most of the msgXXX
functions, because the work they do is duplicated in doUndo123. But that
will conceptually not change anything. However, I don't understand how
msgXXX can do work?

> 2) The "do some heavy work over several cycles" by putting the audio
> in the idle state with the special msgIdle function, which causes a
> silence glitch over those cycles.

I think we should eliminate all uses of msgIdle, so: no ;)
If you disagree, please tell me where msgIdle is really needed; I can
only imagine "loading a file". All other use cases should be handled with
audio messages.

[...]

> I will not pretend to understand fully, just vaguely :)

I had a talk on #lad on IRC today, and multithreading seems like a tough
problem. Additionally, I talked about prefetching and glitches. Results:

Multithreading is a really complex thing, because we may *not* -- as I
have suggested -- use semaphores. While we will indeed not have the
problem of "waiting for a slow thread", we *must* go through the OS
scheduler. And this scheduler is not realtime-capable; it has no
guaranteed scheduling time. Using semaphores may work most of the time,
but there are no guarantees, which makes them unsuitable for live
situations.

Instead, we could use JACK's async API. No idea about this, though.

Or we could create multiple JACK clients and let JACK do the hard work
for us. JACK2 is capable of multiprocessing (as long as there are no
interdependencies). This would mean that MusE is not "one sequencer" any
more, but in fact "multiple sequencers" which communicate mainly via
JACK transport. Stuff that goes through msgWhatever is sent to *every*
one of these JACK clients (aka threads), and their processMsg function
decides whether the message applies to them or not.

Our AudioPrefetch is sub-optimal IMO, because things *will* glitch if
you touch stuff "in the near future" during playback.
However, fixing this would require LOTS of changes and LOTS of work, and
I don't think it's worth the effort. Such a small gain...

Additionally, only this AudioPrefetcher allows us to *trivially*
parallelize at least the audio stretchers (which are the largest part of
the processing work!). Since AudioPrefetch is not time-critical, we may
use conventional locking mechanisms, which makes life really easy.

My roadmap:
- Soon: parallelize AudioPrefetch.
- In the distant future, maybe never: make MusE many JACK clients,
  leading to some parallelization of audio processing/effects.

Reality is a bitch :(

Cheers,
flo
_______________________________________________
Lmuse-developer mailing list
[email protected]
https://lists.sourceforge.net/lists/listinfo/lmuse-developer
