On 06/09/2014 03:28 PM, Vesa wrote:
> I have also other ideas about thread optimization: again working on the
> assumption that "waiting in lines is bad"... I've identified two places
> in the code where we're using a push model when a pull model would
> probably be more efficient. The first is AudioPort. When multiple NPHs
> push their output into the shared AudioPort, they have to wait on a
> mutex because the writes have to be serialized, so this is probably
> inefficient. To optimize this, we should probably do something where
> each NPH just gives the AudioPort a pointer to its buffer, and the
> AudioPort then pulls the buffers in and mixes them in a single thread.
>
> Another similar point is where instrument/sample tracks push their
> output to a mixer channel. If several tracks push to the same channel,
> there's another "wait in line" situation there, and we could apply the
> same kind of fix there too.
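To make the quoted pull idea concrete: here's a minimal sketch of what the
port-side mixing could look like. None of this is existing LMMS code --
m_inputBuffers, m_portBuffer and the exact offset math are all assumptions --
it just shows the AudioPort summing registered buffers itself, in one thread,
with no mutex anywhere in the per-period path:

// Sketch only -- hypothetical members, not the current AudioPort code.
// sampleFrame and f_cnt_t are the existing LMMS typedefs (stereo frame,
// frame count); each play handle has registered a (buffer, offset) pair.
#include <QList>
#include <QPair>

void AudioPort::doProcessing( const f_cnt_t frames )
{
    // No per-period locking: m_inputBuffers only changes when play
    // handles are added or removed.
    for( const QPair< sampleFrame *, f_cnt_t > & in : m_inputBuffers )
    {
        sampleFrame * src = in.first;
        const f_cnt_t offset = in.second;
        for( f_cnt_t f = 0; f + offset < frames; ++f )
        {
            m_portBuffer[f + offset][0] += src[f][0];  // left
            m_portBuffer[f + offset][1] += src[f][1];  // right
        }
    }
    // ...then apply effects, volume and panning as before...
}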
Continuing on this... I've thought of an implementation plan:

1. Deprecate Mixer::bufferToPort().
2. Instead, each AudioPort will contain a member variable, a
   QList< QPair< sampleFrame*, f_cnt_t > >, which holds the buffers and
   offsets in pairs, so the port knows where to grab input from the
   PlayHandles.
3. Mixer::addPlayHandle() will check which AudioPort the play handle is
   connected to and add the PlayHandle's buffer and offset to the list.
4. Mixer::removePlayHandle() (or maybe the play handle's destructor --
   in fact that'd probably be more reliable) will remove the play handle
   from the AudioPort's list.
5. AudioPort::doProcessing() will go through the list, read from the
   pointed-to buffers, and mix them into its own buffer at the specified
   offsets (as in the sketch above).

If we then extend the same logic to the next stage:

6. Deprecate FxMixer::mixToChannel().
7. Instead, each FxChannel contains a QList< sampleFrame * > (no need to
   specify offsets here, as they're dealt with and normalized by the
   AudioPort).
8. I'm not sure how we'd update these lists... maybe with a signal from
   the FX channel selector? Or by modifying the setNextFxChannel()
   function in AudioPort to do this.
9. FxChannel::doProcessing() will then just treat the AudioPorts like
   any other sender. (A rough sketch of steps 3-4 and 7-9 follows at the
   end of this mail.)

Now, we'd still probably have to use some mutexes, mainly for the
"writing to the list" part, but since that only happens when the lists
change, we wouldn't have to lock every period. Compare this to the
current situation, where we have two bottlenecks with threads waiting on
mutexes every period while buffers are being written... With a pull
model, the different AudioPorts could still be processed in parallel,
but each would handle its own non-parallelizable mixing internally.
Combine this with the idea of adding thread priorities, and I think we
could see substantial performance gains.

Again, as a caveat, my understanding of threads is less than perfect, so
feel free to point out if I'm mistaken in any of this.
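And here's the rough sketch of the bookkeeping I mentioned above. Every name
that isn't already in the plan (audioPort(), inputBuffers(), inputListMutex(),
m_senderBuffers, m_buffer, ...) is made up for illustration; the point is only
that the mutex is taken when a list changes, never in the per-period mixing:

// Sketch only -- hypothetical accessors, not existing LMMS API.
#include <QMutexLocker>
#include <QPair>

// Step 3: register the play handle's buffer with its AudioPort.
void Mixer::addPlayHandle( PlayHandle * handle )
{
    AudioPort * port = handle->audioPort();        // assumed accessor
    QMutexLocker lock( &port->inputListMutex() );  // assumed member
    port->inputBuffers().append(
            qMakePair( handle->buffer(), handle->offset() ) );
    // ...existing play handle bookkeeping stays as it is...
}

// Step 4: unregister it again (or do this in PlayHandle's destructor).
void Mixer::removePlayHandle( PlayHandle * handle )
{
    AudioPort * port = handle->audioPort();
    QMutexLocker lock( &port->inputListMutex() );
    port->inputBuffers().removeAll(
            qMakePair( handle->buffer(), handle->offset() ) );
    // ...existing play handle bookkeeping stays as it is...
}

// Step 9: the FX channel pulls from the ports routed to it, instead of
// the ports calling FxMixer::mixToChannel().
void FxChannel::doProcessing( const f_cnt_t frames )
{
    for( sampleFrame * src : m_senderBuffers )  // assumed QList< sampleFrame * >
    {
        for( f_cnt_t f = 0; f < frames; ++f )
        {
            m_buffer[f][0] += src[f][0];  // assumed channel buffer
            m_buffer[f][1] += src[f][1];
        }
    }
    // ...then run the channel's effect chain as before...
}

Whether the removal ends up in removePlayHandle() or in the destructor, the
important part is that it's the only place that ever needs the lock.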