On March 11, 2013 11:52:29 AM Florian Jung wrote:
> "Tim E. Real" <[email protected]> schrieb:
> >I'm finally adding sample-accurate track controllers
> 
> Hi Tim,
> cool stuff.
> How are you going to achieve sample-accurateness? I'd like to ask you to
> keep the subticks in mind, they're kind of dormant, but some day i'll find
> the time to finish them, plus stretching :) Please tell me a bit more about
> it
> 
> Thanks,
> flo

I'm handling them exactly the same way as the plugins and synths, as in
 PluginI::apply(), DssiSynthIF::getData(), and VstNativeSynthIF::getData(),
 where we break up the processing period into chunks depending on
 when the various control changes happen.
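
Roughly the idea, as a sketch (ControlEvent, controlFifo, applyControl,
 processChunk and processPeriod are all made-up names for illustration,
 not the actual code):

    #include <deque>

    // Hypothetical stand-ins for the real types, just to illustrate:
    struct ControlEvent { unsigned frame; int id; double value; };
    static std::deque<ControlEvent> controlFifo;   // pending control changes

    static void applyControl(const ControlEvent&) { /* set vol/pan etc. */ }
    static void processChunk(unsigned /*pos*/, unsigned /*nsamp*/) { /* DSP */ }

    // Split the period [framePos, framePos + nframes) into chunks at each
    //  queued control change, so each chunk runs with constant control values.
    void processPeriod(unsigned framePos, unsigned nframes)
    {
      unsigned sample = 0;
      while(sample < nframes)
      {
        unsigned nsamp = nframes - sample;
        if(!controlFifo.empty())
        {
          const ControlEvent& ev = controlFifo.front();
          if(ev.frame <= framePos + sample)
          {
            applyControl(ev);              // change is due right now
            controlFifo.pop_front();
            continue;                      // re-check for more pending changes
          }
          if(ev.frame < framePos + nframes)
            nsamp = ev.frame - (framePos + sample);   // stop at the next change
        }
        processChunk(framePos + sample, nsamp);       // constant controls here
        sample += nsamp;
      }
    }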

Looking in your branch, I see you have not changed those routines - yet.
But it raises an important issue: not only do our tracks and parts need
 to respect sub-ticks, but our audio controllers do as well - they
 will need to 'stretch' along with the waves and so on.
I'm not sure how sub-ticks will be incorporated here, but I'll defer that
 to you for now.
Likely the whole indexing of audio controllers will have to change from
 frames to sub-ticks, as well as the ControlFifo system - a scary thought :)

Virtually all my changes here are happening in AudioTrack::copyData().
I've given AudioTrack its own proper ControlFifo, just like plugins
 and synths, for sample-accurate volume and pan GUI control movements 
 and so on, opening the door for OSC control of volume and pan later.
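
For anyone unfamiliar with the ControlFifo idea: it's essentially a
 single-writer/single-reader fifo of timestamped control changes - the GUI
 (or later OSC) thread puts, the audio thread gets. A rough sketch, with
 made-up names rather than the actual MusE class:

    #include <atomic>
    #include <cstddef>

    struct CtrlChange { unsigned frame; int id; double value; };

    class CtrlFifoSketch
    {
      static const size_t SIZE = 256;
      CtrlChange buf[SIZE];
      std::atomic<size_t> wIdx;
      std::atomic<size_t> rIdx;
    public:
      CtrlFifoSketch() : wIdx(0), rIdx(0) {}

      bool put(const CtrlChange& c)        // GUI / OSC thread
      {
        const size_t w = wIdx.load(std::memory_order_relaxed);
        const size_t next = (w + 1) % SIZE;
        if(next == rIdx.load(std::memory_order_acquire))
          return false;                    // full: caller drops or retries
        buf[w] = c;
        wIdx.store(next, std::memory_order_release);
        return true;
      }

      bool get(CtrlChange* c)              // audio thread, never blocks
      {
        const size_t r = rIdx.load(std::memory_order_relaxed);
        if(r == wIdx.load(std::memory_order_acquire))
          return false;                    // empty
        *c = buf[r];
        rIdx.store((r + 1) % SIZE, std::memory_order_release);
        return true;
      }
    };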

There is one crucial difference between this code and the code in the
 plugins and synths:
AudioTrack::copyData() is the place where more than one node can call
 upon a node for its data. This is why we have the audio data cache system
 'AudioTrack::outBuffers': it saves CPU time for the next node that calls
 upon the data, so that processing is only done once upon the first call,
 and any further calls are just copying operations.
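
In other words, the pattern is roughly this (hypothetical names and
 signature, just a sketch of the idea):

    #include <cstring>

    // Fills the cache buffers; stands in for the real chunked processing.
    void runChunkedProcessing(float** cache, int nChans, unsigned nframes);

    void copyDataSketch(float** dst, int nChans, unsigned nframes,
                        float** outBuffers, bool& processed)
    {
      if(!processed)
      {
        // First caller this cycle: do the real processing into the cache.
        runChunkedProcessing(outBuffers, nChans, nframes);
        processed = true;
      }
      // Every caller, including the first, then just copies from the cache.
      for(int ch = 0; ch < nChans; ++ch)
        std::memcpy(dst[ch], outBuffers[ch], nframes * sizeof(float));
    }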

Because of that, and entirely because of the ControlFifo, it was a bit tricky:
 I had to add a *second* audio data cache buffer, which I call
 'AudioTrack::outBuffersExtraMix'. This cache holds extra pre-processed,
 pre-mixed data for the cases of routing one channel into two or two channels
 into one. It was not possible to have just one cache and construct the
 other required mixes from it upon later calls.
That part is still open to change: to save CPU time I might cache the
 control values instead of the actual audio data. I'm still playing around
 with ideas there, trying to optimize CPU time as much as I can.
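
For clarity, the two routing cases the extra cache covers boil down to this
 channel arithmetic (sketch only, made-up names; the real mixing also applies
 volume and pan as mentioned above):

    // One source channel fanned out to two destination channels:
    void mixOneToTwo(float* dstL, float* dstR, const float* src, unsigned n)
    {
      for(unsigned i = 0; i < n; ++i)
        dstL[i] = dstR[i] = src[i];
    }

    // Two source channels summed down to one destination channel:
    void mixTwoToOne(float* dst, const float* srcL, const float* srcR, unsigned n)
    {
      for(unsigned i = 0; i < n; ++i)
        dst[i] = srcL[i] + srcR[i];
    }

These get computed once during the first call and stored in the second cache,
 so later callers can just copy them like everything else.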

----
I found a stupid multi-channel bug which has been there for a while.
Towards the end of AudioTrack::copyData(), where we mix the data
 and send it to the caller's buffers, we have lines like this:

        if(srcStartChan > 2 || _prefader) // Don't apply pan or volume to extra
                                          //  channels above 2. Or if prefader on.

D'oh! That should have been:

        if(srcStartChan >= 2 ...

I verified with Addictive Drums that yes, channel 3, the kick drum, cannot
 in fact be heard. I was fully expecting MusE to crash, because it should
 have been attempting to apply vol[3] where vol[] is really only an array
 of size 2. It should have crashed, but it doesn't for some reason... not
 sure why. Possibly it is just reading the next memory location without
 triggering a segfault.
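
For the curious, here's a tiny standalone demo of that kind of quiet
 out-of-bounds read (not MusE code, just an illustration):

    #include <cstdio>

    int main()
    {
      double vol[2] = { 1.0, 0.5 };   // a 2-element array like the vol[] above
      int i = 2;                      // index past the end, computed at run time
      // Undefined behaviour: this usually just reads whatever happens to sit
      //  next to vol[] on the stack instead of segfaulting.
      printf("out-of-bounds read: %f\n", vol[i]);
      return 0;
    }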

Cheers.
Tim.
