Re: [music-dsp] Synth thread timing strategies

2013-03-26 Thread Ross Bencina

On 26/03/2013 4:55 PM, Alan Wolfe wrote:

I just wanted to chime in real quick to say that unless you need to go
multithreaded for some reason, you are far better off doing things
single threaded.

Introducing more threads does give you more processing power, but the
communication between threads isn't free, so it comes at a cost of
possible latency etc.  When you do it all on a single thread, that
disappears and you get a lot more bang for your buck.


Well, that's true. But if you want to send MIDI events with millisecond 
resolution, and your audio callback is running at a 5ms period with 90% 
processor load, the only way you're going to get your 1ms MIDI 
granularity is with a separate thread that is either (1) running on 
another core or (2) pre-empting the audio callback's computation.
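
To make that concrete, option (2) might look something like the rough 
sketch below: a time-critical thread that wakes every millisecond and 
drains a queue of timestamped events. This is illustrative and untested; 
popDueEvent() and sendMidi() are hypothetical placeholders for your own 
queue and MIDI output, not real APIs.

    // Illustrative sketch of a 1ms MIDI scheduling thread (untested).
    // popDueEvent() and sendMidi() are hypothetical placeholders.
    // Link winmm.lib for timeBeginPeriod().
    #include <windows.h>
    #include <cstdint>

    struct MidiEvent { int64_t dueQpcTicks; unsigned char bytes[3]; };

    bool popDueEvent(int64_t nowTicks, MidiEvent &ev); // assumed lock-free queue
    void sendMidi(const MidiEvent &ev);                // assumed MIDI out wrapper

    DWORD WINAPI midiThread(LPVOID)
    {
        timeBeginPeriod(1);  // request 1ms scheduler/timer resolution
        SetThreadPriority(GetCurrentThread(), THREAD_PRIORITY_TIME_CRITICAL);
        for (;;) {
            LARGE_INTEGER now;
            QueryPerformanceCounter(&now);
            MidiEvent ev;
            while (popDueEvent(now.QuadPart, ev))  // drain all events now due
                sendMidi(ev);
            Sleep(1);  // ~1ms polling period
        }
    }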


Ross.
--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] Synth thread timing strategies

2013-03-26 Thread ChordWizard Software
Hi Didier,

 you should be aware that DirectMusic has been deprecated for a long time now
 (it was pretty much born dead)

Thanks, yes I know that.  I like retro :-)

Seriously though, I hardly touch any of DM (and never have); it's just the 
synth that is still relevant.

It's the underlying engine that implements the Microsoft GS Wavetable Synth, 
the Windows default. If you access that synth via MME, it behaves as an 
external device and you get little control over it.

But if you access it via DM, you can adjust the global reverb and latency (for 
what that's worth), and you can also capture the output via 
IDirectMusicSynthSink (it took me some time to work out how; it has to be one 
of the least documented APIs out there).

I also know the sound is very ordinary, but DM/MME is all the current version 
of my app can use, so my users will still want their songs to sound the same 
before they start experimenting with all the other options I'm about to give 
them.

The DM synth is still fully accessible via Windows 8, deprecated or not.  And 
it seems set to remain that way until Microsoft decides to provide some other 
default MIDI synth.  And that seems somehow even less likely than DM 
disappearing…

Regards,

Stephen Clarke
Managing Director
ChordWizard Software Pty Ltd
corpor...@chordwizard.com
http://www.chordwizard.com
ph: (+61) 2 4960 9520
fax: (+61) 2 4960 9580

--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp

Re: [music-dsp] Synth thread timing strategies

2013-03-26 Thread ChordWizard Software
Hi Ross,

Thanks, a couple more questions then:

 - There can be significant jitter in the time at which an audio callback 
 is called.

Can you define jitter?  Callbacks with different frame counts, or dropped 
frames?  

If the former, it would seem my proposed mechanism could adapt, as long as the 
callback is flexible about using each new frame count as the MIDI event horizon.


 The way I do it is to recover a time base for the audio callback using 
 some variant of a delay-locked-loop. Then use this time base to map 
 between sample time, midi beat time and system time (QPC time). Then I 
 schedule the MIDI events to be output at a future QPC time in another 
 thread (where the future is adjusted by the audio latency). In that 
 other thread I run a loop that polls every millisecond. With work you 
 can make it poll less often when there are no events.

Are these well-known techniques?  Don't suppose you could point me to any 
articles that might help me get my head around them?

Regards,

Stephen Clarke
Managing Director
ChordWizard Software Pty Ltd
corpor...@chordwizard.com
http://www.chordwizard.com
ph: (+61) 2 4960 9520
fax: (+61) 2 4960 9580

--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] Synth thread timing strategies

2013-03-26 Thread Ross Bencina

On 26/03/2013 5:28 PM, ChordWizard Software wrote:

Hi Ross,

Thanks, a couple more questions then:


- There can be significant jitter in the time at which an audio callback
is called.


Can you define jitter?  Callbacks with different frame counts, or dropped 
frames?


If you call QueryPerformanceCounter() at the start of each callback you 
may notice significant deviation from the expected callback time, if your 
expectation is that the period is constant.


Ideally you don't want this jitter to be added to other sources of jitter.
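
One way to see this for yourself (an illustrative, untested probe, not 
anything from the original thread): log the deviation of each callback's 
QPC timestamp from the nominal period.

    // Illustrative jitter probe (untested). Call initJitterProbe() once at
    // stream start and probeJitter() at the top of each audio callback.
    // Note: fprintf() in a callback is only acceptable for a quick
    // diagnostic; real code would log to a preallocated buffer instead.
    #include <windows.h>
    #include <cstdio>

    static LARGE_INTEGER g_freq, g_prev; // QPC frequency, previous callback time

    void initJitterProbe() {
        QueryPerformanceFrequency(&g_freq);
        QueryPerformanceCounter(&g_prev);
    }

    void probeJitter(double nominalPeriodMs) {  // e.g. 5.0 for a 5ms period
        LARGE_INTEGER now;
        QueryPerformanceCounter(&now);
        double elapsedMs = 1000.0 * (double)(now.QuadPart - g_prev.QuadPart)
                                  / (double)g_freq.QuadPart;
        g_prev = now;
        fprintf(stderr, "deviation: %+.3f ms\n", elapsedMs - nominalPeriodMs);
    }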




If the former, it would seem my proposed mechanism could adapt, as long as 
the callback is flexible about using each new frame count as the MIDI event 
horizon.

Callbacks with varying frame counts are a separate but related matter.



The way I do it is to recover a time base for the audio callback using
some variant of a delay-locked-loop. Then use this time base to map
between sample time, midi beat time and system time (QPC time). Then I
schedule the MIDI events to be output at a future QPC time in another
thread (where the future is adjusted by the audio latency). In that
other thread I run a loop that polls every millisecond. With work you
can make it poll less often when there are no events.


Are these well-known techniques?  Don't suppose you could point me to any

 articles that might help me get my head around them?

I am not aware of a good, clear overview; it's well known in the lore. 
CoreAudio does a lot of this under the hood, I think. If you're on a Mac 
you get relatively stable timestamps for free.



This is a good introduction to DLLs for buffer time smoothing, although 
I have found the result of that filter to be numerically poor without 
extra tweaks (since time is always increasing, you lose precision):


http://kokkinizita.linuxaudio.org/papers/usingdll.pdf
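
The core of that DLL is only a few lines. Here is a rough transcription of 
the paper's pseudo-code into C++ (my sketch, untested, and the numerical 
caveat above applies):

    // Second-order delay-locked loop, transcribed loosely from the paper
    // above (untested sketch). Feed update() the raw system time (seconds)
    // measured at the start of each callback; t0 is then the smoothed time
    // of the current callback and t1 the prediction for the next one.
    #include <cmath>

    struct TimeFilter {
        double b, c;    // loop coefficients
        double t0, t1;  // filtered current time, predicted next-callback time
        double e2;      // filtered period estimate

        void init(double now, double nominalPeriod, double bandwidthHz) {
            const double omega =
                2.0 * 3.141592653589793 * bandwidthHz * nominalPeriod;
            b  = std::sqrt(2.0) * omega;
            c  = omega * omega;
            e2 = nominalPeriod;
            t0 = now;
            t1 = t0 + e2;
        }

        void update(double now) {
            const double e = now - t1;  // phase error vs. prediction
            t0 = t1;
            t1 += b * e + e2;           // correct the predicted phase...
            e2 += c * e;                // ...and the period estimate
        }
    };

With t0 and t1 stabilized, mapping a sample offset within the buffer to 
system time (and from there to MIDI beat time) is a linear interpolation 
between t0 and t1.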


Ross.

--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp


Re: [music-dsp] Synth thread timing strategies

2013-03-26 Thread Theo Verelst


  It's not so easy nowadays to program a real-time audio Windows app, I 
think. It's been a long time since I've done such a thing, but think of it 
this way: thread switching, kernel heartbeat interrupts and buffering 
aren't things you're supposed to meddle with much. In general, one process 
that assumes samples are to be loaded into a (callback) buffer on time 
should be fine, and it is probably a good idea, at some level of 
abstraction, to presume that this works mostly flawlessly (like Jack on 
Linux, which also exists on Windows).


  Dealing with the apparently non-deterministic timing of the MIDI and 
audio routines should be solved on the audio side by anticipating that 
samples have mostly constant timing, and that MIDI messages are either 
time-stamped or generate an interrupt as soon as they are available.


  Of course there is a difference between live MIDI events (and also 
between some software channel and an actual 31.25 kbit/s MIDI connection) 
and a file or long-beforehand-generated set of events. In the case of an 
actual MIDI line the speed is low and the latency is on the order of a 
millisecond, but the sender has already serialized the events and 
reordered the stream when many events occur in a row, and there is the 
option of presuming that the arrival time of a received MIDI message is 
sub-frame accurate. With software MIDI, the process, kernel or thread 
switching going on makes the exact timing of the messages less clear. On 
the other hand, given the relative slowness of most MIDI message 
sequences, on modern machines that are not too heavily multitasked an 
interrupt call or thread wake-up can be quite fast.
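
(To put a number on that: a typical 3-byte MIDI message at 10 bits per 
byte on the wire takes 3 * 10 / 31250 = 0.96 ms to transmit, so the serial 
line itself imposes roughly a millisecond per message.)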


  For the audio, the main thing that is always done on a PC (I've worked 
on DSP designs with a different method; see http://www.theover.org/Synth 
for those interested and not too fresh in computer design) is making use 
of some amount of buffering. This makes better use of the processor, 
which, especially at high sampling frequencies, cannot easily switch 
context on a sample-per-sample basis. Also, the computation and possibly 
one or more of the acceleration pipelines (like SSE and MMX) must be 
shared with other processes and the kernel, which can't work efficiently 
if a pipeline has to start up and run just to get a single result out. 
The percentage of useful work the various parts of contemporary 
processors can do depends in many cases on keeping those pipelines well 
used.


  That brings us to the main bottleneck in most audio programs: memory 
access through the cache, for both data and program pre-fetching. The 
cache, and where it matters the bandwidth to memory shared with other 
processes on the machine, is probably the most decisive performance 
factor in current practice. This becomes less of a problem with longer 
audio buffers, in most cases: to get more efficient processor use (more 
effective computations per second), use bigger buffers. Of course this 
costs latency, which is more of a problem for real-time interactiveness 
than for complicated off-line musical rendering tasks.
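
(The trade-off is easy to quantify: buffer latency = frames / sample 
rate. At 44.1 kHz, a 64-frame buffer adds about 1.45 ms, 256 frames about 
5.8 ms, and 1024 frames about 23.2 ms per buffer in the chain.)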


  The exact timing of the actual audio interrupts and the actual clock 
of the soundcard/device can be directly connected; sometimes there's 
resampling in the card, and in certain cases the soundcard's clock can 
be corrected by the real-time clock of the PC driving the kernel time. 
So it is possible that some glitches exist in that path, but never more 
than a few samples in normal cases. No matter what, then, the global 
sample timing only needs correction when there are serious machine 
overloads going on, which can be acceptable, but is not desirable for 
live use or reliable sequencer-type use.


So a musician using a software plugin/program will mostly adjust buffer 
length to get a compromise between low latency and reliable operation. 
For live interaction, I'd say it's important to have as constant a delay 
as possible, and never a situation where a machine or software breakdown 
creates dangerous audio streams. Of course, managing the delay and 
keeping the machine stable enough remain a challenge, even in a perfect 
software world.


Theo V.
--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp