On 27/02/2012 10:58 AM, robert bristow-johnson wrote:
> in my opinion, the optimal division of labor becomes obvious if your
> system modularizes specific low-level algs.

I'm not sure it's what you meant, but I would say: if your system modularizes specific low-level algs, there is perhaps an optimal division of labour.

If you're writing in a single-language system on a single instruction architecture, the options for modularisation will be different from those on an MPU+DSP architecture.

If you look at the modularisation in something like Apple's vDSP, the operations are low level, and make total sense as basic building blocks, but they (mostly) don't look anything like what goes by the name of unit generators in computer music DSLs.
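To make that granularity difference concrete, here's a rough sketch in plain C (invented names, not the actual vDSP or Csound APIs): a vDSP-style routine is a stateless operation over whole vectors, while a unit generator keeps its coefficients and filter state inside and hides the per-sample recurrence behind one process call.

/* vDSP-style building block: stateless, just a vector operation. */
static void vec_scalar_multiply(const float *in, float scalar,
                                float *out, unsigned n)
{
    for (unsigned i = 0; i < n; i++)
        out[i] = in[i] * scalar;
}

/* Unit-generator-style block: parameters and state live inside, and
 * the per-sample recurrence sits behind a single process call. */
typedef struct {
    float b0, b1, b2, a1, a2;   /* coefficients */
    float z1, z2;               /* filter state (transposed direct form II) */
} ugen_biquad;

static void ugen_biquad_process(ugen_biquad *u, const float *in,
                                float *out, unsigned n)
{
    for (unsigned i = 0; i < n; i++) {
        float x = in[i];
        float y = u->b0 * x + u->z1;
        u->z1 = u->b1 * x - u->a1 * y + u->z2;
        u->z2 = u->b2 * x - u->a2 * y;
        out[i] = y;
    }
}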


> when i worked at Eventide 2
> decades ago, i thought that division of high level vs. low level was
> pretty much natural and optimal.

> on the low level we programmed blocks where the sample processing was in
> the 56K assembly and the "coefficient cooking" was in C. they wrote some
> pretty good tools that fished the symbols outa the linkable object code
> that came outa the assembler and the coefficient cooking code could
> write to specific DSP variables as symbols. you didn't need to note
> where the DSP address was, it would find it for you. the coefficient
> cooking code was executed only when a knob was twisted, but the sample
> processing code was running all the time after it was loaded. a typical
> example would be an EQ filter where user parameters (like resonant
> frequency, Q, boost/cut gain) would go into the block from the outside
> and the coefficient cooking code would cook those parameters and send
> the coefficients to the DSP where the DSP was expecting them to go.
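For concreteness -- and this is obviously a guess at the shape, not the Eventide source -- the host-side "coefficient cooking" for that EQ example would look something like the usual cookbook peaking-EQ formulas, run only when a parameter changes:

#include <math.h>

typedef struct {
    float b0, b1, b2, a1, a2;   /* what the sample-processing code consumes */
} peak_eq_coeffs;

/* Turn user parameters (centre frequency, Q, boost/cut in dB) into
 * normalized biquad coefficients, cookbook peaking-EQ style.  Runs on
 * the host only when a knob moves; the result gets written to the DSP. */
static void cook_peak_eq(double fs, double f0, double Q, double gain_db,
                         peak_eq_coeffs *c)
{
    const double pi = 3.14159265358979323846;
    double A     = pow(10.0, gain_db / 40.0);
    double w0    = 2.0 * pi * f0 / fs;
    double alpha = sin(w0) / (2.0 * Q);
    double a0    = 1.0 + alpha / A;

    c->b0 = (float)((1.0 + alpha * A) / a0);
    c->b1 = (float)((-2.0 * cos(w0)) / a0);
    c->b2 = (float)((1.0 - alpha * A) / a0);
    c->a1 = (float)((-2.0 * cos(w0)) / a0);
    c->a2 = (float)((1.0 - alpha / A) / a0);
}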

A couple of interesting things here:

- The coefficients are recomputed using a push architecture ("when a knob was twisted"). This corresponds to Pd's dataflow message passing.

- The structure of the "dsp building blocks" is actually a bit different from the Csound "unit generator" model, where all the processing (typically, although not always, including the coefficient cooking) is bundled up into the unit generator. Notably, in the Eventide, coefficients become a kind of first-class data type -- see the sketch below.
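Taking the two sketches above together, the split looks roughly like this (again hypothetical code): the coefficient struct is an explicit piece of data that gets pushed from the control side into the running audio block when a parameter changes, rather than being recomputed inside the unit generator every block.

typedef struct {
    ugen_biquad dsp;    /* the part that runs for every sample */
} eq_block;

/* Event-driven ("push"): called only when the user twists a knob,
 * much like a message arriving at a Pd object's control inlet. */
static void eq_block_set_params(eq_block *blk, double fs,
                                double f0, double Q, double gain_db)
{
    peak_eq_coeffs c;
    cook_peak_eq(fs, f0, Q, gain_db, &c);
    /* On the real hardware this would be a write into DSP memory at the
     * addresses resolved from the assembler's symbols; here it is just
     * a struct copy. */
    blk->dsp.b0 = c.b0; blk->dsp.b1 = c.b1; blk->dsp.b2 = c.b2;
    blk->dsp.a1 = c.a1; blk->dsp.a2 = c.a2;
}

/* Runs for every audio buffer, whether or not anything changed. */
static void eq_block_process(eq_block *blk, const float *in,
                             float *out, unsigned n)
{
    ugen_biquad_process(&blk->dsp, in, out, n);
}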


> but patching these blocks together was (eventually) done with a visual
> patch editor. if, to create an overall "effect", you're laying down a
> modulatable delay line here, a modulatable filter there, and some other
> algorithms that have already been written and tested, with definable
> inputs and outputs, why not use a visual editor for that?
>
> but, if you need a block that does not yet exist, you need to be able to
> write hard-core sample processing code in a general purpose language
> like C (or the natural asm for a particular chip). then turn that into a
> block, then hook it up with a visual editor like all of the other blocks.
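As a sketch of what "turn that into a block" might mean in practice (a hypothetical interface, not any particular product's): the hand-written inner loop just has to sit behind a uniform process/set_param interface, with enough metadata for the patch editor to instantiate it and wire up its inputs and outputs.

typedef struct block block;
struct block {
    const char *name;
    int         n_inputs;
    int         n_outputs;
    void       *state;                              /* block-private data */
    void      (*process)(block *self,
                         const float *const *in,    /* n_inputs buffers   */
                         float *const *out,         /* n_outputs buffers  */
                         unsigned nframes);
    void      (*set_param)(block *self, int which, float value);
};

/* Example leaf block: a one-pole lowpass written by hand in C (or in
 * the chip's assembly behind the same interface); once it conforms,
 * the visual editor can patch it like any other block. */
typedef struct { float a, z; } onepole_state;

static void onepole_process(block *self, const float *const *in,
                            float *const *out, unsigned nframes)
{
    onepole_state *s = (onepole_state *)self->state;
    for (unsigned i = 0; i < nframes; i++) {
        s->z += s->a * (in[0][i] - s->z);
        out[0][i] = s->z;
    }
}

static void onepole_set_param(block *self, int which, float value)
{
    onepole_state *s = (onepole_state *)self->state;
    if (which == 0)
        s->a = value;   /* smoothing coefficient, 0..1 */
}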

That works up to the point where you encounter algorithms that can't be easily implemented by combining blocks, and are at the same time too complex to be "simple building blocks" at the hard-core sample processing level (by which I mean that they end up requiring additional modularisation internally within a block). Then you end up with a bunch of "monster blocks" that kind of break the elegance and obviousness of the presumed optimal division of labour. I'm not sure about the Eventide, but many unit-generator-based systems seem to have acquired monster blocks -- Reaktor has grain objects with a bazillion parameters, Csound has ugens for entire physical models, and even though the bulk of the language is low-level unit generators, MP4-SAOL defines a few effects opcodes (chorus, dynamics processors).

Ross.
--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp 
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp
