On 3/8/13 5:53 PM, ChordWizard Software wrote:
Ross, Alan, Robert, thanks for the comments.
It's all good sense and very helpful as a reality check. I had considered some
of these concepts already; it's good to get them validated (and expanded).
But some are quite new - I never realised that multiplication ops were more
efficient than divisions.
always multiply by 1/C rather than divide by C, if C is a constant. it
means your coefficient-cooking code has to compute the reciprocal, but
that is not something that needs to be done at sample time; it goes in the
code that gets executed when a knob is twisted.
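a minimal C sketch of that split, with illustrative names (the struct and function names are mine, not anything from the engine under discussion): the one division happens at knob-twist time, and the per-sample loop only multiplies.

```c
/* hypothetical gain/filter state -- names are illustrative only */
typedef struct {
    double C;       /* the "constant" divisor, set when a knob moves      */
    double recip_C; /* 1.0/C, cooked once here, used per sample below     */
} gain_state;

/* knob-twist time: the only division, outside the audio loop */
void cook_coefficients(gain_state *s, double C)
{
    s->C = C;
    s->recip_C = 1.0 / C;
}

/* sample time: multiply only -- no division in the inner loop */
void process_block(const gain_state *s, float *buf, int n)
{
    for (int i = 0; i < n; i++)
        buf[i] = (float)(buf[i] * s->recip_C);
}
```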
I totally agree with the idea that macro efficiency is a more rewarding
starting point than micro. I generally do try to avoid copy routines that
don't add any other value to the process at the same time.
But it's a tradeoff, isn't it, between efficiency and trying to keep the code
modular
what i don't understand is why your modular code needs to make
unnecessary copy operations. *every* instantiation of every module owns
its own output buffers. and the inputs to every module are other
modules' outputs (or the same module if you wanna do some delayed
feedback). why and when do you need to copy? well, other than into a
delay line buffer (like for FIR or multitap or reverb or similar). but
that is an integral function of the module to begin with.
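the no-copy wiring described above can be sketched in a few lines of C; this is only an illustration of the ownership idea (block size, struct layout, and the trivial gain processing are all my assumptions):

```c
#define BLOCK 64  /* assumed block size, for illustration */

/* every instantiation owns its output buffer; its input is just a
   pointer at some other module's output -- nothing is copied */
typedef struct module {
    const float *in;   /* points into another module's out[]            */
    float out[BLOCK];  /* owned by this instantiation                   */
    float gain;        /* stand-in for whatever processing the module does */
} module;

void module_process(module *m)
{
    for (int i = 0; i < BLOCK; i++)
        m->out[i] = m->in[i] * m->gain;
}
```

chaining is then just pointer assignment: the downstream module's `in` is set to the upstream module's `out`, and no buffer is ever copied between them.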
with the system I/O i can surely imagine the need to copy out of the
system buffer to some nice de-interleaved signal buffers. and if your
system is floating point, it makes sense to me to convert from fixed
(what comes from the A/D buffer) to float and detangle the left and
right channel samples. and if there is a global input gain knob, to
apply that gain on the samples as they are being passed from one buffer
into another. that's a piece of system code, not part of a module that
may or may not be instantiated.
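that system-input step might look like the following C sketch, assuming 16-bit interleaved stereo from the A/D (the function name and the 16-bit assumption are mine): de-interleave, convert fixed to float, and fold the global input gain into the conversion scale so it costs nothing extra.

```c
#include <stdint.h>

/* de-interleave an interleaved 16-bit stereo buffer (L R L R ...) into
   float buffers, applying a global input gain on the way through */
void deinterleave_to_float(const int16_t *interleaved,
                           float *left, float *right,
                           int frames, float input_gain)
{
    /* fold the gain into the fixed-to-float scale: one multiply per sample */
    const float scale = input_gain / 32768.0f;
    for (int i = 0; i < frames; i++) {
        left[i]  = interleaved[2*i]     * scale;
        right[i] = interleaved[2*i + 1] * scale;
    }
}
```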
enough that you don't end up with some arcane multi-op tangle that has to get
duplicated and tweaked for every special case.
Anyway, if the general consensus is that memset and memcpy are reasonably
efficient then that's my immediate need taken care of, as I'm trying very hard
to stay cross-platform ready.
Maybe you can advise me on a related question - what's the best approach to
implementing attenuation? I'm guessing it is not linear, since perceived
sound loudness has a logarithmic profile - or am I confusing amplifier wattage
with signal amplitude?
i've never understood attenuation being anything other than a gain
coefficient with magnitude less than 1. inside your DSP engine,
amplitude is just a number (but we often like to have the rails
defined at -1 and +1), and when that signal goes out into an amplifier
and loudspeaker, there can be talk of wattage in an absolute sense.
but inside your alg, only relative wattage makes any sense. at least to
me (maybe i'm missing something, like an obscure standard). multiply
your signal by a gain coefficient equal to 1/2 (or -1/2) and your
voltage level (and r.m.s. voltage) in the amp drops to half, your
wattage drops to 1/4 of the previous level and it's a -6.02 dB change.
The design of my audio engine is to drive a default GM softsynth,
are you coding the softsynth? or hooking up to someone else's?
with optional overrides for each channel to use a VSTi or alternate
synth/font instead.
Sysex Master Volume support is by no means assured for all of these possible
outputs, particularly the VSTs, so I'm realising that I probably need to
implement my own master volume control at the output.
well, your system output samples come from the output buffer that is
owned by the module that is connected to the system output. at the end
of your block processing time, after all of the modules got to process
their input into their outputs, your system has a pointer to where the
output blocks are and as you fetch those samples, you might have to
interlace the samples from multiple channels, you might have to convert
from float to fixed, and it appears to me that you might want to apply
that Master Volume gain just before the float-to-fixed conversion.
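a sketch of that output path in C, under the same assumptions as before (16-bit fixed output, illustrative names): apply the Master Volume gain, hard-clip at the -1/+1 rails, then interleave and convert float to fixed in one pass.

```c
#include <stdint.h>

/* final system-output pass: master volume, clip, interleave, float-to-fixed */
void output_to_fixed(const float *left, const float *right,
                     int16_t *interleaved, int frames, float master_vol)
{
    for (int i = 0; i < frames; i++) {
        float l = left[i]  * master_vol;  /* gain just before conversion */
        float r = right[i] * master_vol;
        if (l >  1.0f) l =  1.0f;         /* hard clip at the rails */
        if (l < -1.0f) l = -1.0f;
        if (r >  1.0f) r =  1.0f;
        if (r < -1.0f) r = -1.0f;
        interleaved[2*i]     = (int16_t)(l * 32767.0f);
        interleaved[2*i + 1] = (int16_t)(r * 32767.0f);
    }
}
```

(whether to clip, dither, or saturate more gently at this point is a design choice; the hard clip here is just the simplest safe thing to show.)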
The obvious approach of course is linear scaling, but something tells me there
might be a better way to balance the increments of perceived volume difference
across the whole range?
dunno what that is. a dB step issue?
--
r b-j r...@audioimagination.com
Imagination is more important than knowledge.
--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp
links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp