>Plus consider the fact that most DSP algorithms won't be able to retain very
>high precision, due to truncation, rounding, and chained errors.
>In some cases you need all math done in 64-bit floating point (double
>precision) to get the desired 16 bits of precision at the DAC; 32-bit
>floating point is sometimes not enough.
>Plus some filters produce small ripples by definition, thus rendering the
>added precision almost useless.
>And at 24 bits we have some 144 dB of dynamic range, which means quite
>some headroom when recording.
>Going above 24-bit audio is for me like trying to make movies at
>200 frames per second ... no one will notice the difference, but
>the cost and complexity of the system rise exponentially while not
>giving us acceptable levels of improvement.

I recently spoke with an audio old-timer who owns a Studer 1-inch half-track 30 ips open-reel mastering deck.  If analog tape resolution is proportional to tape surface area, then the resolution of his deck is 256x that of consumer cassettes (1/8 inch, quarter track, 1 7/8 ips).  For him the switch to digital has been torture.  One of the complaints about DAWs is that the editing is done at the same resolution as the consumer playback format.  You can already see the challenge of maintaining precision at 32-bit float for a single process.  Imagine processing one track with a compressor, two bands of parametric EQ, panning, and a gain fader.  That is 5 separate processes, each contributing cumulatively to the truncation/rounding errors at the LSB.  Now mix 64 channels that have each been similarly processed, and how far beyond the LSB do you think the quantization noise has migrated?  I feel that if the consumer format provides a resolution whose quantization noise is marginally beyond the threshold of audible perception, then the DAW used for recording, editing, and mixing the audio should probably be operating at 4x to 8x the resolution of the consumer format.
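To make the accumulation concrete, here is a rough numpy sketch.  The five stages are just gain multiplies standing in for the compressor/EQ/pan/fader chain, and the 440 Hz test tone, per-stage gains, and random phases are assumptions for illustration, not measurements of any real DAW:

import numpy as np

fs = 96000
t = np.arange(fs) / fs

def make_channel(seed):
    # each "channel" is a 440 Hz tone at a random phase
    phase = np.random.default_rng(seed).uniform(0, 2 * np.pi)
    return 0.5 * np.sin(2 * np.pi * 440 * t + phase)

def process_chain(x, dtype):
    # stand-ins for compressor, 2 EQ bands, pan, fader:
    # plain gain multiplies, rounded to `dtype` after each stage
    y = x.astype(dtype)
    for g in (0.7, 1.3, 0.9, 1.1, 0.8):
        y = (y * dtype(g)).astype(dtype)
    return y

# mix 64 similarly processed channels at each precision,
# treating the float64 chain as the "exact" reference
mix64 = sum(process_chain(make_channel(i), np.float64) for i in range(64))
mix32 = sum(process_chain(make_channel(i), np.float32).astype(np.float64)
            for i in range(64))

err = np.max(np.abs(mix32 - mix64))
print("peak 32-bit error after 5 stages x 64 channels: %.3g" % err)
print("bits of clean signal left: %.1f"
      % -np.log2(err / np.max(np.abs(mix64))))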

>What's the output level of professional mics nowadays?

My newest mic, an AT4050, outputs 15 mV.  Stand-alone A/Ds reference 0 dBFS at +24 dBu (roughly 12 V?), so noise due to amplification is a pay-me-now-or-pay-me-later issue.
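For anyone who wants the arithmetic behind that (0 dBu = 0.775 V RMS is the standard reference; the rest follows):

import math

v_full_scale = 0.775 * 10 ** (24 / 20)   # +24 dBu, ~12.3 V RMS
v_mic = 0.015                            # AT4050 output, 15 mV
gain_db = 20 * math.log10(v_full_scale / v_mic)
print("full scale: %.2f V, gain needed: %.1f dB" % (v_full_scale, gain_db))
# roughly 58 dB of analog gain before the converter ever sees full scale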

>I think that without doing the A/D directly at the mic you would have a hard time getting a
>system that didn't wash the last 4-5 bits of resolution with noise. Of course that noise
>would be pretty white, so with a little bit of math you might be able to digitally remove it.

Current 24-bit A/Ds already wash the lowest 4-6 bits, so if a new design is only as good as the current ones in this area, but provides improvement in other areas, it is still worth doing.
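In effective-bits terms (rough rule of thumb: each bit is ~6.02 dB of dynamic range):

for washed in (4, 5, 6):
    enob = 24 - washed
    print("%d noisy LSBs -> ~%d effective bits, ~%.0f dB range"
          % (washed, enob, enob * 6.02))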

>Sorry if I'm dense; I understand the
>counting-up-to-the-exponent idea, but how do you then
>measure the mantissa?

Any way you want to.  Once the signal is normalized and the exponent computed, the normalized signal simply lives in a buffer and can be treated like the s/h signal in a traditional A/D.
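A minimal sketch of the idea in Python (the 1 V reference, the 16-doubling cap, and the 8-bit mantissa are illustrative assumptions, not a spec):

V_REF = 1.0          # normalization target
MAX_DOUBLINGS = 16   # cap on how far a quiet signal gets boosted

def float_adc(held_sample, mantissa_bits=8):
    """One s/h voltage in, (sign, exponent, mantissa) out."""
    sign = 1 if held_sample >= 0 else -1
    v = abs(held_sample)
    exponent = 0
    # count 2x passes until the sample sits in [V_REF/2, V_REF)
    while v < V_REF / 2 and exponent < MAX_DOUBLINGS:
        v *= 2.0
        exponent += 1
    # the normalized residue is quantized like any integer a/d
    levels = 2 ** mantissa_bits
    mantissa = min(int(v / V_REF * levels), levels - 1)
    return sign, -exponent, mantissa

# a hot signal needs no doublings, a quiet one needs many:
print(float_adc(0.9), float_adc(0.01), float_adc(0.0003))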

>You'll be
>getting all the analog noise and distortion introduced by
>however many amplification stages it took to get the signal
>above the reference level. If as you suggest we have a chain
>of amplifiers, each with gain of 2, then the lowest-level
>input signals will get correspondingly more noise and
>distortion because they'll go through lots of gain stages.

One 2x gain stage with a feedback loop.

>I can think of a way to measure the exponent without
>amplifying the input (by feeding it to a bunch of gates in
>parallel;

My understanding is that current A/Ds moved from successive approximation to 1-bit sigma-delta because fast serial circuits are economically cheaper to build than slower parallel ones.

> The difficulty would be, again, calibrating the
>gain of each amplifier precisely enough.

Maybe this is one reason why parallel circuits are so expensive.

>Also, the amps would need to have very flat frequency
>response - you wouldn't want to hear the tone quality change
>as the signal level changed!!

The beauty of this design is that frequency response becomes a non-issue for amp design.  The amps see only the DC output of the s/h, which changes state at the sampling rate.  At 96 kHz, this gives roughly 10 us between samples.  Amp design is simplified to DC step response with 10 us of settling time to allow the inevitable ringing to decay.  This ringing is distortion.  Traditional analog amplification provides no time for ringing to decay, because the input signal is the continuously varying AC audio signal.  That requires careful damping in the design, because all amps ring in response to a change of input voltage, and it also requires multiple stages with lower gain instead of one stage with high gain.  The careful balancing of these issues is why high-quality mic preamps can cost $100 - $1000 per channel.

10 us of settling time was impossible in the days of all-analog, and to my knowledge, while mic-pre/A/D combos are becoming common, no designs break from the norm of putting the amp before the s/h.  This one feature makes it a worthy project.  Float was an afterthought, and I have since found out that the 2x iterations to compute the exponent were patented in the '90s by a company working on video digitization.
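The timing budget is easy to sanity-check.  One caveat of my own: if the exponent is counted serially within one sample period, each 2x pass only gets a fraction of it (the 16-pass cap below is carried over from the sketch above, again an assumption):

for fs in (44100, 96000, 192000):
    period_us = 1e6 / fs
    print("fs=%6d Hz: %5.1f us per sample, %4.2f us per 2x pass (16 passes)"
          % (fs, period_us, period_us / 16))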

I could skip the 2x stuff and design a variable-gain amp with as few stages and as little damping as 10 us will allow, and produce an amp with less self-noise and distortion than the current state of the art.  Then I would have a standard integer A/D that could eliminate the market for high-end mic pres (those desirable artifacts created by classic gear could be emulated in software with convolution DSP), but it would still have the problems of clipping and low resolution for low-level signals inherent in integer designs.  However, making the gain 2x gives the perfect opportunity to provide float output, and some fine tuning could eliminate some of the cumulative noise.

>But aren't standard high-quality 24-bit DACs enough?

My question to software developers is: is an all-float software environment desirable enough to warrant an A/D with native float output?

Tom
