On 2/9/15 10:19 PM, Nigel Redmon wrote:
>> But it matters, because the whole point of dithering to 16bit depends on how common that ability is.
>
> Depends on how common? I’m not sure what qualifies for common, but if it’s 1 in 100, or 5 in 100, it’s still a no-brainer because it costs nothing, effectively.

i have had a similar argument with Andrew Horner about tossing the phase information outa the line spectrum of wavetables for wavetable synthesis. why bother to do that? why not just keep the phase information (and with it the original waveshape) when keeping it costs nothing?

regarding dithering and quantization: if it were me, for 32-bit or 24-bit fixed-point *intermediate* values (like the multiple internal nodes of an algorithm), simply because of the cost of dithering, i would not dither and would instead use "fraction saving", which is 1st-order noise shaping with a zero at DC. or just plain round, but fraction saving is better and just about as cheap computationally.
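just to be concrete, here is a minimal sketch in C of the fraction saving i mean (the names and word widths are made up for illustration, not anybody's actual code): you add the previously discarded fraction back into the accumulator before truncating, which is exactly the 1st-order error feedback that puts the zero at DC.

    #include <stdint.h>

    typedef struct {
        int64_t saved_fraction;   /* quantization error left over from the previous sample */
    } frac_save_t;

    /* quantize a wide accumulator (say, a sum of Q.62 products) down to 32 bits,
       feeding the discarded fraction back in on the next call                    */
    static inline int32_t quantize_frac_save(frac_save_t *s, int64_t acc, int shift)
    {
        acc += s->saved_fraction;                          /* add back previous error  */
        int32_t q = (int32_t)(acc >> shift);               /* truncate to output width */
        s->saved_fraction = acc - ((int64_t)q << shift);   /* save the new fraction    */
        return q;
    }

the quantization noise picks up a (1 - 1/z) shaping from that, so it nulls out at DC, and the cost is one extra add and one subtract per sample.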

but for quantizing to 16 bits (like for mastering a CD or a 16-bit uncompressed .wav or .aif file), i would certainly dither and optimally noise-shape that. it costs a little more, but like the wavetable phase, once you do it the ongoing costs are nothing, and you have better data stored in your lower-resolution format. so why not?
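fwiw, the plain dither-and-round part to 16 bits looks something like this in C (a sketch only: float samples assumed in [-1, 1), and the rand()-based TPDF generator is just for illustration; the optimal noise shaping would wrap an error-feedback filter around the rounding step, same idea as the fraction saving above but with a higher-order shaping filter).

    #include <stdint.h>
    #include <stdlib.h>
    #include <math.h>

    /* triangular-pdf dither, 2 LSB peak-to-peak at the 16-bit level */
    static inline float tpdf_dither(void)
    {
        float r1 = (float)rand() / (float)RAND_MAX - 0.5f;
        float r2 = (float)rand() / (float)RAND_MAX - 0.5f;
        return r1 + r2;
    }

    /* dither and round one float sample (nominally in [-1, 1)) to 16 bits */
    static inline int16_t quantize16_tpdf(float x)
    {
        float v = x * 32768.0f + tpdf_dither();   /* add +/- 1 LSB TPDF dither */
        long  q = lrintf(v);                      /* round to nearest          */
        if (q >  32767) q =  32767;               /* clip to the 16-bit range  */
        if (q < -32768) q = -32768;
        return (int16_t)q;
    }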

16 bits is just barely enough for high-quality audio. and it wouldn't have been if Stanley Lipshitz and John Vanderkooy and Robert Wannamaker hadn't told us in the '80s how to extract another few more dB outa the dynamic range of the 16-bit word. they really rescued the 80-minute Red Book CD.

--

r b-j                  r...@audioimagination.com

"Imagination is more important than knowledge."



